| column | type | min length | max length |
| --- | --- | --- | --- |
| id | string | 10 | 10 |
| title | string | 7 | 231 |
| abstract | string | 3 | 2.43k |
| authors | string | 5 | 21.5k |
| published_date | string | 20 | 20 |
| link | string | 33 | 34 |
| markdown | string | 133 | 1.92M |
2304.13024
On the mechanism of polaritonic rate suppression from quantum transition paths
Polariton chemistry holds promise for facilitating mode-selective chemical reactions, but the underlying mechanism behind the rate modifications observed under vibrational strong coupling is not well understood. Using the recently developed quantum transition path theory, we have uncovered a mechanism of resonant suppression of a thermal reaction rate in a simple model polaritonic system, consisting of a reactive mode in a bath confined to a lossless microcavity with a single photon mode. This mechanism was uncovered by resolving the quantum dynamical reactive pathways and identifying their rate limiting transitions. Upon inspecting the wavefunctions associated with the rate limiting transition, we observed the formation of a polariton and identified the concomitant rate suppression as due to hybridization between the reactive mode and the cavity mode, which inhibits bath-mediated tunneling during the reaction. The transition probabilities that define the quantum master equation can be directly translated into a visualisation of the corresponding polariton energy landscape. This landscape exhibits a double funnel structure, with a large barrier between the initial and final states. This mechanism of resonant rate suppression is found to be robust to model parameters and computational details, and thus expected to be general.
Michelle C. Anderson, Esmae J. Woods, Thomas P. Fay, David J. Wales, David T. Limmer
2023-04-25T17:52:46Z
http://arxiv.org/abs/2304.13024v3
# On the mechanism of polaritonic rate suppression from quantum transition paths

###### Abstract
Polariton chemistry holds promise for facilitating mode-selective chemical reactions, but the underlying mechanism behind the rate modifications observed under vibrational strong coupling is not well understood. Using the recently developed quantum transition path theory, we have uncovered a mechanism of resonant suppression of a thermal reaction rate in a simple model polaritonic system, consisting of a reactive mode in a bath confined to a lossless microcavity with a single photon mode. This mechanism was uncovered by resolving the quantum dynamical reactive pathways and identifying their rate limiting transitions. Upon inspecting the wavefunctions associated with the rate limiting transition, we observed the formation of a polariton and identified the concomitant rate suppression as due to hybridization between the reactive mode and the cavity mode, which inhibits bath-mediated tunneling during the reaction. The transition probabilities that define the quantum master equation can be directly translated into a visualisation of the corresponding polariton energy landscape. This landscape exhibits a double funnel structure, with a large barrier between the initial and final states. This mechanism of resonant rate suppression is found to be robust to model parameters and computational details, and thus expected to be general.

When molecules are confined to a dark microcavity, a vibrational polariton can form from the hybridization between a molecular vibrational mode and the vacuum photon mode of the cavity. The rates of ground state bond breaking reactions of molecules in such cavities have been shown to be significantly modulated from their values outside of the cavity. [1; 2; 3] This phenomenon of polaritonic chemistry holds promise for selective catalysis; however, the mechanisms of rate enhancement or suppression remain unclear. Here, we have applied quantum transition path theory (QTPT) [4] to a simple model in order to elucidate the underlying quantum mechanical source of the rate modification. We find that the origin of the sharp rate decreases under cavity resonance conditions is poor bath-mediated tunneling between polariton wavefunctions.

In experimental and theoretical studies, polariton formation has been shown to affect reactive behavior in both the excited and ground state. [1; 3; 5; 6; 7; 8; 9] Of particular interest is the change in rates observed in ground state barrier crossing reactions, when molecules are confined to a microcavity with a vacuum mode in resonance with key molecular vibrations. [1; 2; 3] Attempts to explain the experimentally observed sharp changes in behavior under resonance conditions have met with mixed success and different interpretations. Classical transition state theories do not support resonance effects. [10; 11; 12; 13] Other work has used extensions to transition state theory [14; 15; 16; 17] to explain the rate suppression on resonance in terms of dynamical caging effects [18] or tunneling effects. [16; 17] Although these studies have reproduced rate modifications, they have tended to reveal broad, shallow rate suppression under resonance, which does not agree with experimental observations that indicate sharp rate modifications. [18; 19; 20] A full quantum dynamical study using the hierarchical equations of motion [21] carried out by Lindoy and coworkers has recently identified sharp rate modifications under resonance conditions.
[20] This result lends credence to the notion that intrinsically quantum mechanical effects must be modeled in order to observe polaritonic rate enhancement or suppression. [12; 20] To elucidate the source of resonant effects in polaritonic systems, we have employed a simple Pauli-Fierz [22] quantum electrodynamics Hamiltonian [18; 23] for a single photon mode coupled to a reactive proton coordinate solvated in a bath, and used QTPT to extract barrier crossing rates and mechanisms. QTPT and related quantum path sampling techniques have been recently developed and used to extract mechanistic information from quantum dynamical processes, including energy transfer [24; 25] and nonadiabatic relaxation through conical intersections. [4; 26] Here we have used QTPT to extract the dominant reactive pathways of a thermally induced proton transfer event under conditions where the proton was resonantly coupled to a cavity photon mode, whose natural frequency we could adjust. These pathways are given by a series of jumps through energy eigenstates of the combined proton-cavity system. After analyzing this dominant reactive pathway to determine the committor eigenstates, which correlate with the classical transition state, we find the fall in rate is caused by reduced tunneling matrix elements between the committor eigenstates, a consequence of the formation of polaritons. Our key result is that the poor overlap is due to the formation of polaritonic wavefunctions under resonance conditions.

We address a model similar to the Shin-Metiu formulation [27] employed in several previous studies, [18; 19] under the Pauli-Fierz Hamiltonian [22; 28] in which light and matter are treated quantum mechanically. In this model, the long wavelength approximation in the dipole gauge is followed by the Power-Zienau-Woolley transformation. The resulting Hamiltonian includes a dipole self-energy term which, if neglected, will result in an incorrect potential. [29; 30] Since it is convenient for QTPT to have localized eigenstate wavefunctions on either side of a barrier to define the reactant and product states, we added a small linear bias to remove the bistability of the original Shin-Metiu model. Due to the relatively high mass of the proton coordinate, the bias magnitude necessary to localize wavefunctions on either side of the barrier was very small. The resulting system Hamiltonian, \(H_{s}\), is \[\begin{split} H_{s}&=P^{2}/(2M)+U(R)+p_{c}^{2}/2\\ &+\omega_{c}^{2}/2\left(q_{c}+\sqrt{2/(\hbar\omega_{c}^{3})}\chi\mu(R)\right)^{2},\end{split} \tag{1}\] where \(P\) and \(R\) are the proton momentum and position, \(p_{c}\) and \(q_{c}\) are the corresponding photon coordinates, \(\omega_{c}\) is the photon frequency, \(\hbar\) is the reduced Planck constant, \(\mu(R)\) is the proton dipole operator, \(U(R)\) is the potential energy of the proton coordinate, and \(\chi\) is a parameter which controls the coupling strength between light and matter. The coupling of the cavity to the system dipole should be interpreted as being dependent on the number of reactive molecules in the cavity, which, under the assumption that the dipolar molecules' motion is independent and isotropic, can be decoupled. [28] The resultant Born-Oppenheimer surface is given by \(E(R,q_{c})=H_{s}-P^{2}/(2M)-p_{c}^{2}/2\). The functions used for \(E(R,q_{c})\), \(U(R)\) and \(\mu(R)\) are illustrated in Fig. 1. These potentials are similar to those employed by Li and coworkers, [18] with explicit forms given in the supporting information (SI).
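To make the setup above concrete, the following minimal sketch discretizes the proton and photon coordinates on a grid and diagonalizes a Hamiltonian of the form of Eq. (1). The explicit forms of \(U(R)\) and \(\mu(R)\) are given only in the SI, so the double well, dipole function, and all numerical parameters used here are illustrative placeholders, not the authors' actual model.

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import eigsh

hbar = 1.0
M_p = 1836.0          # proton-like mass (a.u.), assumed
omega_c = 0.0125      # cavity frequency (a.u.), assumed
chi = 5e-4            # light-matter coupling parameter, assumed

def U(R):             # placeholder double well; the true form is in the SI
    return 0.01 * (R**2 - 1.0)**2

def mu(R):            # placeholder smooth dipole, increasing with R
    return np.tanh(R)

def kinetic_1d(x, mass):
    """Second-order finite-difference -hbar^2/(2m) d^2/dx^2 on a uniform grid."""
    dx = x[1] - x[0]
    n = len(x)
    lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2
    return -(hbar**2) / (2.0 * mass) * lap

# Grids for the proton coordinate R and the photon coordinate q_c
R = np.linspace(-2.0, 2.0, 151)
q = np.linspace(-60.0, 60.0, 121)

T_R = kinetic_1d(R, M_p)          # proton kinetic energy P^2/(2M)
T_q = kinetic_1d(q, 1.0)          # photon "kinetic" term p_c^2/2 (unit mass)

# Cavity Born-Oppenheimer surface E(R, q_c): Eq. (1) without the kinetic terms
Rg, qg = np.meshgrid(R, q, indexing="ij")
E = U(Rg) + 0.5 * omega_c**2 * (qg + np.sqrt(2.0 / (hbar * omega_c**3)) * chi * mu(Rg))**2

H = (kron(T_R, identity(len(q))) + kron(identity(len(R)), T_q)
     + diags(E.ravel()))

# Lowest polaritonic eigenstates of the combined proton-cavity system
vals, vecs = eigsh(H.tocsc(), k=8, which="SA")
print(vals)
```

The lowest eigenpairs returned by `eigsh` play the role of the proton-cavity eigenstates that enter the quantum master equation discussed below.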
The potential energy surface, \(E(R,q_{c})\), is shown in Fig. 1 a) for the case where the system is in resonance and polariton formation occurs. Note that the bottoms of the wells illustrated in \(E(R,q_{c})\) are not centered at \(q_{c}=0\) but displaced to either side, whereas the surface remains effectively symmetric about 0 in \(R\). The proton potential in Fig. 1 b) is a simple double well reflecting two distinct covalently bonded states of the proton. The form of the position dependent dipole shown in Fig. 1 c) is consistent with the notion that a positively charged proton is moving between the two metastable states. The resultant eigenstates are more clearly shown by Fig. 1 d), which displays a free energy disconnectivity graph [31; 32] for the quantum master equation corresponding to an on-resonance system. The symmetric bifurcation corresponds to bistability, where the resonant states lie at the bottom of distinct funnels separated by a high barrier.

Figure 1: a) Potential energy surface of the proton-photon system when \(\omega_{c}=0.925\omega_{s}\) and \(\eta_{c}=\eta_{s}\) and a polariton is expected to form. b) The potential energy surface of the proton coordinate. c) The proton dipole operator. d) Free energy disconnectivity graph [31; 32] corresponding to the quantum master equation that governs the dynamics. The resonant states lie at the bottom of distinct funnels separated by a high effective barrier. The red line indicates a free energy scale of 10 \(k_{B}T\).

To address the dynamics of the system, QTPT was applied to an open system description with full Hamiltonian, [33; 34] \[H=H_{s}+H_{B}+R\otimes B, \tag{2}\] in which the total Hamiltonian is broken down into \(H_{s}\), which operates only on the system, \(H_{B}\), which operates only on the bath, and a coupling operator \(R\otimes B\). The bath is envisioned to include all non-reactive modes of the system, including molecular and solvent modes. Additionally, the bath captures interactions between the reactive mode of the molecule and other reactive molecules whose dipoles are aligned with the cavity. [28] The bath is approximated by an infinite set of harmonic oscillators that relax quickly in comparison to the system dynamics, which allows the bath effects to be addressed perturbatively via the Born-Markov approximation. [33; 34; 35] The coupling operator involves \(R\), the position operator of the proton, via the tensor product with \(B=\sum_{k}c_{k}R_{k}\), a sum over position coordinates, \(R_{k}\), of the bath harmonic oscillators, with coupling strength parameters \(c_{k}\) determined by the spectral density, \[J(\omega)=\pi/2\sum_{k}\frac{c_{k}^{2}}{\omega_{k}}\delta(\omega-\omega_{k})=\eta\omega e^{-\omega/\omega_{b}}, \tag{3}\] in which \(\omega_{k}\) is the frequency of bath oscillator \(k\), \(\omega_{b}\) is the bath cut-off frequency, and \(\eta\) is the system-bath coupling strength. To employ QTPT, a further approximation must be made to obtain secular dynamics, in which the populations and coherences of the system density matrix evolve independently. This approximation is justified when coherences oscillate quickly in comparison to the timescale of population dynamics, leading to their effects averaging out.[34] Comparisons with non-secular and numerically exact quantum calculations, which confirm that the Born-Markov and secular approximations are appropriate, are found in the SI. The population dynamics from the quantum master equations are assembled into a finite time Markov state model for QTPT.
The transition rates between eigenstates in QTPT are equivalent to a jump process, which provides a physical interpretation for the treatment of the eigenstates as distinct elements in a Markov process. The dynamics modeled are those that would be observed in the case that the energy of the bath in contact with the system was continuously monitored.[34; 36; 37; 38; 39] Within the quantum master equation, the tensor element describing the contribution of eigenstate \(j\) to the change in population of eigenstate \(i\) is[40] \[\begin{split} D_{iijj}&=|R_{i,j}|^{2}\int_{0}^{\infty}dt\,e^{-i\omega_{i,j}t}\langle B(0)B(t)\rangle_{B}\\ &+|R_{i,j}|^{2}\int_{0}^{\infty}dt\,e^{-i\omega_{j,i}t}\langle B(t)B(0)\rangle_{B},\end{split} \tag{4}\] where \(\langle...\rangle_{B}\) indicates the average of an operator over the bath degrees of freedom in equilibrium and \(|R_{i,j}|^{2}=|\langle i|R|j\rangle|^{2}\). The states \(|j\rangle\) and energies \(E_{i}\) used to calculate frequencies, \(\omega_{i,j}=(E_{i}-E_{j})/\hbar\), correspond to the eigenstates and eigenvalues of \(H_{s}\) in the absence of coupling to the bath. The rate of population transfer depends on the system coupling operator element and the one-sided Fourier transform of the bath correlation functions. The population dynamics define the transition matrix with elements \[T_{ij}=(e^{\tau D}\sigma_{ii})_{jj}, \tag{5}\] that is, the population of eigenstate \(j\) following propagation under the operator \(D\) for a short time \(\tau\), given that the system was initialized in \(\sigma_{ii}\), a density matrix in which all population is in energy eigenstate \(i\). The above formulation of the quantum master equation can be used to visualise the polariton energy landscape directly, by translating the equilibrium occupation probabilities and transition matrix into the equivalent relative free energies.[41] Hence we obtain the free energy disconnectivity graph[31; 32] in Fig. 1d. In this representation the vertical scale is the effective free energy, the bottom of each line corresponds to an eigenstate, and the eigenstates are connected together at a regular series of free energy thresholds when they can interconvert by any sequence of transition states that lies below the threshold. Hence this construction provides a faithful account of the effective barriers and the organisation of the landscape. The central quantity of transition path theory is the committor probability, \(P_{b|a}(i)\), derived from the system of equations[42; 43] \[P_{b|a}(i)-\sum_{j\in I}T_{ij}P_{b|a}(j)=\sum_{j\in b}T_{ij}, \tag{6}\] which gives the probability for a system in eigenstate \(i\) to visit eigenstate \(b\) (the product state) before eigenstate \(a\) (the reactant state), where \(I\) is the set of all states that are neither \(a\) nor \(b\). Note that \(P_{b|a}(b)=1\) and \(P_{b|a}(a)=0\).[42; 43] Here we take the reactant and product states to be the lowest energy eigenstates localized on either side of the double well potential.
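As a sketch of how Eqs. (5) and (6) translate into a calculation, the snippet below builds the transition matrix from a precomputed secular population-rate matrix and solves the committor linear system. The rate matrix `Dpop` (the \(D_{iijj}\) elements assembled with the convention \(\dot{p}=Dp\)) is assumed to be available; its construction from Eq. (4) and the spectral density of Eq. (3) is not reproduced here.

```python
import numpy as np
from scipy.linalg import expm

def transition_matrix(Dpop, tau):
    """Eq. (5): T[i, j] = population of eigenstate j after propagating for a
    short lag time tau from a state with all population in eigenstate i.
    Dpop is the (n x n) population block of the secular master equation,
    assumed precomputed, with columns summing to zero."""
    P = expm(tau * Dpop)            # propagator acting on population vectors
    n = Dpop.shape[0]
    T = np.zeros((n, n))
    for i in range(n):
        sigma_ii = np.zeros(n)
        sigma_ii[i] = 1.0
        T[i] = P @ sigma_ii         # row i: where population starting in i ends up
    return T

def committor(T, a, b):
    """Eq. (6): probability of reaching product eigenstate b before reactant a."""
    n = T.shape[0]
    I = [k for k in range(n) if k not in (a, b)]
    A = np.eye(len(I)) - T[np.ix_(I, I)]
    rhs = T[np.ix_(I, [b])].ravel()
    q = np.zeros(n)
    q[b] = 1.0
    q[I] = np.linalg.solve(A, rhs)
    return q
```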
Classically, the phenomenological transition state of a reaction is defined by a committor value of \(1/2\).[44; 45] However, for complex kinetic transition networks the productive paths and reactive visitation probabilities need to be considered together with committor values to diagnose the key dynamical bottlenecks.[46; 47] In QTPT we determine the pair of energy eigenstates where the probability changes from greater than to less than \(1/2\), defining a separatrix; we refer to these as committor eigenstates or transition eigenstates. Their role is analogous to a classical transition state in that they indicate a change in the likely fate of the reactive pathway, and they are generally the bottleneck states that limit the reactive flux. From the committors, we found the barrier crossing rate, \(k\),[42; 43; 4] as a function of \(\omega_{c}/\omega_{s}\), where \(\omega_{s}\) is the approximate harmonic frequency of the proton, to look for resonance rate modification effects.

In Fig. 2 a), the barrier crossing rate relative to a reference \(k_{0}(\eta_{c})\) is provided for three different light-matter coupling strengths, \(\eta_{c}\). The light-matter coupling strength is defined as,[18] \[\eta_{c}=\frac{\partial\mu(R)}{\partial R}\Big{|}_{R_{0}}\sqrt{\frac{\hbar}{2\omega_{s}M}}\frac{\chi}{\hbar\omega_{c}}, \tag{7}\] where \(R_{0}\) is the equilibrium position of \(R\) in the reactant well. The coupling strength is held constant by modifying \(\chi\) in proportion to \(\omega_{c}\). In this work, the default light-matter coupling is \(\eta_{s}=0.02\). This coupling strength is similar to that addressed by Lindoy and coworkers in their recent work.[20] It is much smaller than the coupling strengths often employed in classical theoretical treatments; however, it is still very strong coupling relative to what is realized in experiments. Note that we are considering a single molecule coupled to the cavity, and thus the relevant coupling strength is related to the polaritonic splitting by a factor dependent on the number of solutes.[28] Although the rates have been normalized for visual comparison, the absolute barrier crossing rates in Fig. 2 a) decrease with increasing \(\eta_{c}\). This change can be explained by regarding the photon coordinate as an extra degree of freedom imposing friction on the proton coordinate and indicates the system is in the high friction limit. This interpretation is qualitatively consistent with previous classical theories.[48; 49] However, we find a clear resonance rate suppression near \(\omega_{c}/\omega_{s}=0.9\). The observed resonant rate suppression does not occur exactly at \(\omega_{c}/\omega_{s}=1\) because the system is anharmonic and the energy gap between proton vibrational states prior to coupling to the photon coordinate is below \(\omega_{s}\) for the higher energy states involved in barrier crossing reactions. Higher \(\eta_{c}\) values result in stronger resonances with multiple peaks.

To identify reactive pathways and glean mechanistic insight into the rate suppression, we first calculated the reactive flux between eigenstates. In a system obeying detailed balance, the reactive flux between any two eigenstates for the reaction \(a\to b\) is given by \[f_{i,j}^{a,b}=\pi_{i}P_{a|b}(i)T_{i,j}P_{b|a}(j)\ \ \ i\neq j, \tag{8}\] where \(\pi_{i}\) is the equilibrium population of \(i\) and \(P_{a|b}(i)=1-P_{b|a}(i)\). The net fluxes between eigenstates are treated as edge weights in a graph with all of the eigenstates as vertices.
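A minimal sketch of this graph construction, together with the maximum-flux path extraction and path entropy discussed in the following paragraphs, is given below. The committor `q_f` and transition matrix `T` are assumed to come from the previous snippet; running Dijkstra's algorithm on \(-\log f_{i,j}\) edge weights is one common implementation choice (a bottleneck, or widest-path, criterion is another), not necessarily the authors' exact procedure.

```python
import numpy as np
import networkx as nx

def reactive_flux(pi, T, q_f):
    """Eq. (8): f_ij = pi_i * (1 - q_f(i)) * T_ij * q_f(j) for i != j,
    where q_f is the forward committor P_{b|a} and pi the equilibrium populations."""
    f = np.outer(pi * (1.0 - q_f), q_f) * T
    np.fill_diagonal(f, 0.0)
    return f

def max_flux_path(f, a, b):
    """Maximum-flux pathway a -> b via Dijkstra on -log(flux) edge weights."""
    G = nx.DiGraph()
    n = f.shape[0]
    for i in range(n):
        for j in range(n):
            if f[i, j] > 0.0:
                G.add_edge(i, j, weight=-np.log(f[i, j]))
    return nx.dijkstra_path(G, a, b, weight="weight")

def path_entropy(path_fluxes):
    """S = -sum_alpha f_alpha ln f_alpha over the bottleneck fluxes of unique paths."""
    f = np.asarray(path_fluxes)
    f = f[f > 0.0]
    return float(-np.sum(f * np.log(f)))
```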
The maximum flux pathway between \(a\) and \(b\) is then extracted with Dijkstra's algorithm.[43; 50] This procedure is repeated to obtain a reactive path ensemble. The committor eigenstates are defined as the last state along a reactive pathway with \(P_{a|b}(i)<1/2\) and the first state along a reactive pathway with \(P_{a|b}(i)>1/2\). These states characterize the system immediately before and immediately after it has committed itself to completing the reaction.

We inspected the dominant barrier crossing pathways extracted by QTPT as a function of photon frequency and studied their effectiveness in the vicinity of resonant rate suppression. A useful decomposition of the rate is given by defining \(\kappa=k_{1}/\exp[-\beta\Delta E^{\dagger}]\), where \(k_{1}\) is the rate associated with the dominant reactive path and \(\Delta E^{\dagger}\) is the activation energy, computed as a difference between the ground state energy and the largest energy visited on the dominant path. This is smaller than the classical barrier height due to zero point energy and tunneling through the barrier. The decrease in \(\kappa\) in Fig. 2 b) corresponds closely to the observed resonant fall in rates in Fig. 2 a), indicating that a lack of transmission rather than a change in activation energy in the dominant pathway is at least partially responsible for the resonant rate decrease. Indeed, for the range of \(\omega_{c}\) considered, the activation energy of the dominant path varies rather little. This behavior is consistent with observations that the change in the barrier height due to coupling to the cavity is not the primary mechanism for rate suppression.[2]

At the frequencies where we observe a suppression in the rate, we also find that the number of reactive pathways participating in the reaction increases. This effect is quantified by the path entropy, \(S=-\sum_{\alpha}f_{\alpha}\ln(f_{\alpha})\), where \(f_{\alpha}=\min_{i,j}[f_{i,j}^{a,b}]\) for \(i\) and \(j\) along unique reactive paths. The spikes of \(S\) seen in Fig. 2 c) correspond with the resonant rate decreases, revealing that a larger variety of reactive pathways contribute to the ensemble under resonance conditions. This result indicates that the dominant pathway is being rendered less effective, so that other pathways become relatively more important.

Figure 2: a) Rates of barrier crossing from eigenstate 1 to 2 at three light-matter coupling strengths as a function of \(\omega_{c}/\omega_{s}\). See Table 2 in the supporting information for values of \(k_{0}(\eta_{c})\). b) Transmission coefficient for the dominant reactive pathway as a function of \(\omega_{c}/\omega_{s}\). See Table 2 in the SI for values of \(\kappa_{0}(\eta_{c})\). c) Path entropy for the three light-matter coupling strengths as a function of \(\omega_{c}/\omega_{s}\).

The origin of the resonant effect that decreases the effectiveness of the dominant pathway is apparent when we inspect the jump between the pre- and post-committor eigenstates for the dominant pathway under resonance conditions. The dominant pathway, in all cases, is a tunneling pathway as both pre- and post-committor eigenstates have energies below the potential barrier. The squared coupling operator elements, \(|R_{i,j}|^{2}\), linking the committor eigenstates of the dominant pathway for \(\eta_{c}/\eta_{s}=0.5\) in Fig. 3 a), are directly proportional to inter-eigenstate transfer rates in the quantum master
equations, and show the same double-peaked pattern of resonance suppression in the overall rates observed for this coupling strength in Fig. 2 a). Poor overlap at the committor jump results in resonant rate suppression. The resonance effect observed in \(|R_{i,j}|^{2}\) is stronger than that observed in the rates themselves, but this contribution to the rate is partially offset by the modest increase in the number of other reactive pathways in the system, which contribute more to the rate on resonance. Interrogation of the spatial distribution of the committor eigenstate wavefunctions in Fig. 3 b) explains the source of the poor overlap for the dominant path under resonance conditions. At \(\omega_{c}/\omega_{s}=0.74\) and \(\omega_{c}/\omega_{s}=1.02\), below or above resonance, the wavefunctions of the pre- and post-committor eigenstates closely resemble conventional harmonic oscillator wavefunctions. However, on resonance at \(\omega_{c}/\omega_{s}=0.916\), the committor wavefunctions do not resemble harmonic oscillator wavefunctions, instead exhibiting mode hybridization. These polaritonic wavefunctions appear to be rotated relative to the coordinate \(R\) by which the system is coupled to the bath, explaining the poor overlap.

Resonance with the cavity results in the formation of polaritonic states along the critical reactive pathways of the barrier crossing reactions, and consequently in a sharp decrease in barrier crossing rates. This phenomenon agrees with the sharp resonance effects observed experimentally [1; 2; 3] and observed by Lindoy and coworkers using fully quantum dynamical simulations. [20] The simulations by Lindoy and coworkers indicated sharp rate increases or decreases depending on their choice of cavity loss and bath structure, an effect they similarly attributed to changes in bath interactions upon light-matter hybridization. As the nature of the system-bath coupling operator is critical to the observed resonant suppression effect, it is unsurprising that dynamics in the lossy cavity of Lindoy and coworkers differ from those we have observed. The resonant suppression observed here is a fundamentally open quantum system phenomenon that will disappear in the high temperature, classical limit, where tunneling mechanisms do not play a role, or when the cavity coupling becomes sufficiently weak as to no longer form a polariton with a single molecule. Any theory that is unable to explicitly account for the hybridization of light-matter states to form polaritons, or is unable to account for the interactions of the bath with polaritonic states consistently, will not reproduce these resonant effects. This observation explains the failure of Grote-Hynes theory to uncover the sharp resonance effects, [10; 11; 12; 13] even though it correctly identifies the origin of suppression as an altered reactive flux, rather than a change in activation energy. We suggest that further studies into the potential of polaritonic effects, in selective bond-breaking reactions and other applications, should make use of methods that explicitly address the formation of polaritons and their interaction with environmental fluctuations.

**Acknowledgments**. We would like to thank David Manolopoulos for useful discussions. M.C.A., T. P. Fay and D.T.L. were supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, CPIMS Program Early Career Research Program under Award No. DE-FOA002019. E. J. Woods gratefully acknowledges support from EPSRC studentship grant no. EP/R513180/1.
2301.07857
Fatigue failure of amorphous alloys under cyclic shear deformation
The accumulation of plastic deformation and flow localization in amorphous alloys under periodic shear are investigated using molecular dynamics simulations. We study a well-annealed binary mixture of one million atoms subjected to oscillatory shear deformation with strain amplitudes slightly above a critical value. We find that upon approaching a critical strain amplitude from above, the number of shear cycles until the yielding transition is well described by a power-law function. Remarkably, the potential energy at the end of each cycle as a function of the normalized number of cycles is nearly independent of the strain amplitude, which allows for estimation of the fatigue lifetime at a given strain amplitude. The analysis on nonaffine displacements of atoms elucidates the process of strain localization, including irreversible rearrangements of small clusters until the formation of a system-spanning shear band.
Nikolai V. Priezjev
2023-01-19T02:57:04Z
http://arxiv.org/abs/2301.07857v1
# Fatigue failure of amorphous alloys under cyclic shear deformation

###### Abstract
The accumulation of plastic deformation and flow localization in amorphous alloys under periodic shear are investigated using molecular dynamics simulations. We study a well-annealed binary mixture of one million atoms subjected to oscillatory shear deformation with strain amplitudes slightly above a critical value. We find that upon approaching a critical strain amplitude from above, the number of shear cycles until the yielding transition is well described by a power-law function. Remarkably, the potential energy at the end of each cycle as a function of the normalized number of cycles is nearly independent of the strain amplitude, which allows for estimation of the fatigue lifetime at a given strain amplitude. The analysis of nonaffine displacements of atoms elucidates the process of strain localization, including irreversible rearrangements of small clusters until the formation of a system-spanning shear band.

Keywords: metallic glasses, fatigue, yielding transition, cyclic loading, molecular dynamics simulations

## I Introduction

The prediction of stability and lifetime of amorphous alloys under repeated stress or strain deformation is important for various structural applications [1; 2]. Although multicomponent alloys like metallic glasses possess a number of advantageous properties, such as high strength and large elastic strain limit, their resistance to fatigue damage is relatively poor [3; 4; 5; 6]. The failure mechanism in metallic glasses involves the formation of nanoscale shear bands where plastic strain becomes strongly localized, which in turn might lead to propagation of microscale cracks [7; 8; 9; 10]. At the atomic level, the elementary plastic deformation in amorphous solids consists of rapid rearrangement of a small cluster of particles or shear transformation [11; 12]. Notably, the results of numerical simulations of the fibre bundle model have shown that the fatigue failure under repeated loading of heterogeneous materials occurs after a number of cycles, and the fatigue lifetime has a power-law dependence on the loading amplitude [13]. More recently, using two models of elastoplastic rheology, it was demonstrated that cyclically sheared amorphous materials initially accumulate low levels of damage in the form of spatial strain heterogeneity, which is followed by a sudden catastrophic material failure via shear band formation [14]. However, in spite of the considerable modeling and experimental efforts, the precise determination of the critical loading amplitude and fatigue lifetime remains a challenging problem.

During the last decade, the effect of cyclic loading on the yielding transition, structural relaxation, and flow localization in amorphous materials was extensively studied using atomistic simulations [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40]. Interestingly, it was demonstrated that in the athermal limit, amorphous solids under small-amplitude oscillatory shear evolve into the so-called limit cycles, where trajectories of atoms become exactly reversible after one or more periods, and the number of cycles to reach periodic behavior diverges upon approaching a critical strain amplitude from below [41; 18]. On the other hand, periodic deformation at strain amplitudes above a critical value leads to yielding and flow localization after a number of cycles [23; 24; 27; 19; 31].
In general, the number of cycles until the yielding transition depends on the degree of annealing, temperature, system size, strain amplitude and frequency. In particular, it was found that the number of cycles to failure increases upon increasing frequency [19] or glass stability [29; 32], and by decreasing strain amplitude towards a critical value [24; 19]. In addition, the number of fatigue cycles can be reduced by periodically alternating shear orientation in two or three spatial dimensions [31] or by occasionally increasing strain amplitude above a critical value [34]. Despite recent progress, however, the processes of damage accumulation and formation of shear bands during cyclic loading near a critical strain amplitude remain not fully understood. In this paper, the influence of repeated shear strain on plastic deformation and yielding transition in a disordered solid is studied via molecular dynamics (MD) simulations. We consider a well-annealed binary glass subjected to oscillatory shear deformation at strain amplitudes slightly above a critical value. It will be shown that the number of shear cycles to reach the yielding transition increases approximately as a power-law function when the strain amplitude approaches the critical value. Moreover, we find that the potential energy at zero strain for different strain amplitudes is well described by a single function of the normalized number of cycles. In turn, the appearance of local plastic events and the formation of a shear band at the yielding transition are quantified via the fraction of atoms with large nonaffine displacements during one shear cycle. The rest of this paper is organized as follows. The details of molecular dynamics simulations as well as the oscillatory shear deformation protocol are described in the next section. The analysis of shear stress, potential energy, and nonaffine displacements is presented in section III. A brief summary is provided in the last section. ## II Molecular dynamics simulations In this study, the amorphous alloy was modeled via the standard Kob-Andersen (KA) binary mixture composed of \(80\,\%\) of atoms of type A and \(20\,\%\) of type B [42]. The total number of atoms is \(10^{6}\). In this model, the interaction between atoms of types \(\alpha,\beta=A,B\) is defined via the Lennard-Jones (LJ) potential: \[V_{\alpha\beta}(r)=4\,\varepsilon_{\alpha\beta}\,\Big{[}\Big{(}\frac{\sigma_ {\alpha\beta}}{r}\Big{)}^{12}-\Big{(}\frac{\sigma_{\alpha\beta}}{r}\Big{)}^{6 }\,\Big{]}, \tag{1}\] where the parameters are set to \(\varepsilon_{AA}=1.0\), \(\varepsilon_{AB}=1.5\), \(\varepsilon_{BB}=0.5\), \(\sigma_{AA}=1.0\), \(\sigma_{AB}=0.8\), \(\sigma_{BB}=0.88\), and \(m_{A}=m_{B}\)[42]. A similar parametrization was used by Weber and Stillinger to study structure and dynamics of the amorphous metal-metalloid alloy Ni\({}_{80}\)P\({}_{20}\)[43]. All physical quantities are reported in the units of length, mass, energy, and time, as follows: \(\sigma=\sigma_{AA}\), \(m=m_{A}\), \(\varepsilon=\varepsilon_{AA}\), and \(\tau=\sigma\sqrt{m/\varepsilon}\). The MD simulations were carried out using the LAMMPS parallel code with the integration time step \(\triangle t_{MD}=0.005\,\tau\) and the cutoff radius \(r_{c}=2.5\,\sigma\)[44; 45]. The sample preparation procedure and the deformation protocol are similar to the ones reported in the previous MD study [29]. 
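As a small illustration of the interaction model, the following sketch evaluates the Kob-Andersen Lennard-Jones pair potential of Eq. (1) with the parameters quoted above, in reduced LJ units; the helper name `v_lj` is ours, not from the paper or from LAMMPS.

```python
import numpy as np

# Kob-Andersen parameters quoted in the text (reduced LJ units)
EPS = {("A", "A"): 1.0, ("A", "B"): 1.5, ("B", "B"): 0.5}
SIG = {("A", "A"): 1.0, ("A", "B"): 0.8, ("B", "B"): 0.88}

def v_lj(r, a, b):
    """Pair potential of Eq. (1) for species a, b in {"A", "B"}."""
    key = (a, b) if (a, b) in EPS else (b, a)
    eps, sig = EPS[key], SIG[key]
    sr6 = (sig / r) ** 6
    return 4.0 * eps * (sr6**2 - sr6)

# Example: AB interaction at the cutoff used in the simulations, r_c = 2.5 sigma
print(v_lj(2.5, "A", "B"))
```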
More specifically, the binary mixture was first placed in a cubic box of linear size \(L=94.10\,\sigma\) and equilibrated at the temperature \(T_{LJ}=1.0\,\varepsilon/k_{B}\) and density \(\rho=\rho_{A}+\rho_{B}=1.2\,\sigma^{-3}\) using the Nose-Hoover thermostat and periodic boundary conditions [44; 45]. For reference, the critical temperature of the KA model at the density \(\rho=1.2\,\sigma^{-3}\) is \(T_{g}=0.435\,\varepsilon/k_{B}\)[42]. Then, the sample was cooled with computationally slow rate of \(10^{-5}\varepsilon/k_{B}\tau\) from \(T_{LJ}=1.0\,\varepsilon/k_{B}\) to \(0.01\,\varepsilon/k_{B}\) at constant density \(\rho=1.2\,\sigma^{-3}\). Right after cooling, the glass was subjected to oscillatory shear deformation along the \(xz\) plane, as follows: \[\gamma_{xz}(t)=\gamma_{0}\sin(2\pi t/T), \tag{2}\] where \(\gamma_{0}\) is the strain amplitude and \(T=5000\,\tau\) is the oscillation period. The simulations were performed for strain amplitudes \(0.069\leqslant\gamma_{0}\leqslant 0.075\) at \(T_{LJ}=0.01\,\varepsilon/k_{B}\) and \(\rho=1.2\,\sigma^{-3}\). The results for the potential energy, shear stress, and nonaffine displacements of atoms are reported only for one realization of disorder because of the considerable computational burden. As an example, it took about 36 days to simulate 800 shear cycles at the strain amplitude \(\gamma_{0}=0.069\) using 400 processors in parallel. ## III Results Recent studies have shown that model glasses prepared by thermal annealing can yield after a certain number of cycles at strain amplitudes that are smaller than the yielding strain during uniform shear deformation [23; 29]. The precise value of the critical strain amplitude is difficult to determine numerically due to a large number of cycles needed to reach the yielding transition. Within the range of about three thousand cycles, it was found that rapidly quenched binary glasses under cyclic loading yield at the critical strain amplitude \(\gamma_{0}=0.067\), regardless of whether shear is applied along a single plane or periodically alternated in two or three spatial dimensions [31]. In the present study, we consider a relatively large system of one million atoms and subject a well-annealed KA glass to oscillatory shear deformation at strain amplitudes slightly above the critical value. We first report the variation of shear stress along the \(xz\) plane as a function of time in Fig. 1 for two values of the strain amplitude, i.e., \(\gamma_{0}=0.072\) and \(0.075\). It can be seen that in both cases, the amplitude of shear stress oscillations slightly decreases upon continued loading until a sudden drop during one shear cycle. Notice that the number of cycles until yielding becomes greater upon decreasing strain amplitude. Specifically, the yielding transition occurs during 218-th cycle for \(\gamma_{0}=0.072\) and during 56-th cycle for \(\gamma_{0}=0.075\). By contrast, after the yielding transition, the maximum shear stress is determined by the plastic flow within a shear band. These results are consistent with those previously reported for a well-annealed binary glass that was periodically deformed for only 40 shear cycles at larger strain amplitudes [24]. Along with shear stress, we plot in Fig. 2 the time dependence of the potential energy for the same strain amplitudes, \(\gamma_{0}=0.072\) and \(0.075\), as in Fig. 1. 
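A minimal sketch of the loading protocol of Eq. (2), and of how a yield cycle might be read off from the per-cycle potential-energy minima discussed here and in the next paragraphs, is shown below; the energy-jump threshold is illustrative, not a value used by the author.

```python
import numpy as np

def strain(t, gamma0, period=5000.0):
    """Oscillatory shear of Eq. (2): gamma_xz(t) = gamma0 * sin(2*pi*t/T)."""
    return gamma0 * np.sin(2.0 * np.pi * t / period)

def yield_cycle(U_min, jump=0.02):
    """Estimate n_Y from the per-cycle potential-energy minima U_min[n]:
    the first cycle whose energy jump exceeds an (illustrative) threshold."""
    dU = np.diff(U_min)
    idx = np.argwhere(dU > jump)
    return int(idx[0, 0]) + 1 if idx.size else None
```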
It is evident that the yielding transition is associated with an abrupt increase of the potential energy due to the formation of a shear band across the system. It should be emphasized that for each strain amplitude, the sudden changes in shear stress and potential energy occur at the same cycle number. One can further realize that before yielding, the cyclic shear deformation results in a slow accumulation of plastic events. This is reflected in a gradual increase of the potential energy minima when strain is zero as a function of the cycle number. Next, the potential energy minima at the end of each cycle are presented in Fig. 3 for strain amplitudes in the range \(0.069\leqslant\gamma_{0}\leqslant 0.075\). Note that the data at zero strain for \(\gamma_{0}=0.072\) and \(0.075\) are the same as in Fig. 2. It can be clearly observed in Fig. 3 that upon reducing strain amplitude towards a critical value, the yielding transition becomes significantly delayed. The exception to this trend is the case of loading at the strain amplitude \(\gamma_{0}=0.071\), where the number of cycles until yielding is smaller than for \(\gamma_{0}=0.072\). In turn, the maximum number of cycles until the yielding transition is \(n_{Y}=685\) for the strain amplitude \(\gamma_{0}=0.069\). We comment that simulations at smaller strain amplitudes, \(\gamma_{0}<0.069\), were not carried out due to the high computational cost.

The similarity of the functional form for the potential energy minima shown in Fig. 3 suggests the possibility of rescaling the horizontal axis by the number of cycles, \(n_{Y}\), required for the system to reach the yielding transition at a given strain amplitude. Fig. 4 shows the same potential energy curves as a function of the ratio \(n/n_{Y}\). Remarkably, the data for different \(\gamma_{0}\) nearly collapse onto a single curve when \(n<n_{Y}\). The master curve is approximately linear in the range \(0.2\lesssim n/n_{Y}\lesssim 0.8\), followed by a steep increase due to accumulation of plastic events within a narrow region that ultimately leads to flow localization when \(n=n_{Y}\). At the other end, the initial slope of the curve is determined by irreversible rearrangements of groups of atoms that settled in relatively shallow energy minima after thermal annealing. In practice, the function \(U(n/n_{Y})\) can be used to estimate \(n_{Y}\) for a binary glass loaded for a number of cycles (\(n<n_{Y}\)) at a strain amplitude in the vicinity of the critical value. Furthermore, the variation of \(n_{Y}\) versus \(\gamma_{0}\) is shown in the inset of Fig. 4. It is readily apparent that the number of cycles until yielding increases significantly when the strain amplitude approaches a critical value from above. Moreover, the MD data are well described by the power-law function, as follows: \[n_{Y}=0.024\cdot(\gamma_{0}-0.067)^{-1.66}, \tag{3}\] where the critical strain amplitude is taken to be \(0.067\). This value was determined previously for a smaller system of \(60\,000\) atoms at \(T_{LJ}=0.01\,\varepsilon/k_{B}\) and \(\rho=1.2\,\sigma^{-3}\)[31]. These results imply that the number of cycles to reach the yielding transition might further increase at lower strain amplitudes and possibly diverge in the case of athermal systems [46]. The local plastic events in disordered solids can be accurately identified via the analysis of nonaffine displacements of atoms [47].
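The exponent and prefactor of Eq. (3) can be recovered, at least roughly, by fitting the cycles-to-yield values stated in the text; the sketch below uses only the three explicitly quoted points and fixes the critical amplitude at 0.067, so the fitted numbers are illustrative rather than the author's published fit.

```python
import numpy as np
from scipy.optimize import curve_fit

# (gamma_0, n_Y) pairs stated explicitly in the text
gamma0 = np.array([0.069, 0.072, 0.075])
n_Y = np.array([685, 218, 56])

gamma_c = 0.067   # critical strain amplitude taken from Ref. [31], as in Eq. (3)

def power_law(g, A, alpha):
    return A * (g - gamma_c) ** (-alpha)

popt, pcov = curve_fit(power_law, gamma0, n_Y, p0=[0.02, 1.7])
print("A = %.3g, alpha = %.2f" % tuple(popt))   # Eq. (3) quotes A ~ 0.024, alpha ~ 1.66
```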
As a reminder, the nonaffine measure for displacement of the \(i\)-th atom from \(\mathbf{r}_{i}(t)\) to \(\mathbf{r}_{i}(t+\Delta t)\) is defined via the matrix \(\mathbf{J}_{i}\) that transforms positions of its neighboring atoms and minimizes the following expression: \[D^{2}(t,\Delta t)=\frac{1}{N_{i}}\sum_{j=1}^{N_{i}}\Big{\{}\mathbf{r}_{j}(t+ \Delta t)-\mathbf{r}_{i}(t+\Delta t)-\mathbf{J}_{i}\big{[}\mathbf{r}_{j}(t)- \mathbf{r}_{i}(t)\big{]}\Big{\}}^{2}, \tag{4}\] where the summation is performed over \(N_{i}\) atoms that are initially located within \(1.5\,\sigma\) from \(\mathbf{r}_{i}(t)\). It should be noted that the plastic rearrangement of neighboring atoms during the time interval \(\Delta t\) typically corresponds to values of the nonaffine measure \(D^{2}(t,\Delta t)\) greater than the cage size, which is about \(0.1\,\sigma\) for the KA binary glass at \(\rho=1.2\,\sigma^{-3}\)[42]. In Fig. 5 we show the fraction of atoms with relatively large nonaffine displacements during one cycle, \(D^{2}[(n-1)T,T]>0.04\,\sigma^{2}\), for the strain amplitudes \(0.069\leqslant\gamma_{0}\leqslant 0.075\). We comment that the nonaffine measure was evaluated only for selected cycles due to excessive computational cost for the large system. It is clearly observed that the shape of \(n_{f}\) is similar to the dependence of energy minima on the number of cycles shown in Fig. 3. Notice a small peak in \(n_{f}\) during the first cycle due to a number of atoms that become arranged in shallow energy minima upon thermal annealing, and, as a result, these atoms are prone to plastic rearrangement under shear deformation. As expected, the yielding transition is clearly marked by a sharp increase in the fraction \(n_{f}\), indicating extended plastic flow. In analogy with the potential energy minima shown in Fig. 4, we replot the same data for \(n_{f}\) as a function of the ratio \(n/n_{Y}\) in Fig. 6. It is evident that fractions \(n_{f}(n/n_{Y})\) for different values of the strain amplitude approximately follow a common curve. These results indicate that plastic rearrangements of only about \(1\,\%\) of atoms during the first \(n_{Y}/2\) cycles result in the increase of the potential energy reported in Fig. 4. A shear band forms when \(n_{f}\approx 0.14\) at \(n=n_{Y}\). We also note that both \(n_{f}\) and \(U\) increase and level out for \(n>n_{Y}\), which reflects widening of a shear band under cyclic shear. In addition, a closer inspection of the data in the inset to Fig. 6 reveals that, on average, the fraction \(n_{f}\) is slightly larger for cyclic loading at higher strain amplitudes. The spatial distribution of plastic rearrangements can be visualized by plotting positions of atoms with large nonaffine displacements during one shear cycle, i.e., \(\Delta t=T\) in Eq. (4). For example, atomic configurations for selected number of cycles are presented in Fig. 7 for the strain amplitude \(\gamma_{0}=0.072\) and in Fig. 8 for \(\gamma_{0}=0.069\). It can be seen in Fig. 7 (a) that before yielding, atoms with \(D^{2}(200\,T,T)>0.04\,\sigma^{2}\) are organized into small clusters that are homogeneously distributed. Upon further loading, the glass yields and a shear band forms along the \(xy\) plane during the 218-th cycle, as shown in Fig. 7 (c). During the next 7 cycles, the shear band becomes wider, which is consistent with the increase in \(n_{f}\) and \(U\) after yielding reported in Figs. 3-6. Similar trends can be observed in Fig. 
8 for cyclic loading at \(\gamma_{0}=0.069\), except that the orientation of the shear band is along the \(yz\) plane. Also, the sequence of snapshots for the strain amplitude \(\gamma_{0}=0.075\) during the first 100 cycles was reported in the previous study [29]. Overall, the visualization of plastic events confirms our earlier conclusions regarding the appearance of small clusters of atoms that rearrange irreversibly after a full cycle, followed by the formation of a shear band at the yielding transition, and its subsequent widening upon continued loading.

## IV Conclusions

In summary, the effect of oscillatory shear on the damage accumulation and yielding transition was investigated using molecular dynamics simulations. The binary glass was prepared by cooling at a computationally slow rate deep into the glass phase and then subjected to periodic shear deformation with strain amplitudes slightly greater than a critical value. It was found that the number of shear cycles until the yielding transition increases approximately as a power-law function of the difference between the strain amplitude and the critical value. We showed that the fatigue process proceeds via a sequence of irreversible rearrangements of small clusters of atoms until a sudden formation of a shear band at the yielding transition. This behavior is reflected in the gradual increase of the potential energy at the end of each cycle and a steep increase near the yielding point. Furthermore, the potential energy minima for different strain amplitudes closely follow a master curve when plotted versus the normalized number of cycles. The master curve can be used to estimate the fatigue lifetime for a binary glass periodically deformed for only a small number of cycles at a strain amplitude near the critical value.

###### Acknowledgements.
Financial support from the National Science Foundation (CNS-1531923) is gratefully acknowledged. Molecular dynamics simulations were carried out at Wright State University's Computing Facility and the Ohio Supercomputer Center using the LAMMPS code [44].
2303.00799
Fairness for Workers Who Pull the Arms: An Index Based Policy for Allocation of Restless Bandit Tasks
Motivated by applications such as machine repair, project monitoring, and anti-poaching patrol scheduling, we study intervention planning of stochastic processes under resource constraints. This planning problem has previously been modeled as restless multi-armed bandits (RMAB), where each arm is an intervention-dependent Markov Decision Process. However, the existing literature assumes all intervention resources belong to a single uniform pool, limiting their applicability to real-world settings where interventions are carried out by a set of workers, each with their own costs, budgets, and intervention effects. In this work, we consider a novel RMAB setting, called multi-worker restless bandits (MWRMAB) with heterogeneous workers. The goal is to plan an intervention schedule that maximizes the expected reward while satisfying budget constraints on each worker as well as fairness in terms of the load assigned to each worker. Our contributions are two-fold: (1) we provide a multi-worker extension of the Whittle index to tackle heterogeneous costs and per-worker budget and (2) we develop an index-based scheduling policy to achieve fairness. Further, we evaluate our method on various cost structures and show that our method significantly outperforms other baselines in terms of fairness without sacrificing much in reward accumulated.
Arpita Biswas, Jackson A. Killian, Paula Rodriguez Diaz, Susobhan Ghosh, Milind Tambe
2023-03-01T19:59:42Z
http://arxiv.org/abs/2303.00799v1
# Fairness for Workers Who Pull the Arms: An Index Based Policy for Allocation of Restless Bandit Tasks

###### Abstract
Motivated by applications such as machine repair, project monitoring, and anti-poaching patrol scheduling, we study intervention planning of stochastic processes under resource constraints. This planning problem has previously been modeled as restless multi-armed bandits (RMAB), where each arm is an intervention-dependent Markov Decision Process. However, the existing literature assumes all intervention resources belong to a single uniform pool, limiting their applicability to real-world settings where interventions are carried out by a set of workers, each with their own costs, budgets, and intervention effects. In this work, we consider a novel RMAB setting, called multi-worker restless bandits (MWRMAB) with heterogeneous workers. The goal is to plan an intervention schedule that maximizes the expected reward while satisfying budget constraints on each worker as well as fairness in terms of the load assigned to each worker. Our contributions are two-fold: (1) we provide a multi-worker extension of the Whittle index to tackle heterogeneous costs and per-worker budget and (2) we develop an index-based scheduling policy to achieve fairness. Further, we evaluate our method on various cost structures and show that our method significantly outperforms other baselines in terms of fairness without sacrificing much in reward accumulated.

## 1. Introduction

Restless multi-armed bandits (RMABs) (Krishnan, 2015) have been used for sequential planning, where a planner allocates a limited set of \(M\) _intervention resources_ across \(N\) _independent heterogeneous arms_ (Markov Decision Processes) at each time step in order to maximize the long-term expected reward. The term _restless_ denotes that the arms undergo state-transitions even when they are not acted upon (with a different probability than when they are acted upon). RMABs have been receiving increasing attention across a wide range of applications such as maintenance (Bowdhan et al., 2016), recommendation systems (Krishnan, 2015), anti-poaching patrolling (Krishnan, 2015), adherence monitoring (Bowdhan et al., 2016; Bowdhan et al., 2016), and intervention planning (Bowdhan et al., 2016; Bowdhan et al., 2016; Bowdhan et al., 2016). Although _rangers_ in anti-poaching, _healthcare workers_ in health intervention planning, and _supervisors_ in machine maintenance are all commonly cited examples of a human workforce used as intervention resources, the literature has so far ignored one key reality: the human workforce is heterogeneous--each worker has their own workload constraints and needs to commit a dedicated time duration for intervening on an arm. Thus, it is critical to restrict the intervention workload for each worker and balance the workload across them, while also ensuring high effectiveness (reward) of the planning policy. The RMAB literature does not consider this heterogeneity and mostly focuses on selecting the best arms, assuming that all intervention resources (workers) are interchangeable, i.e., drawn from a single homogeneous pool. However, planning with a human workforce requires more expressiveness in the model, including heterogeneity in costs and intervention effects, worker-specific load constraints, and balanced work allocation.
One concrete example is _anti-poaching intervention planning_ (Krishnan, 2015) with \(N\) areas in a national park where timely interventions (patrols) are required to detect as many snares as possible across all the areas. These interventions are carried out by a small set of \(M\) rangers. The problem of selecting a subset of areas at each time step (say, daily) has been modeled as an RMAB problem. However, each ranger may incur a heterogeneous cost (e.g., distance travelled when assigned to intervene on a particular area) and the total cost incurred by any ranger (e.g., _total_ distance traveled) must not exceed a given budget. Additionally, it is important to ensure that tasks are allocated fairly across rangers so that, for example, some rangers are not required to walk far greater distances than others. Adding this level of expressiveness to existing RMAB models is non-trivial. To address this, we introduce the _multi-worker restless multi-armed bandits_ (MWRMAB) problem. Since MWRMABs are more general than the classical RMABs, they are at least PSPACE-hard to solve optimally (Krishnan, 2015). RMABs with \(k\)-state arms require solving a combined MDP with \(k^{N}\) states and \((M+1)^{N}\) actions constrained by a budget, and thus suffer from the curse of dimensionality. A typical approach is to compute Whittle indices (Srivastava et al., 2017) for each arm and choose the \(M\) arms with the highest index values--an asymptotically optimal solution under the technical condition _indexability_ (Srivastava et al., 2017). However, this approach is limited to instances with a single type of intervention resource incurring one unit of cost upon intervention. A few papers on RMABs (Krause et al., 2017; Krause et al., 2017) study multiple interventions and non-unitary costs but assume one global budget (instead of per-worker budgets). Existing solutions aim at maximizing reward by selecting arms with the highest index values, which may not guarantee fairness towards the workers who are in charge of providing interventions.

_Our contributions._ To the best of our knowledge, we are the first to introduce and formalize the multi-worker restless multi-armed bandit (MWRMAB) problem and a related worker-centric fairness constraint. We develop a novel framework for solving the MWRMAB problem. Further, we empirically evaluate our algorithm to show that it is fair and scalable across a range of experimental settings.

## 2. Related Work

_Multi-Action RMABs and Weakly Coupled MDPs._ (Krause et al., 2017) develop closed-form solutions for multi-action RMABs using Lagrangian relaxation. (Krause et al., 2017) build simulation-based policies that rely on Monte Carlo estimation of state-action values. However, critically, these approaches rely on actions being constrained by a single budget, failing to capture the heterogeneity of the workforce. On the other hand, weakly coupled MDPs (WCMDPs) (Krause et al., 2017) allow for such multiple budget constraints; this is the baseline we compare against. Other theoretical works (Krause et al., 2017; Krause et al., 2017) have developed solutions in terms of the reward accumulated, but may not scale well with increasing problem size. These papers do not consider fairness, a crucial component of MWRMABs, which our algorithm addresses.

_Fairness_ in stochastic and contextual bandits (Krause et al., 2017; Krause et al., 2017; Srivastava et al., 2017) has been receiving significant attention. However, fairness in RMABs has been less explored.
Recent works (Krause et al., 2017; Krause et al., 2017) considered quota-based fairness of RMAB arms assuming that arms correspond to human beneficiaries (for example, patients). However, in our work, we consider an orthogonal problem of satisfying fairness among intervention resources (workers) instead of arms (tasks).

_Fair allocation_ of discrete items among a set of agents has been a well-studied topic (Krause et al., 2017). Fairness notions such as envy-freeness up to one item (Krause et al., 2017) and their budgeted settings (Krause et al., 2017; Krause et al., 2017) align with the fairness notion we consider. However, these papers do not consider non-stationary (MDP) items. Moreover, these papers assume that each agent has a value for every item; both fairness and efficiency are defined with respect to this valuation. In contrast, in MWRMAB, efficiency is defined based on the reward accumulated, and fairness and budget feasibility are defined based on the cost incurred.

## 3. The Model

There are \(M\) workers for providing interventions on \(N\) independent arms that follow Markov Decision Processes (MDPs). Each MDP \(i\in[N]\) is a tuple \(\langle S_{i},A_{i},C_{i},P_{i},R_{i}\rangle\), where \(S_{i}\) is a finite set of states. We represent each worker as an action, along with an additional action called _no-intervention_. Thus, the action set is \(A_{i}\subseteq[M]\cup\{\emptyset\}\). \(C_{i}\) is a vector of costs \(c_{ij}\) incurred when an action \(j\in A_{i}\) is taken on an arm \(i\in[N]\), with \(c_{ij}=0\) for the no-intervention action. \(P_{ij}^{ss^{\prime}}\) is the probability of transitioning from state \(s\) to state \(s^{\prime}\) when arm \(i\) is allocated to worker \(j\). \(R_{i}(s)\) is the reward obtained in state \(s\in S_{i}\). The goal (Eq. 1) is to allocate a subset of arms to each worker such that the expected reward is maximized while ensuring that each worker incurs a cost of at most a fixed value \(B\). Additionally, the disparity in the costs incurred between any pair of workers must not exceed a _fairness threshold_ \(\epsilon\) at any given time step. Let us denote a policy \(\pi:\times_{i}S_{i}\mapsto\times_{i}A_{i}\) that maps the current state profile of arms to an action profile. \(x_{ij}^{\pi}(s)\in\{0,1\}\) indicates whether worker \(j\) intervenes on arm \(i\) at state \(s\) under policy \(\pi\). The total cost incurred by \(j\) at a time step \(t\) is given by \(\overline{C}_{j}^{\pi}(t):=\sum_{i\in[N]}c_{ij}x_{ij}^{\pi}(s_{i}(t))\), where \(s_{i}(t)\) is the current state. Requiring \(\epsilon\geq\epsilon^{m}:=\max_{ij}c_{ij}\) ensures feasibility of the fairness constraints.
\[\begin{split}&\max_{\pi}\ \limsup_{T\to\infty}\frac{1}{T}\sum_{i\in[N]}\mathbb{E}\left[\sum_{t=1}^{T}\sum_{j\in A_{i}}R_{i}(s_{i}(t))\;x_{ij}^{\pi}(s_{i}(t))\right]\\ &\mathrm{s.t.}\ \sum_{i\in[N]}x_{ij}^{\pi}(s_{i}(t))\;c_{ij}\leq B,\quad\forall\;j\in[M],\;\forall\;t\in\{1,2,\ldots\}\\ &\sum_{j\in A_{i}}x_{ij}^{\pi}(s_{i}(t))=1,\quad\forall\;i\in[N],\;\forall\;t\in\{1,2,\ldots\}\\ &\max_{j}\overline{C}_{j}^{\pi}(t)-\min_{j}\overline{C}_{j}^{\pi}(t)\leq\epsilon,\quad\forall\;t\in\{1,2,\ldots\}\\ &x_{ij}^{\pi}(s_{i}(t))\in\{0,1\},\quad\forall i,\;\forall j,\;\forall t.\end{split}\tag{1}\]

When \(M=1\) and \(c_{i1}=1\), Problem (1) becomes the classical RMAB problem (with two actions, _active_ and _passive_), which can be solved via the Whittle index method (Srivastava et al., 2017) by considering a time-averaged relaxed version of the budget constraint and then decomposing the problem into \(N\) subproblems--each subproblem finds a **charge** \(\lambda_{i}(s)\) on the active action that makes the passive action as valuable as the active action at state \(s\). It then selects the top \(B\) arms according to their \(\lambda_{i}\) values at their current states. However, the challenges involved in solving a general MWRMAB (Eq. 1) are that (i) index computation becomes non-trivial with \(M>1\) workers, and (ii) selecting top arms based on indices may not satisfy fairness. To tackle these challenges, we propose a framework in the next section.

## 4. Methodology

**Step 1**: Decompose the combinatorial MWRMAB problem into \(N\times M\) subproblems, and compute Whittle indices \(\lambda_{ij}^{\star}\) for each subproblem. We tackle this in Sec. 4.1. This step assumes that, for each arm \(i\), the MDPs corresponding to any pair of workers are mutually independent. However, the expected value of each arm may depend on interventions taken by multiple workers at different timesteps. **Step 2**: Adjust the decoupled indices \(\lambda_{ij}^{\star}\) to create \(\lambda_{ij}^{\text{adj},\star}\), detailed in Sec. 4.2. **Step 3**: The adjusted indices are used for allocating the arms to workers while ensuring **fairness** and **per-timestep budget feasibility** among workers, detailed in Sec. 4.3.

### Identifying subproblem structure

To arrive at a solution strategy, we relax the per-timestep budget constraints of Eq. 1 to time-averaged constraints, as follows: \(\limsup_{T\to\infty}\frac{1}{T}\sum_{i\in[N]}\mathbb{E}\sum_{t=1}^{T}x_{ij}^{\pi}(s_{i}(t))\;c_{ij}\leq B,\;\forall j\in[M]\). The optimization problem (1) can then be rewritten as:

\[\begin{split}\min_{\{\lambda_{j}\geq 0\}}\ \max_{\pi}\ &\limsup_{T\rightarrow\infty}\frac{1}{T}\sum_{i\in[N]}\mathbb{E}\left[\sum_{t=1}^{T}\sum_{j\in[M]}\left(R_{i}(s_{i}(t))x_{ij}^{\pi}(s_{i}(t))+\lambda_{j}\left(B-c_{ij}x_{ij}^{\pi}(s_{i}(t))\right)\right)\right]\\ \mathrm{s.t.}\ &\sum_{j\in A_{i}}x_{ij}^{\pi}(s_{i}(t))=1,\qquad\forall\;i\in[N],\;t\in\{1,2,\ldots\}\\ &\max_{j}\overline{C}_{j}^{\pi}(t)-\min_{j}\overline{C}_{j}^{\pi}(t)\leq\epsilon,\qquad\forall\;t\in\{1,2,\ldots\}\\ &x_{ij}^{\pi}(s_{i}(t))\in\{0,1\},\qquad\forall i,\;\forall j,\;\forall t\end{split}\tag{2}\]

Here, the \(\lambda_{j}\)s are Lagrangian multipliers corresponding to each relaxed budget constraint \(j\in[M]\). Furthermore, as mentioned in (Bulman and Recht, 2017), if an arm \(i\) is _indexable_, then the optimization objective (2) can be decomposed into \(N\) independent subproblems, and separate index functions can be defined for each arm \(i\).
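Before formalizing the subproblems, the following is a rough illustrative sketch (ours, not the paper's implementation) of the per arm-worker index computation that this decomposition enables: for a candidate charge, evaluate the restricted two-action MDP and binary-search for the charge at which acting and staying passive are equally valuable at the current state (cf. the charge \(\lambda_{i}(s)\) described above and Eqs. (3)-(4) below). A discount factor close to 1 is used as a stand-in for the time-averaged criterion; all names and numbers are illustrative.

```python
import numpy as np

def two_action_values(R, P0, P1, cost, lam, gamma=0.99, iters=1000):
    """Value iteration for the restricted MDP with actions {passive, worker j},
    where acting is charged lam * cost.  Returns the two Q-value vectors."""
    V = np.zeros(len(R))
    for _ in range(iters):
        q_passive = R + gamma * P0 @ V
        q_active = R - lam * cost + gamma * P1 @ V
        V = np.maximum(q_passive, q_active)
    return q_passive, q_active

def decoupled_index(R, P0, P1, cost, s, lo=0.0, hi=50.0, tol=1e-4):
    """Binary search for the charge making active and passive equally valuable at state s
    (assumes the arm-worker pair is indexable, so this indifference point is unique)."""
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        q_passive, q_active = two_action_values(R, P0, P1, cost, lam)
        if q_active[s] > q_passive[s]:
            lo = lam      # acting is still worth the charge, so the index is larger
        else:
            hi = lam
    return 0.5 * (lo + hi)

# Toy 2-state arm: state 1 is the rewarding state; worker j improves the arm.
R = np.array([0.0, 1.0])
P0 = np.array([[0.9, 0.1], [0.4, 0.6]])   # passive: the arm tends to decay
P1 = np.array([[0.3, 0.7], [0.1, 0.9]])   # active: worker j pushes the arm to state 1
print(decoupled_index(R, P0, P1, cost=1.0, s=0))
```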
Leveraging this decomposition, we split our problem into \(N\times M\) subproblems, one per arm-worker pair \((i,j)\), each of which, for a given charge \(\lambda_{ij}\), maximizes the following:

\[\limsup_{T\rightarrow\infty}\frac{1}{T}\mathbb{E}\left[\sum_{t=1}^{T}\left(R_{i}(s_{i}(t))-\lambda_{ij}c_{ij}\right)x_{ij}^{\pi}(s_{i}(t))\right]\tag{3}\]

Note that the maximization subproblem (3) does not contain the term \(\lambda_{ij}B\), since that term does not depend on the decision \(x_{ij}^{\pi}(s_{i}(t))\). Considering a 2-action MDP with action space \(\mathcal{H}_{ij}=\{0,j\}\) for an arm-worker pair, the maximization problem (3) can be solved by dynamic programming, using Bellman's equations at each state to decide whether to take the active action (\(x_{ij}(s)=1\)) when the arm is currently at state \(s\):

\[V_{i,j}^{t}(s,\lambda_{ij},x_{ij}(s))=\begin{cases}R_{i}(s)-\lambda_{ij}c_{ij}+\sum_{s^{\prime}\in S_{i}}p_{ss^{\prime}}^{ij}\,V_{i,j}^{t+1}(s^{\prime},\lambda_{ij}),&\text{if }x_{ij}(s)=1\text{ (active)}\\ R_{i}(s)+\sum_{s^{\prime}\in S_{i}}p_{ss^{\prime}}^{i0}\,V_{i,j}^{t+1}(s^{\prime},\lambda_{ij}),&\text{if }x_{ij}(s)=0\text{ (passive)}\end{cases}\tag{4}\]

where \(V_{i,j}^{t+1}(s^{\prime},\lambda_{ij})=\max_{a\in\{0,1\}}V_{i,j}^{t+1}(s^{\prime},\lambda_{ij},a)\) denotes the optimal value at \(s^{\prime}\). The decoupled Whittle index \(\lambda_{ij}^{*}(s)\) is then the minimum charge \(\lambda_{ij}\) at which the passive action is as valuable as the active action at state \(s\).

### Adjusting the decoupled indices

The decoupled indices ignore the effect that the other workers' interventions have on an arm's future value, and this can make them misleading. Consider a _specialist_ domain (Fig. 1) in which an arm must pass through the states "overgrown and snared" (\(s=0\)) and "clear and snared" (\(s=1\)) before reaching its only rewarding state, "clear and not snared"; the optimal policy requires ranger 1 to act on the arm in state "overgrown and snared" and ranger 2 to act on the arm in state "clear and snared". However, the fully decoupled index computation for each ranger \(j\) would reason about restricted MDPs that only have the passive action and ranger type \(j\) available. So when computing, e.g., the index for ranger 1 in \(s=0\), the restricted MDP would have 0 probability of reaching state "clear and not snared", since ranger 2 is not part of that restricted MDP. This would correspond to an MDP that always gives 0 reward, and thus would artificially force the index for ranger 1 to be 0, despite ranger 1 being the optimal action for \(s=0\). To address this, we define a new index notion that accounts for such inter-action effects. The key idea is that, when computing the index for a given worker, we will consider the actions of _all other workers in future time steps_. So in our poaching example, the new index value for ranger 1 in \(s=0\) will _increase_ compared to its decoupled index value, because the new index takes into account the value of ranger 2's actions when the system progresses to \(s=1\) in the future. Note that the methods we build generalize to any number of workers \(M\). However, the manner in which we incorporate the actions of other workers must be chosen carefully. We propose an approach and provide theoretical results explaining why it is a reasonable choice. Finally, we give the full algorithm for computing the new indices.
**New index notion**: For a given arm, to account for the inter-worker action effects, we define the new index for an action \(j\) as the minimum charge that makes an intervention by \(j\) on that arm as valuable as an intervention by _any_ other worker \(j^{\prime}\) in the combined MDP with \(M+1\) actions. That is, we seek the minimum charge for action \(j\) that makes us indifferent between taking action \(j\) and _not_ taking action \(j\), a multi-worker extension of Whittle's index notion. To capture this, we define an augmented reward function \(R^{\dagger}_{\mathbf{\lambda}}(s,j)=R(s)-\lambda_{j}c_{j}\). Let \(\mathbf{\lambda}\) be the vector of charges \(\{\lambda_{j}\}_{j\in[M]}\). We define this **expanded MDP** as \(\mathcal{M}^{\dagger}_{\mathbf{\lambda}}\) and the corresponding value function as \(V^{\dagger}_{\mathbf{\lambda}}\). We then find the adjusted index \(\lambda^{adj,*}_{j,\mathbf{\lambda}_{-j}}\) using the following expression:

\[\min_{j^{\prime}\in[M]\setminus\{j\}}\arg\min_{\lambda_{j}}\{V^{\dagger}_{\mathbf{\lambda}_{-j}}(s,\lambda_{j},j)=V^{\dagger}_{\mathbf{\lambda}_{-j}}(s,\lambda_{j},j^{\prime})\}\tag{8}\]

where \(\mathbf{\lambda}_{-j}\) is a vector of fixed charges for all \(j^{\prime}\neq j\), and the outer min over \(j^{\prime}\) simply captures the specific action \(j^{\prime}\) that the optimal planner is indifferent to taking over action \(j\) at the new index value. Note that this is the natural extension of the decoupled two-action index definition, Eq. (4), which defines the index as the charge on \(j\) that makes the planner indifferent between acting and, the only other option, being passive. Our new _adjusted index algorithm_ is given in Alg. 1.

```
Require: An arm's MDP \(\mathcal{M}^{\dagger}\), costs \(c_{j}\), state \(s\), and decoupled indices \(\lambda^{*}_{j}(s)\).
1: for \(j=1\) to \(M\) do
2:   \(\mathbf{\lambda}_{j}=\lambda^{*}_{j}(s)\)   {init \(\mathbf{\lambda}\)}
3: for \(j=1\) to \(M\) do
4:   Compute \(\lambda^{adj,*}_{j,\mathbf{\lambda}_{-j}}(s)\)   {via binary search on Eq. 8}
5: return \(\lambda^{adj,*}_{j,\mathbf{\lambda}_{-j}}(s)\) for all workers \(j\in[M]\)
```
**Algorithm 1** Adjusted Index Computation

We use a binary search procedure to compute the adjusted indices, since \(V^{\dagger}_{\mathbf{\lambda}_{-j}}(s,\lambda_{j},j)\) is convex in \(\lambda_{j}\). The most important consideration of the adjusted index computation is how to set the charges \(\lambda_{j^{\prime}}\) of the other action types \(j^{\prime}\) when computing the index for action \(j\). We show that a reasonable choice for \(\lambda_{j^{\prime}}\) is the decoupled Whittle index \(\lambda^{*}_{j^{\prime}}(s)\), which was pre-computed using Alg. 3. The intuition is that \(\lambda^{*}_{j^{\prime}}(s)\) provides a _lower bound_ on how valuable the given action \(j^{\prime}\) is, since it was computed against no-action in the restricted two-action MDP. In Observation 1 and Theorem 2, we describe the problem's structure to motivate these choices. The following observation explicitly connects decoupled indices and adjusted indices.

**Observation 1**.: _For each worker \(j\), if \(\mathbf{\lambda}_{-j}\rightarrow\infty\), i.e., \(\lambda_{j^{\prime}}\rightarrow\infty\) \(\forall j^{\prime}\neq j\), then \(\lambda^{adj,*}_{j,\mathbf{\lambda}_{-j}}\rightarrow\lambda^{*}_{j}\)._

This can be seen by considering the rewards \(R^{\dagger}_{\mathbf{\lambda}}(s,j^{\prime})=R(s)-\lambda_{j^{\prime}}c_{j^{\prime}}\) for taking action \(j^{\prime}\) in any state \(s\).
As the charge \(\lambda_{j^{\prime}}\rightarrow\infty\), \(R^{\dagger}_{\mathbf{\lambda}}(s,j^{\prime})\rightarrow-\infty\), making it undesirable to take action \(j^{\prime}\) in the optimal policy. Thus, the optimal policy would only consider actions \(\{0,j\}\), which reduces to the restricted MDP of the decoupled index computation. Next we analyze a potential naive choice for \(\mathbf{\lambda}_{-j}\) when computing the indices for each \(j\), namely, \(\mathbf{\lambda}_{-j}=0\). Though it may seem a natural heuristic, this corresponds to planning _without considering the costs of other actions_, which we show can lead to arbitrarily low values of the indices, which subsequently can lead to poorly performing policies.

**Theorem 2**.: _As \(\lambda_{j^{\prime}}\to 0\) \(\forall j^{\prime}\neq j\), \(\lambda^{adj,*}_{j}\) will monotonically decrease, if (1) \(V^{\dagger}_{\lambda_{j^{\prime}}}(s,\lambda_{j},j^{\prime})\geq V^{\dagger}_{\lambda_{j^{\prime}}}(s,\lambda_{j},0)\) for \(0\leq\lambda_{j^{\prime}}\leq\epsilon\), and (2) the average cost of worker \(j^{\prime}\) under the optimal policy starting with action \(j^{\prime}\) is greater than the average cost of worker \(j^{\prime}\) under the optimal policy starting with action \(j\)._

Thm. 2 (proof in Appendix B) confirms that, although setting \(\lambda_{j^{\prime}}=0\) for all \(j^{\prime}\) may seem like a natural option, in many cases it will artificially reduce the index value for action \(j\). This is because \(\lambda_{j^{\prime}}=0\) corresponds to planning as if action \(j^{\prime}\) comes with _no charge_. Naturally then, as we try to determine the _non-zero_ charge \(\lambda_{j}\) we are willing to pay for action \(j\), i.e., the index of action \(j\), _we will be less willing to pay higher charges, since there are free actions \(j^{\prime}\)_.

Figure 1: Specialist domain, where specific actions are required in each state to advance to the reward-giving state. Decoupled indices lead to sub-optimal policies, whereas adjusted indices perform well.

Note that conditions (1) and (2) of the above theorem are not restrictive. The first is a common epsilon-neighborhood condition, which requires that value functions do not change in arbitrarily non-smooth ways for \(\lambda\) values near 0. The second requires that a policy's accumulated costs of action \(j^{\prime}\) are greater when starting with action \(j^{\prime}\) than when starting from any other action; this is the same as assuming that the MDPs do not have arbitrarily long mixing times. That is to say, Thm. 2 applies to a wide range of problems that we care about. The key question then is: what are reasonable values of the charges for the other actions \(\mathbf{\lambda}_{-j}\) when computing the index for action \(j\)? We propose that a good choice is to set each \(\lambda_{j^{\prime}}\in\mathbf{\lambda}_{-j}\) to its corresponding decoupled index value for the current state, i.e., \(\lambda_{j^{\prime}}^{*}(s)\). The reason relies on the following key idea: we know that at charge \(\lambda_{j^{\prime}}^{*}(s)\), the optimal policy is indifferent between choosing action \(j^{\prime}\) and the passive action, at least when \(j^{\prime}\) is the only action available. Now, assume we are computing the new adjusted index for action \(j\), when combined in planning with the aforementioned action \(j^{\prime}\) at charge \(\lambda_{j^{\prime}}^{*}(s)\).
Since the charge for \(j^{\prime}\) is already set at a level that makes the planner indifferent between \(j^{\prime}\) and being passive, if adding \(j^{\prime}\) to the planning space with \(j\) does not provide any additional benefit over the passive action, _then the new adjusted index for \(j\) will be the same as the decoupled index for \(j\), which only planned with \(j\) and the passive action_. This avoids the undesirable effect of getting artificially reduced indices due to under-charging for other actions \(j^{\prime}\), i.e., Thm. 2. The same reasoning determines whether the adjusted index for \(j\) should increase or decrease relative to its decoupled index value. I.e., if _higher_ reward can be achieved when planning with \(j\) and \(j^{\prime}\) together compared to planning with either action alone, as in the specialist anti-poaching example, then we become _more willing to pay a charge \(\lambda_{j}\)_ now to help reach states where the action \(j^{\prime}\) will let us achieve that higher reward. On the other hand, if \(j^{\prime}\) dominates \(j\) in terms of intervention effect, then even at a reasonable charge for \(j^{\prime}\) we will be less willing to pay for action \(j\) when both options are available, and so the adjusted index will decrease. We give our new _adjusted index algorithm_ in Alg. 1, and provide experimental results demonstrating its effectiveness.

### Allocation Algorithm

We provide a method called _Balanced Allocation_ (Alg. 2) to tackle the problem of allocating intervention tasks to each worker in a balanced way. At each time step, given the current states of all the arms \(\{s_{i}^{t}\}_{i\in[N]}\), Alg. 2 creates an ordered list \(\sigma\) of the workers based on their highest Whittle indices \(\max_{i}\lambda_{ij}(s_{i}^{t})\). It then allocates the best possible (in terms of Whittle indices) available arm to each worker according to the order \(\sigma\) in a round-robin fashion (allocate one arm to a worker and move on to the next worker until the stopping criterion is met). Note that this satisfies the constraint that the same arm cannot be allocated to more than one worker. In situations where the best possible available arm would lead to a violation of the budget \(B\), an attempt is made to allocate the next best arm. This process is repeated until there are no more arms left to be allocated. If no available arm can be allocated to a worker \(j\) because of budget violation, then worker \(j\) is removed from the future round-robin allocations and keeps the arms already in its bundle \(D_{j}\). Thus, the budget constraints are always satisfied. Moreover, in the simple setting when the costs and transition probabilities of all workers are equal, this heuristic obtains optimal reward and perfect fairness.

```
Require: Current states of each arm \(\{s_{i}\}_{i\in[N]}\), index values \(\lambda_{ij}(s_{i})\) for each arm-worker pair \((i,j)\), costs \(\{c_{ij}\}\), budget \(B\)
Ensure: Balanced allocation \(\{D_{j}\}_{j\in[M]}\) where \(D_{j}\subseteq[N]\), \(D_{j}\cap D_{j^{\prime}}=\emptyset\) \(\forall j,j^{\prime}\in[M]\).
1: Initialize allocation \(D_{j}\leftarrow\emptyset\) for all \(j\in[M]\)
2: Let \(L\leftarrow\{1,\dots,N\}\) be the set of all unallocated arms
3: while true do
4:   Let \(\tau_{j}\) be the ordering of the \(\lambda_{ij}\) values for worker \(j\) from highest to lowest: \(\lambda[\tau_{j}[1]][j]\geq\dots\geq\lambda[\tau_{j}[N]][j]\geq 0\)
5:   Let \(\sigma\) be the ordering of the workers based on their highest indices: \(\lambda[\tau_{1}[1]][1]\geq\lambda[\tau_{2}[1]][2]\) and so on
6:   for \(j=1\) to \(M\) do
7:     if \(\tau_{\sigma_{j}}\cap L\neq\emptyset\) then
8:       \(x\leftarrow\operatorname{top}(\tau_{\sigma_{j}}\cap L)\)
9:       while \(c_{x\sigma_{j}}+\sum_{h\in D_{\sigma_{j}}}c_{h\sigma_{j}}>B\) do
10:        \(\tau_{\sigma_{j}}\leftarrow\tau_{\sigma_{j}}\setminus\{x\}\)
11:        if \(\tau_{\sigma_{j}}\cap L=\emptyset\) then
12:          break
13:        else
14:          \(x\leftarrow\operatorname{top}(\tau_{\sigma_{j}}\cap L)\)
15:      if \(\tau_{\sigma_{j}}\cap L\neq\emptyset\) then
16:        \(D_{\sigma_{j}}\gets D_{\sigma_{j}}\cup\{x\}\);  \(L\gets L\setminus\{x\}\);  \(\tau_{\sigma_{j}}\leftarrow\tau_{\sigma_{j}}\setminus\{x\}\)
17: return \(\{D_{j}\}_{j\in[M]}\)
```
**Algorithm 2** Balanced Allocation

**Theorem 3** ().: _When all workers are homogeneous (same costs and same transition probabilities on arms after intervention) and the arms satisfy indexability, our framework outputs the optimal policy while being exactly fair to the workers._

The proof consists of two components: (1) optimality, which can be proved using Corollary 1 (Whittle indices for homogeneous workers are the same) and the fact that the same costs lead to considering all workers from the same pool of actions, and (2) perfect fairness, using the fact that, when costs are equal, Step 3 of our framework divides the arms among workers in a way such that the number of allocations to any two workers differs by at most 1. First we define the technical condition, called _indexability_, under which choosing the top arms according to Whittle indices yields an optimal RMAB solution.

**Definition 1** ().: _Let \(\Phi(\lambda)\) be the set of all states for which it is optimal to take the passive action over the active action when the active action incurs a charge of \(\lambda\). An arm is called indexable if \(\Phi(\lambda)\) monotonically increases from \(\emptyset\) to \(S_{i}\) as \(\lambda\) increases from \(-\infty\) to \(+\infty\). An RMAB problem is indexable if all the arms are indexable._

Proof.: Consider an MWRMAB problem instance with \(N\) arms, \(M\) homogeneous workers with cost \(c\), and per-worker per-round budget \(B\). Upon relaxing the per-worker budget constraint, this MWRMAB problem reduces to an RMAB instance with \(N\) arms, 2 actions (an _intervention_ action with cost 1 or a _no-intervention_ action with cost 0), and a total per-round budget of \(M\lfloor B/c\rfloor\). Under the _indexability_ assumption, this problem can be solved using the Whittle index policy (Mohammad and Delfton, 2010), which selects the \(M\lfloor B/c\rfloor\) arms with the highest Whittle indices \(\lambda_{i}(s)\). Allocating the selected arms among all the workers using our algorithm ensures two properties:

* _The per-worker budget \(B\) is met:_ The total cost incurred to intervene on the \(M\lfloor B/c\rfloor\) selected arms of the RMAB solution is \(cM\lfloor B/c\rfloor\). However, \[cM\lfloor B/c\rfloor\leq cMB/c=MB.\] Allocating these indivisible arms equally among all the workers ensures that each worker incurs at most a cost of \(B\).
* _Perfect fairness is achieved:_ When \(N\geq M\lfloor B/c\rfloor\), our algorithm distributes the \(M\lfloor B/c\rfloor\) arms among the \(M\) workers such that each worker receives exactly \(\lfloor B/c\rfloor\) interventions. In the case when \(N<M\lfloor B/c\rfloor\), our algorithm allocates \(\lfloor N/M\rfloor+1\) arms to each of the first \((N-\lfloor N/M\rfloor M)\) workers, and \(\lfloor N/M\rfloor\) arms to the rest of the workers. Thus, the difference between the allocations of any two workers in any round is at most \(1\), implying that the difference between the costs incurred is at most \(c\). This satisfies our fairness criterion. This completes the proof.

## 5. Empirical Evaluation

We evaluate our framework on three domains, namely **constant unitary costs**, **ordered workers**, and the **specialist domain**, each highlighting a different challenging dimension of the MWRMAB problem (detailed in Appendix C). In the first domain, the cost associated with all worker-arm pairs is the same, but transition probabilities differ; the main challenge is in finding optimal assignments, though fairness is still considered. In the second domain, there exists an ordering among the workers such that the highest (or lowest) ranked worker has the highest (or lowest) probability of transitioning any arm to the "good" state, making it challenging to balance optimal assignments with _fair_ assignments. The final domain highlights the need to consider inter-action effects via Step 2. We run experiments by varying the number of arms in each domain. For the first and third domains, which use unit costs, we use a per-worker budget of \(B=4\), and for the second domain, where costs are in the range \([1,10]\), we use budget \(B=18\). We ran all the experiments on an Apple M1 with a 3.2 GHz processor and 16 GB RAM. We evaluate the average reward per arm over a fixed time horizon of 100 steps, averaged over 50 epochs with random or fixed transition probabilities that follow the characteristics of each domain.

_Baselines._ We compare our approach, **CWI+BA** (Combined Whittle Index with Balanced Allocation), against:

* **PWI+BA** (Per arm-worker Whittle Index with Balanced Allocation), which combines Steps 1 and 3 of our approach, skipping Step 2 (the adjusted index algorithm)
* **CWI+GA** (Combined arm-worker Whittle Index with Greedy Allocation), which combines Steps 1 and 2 and, instead of Step 3 (balanced allocation), uses the highest index values to allocate arms to workers while ensuring per-timestep budget feasibility
* **Hawkins** (2003), which solves a discounted version of Eq. 2 without the fairness constraint to compute the values of \(\lambda_{j}\), then solves a knapsack over the \(\lambda_{j}\)-adjusted Q-values
* **OPT**, which computes optimal solutions by running value iteration over the combinatorially-sized exact problem (1) without the fairness constraint.
* **OPT-fair**, which follows OPT but adds the fairness constraints. These optimal algorithms are exponential in the number of arms, states, and workers, and thus could only be executed on small instances.
* **Random**, which takes random actions \(j\in[M]\cup\{0\}\) on every arm while maintaining budget feasibility for every worker at each timestep

_Results._ Figure 2 shows that the reward obtained using our framework (CWI+BA) is comparable to that of the reward-maximizing baselines (Hawkins and OPT) across all the domains. We observe at most an 18.95% reduction in reward compared to OPT, where the highest reduction occurs for ordered workers in Fig. 2(b). In terms of fairness, Figs.
2(a) and (c) show that CWI+BA achieves a fair allocation among workers at all timesteps. In Figure 2(b), CWI+BA achieves a fair allocation in almost all timesteps. The fraction of timesteps in which fairness is attained by CWI+BA is significantly higher than for Hawkins and OPT. We found an interesting corner case for the ordered-workers instances with heterogeneous costs where fairness was not attained (mainly because \(N\) was not large enough compared to the budget). The instance had \(N=50\), \(B=40\), and \(M=3\). The worker costs were as follows: W1's cost for all arms was 1, W2's cost was 5, and W3's cost was 5. After 8 rounds of BA, all workers had been allocated 8 arms, and W2's and W3's budgets of 40 were exhausted. There were only 26 arms left to be allocated, and all of them were allocated to W1. In the end, W1 incurred a cost of 34 while W2 and W3 incurred a cost of 40 each. Thus, the fairness gap between W1 and the other two workers is 1 more than \(c_{max}=5\). Assuming costs are drawn from \([1,10]\), the probability of encountering this instance is vanishingly small. Fig. 2(b) also shows that Hawkins obtains _unfair_ solutions at every timestep (0 fairness) when N=5 and B=18, and, when N=10 and N=15, Hawkins is fair in only 0.41 and 0.67 fractions of the timesteps, respectively. **Thus, compared to the reward-maximizing baselines (Hawkins and OPT), CWI+BA achieves the highest fairness.** We also compare against two versions of our solution approach, namely PWI+BA and CWI+GA. We observe that PWI+BA accumulates marginally lower reward, while CWI+GA performs poorly in terms of fairness, confirming the importance of using CWI+BA for the MWRMAB problem. Fig. 3 shows that **CWI+BA is significantly faster than OPT-fair** (the optimal MWRMAB solution), with an execution time improvement of 33%, 78% and 83% for the three domains, respectively, when N=5. Moreover, for instances with N=10 onwards, both OPT and OPT-fair ran out of memory because the optimal algorithms require exponentially more memory. However, we observe that CWI+BA scales well even for \(N=10\) and \(N=15\) and runs within a few seconds, on average. Fig. 4 further demonstrates that **CWI+BA scales well** and consistently outputs fair solutions for higher values of \(N\) and \(B\). On larger instances, with \(N\in\{50,100,150\}\), our approach achieves up to a 374.92% improvement in fairness with only a 6.06% reduction in reward when compared against the reward-maximizing solution (Beng et al., 2019). In summary, CWI+BA is fairer than the reward-maximizing algorithms (Hawkins and OPT) and much faster and more scalable than the optimal fair solution (OPT-fair), while accumulating reward comparable to Hawkins and OPT across all domains. Therefore, CWI+BA is shown to be a fair and efficient solution for the MWRMAB problem.

Figure 4: The plot shows mean reward (left), fairness (middle), and run time (right) for \(N=50,100,150\) arms on the constant unitary costs domain. CWI+BA scales well for larger instances, and even for N=150 arms the average runtime is 10 seconds.

Figure 3: Execution time averaged over 50 epochs for \(N=5,10,15\). For a fixed time horizon of 100 steps, CWI+BA runs faster than Hawkins (white), OPT (dark gray), and OPT-fair (light gray) for all instances in each of the three domains evaluated.

Figure 2: Mean reward (top row) and fraction of time steps with fair allocation (bottom row) for \(N=5,10,15\) arms.
CWI+BA (blue) achieves a higher fraction of fair allocations than the Hawkins (white) algorithm while attaining reward similar to that of the reward-maximizing baselines.

## 6. Conclusion

We are the first to introduce the multi-worker restless multi-armed bandit (MWRMAB) problem with worker-centric fairness. Our approach provides a scalable solution for the computationally hard MWRMAB problem. On comparing our approach against the (non-scalable) optimal fair policy on smaller instances, we find nearly identical reward and fairness. Note that, assuming heterogeneous workers, an optimal solution (with indices computed via Step 2) would require solving a general version of the multiple knapsack problem, with \(m\) knapsacks (each denoting a worker with some capacity) and \(n\) items (each having a value and a cost, both of which vary depending on the knapsack to which the item is assigned). There is no known approximation algorithm with provable guarantees for this general version of the multiple knapsack problem in the literature. In addition to this challenging generalized multiple knapsack problem, in this work we aim at finding a fair (balanced) allocation across all the knapsacks. The theoretical analysis of an approximation bound for the problem of balanced allocation with heterogeneous workers remains open. In summary, the multi-worker restless multi-armed bandit problem formulation provides a more general model for the intervention planning problem, capturing the heterogeneity of intervention resources, and is thus useful for appropriately modeling real-world domains such as anti-poaching patrolling and machine maintenance, where the interventions are provided by a human workforce.

###### Acknowledgements.

A. Biswas gratefully acknowledges the support of the Harvard Center for Research on Computation and Society (CRCS). J.A. Killian was supported by a National Science Foundation (NSF) Graduate Research Fellowship under grant DGE1745303. P. Rodriguez Diaz was supported by the NSF under grant IIS-1750358. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of NSF.
2306.12543
Matroid lifts and representability
A 1965 result of Crapo shows that every elementary lift of a matroid $M$ can be constructed from a linear class of circuits of $M$. In a recent paper, Walsh generalized this construction by defining a rank-$k$ lift of a matroid $M$ given a rank-$k$ matroid $N$ on the set of circuits of $M$, and conjectured that all matroid lifts can be obtained in this way. In this sequel paper we simplify Walsh's construction and show that this conjecture is true for representable matroids but is false in general. This gives a new way to certify that a particular matroid is non-representable, which we use to construct new classes of non-representable matroids. Walsh also applied the new matroid lift construction to gain graphs over the additive group of a non-prime finite field, generalizing a construction of Zaslavsky for these special groups. He conjectured that this construction is possible on three or more vertices only for the additive group of a non-prime finite field. We show that this conjecture holds for four or more vertices, but fails for exactly three.
Daniel Irving Bernstein, Zach Walsh
2023-06-21T20:16:12Z
http://arxiv.org/abs/2306.12543v1
# Matroid lifts and representability ###### Abstract A 1965 result of Crapo shows that every elementary lift of a matroid \(M\) can be constructed from a linear class of circuits of \(M\). In a recent paper, Walsh generalized this construction by defining a rank-\(k\) lift of a matroid \(M\) given a rank-\(k\) matroid \(N\) on the set of circuits of \(M\), and conjectured that all matroid lifts can be obtained in this way. In this sequel paper we simplify Walsh's construction and show that this conjecture is true for representable matroids but is false in general. This gives a new way to certify that a particular matroid is non-representable, which we use to construct new classes of non-representable matroids. Walsh also applied the new matroid lift construction to gain graphs over the additive group of a non-prime finite field, generalizing a construction of Zaslavsky for these special groups. He conjectured that this construction is possible on three or more vertices only for the additive group of a non-prime finite field. We show that this conjecture holds for four or more vertices, but fails for exactly three. ## 1 Introduction and preliminaries Given matroids \(M\) and \(L\) on a common ground set \(E\), \(L\) is a _lift_ of \(M\) if there exists a matroid \(K\) on ground set \(E\cup F\) such that \(M=K/F\) and \(L=K\setminus F\). If \(L\) is a lift of \(M\), then the rank of \(L\) is at least the rank of \(M\). We say that \(L\) is a _rank-\(k\) lift_ of \(M\) if the rank of \(L\) is \(k\) greater than that of \(M\). Rank-\(1\) lifts, called _elementary lifts_, are well understood. Indeed, a classical theorem of Brylawski [2], which was previously stated in the dual by Crapo [3], says that the elementary lifts of a matroid \(M\) are in bijection with the set of _linear classes of circuits of \(M\)_, where a linear class is a set \(\mathcal{C}\) of circuits satisfying the following: \[\begin{array}{c}\mbox{if }C_{1},C_{2}\in\mathcal{C}\mbox{ and }|C_{1}\cup C_{2}|-r_{M}(C_{1}\cup C_{2})=2,\\ \mbox{then each circuit }C\mbox{ of }M\mbox{ contained in }C_{1}\cup C_{2}\mbox{ is also in }\mathcal{C}.\end{array}\] We can state this bijection between linear classes of circuits and elementary lifts as follows. **Theorem 1** ([2, 3]).: _Let \(M\) be a matroid on ground set \(E\) and let \(\mathcal{C}\) be a linear class of circuits of \(M\). Then the function \(r_{M^{\prime}}\colon 2^{E}\to\mathbb{Z}\) defined, for all \(X\subseteq E\), by_ \[r_{M^{\prime}}(X)=\begin{cases}r_{M}(X)&\text{ if each circuit of $M|X$ is in $\mathcal{C}$},\\ r_{M}(X)+1&\text{ otherwise}\end{cases}\] _is the rank function of an elementary lift \(M^{\prime}\) of \(M\). Moreover, every elementary lift of \(M\) can be obtained in this way._ This raises the question of whether a similar characterization of higher-rank lifts is possible. Walsh [12, Theorem 2] described a procedure that constructs a rank-\(k\) lift of a matroid \(M\) from a rank-\(k\) matroid \(N\) on the circuit set of \(M\). For this construction to work, \(N\) must satisfy a particular constraint. When \(k=1\), this constraint precisely says that the loops of \(N\) are a linear class of circuits of \(M\), so Walsh's construction generalizes Theorem 1. Our first main result, stated below using the notation \(\operatorname{cl}_{N}\) for the closure operator of a matroid \(N\), is a simplification of Walsh's original construction in which we only require \(N\) to satisfy a condition concerning pairs of circuits of \(M\). 
**Theorem 2**.: _Let \(M\) be a matroid on ground set \(E\) and let \(N\) be a matroid whose ground set is the circuit set of \(M\). Assume that if \(C_{1},C_{2}\) are circuits of \(M\) for which \(|C_{1}\cup C_{2}|-r_{M}(C_{1}\cup C_{2})=2\), then each circuit \(C\) of \(M\) contained in \(C_{1}\cup C_{2}\) satisfies \(C\in\operatorname{cl}_{N}(\{C_{1},C_{2}\})\). Then the function \(r:2^{E}\to\mathbb{Z}\) defined, for all \(X\subseteq E\), by_ \[r(X)=r_{M}(X)+r_{N}(\{C\colon C\text{ is a circuit of $M|X$}\})\] _is the rank function of a rank-\(r(N)\) lift of \(M\)._ Given a matroid \(M\) and another matroid \(N\) on the circuits of \(M\) satisfying the hypothesis of Theorem 2, we write \(M^{N}\) for the lift constructed in Theorem 2. There are natural choices for a matroid \(N\) satisfying the hypothesis of Theorem 2, such as the derived matroids [11, 10, 7], matroids from gain graphs over certain groups [12], and rank-2 uniform matroids. It was conjectured in [12] that every lift of \(M\) is isomorphic to \(M^{N}\) for some matroid \(N\) on the circuits of \(M\). We prove that this is true for representable matroids but false in general. We use these facts to derive a new certificate for non-representability, which we then use to generate new families of non-representable matroids. In particular, the following is our second main result. **Theorem 3**.: _For each integer \(r\geq 5\) there is a rank-\(r\) non-representable sparse paving matroid \(K\) with a two-element set \(X\) so that there is no matroid \(N\) on the set of circuits of \(K/X\) for which \((K/X)^{N}\cong K\backslash X\)._ This family of matroids may be of independent interest: they form an infinite antichain of non-representable sparse paving matroids that do not violate Ingleton's inequality. Part of the motivation of [12] was to generalize Zaslavsky's application of Theorem 1 to gain graphs. A _gain graph_ is a pair \((G,\phi)\) where \(G\) is a graph and \(\phi\) is a _gain function_ that orientably labels the edges of \(G\) by elements of a group \(\Gamma\). Zaslavsky [15] famously applied Theorem 1 to gain graphs by showing that for each gain function on a graph \(G\) one can naturally construct a linear class \(\mathcal{B}\) of circuits of \(M(G)\), the graphic matroid of \(G\). The circuits in \(\mathcal{B}\) are the _balanced_ cycles of \(G\) with respect to the gain function, and the pair \((G,\mathcal{B})\) is a _biased graph_. The elementary lift \(M(G,\phi)\) of \(M(G)\) obtained from applying Theorem 1 with the linear class \(\mathcal{B}\) is the _lift matroid_ of \((G,\mathcal{B})\), and it follows from Theorem 1 that a cycle of \(G\) is a circuit of \(M(G,\phi)\) if and only if it is balanced. We direct the reader to [4, 12, 15] for more background on gain graphs and lift matroids. For certain groups Walsh [12] generalized this construction by defining a matroid \(N\) on the circuits of \(M(G)\) that satisfies the hypothesis of Theorem 2. To avoid technicalities, we only consider the _full \(\Gamma\)-gain graph_ \((K_{n}^{\Gamma},\phi_{n}^{\Gamma})\) for a finite group \(\Gamma\), where \(K_{n}^{\Gamma}\) has vertex set \([n]\) and edge set \(\binom{[n]}{2}\times\Gamma\), and the gain function \(\phi_{n}^{\Gamma}\) orients edge \((\{i,j\},\gamma)\) from \(i\) to \(j\) when \(i<j\) and assigns the label \(\gamma\). The previous version of Theorem 2 was applied to gain graphs to get the following, where \(\mathbb{Z}_{p}^{j}\) denotes the direct sum of \(j\) copies of the cyclic group of order \(p\).
**Theorem 4** ([12, Theorem 3]).: _Let \(p\) be a prime, and let \(n\geq 3\) and \(j\geq 2\) be integers. For each integer \(i\) with \(1\leq i\leq j\), there is a rank-\(i\) lift \(M\) of \(M(K_{n}^{\mathbb{Z}_{p}^{j}})\) so that a cycle of \(K_{n}^{\mathbb{Z}_{p}^{j}}\) is a circuit of \(M\) if and only if it is balanced._ Surprisingly, it was shown that for finite abelian groups such a construction is only possible for groups of this form; namely, the additive group of a non-prime finite field. **Theorem 5** ([12, Theorem 4]).: _Let \(\Gamma\) be a nontrivial finite abelian group, and let \(n\geq 3\) be an integer. Let \(M\) be a lift of \(M(K_{n}^{\Gamma})\) so that a cycle of \(K_{n}^{\Gamma}\) is a circuit of \(M\) if and only if it is balanced. Then either \(\Gamma\cong\mathbb{Z}_{p}^{j}\) for some prime \(p\) and integer \(j\geq 2\), or \(M\) is an elementary lift of \(M(K_{n}^{\Gamma})\)._ It was conjectured in [12, Conj. 25] that this holds more generally for every nontrivial finite group. For our third and final main result we show that this conjecture is true when \(n\geq 4\) and is false when \(n=3\). A _group partition_ of a group \(\Gamma\) is a partition of the non-identity elements of \(\Gamma\) into sets \(A_{1},\ldots,A_{k}\) such that each \(A_{i}\cup\{\epsilon\}\) is a subgroup of \(\Gamma\) for all \(i\in[k]\), where \(\epsilon\) is the identity element of \(\Gamma\). The partition is _nontrivial_ if it has more than one part. **Theorem 6**.: _Let \(\Gamma\) be a nontrivial finite group and let \(n\geq 3\). Let \(\mathcal{M}_{n,\Gamma}\) be the class of lifts \(M\) of \(M(K_{n}^{\Gamma})\) so that a cycle of \(K_{n}^{\Gamma}\) is a circuit of \(M\) if and only if it is balanced. Then \(\mathcal{M}_{n,\Gamma}\) contains a non-elementary lift in precisely the following cases:_ 1. \(n=3\) _and_ \(\Gamma\) _has a nontrivial partition, or_ 2. \(n\geq 4\) _and_ \(\Gamma=\mathbb{Z}_{p}^{j}\) _for some prime_ \(p\) _and_ \(j\geq 2\) We prove Theorem 2 in Section 2 and Theorem 3 in Section 3. One direction of Theorem 6 was given in [12] as Lemma 21; we prove the converse direction in Section 4. We follow the notation and terminology of Oxley [9]. ## 2 A simplified construction We now state the original construction from [12] and then show that it is equivalent to Theorem 2. For a collection \(\mathcal{X}\) of sets we write \(\cup\mathcal{X}\) for \(\cup_{X\in\mathcal{X}}X\). A collection \(\mathcal{C}^{\prime}\) of circuits of a matroid \(M\) is _perfect_ if \(|\cup\mathcal{C}^{\prime}|-r_{M}(\cup\mathcal{C}^{\prime})=|\mathcal{C}^{\prime}|\) and no circuit in \(\mathcal{C}^{\prime}\) is contained in the union of the others. Equivalently, \(\mathcal{C}^{\prime}\) is contained in the collection of fundamental circuits with respect to a basis of \(M\), because the set obtained from \(\cup\mathcal{C}^{\prime}\) by deleting one element from each circuit that is not in any other circuit in \(\mathcal{C}^{\prime}\) is independent in \(M\). Note that a pair \(\{C_{1},C_{2}\}\) is perfect if and only if \(|C_{1}\cup C_{2}|-r_{M}(C_{1}\cup C_{2})=2\); in this case, we say that \(\{C_{1},C_{2}\}\) is a _modular pair_. In order to prove Theorem 2, we need to recall the lift construction from [12]. 
**Theorem 7** ([12, Theorem 2]).: _Let \(M\) be a matroid on ground set \(E\), and let \(N\) be a matroid on the set of circuits of \(M\) so that_

* \((7*)\) _if_ \(\mathcal{C}^{\prime}\) _is a perfect collection of circuits of_ \(M\)_, then each circuit_ \(C\) _of_ \(M\) _contained in_ \(\cup\mathcal{C}^{\prime}\) _satisfies_ \(C\in\operatorname{cl}_{N}(\mathcal{C}^{\prime})\)_._

_Then the function \(r:2^{E}\to\mathbb{Z}\) defined, for all \(X\subseteq E\), by_ \[r(X)=r_{M}(X)+r_{N}(\{C\colon C\text{ is a circuit of }M|X\})\] _is the rank function of a rank-\(r(N)\) lift of \(M\)._ Theorem 2 follows from Theorem 7 and the following proposition, which shows that \((7*)\) and the hypothesis from Theorem 2 are equivalent. **Proposition 8**.: _Let \(M\) be a matroid and let \(N\) be a matroid on the circuits of \(M\). The following are equivalent_

* \((*^{\prime})\) _For every modular pair_ \(\{C_{1},C_{2}\}\) _of circuits of_ \(M\)_, each circuit_ \(C\) _of_ \(M\) _contained in_ \(C_{1}\cup C_{2}\) _satisfies_ \(C\in\operatorname{cl}_{N}(\{C_{1},C_{2}\})\)_._
* \((*)\) _For every perfect collection_ \(\mathcal{C}^{\prime}\) _of circuits of_ \(M\)_, each circuit_ \(C\) _of_ \(M\) _contained in_ \(\cup\mathcal{C}^{\prime}\) _satisfies_ \(C\in\operatorname{cl}_{N}(\mathcal{C}^{\prime})\)_._

Proof.: The implication \((*)\implies(*^{\prime})\) is immediate. We now show \((*^{\prime})\implies(*)\). Assume \((*^{\prime})\) and let \(\mathcal{C}^{\prime}\) be minimal so that \((*)\) is false for \(\mathcal{C}^{\prime}\). Then \(|\mathcal{C}^{\prime}|>1\). Let \(C\) be a circuit of \(M\) contained in \(\cup\mathcal{C}^{\prime}\). Every subset of \(\mathcal{C}^{\prime}\) is a perfect collection of circuits of \(M\); we will make repeated tacit use of this fact. If there is some \(C^{\prime}\in\mathcal{C}^{\prime}\) so that \(C\) is contained in \(\cup(\mathcal{C}^{\prime}-\{C^{\prime}\})\), then \(C\in\operatorname{cl}_{N}(\mathcal{C}^{\prime}-\{C^{\prime}\})\) by the minimality of \(\mathcal{C}^{\prime}\), and therefore \(C\in\operatorname{cl}_{N}(\mathcal{C}^{\prime})\) as desired. So for each \(C^{\prime}\in{\cal C}^{\prime}\), there is at least one element in \(C^{\prime}\cap C\) that is not in any other circuit in \({\cal C}^{\prime}\). Let \(X\) be a transversal of \(\{(C^{\prime}\cap C)-\cup({\cal C}^{\prime}-\{C^{\prime}\})\colon C^{\prime}\in{\cal C}^{\prime}\}\), and note that \(X\subseteq C\). Then
\[|(\cup{\cal C}^{\prime})-X|=|\cup{\cal C}^{\prime}|-|X|\tag{1}\]
\[=r_{M}(\cup{\cal C}^{\prime})+|{\cal C}^{\prime}|-|X|\tag{2}\]
\[=r_{M}(\cup{\cal C}^{\prime})\tag{3}\]
\[=r_{M}((\cup{\cal C}^{\prime})-X),\tag{4}\]
so \((\cup{\cal C}^{\prime})-X\) is independent in \(M\). Line (2) holds because \({\cal C}^{\prime}\) is perfect, line (3) holds because \(|X|=|{\cal C}^{\prime}|\), and line (4) holds because each element in \(X\) is in a unique circuit of \(M|(\cup{\cal C}^{\prime})\), so \(X\subseteq{\rm cl}_{M}((\cup{\cal C}^{\prime})-X)\). We claim that \(M|(\cup{\cal C}^{\prime})\) is connected. Otherwise, \(C\) is contained in the union of a proper subset \({\cal C}^{\prime\prime}\) of \({\cal C}^{\prime}\), since each circuit of a matroid is contained in a single connected component. Minimality of \({\cal C}^{\prime}\) then implies \(C\in{\rm cl}_{N}({\cal C}^{\prime\prime})\subseteq{\rm cl}_{N}({\cal C}^{\prime})\).
Since \(M|(\cup{\cal C}^{\prime})\) is connected and has corank at least two, there is a circuit \(C_{0}\) of \(M|(\cup{\cal C}^{\prime})\) so that \(|C\cup C_{0}|-r_{M}(C\cup C_{0})=2\) and \(C\cap C_{0}\neq\varnothing\). Let \(x_{1}\in X\). Then the matroid \(M|((C\cup C_{0})-x_{1})\) has corank one and thus contains a unique circuit \(C_{1}\) (where \(C_{1}=C_{0}\) if \(x_{1}\notin C_{0}\)). The circuit \(C_{1}\) contains some \(x_{2}\in X\), since \((\cup{\cal C}^{\prime})-X\) is independent in \(M\). Let \(C_{2}\) be the unique circuit of \(M|((C\cup C_{0})-x_{2})\). The circuits \(C,C_{1},C_{2}\) are distinct because \(x_{1},x_{2}\in C\), while \(C_{1}\) contains \(x_{2}\) but not \(x_{1}\) and \(C_{2}\) does not contain \(x_{2}\). Then \(|C_{1}\cup C_{2}|-r_{M}(C_{1}\cup C_{2})\geq 2\), and since \(|C\cup C_{0}|-r_{M}(C\cup C_{0})=2\), this implies that \(C_{1}\cup C_{2}=C\cup C_{0}\). In particular, \(|C_{1}\cup C_{2}|-r_{M}(C_{1}\cup C_{2})=2\) and \(C\subseteq C_{1}\cup C_{2}\). Then by \((*^{\prime})\) we have \(C\in{\rm cl}_{N}(\{C_{1},C_{2}\})\). We will finish the proof by using the minimality of \({\cal C}^{\prime}\) to show that \(C_{1},C_{2}\in{\rm cl}_{N}({\cal C}^{\prime})\). For each \(i\in\{1,2\}\), let \(C^{\prime}_{i}\) be the unique circuit in \({\cal C}^{\prime}\) that contains \(x_{i}\). No element in \(C^{\prime}_{i}-\cup({\cal C}^{\prime}-\{C^{\prime}_{i}\})\) is in a circuit of \(M|((\cup{\cal C}^{\prime})-x_{i})\), because \({\cal C}^{\prime}-\{C^{\prime}_{i}\}\) is a perfect collection of circuits with \(|\cup({\cal C}^{\prime}-\{C^{\prime}_{i}\})|-r_{M}(\cup({\cal C}^{\prime}-\{C^{\prime}_{i}\}))=|{\cal C}^{\prime}|-1\). Since \(C_{i}\) does not contain \(x_{i}\), this implies that \(C_{i}\subseteq\cup({\cal C}^{\prime}-\{C^{\prime}_{i}\})\). Since \(|{\cal C}^{\prime}-\{C^{\prime}_{i}\}|<|{\cal C}^{\prime}|\), the minimality of \({\cal C}^{\prime}\) implies that \(C_{i}\in{\rm cl}_{N}({\cal C}^{\prime}-\{C^{\prime}_{i}\})\) and is thus in \({\rm cl}_{N}({\cal C}^{\prime})\). Finally, since \(C_{1},C_{2}\in{\rm cl}_{N}({\cal C}^{\prime})\) and \(C\in{\rm cl}_{N}(\{C_{1},C_{2}\})\), it follows that \(C\in{\rm cl}_{N}({\cal C}^{\prime})\). One advantage of \((*^{\prime})\) over \((*)\) is that it is a local condition rather than a global condition, and therefore may be easier to verify for certain choices of \(N\). For example, when \(M\) is graphic it suffices to check condition \((*^{\prime})\) only when \(C_{1}\) and \(C_{2}\) are in a common theta subgraph.

## 3 The converse

It was conjectured in [12, Conj. 1.6] that the converse of Theorem 2 holds: for every matroid \(K\) with a set \(X\), there is a matroid \(N\) on the circuits of \(K/X\) so that \((K/X)^{N}\cong K\setminus X\). We show that this is true if \(K\) is representable but false in general, even when \(|X|=2\). **Proposition 9**.: _Let \(K\) be an \(\mathbb{F}\)-representable matroid and let \(X\) be a subset of its ground set. Then there exists an \(\mathbb{F}\)-representable matroid \(N\) on the circuits of \(K/X\) such that \((K/X)^{N}\cong K\setminus X\)._ Proof.: Denote \(M:=K/X\) and \(L:=K\setminus X\) and let \(E\) denote the ground set of \(M\) and \(L\). Without loss of generality we may assume that \(X\) is independent in \(K\). Therefore there exists a matrix \(A\) whose column-matroid is \(K\) such that the columns corresponding to elements of \(X\) are distinct standard basis vectors.
One obtains an \(\mathbb{F}\)-representation \(A_{L}\) of \(L\) by deleting the columns corresponding to \(X\), and from this an \(\mathbb{F}\)-representation \(A_{M}\) of \(M\) by further deleting the rows in which the columns corresponding to \(X\) have their nonzero entries. For each circuit \(C\) of \(M\), let \(x_{C}\) be an element of the kernel of \(A_{M}\) whose support is \(C\), and let \(B\) be a matrix whose column-set is the following. \[\{A_{L}x_{C}:C\text{ is a circuit of }M\}.\] Let \(N\) be the column matroid of \(B\), which we can view as a matroid whose ground set is the circuit set of \(M\). Let \(C_{1}\) and \(C_{2}\) be circuits of \(M\) with \(|C_{1}\cup C_{2}|-r_{M}(C_{1}\cup C_{2})=2\), and let \(C\) be a circuit of \(M\) with \(C\subseteq C_{1}\cup C_{2}\). Consider the matrix obtained by restricting \(A_{M}\) to the columns indexed by \(C_{1}\cup C_{2}\). It has a two-dimensional kernel with a basis obtained from \(\{x_{C_{1}},x_{C_{2}}\}\) by deleting entries corresponding to elements of \(E\) outside \(C_{1}\cup C_{2}\). Therefore \(x_{C}=\alpha x_{C_{1}}+\beta x_{C_{2}}\) for some \(\alpha,\beta\in\mathbb{F}\) and \(A_{L}x_{C}=\alpha A_{L}x_{C_{1}}+\beta A_{L}x_{C_{2}}\). In particular, \(C\in\operatorname{cl}_{N}(\{C_{1},C_{2}\})\), which allows us to apply the construction given in Theorem 2. It remains to prove that \(L=M^{N}\). Let \(Y\subseteq E\), let \(T\subseteq Y\) be a basis of \(M|Y\), and for each \(e\in Y\setminus T\) let \(C_{e}\) denote the unique circuit of \(M\) in \(T\cup\{e\}\). If \(Y\) is dependent in \(M^{N}\), then \(T\) is a proper subset of \(Y\) and there exists a nontrivial linear dependence of the following form \[\sum_{e\in Y\setminus T}\lambda_{e}A_{L}x_{C_{e}}=0.\] Then \(\sum_{e\in Y\setminus T}\lambda_{e}x_{C_{e}}\) lies in the kernel of \(A_{L}\). It is nonzero since \(x_{C_{e}}\) is zero at all \(f\in Y\setminus(T\cup\{e\})\) and nonzero at \(e\). So \(Y\) is also dependent in \(L\). Now assume \(Y\) is dependent in \(L\) and let \(y\) be a nonzero vector supported on \(Y\) such that \(A_{L}y=0\). Then \(A_{M}y=0\). Define \(k:=|Y|-r_{M}(Y)\) and note that this is the dimension of the kernel of the matrix \(D\) obtained from \(A_{M}\) by restricting to columns corresponding to \(Y\). Since the kernel of every matrix is spanned by its support-minimal elements, there exists a set of circuits \(\{C_{1},\ldots,C_{k}\}\) of \(M\) so that \(\{x_{C_{1}},\ldots,x_{C_{k}}\}\) is a basis of the nullspace of \(D\), modulo adding/removing entries corresponding to elements of \(E\setminus Y\). This gives us scalars \(\lambda_{1},\ldots,\lambda_{k}\) so that \[y=\sum_{i=1}^{k}\lambda_{i}x_{C_{i}}.\] Multiplying both sides of the above on the left by \(A_{L}\) tells us that \(\{C_{1},\ldots,C_{k}\}\) is dependent in \(N\). Since \(\{x_{C_{1}},\ldots,x_{C_{k}}\}\) spans the nullspace of \(D\), \(r_{N}(\{C_{1},\ldots,C_{k}\})<k\). But then \(|Y|>r_{M}(Y)+r_{N}(\{C_{1},\ldots,C_{k}\})\), so \(Y\) is also dependent in \(M^{N}\). If \(M\) and \(L\) are representable over a field \(\mathbb{F}\) and \(L\) is a lift of \(M\), this does _not_ imply that there is an \(\mathbb{F}\)-representable matroid \(K\) so that \(L=K\setminus X\) and \(M=K/X\) for some set \(X\subseteq E(K)\). Consider the following example.
Given nonnegative integers \(r\leq n\), the uniform matroid of rank \(r\) on \(n\) elements is denoted \(U_{r,n}\). Let \(M=U_{1,3}\) and let \(L=U_{2,3}\). Then both \(M\) and \(L\) are representable over \(\mathbb{F}_{2}\). Let \(K\) be a matroid on ground set \(E\cup\{e_{0}\}\). If \(K\setminus e_{0}=L\) and \(K/e_{0}=M\), then \(K=U_{2,4}\), which is not representable over \(\mathbb{F}_{2}\). We next construct an infinite family of matroids for which the converse of Theorem 2 does not hold. For certain values of \(r\) and \(t\), we will define a set of \(r\)-element subsets \(\mathcal{C}(r,t)\) of \([2t+2]\) and then show that they are the circuit-hyperplanes of a matroid of rank \(r\) on ground set \([2t+2]\), which we will denote \(K(r,t)\). We will then show that there is no matroid \(N\) on the circuit set of \(K(r,t)/\{2t+1,2t+2\}\) such that \(K(r,t)\setminus\{2t+1,2t+2\}=(K(r,t)/\{2t+1,2t+2\})^{N}\). Proposition 9 will then imply that \(K(r,t)\) is not representable over any field.

**Definition 10**.: Let \(r\geq 4\) and \(t\geq 3\) be integers satisfying \(r\leq 2t-2\). For \(i=1,\ldots,t\), let \(C_{i}\subseteq[2t]\) be defined as \[C_{i}:=\{1+2(i-1),\ldots,(r-2)+2(i-1)\}\] with indices taken cyclically modulo \(2t\). Then define:

1. \(X:=\{2t+1,2t+2\}\),
2. \(\mathcal{C}^{\prime}(r,t):=\{C_{i}\cup X\colon i\in[t]\}\), and
3. \(\mathcal{C}^{\prime\prime}(r,t):=\{C_{i}\cup C_{i+1}\colon i\in[t-1]\}\).

Define \(\mathcal{C}(r,t)\) to be the union of \(\mathcal{C}^{\prime}(r,t)\cup\mathcal{C}^{\prime\prime}(r,t)\) and the set of all \((r+1)\)-element subsets of \([2t+2]\) that do not contain a member of \(\mathcal{C}^{\prime}(r,t)\) or \(\mathcal{C}^{\prime\prime}(r,t)\). We will soon see that \(\mathcal{C}(r,t)\) is the circuit set of a matroid, but before doing this, we look at the case of \(r=4\) and \(t=3\). Here we have

1. \(C_{1}=\{1,2\}\), \(C_{2}=\{3,4\}\), \(C_{3}=\{5,6\}\), and \(X=\{7,8\}\)
2. \(\mathcal{C}^{\prime}(4,3)=\{1278,3478,5678\}\)
3. \(\mathcal{C}^{\prime\prime}(4,3)=\{1234,3456\}\).

In particular, \(\mathcal{C}^{\prime}(4,3)\cup\mathcal{C}^{\prime\prime}(4,3)\) is the set of circuit-hyperplanes of the Vamos matroid \(V_{8}\). So \(K(r,t)\) is a generalization that captures the cyclic nature of the set of circuit-hyperplanes of \(V_{8}\). Recall that a matroid of rank \(r\) is _sparse paving_ if every \(r\)-element subset is either a basis or a circuit-hyperplane. **Proposition 11**.: _Let \(r\geq 4\) and \(t\geq 3\) be integers satisfying \(r\leq 2t-2\). Then \(\mathcal{C}(r,t)\) is the circuit set of a rank-\(r\) sparse paving matroid \(K(r,t)\) on ground set \([2t+2]\)._ Proof.: It suffices to show that no two sets in \(\mathcal{C}^{\prime}(r,t)\cup\mathcal{C}^{\prime\prime}(r,t)\) intersect in \(r-1\) elements. We have three cases to consider. Case 1: Let \(C\in\mathcal{C}^{\prime}(r,t)\) and \(C^{\prime}\in\mathcal{C}^{\prime\prime}(r,t)\). Then \(|C\cap C^{\prime}|\leq r-2\) because \(C\) contains \(\{2t+1,2t+2\}\) and \(C^{\prime}\) does not. Case 2: Let \(C,C^{\prime}\in\mathcal{C}^{\prime}(r,t)\), so \(C=C_{i}\cup X\) and \(C^{\prime}=C_{j}\cup X\) for some \(i,j\in[t]\). Since \(r\leq 2t-2\) it follows that \(|C_{k}\cap C_{k+1}|=r-4\) for all \(k\) (indices taken modulo \(t\)) and \(|C_{i}\cap C_{j}|\leq r-4\). Then \(|C\cap C^{\prime}|=|C_{i}\cap C_{j}|+2\leq r-2\), as desired. Case 3: Let \(C,C^{\prime}\in\mathcal{C}^{\prime\prime}(r,t)\), so \(C=C_{i}\cup C_{i+1}\) and \(C^{\prime}=C_{j}\cup C_{j+1}\) for some \(i,j\in[t-1]\).
Then \(|C\cap C^{\prime}|\) is maximized when \(j=i+1\), so we may assume that \(C^{\prime}=C_{i+1}\cup C_{i+2}\), taking indices modulo \(t\). Since \(r\leq 2t-2\) it follows that \(C_{i}\cap C_{i+2}\subseteq C_{i+1}\). This implies that \(C\cap C^{\prime}\subseteq C_{i+1}\) and so it follows that \(|C\cap C^{\prime}|\leq|C_{i+1}|=r-2\). The following implies Theorem 3. **Theorem 12**.: _Let \(r\geq 4\) and \(t\geq 3\) be integers satisfying \(r\leq 2t-2\). There is no matroid \(N\) on the circuits of \(K(r,t)/\{2t+1,2t+2\}\) for which \(K(r,t)\backslash\{2t+1,2t+2\}\) is isomorphic to \((K(r,t)/\{2t+1,2t+2\})^{N}\). Moreover, \(K(r,t)\) is not representable over any field._ Proof.: Denote \(K:=K(r,t)\) and \(X:=\{2t+1,2t+2\}\). Let \(M:=K/X\) and \(L:=M\setminus X\), so \(L\) is a rank-\(2\) lift of \(M\). For each \(i\in[t]\), the set \(C_{i}\) is a circuit of \(M\) and is independent in \(L\). So for each \(i\in[t]\) the set \(C_{i}\) is a non-loop of \(N\). We will argue that the following statements hold for \(M\) and \(L\): 1. \((C_{i},C_{i+1})\) is a modular pair of circuits of \(M\) for each \(i\in[t-1]\), 2. \((C_{1},C_{t})\) is a modular pair of circuits of \(M\), 3. \(r_{L}(C_{i}\cup C_{i+1})-r_{M}(C_{i}\cup C_{i+1})=1\) for each \(i\in[t-1]\), 4. \(r_{L}(C_{1}\cup C_{t})-r_{M}(C_{1}\cup C_{t})=2\). For \((a)\), note that \(|C_{i}\cup C_{i+1}|=r\) and \(r_{M}(C_{i}\cup C_{i+1})=r-2\), because \(r_{K}(C_{i}\cup C_{i+1}\cup X)=r\). The same argument also proves \((b)\). For \((c)\), note that \(r_{K}(C_{i}\cup C_{i+1})=r-1\) because \(C_{i}\cup C_{i+1}\) is a circuit of \(K\) of cardinality \(r\). However \(r_{L}(C_{1}\cup C_{t})-r_{M}(C_{1}\cup C_{t})=2\) because \(C_{1}\cup C_{t}\) is independent in \(K\), proving \((d)\). Now suppose there is a matroid \(N\) on the circuits of \(M\) so that \(M^{N}\cong L\). Then \((a)\) and \((c)\) together imply that \(C_{i}\) and \(C_{i+1}\) are parallel in \(N\) for all \(i\in[t]\), which implies that \(C_{1}\) and \(C_{t}\) are parallel in \(N\). But \((b)\) and \((d)\) together imply that \(C_{1}\) and \(C_{t}\) are independent in \(N\), a contradiction. It now follows from Proposition 9 that \(K\) is not representable. As illustrated by Theorem 12, the fact that the converse of Theorem 2 is false in general but true for representable matroids gives a new way to certify non-representability. We hope that this method is in fact 'new' and that \(K(r,t)\) cannot be certified as non-representable using existing means. However, there are many ways to certify non-representability, and we make no attempt to test \(K(r,t)\) against them all. We merely show that \(K(r,t)\) does not violate the most well-known certificate for non-representability: Ingleton's inequality. Ingleton [5] proved that if a matroid has sets \(A,B,C,D\) so that \[r(A\cup B)+r(A\cup C)+r(A\cup D)+r(B\cup C)+r(B\cup D)\] \[\geq r(A)+r(B)+r(A\cup B\cup C)+r(A\cup B\cup D)+r(C\cup D)\] then it is not representable. We show that this cannot be used to certify non-representability of \(K(r,t)\) when \(r\geq 5\) and \(r\leq 2t-3\). Following [8], we say that a matroid is _Ingleton_ if any choice of four subsets satisfies the above inequality. **Proposition 13**.: _Let \(r\geq 5\) and \(t\geq 3\) be integers satisfying \(r\leq 2t-3\). 
Then \(K(r,t)\) is Ingleton._ Proof.: Nelson and van der Pol [8, Lemma 3.1] showed that a rank-\(r\) sparse paving matroid is Ingleton if and only if there are no pairwise disjoint subsets \(I,P_{1},P_{2},P_{3},P_{4}\) so that \(|I|=r-4\) and \(|P_{i}|=2\) for all \(i\in\{1,2,3,4\}\), while \(I\cup P_{i}\cup P_{j}\) is a circuit for all \(\{i,j\}\neq\{3,4\}\) and \(I\cup P_{3}\cup P_{4}\) is a basis. Suppose that such sets exist for \(K(r,t)\). Then \(I\) is an \((r-4)\)-element set contained in at least five circuit-hyperplanes of \(K(r,t)\), and \(Y=I\cup P_{1}\cup P_{2}\cup P_{3}\cup P_{4}\) is an \((r+4)\)-element set that contains at least five circuit-hyperplanes of \(K(r,t)\). Now let \(X=\{2t+1,2t+2\}\). If \(I\neq C_{i}\cap C_{i+1}\) for every \(i\) (indices taken modulo \(t\)), then \(I\) is contained in at most one circuit-hyperplane of the form \(C_{j}\cup X\), and in at most three of the form \(C_{j}\cup C_{j+1}\), a contradiction. So \(I=C_{i}\cap C_{i+1}\) for some \(i\). Then \(I\) is contained in exactly five circuit-hyperplanes, namely \(C_{i}\cup X\), \(C_{i+1}\cup X\), \(C_{i-1}\cup C_{i}\), \(C_{i}\cup C_{i+1}\), and \(C_{i+1}\cup C_{i+2}\), which means that each of these sets is contained in \(Y\). However, \(r\leq 2t-3\) implies that \(|C_{i-1}\cup C_{i}\cup C_{i+1}\cup C_{i+2}\cup X|>r+4\), a contradiction. It follows from Proposition 13 and results of Nelson and van der Pol [8] that \(K(r,t)\) also cannot be certified as non-representable via a small non-representable minor. Following [8], a rank-\(4\) sparse paving matroid \(M\) is _Vamos-like_ if it has a partition \((P_{1},P_{2},P_{3},P_{4})\) such that exactly five of the six pairs \(P_{i}\cup P_{j}\) form circuits of \(M\). There are \(39\) Vamos-like matroids, one of which is the Vamos matroid itself, and none are representable. They prove that a sparse paving matroid is Ingleton if and only if it has no Vamos-like minor. So, Proposition 13 implies that \(K(r,t)\) has no Vamos-like minor when \(r\geq 5\) and \(r\leq 2t-3\). While \(K(r,t)\) is non-representable, we conjecture that it is very close to being representable, in the following sense. **Conjecture 14**.: _For all integers \(r\) and \(t\) with \(r\geq 4\), \(t\geq 3\) and \(2t+2\geq r+4\), any matroid obtained from \(K(r,t)\) by relaxing a circuit-hyperplane into a basis is representable._ When \(r\leq 2t-3\), \(K(r,t)\) has no element in every circuit-hyperplane, so Conjecture 14 would imply that \(K(r,t)\) is an excluded minor for the class of representable matroids. We prove one more interesting property of \(K(r,t)\). **Proposition 15**.: _Let \(r\geq 4\) and \(t\geq 3\) be integers satisfying \(r\leq 2t-3\) and let \(M\) be a proper minor of \(K(r,t)\). Then \(M\) is not isomorphic to \(K(r^{\prime},t^{\prime})\) for any integers \(r^{\prime}\geq 4\) and \(t^{\prime}\geq 3\) satisfying \(r^{\prime}\leq 2t^{\prime}-3\)._ Proof.: Suppose \(K(r^{\prime},t^{\prime})\) is a minor of \(K(r,t)\) and \((r^{\prime},t^{\prime})\neq(r,t)\). Then \(r^{\prime}<r\). Let \(A\subseteq E(K(r,t))\) be independent so that \(K(r,t)/A\) has a spanning \(K(r^{\prime},t^{\prime})\)-restriction, so \(|A|=r-r^{\prime}\). If \(C\) is a circuit of \(K(r^{\prime},t^{\prime})\), then \(C\cup A\) contains a circuit of \(K(r,t)\). If \(C\) is a circuit-hyperplane of \(K(r^{\prime},t^{\prime})\) then \(|C\cup A|=r^{\prime}+(r-r^{\prime})=r\), which implies that \(C\cup A\) is a circuit-hyperplane of \(K(r,t)\). 
Since \(K(r^{\prime},t^{\prime})\) has \(2t^{\prime}-1\) circuit-hyperplanes, this implies that \(A\) is contained in at least \(2t^{\prime}-1\) circuit-hyperplanes of \(K(r,t)\). It is straightforward to show that the intersection of any \(h\) of the sets \(C_{i}\) has size at most \(r-2h\). So if \(A\) is contained in \(h\) of the sets \(C_{i}\) and \(h>r^{\prime}/2\), then \(|A|\leq r-2h<r-r^{\prime}=|A|\), a contradiction. So \(A\) is contained in \(C_{i}\) for at most \(r^{\prime}/2\) different choices of \(i\). Then \(A\) is contained in at most \(r^{\prime}/2\) circuit-hyperplanes of \(K(r,t)\) of the form \(C_{i}\cup X\) and at most \(r^{\prime}/2+1\) circuit-hyperplanes of the form \(C_{i}\cup C_{i+1}\). But then \(A\) is contained in at most \(r^{\prime}/2+(r^{\prime}/2+1)=r^{\prime}+1\) circuit-hyperplanes of \(K(r,t)\). Since \(r^{\prime}+1<2t^{\prime}-1\) when \(r^{\prime}\leq 2t^{\prime}-3\), this is a contradiction. So \(\{K(r,t)\colon r\geq 4,t\geq 3,r\leq 2t-3\}\) is an infinite antichain of non-representable matroids that all satisfy Ingleton's inequality, and if Conjecture 14 is true then each is also an excluded minor for representability. We comment that while \(K(r,t)\) is constructed using rank-2 lifts, related constructions using rank-\(t\) lifts with \(t>2\) are likely possible as well. We conclude this section with the following question. **Question 16**.: Which matroids \(K\) have the property that for every set \(X\subseteq E(K)\) there is a matroid \(N\) on the circuits of \(K/X\) such that \((K/X)^{N}\cong K\setminus X\)? For example, if every algebraic matroid has this property then \(K(r,t)\) would be non-algebraic for all \(r\geq 4\) and \(t\geq 3\). This may be a promising direction since the Vamos matroid is non-algebraic [6] and isomorphic to \(K(4,3)\). ## 4 Gain graphs Recall that \((K_{n}^{\Gamma},\phi_{n}^{\Gamma})\) is the gain graph over a finite group \(\Gamma\) where \(K_{n}^{\Gamma}\) has vertex set \([n]\) and edge set \(\binom{[n]}{2}\times\Gamma\), and the gain function \(\phi_{n}^{\Gamma}\) orients edge \((\{i,j\},\alpha)\) from \(i\) to \(j\) when \(i<j\) and assigns the label \(\alpha\). We write \(\alpha_{ij}\) for the edge \((\{i,j\},\alpha)\), for convenience. For each \(\alpha\in\Gamma\) we write \(E_{\alpha}\) for \(\{(\{i,j\},\alpha)\colon 1\leq i<j\leq n\}\); these are the edges _labeled_ by \(\alpha\). For a set \(A\subseteq\Gamma\) we write \(E_{A}\) for \(\cup_{\alpha\in A}E_{\alpha}\). A cycle of \((K_{n}^{\Gamma},\phi_{n}^{\Gamma})\) is _balanced_ if an oriented product of its edge labels is equal to the identity element of \(\Gamma\). For the remainder of this section we shall refer to balanced and unbalanced cycles of \(K_{n}^{\Gamma}\), with the gain function \(\phi_{n}^{\Gamma}\) implicit. We can use \(\Gamma\) to define special automorphisms of the graph \(K_{n}^{\Gamma}\), as follows. Given an integer \(k\in[n]\) and an element \(\beta\in\Gamma\), define an automorphism \(f_{\beta}\colon E(K_{n}^{\Gamma})\to E(K_{n}^{\Gamma})\) by \[f_{\beta}(\alpha_{ij})=\begin{cases}(\beta^{-1}\cdot\alpha)_{ij}&\text{if $i=k$}\\ (\alpha\cdot\beta)_{ij}&\text{if $j=k$}\\ \alpha_{ij}&\text{otherwise.}\end{cases}\] For each edge \(e\) of \(K_{n}^{\Gamma}\) we say that \(f_{\beta}(e)\) is obtained from \(e\) by _switching_ at vertex \(k\) with value \(\beta\). If a set \(X\) of edges of \(K_{n}^{\Gamma}\) can be obtained from a set \(Y\) via a sequence of switching operations we say that \(X\) and \(Y\) are _switching equivalent_. 
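To make the switching operation concrete, here is a minimal Python sketch (illustrative only, with ad hoc function names and the sample choice \(\Gamma=S_{3}\)) that applies \(f_{\beta}\) to the labeled edges of a triangle in \(K_{3}^{\Gamma}\) and checks that the oriented product of the labels around the triangle changes at most by conjugation, so that its balance is unchanged.

```python
import random
from itertools import permutations

# A small non-abelian group: S_3, with elements stored as mapping tuples
# (a[i] is the image of i) and composed as functions, (a*b)(i) = a(b(i)).
S3 = list(permutations(range(3)))
IDENTITY = tuple(range(3))

def mul(a, b):
    return tuple(a[b[i]] for i in range(3))

def inv(a):
    out = [0, 0, 0]
    for i, ai in enumerate(a):
        out[ai] = i
    return tuple(out)

def switch(edge, k, beta):
    """Switching at vertex k with value beta: the map f_beta applied to a single edge.
    An edge (i, j, g) with i < j is oriented from i to j and carries the gain g."""
    i, j, g = edge
    if i == k:
        return (i, j, mul(inv(beta), g))
    if j == k:
        return (i, j, mul(g, beta))
    return edge

def oriented_product(cycle, gain):
    """Product of gains along the closed walk cycle[0] -> cycle[1] -> ... -> cycle[0];
    gain[(i, j)] (with i < j) is the label of the edge between i and j."""
    prod = IDENTITY
    for m, u in enumerate(cycle):
        v = cycle[(m + 1) % len(cycle)]
        g = gain[(min(u, v), max(u, v))]
        prod = mul(prod, g if u < v else inv(g))  # invert when traversing against the orientation
    return prod

# Switching changes the product around a triangle at most by conjugation,
# so balanced cycles stay balanced and unbalanced cycles stay unbalanced.
random.seed(0)
for _ in range(2000):
    gain = {(0, 1): random.choice(S3), (0, 2): random.choice(S3), (1, 2): random.choice(S3)}
    k, beta = random.randrange(3), random.choice(S3)
    switched = {(i, j): switch((i, j, g), k, beta)[2] for (i, j), g in gain.items()}
    before = oriented_product([0, 1, 2], gain)
    after = oriented_product([0, 1, 2], switched)
    assert (before == IDENTITY) == (after == IDENTITY)
```

The product itself is preserved when \(k\) is an internal vertex of the traversal and is conjugated by \(\beta\) when the traversal starts at \(k\); in either case balance is preserved, which is the fact used repeatedly below.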
It is straightforward to check that switching maps balanced cycles to balanced cycles and unbalanced cycles to unbalanced cycles. We comment that switching is typically an operation on gain functions, and our application of this operation to define a graph automorphism is nonstandard. For each nontrivial finite group \(\Gamma\) and integer \(n\geq 3\), we define \(\mathcal{M}_{n,\Gamma}\) to be the class of lifts \(M\) of \(M(K_{n}^{\Gamma})\) for which a cycle of \(K_{n}^{\Gamma}\) is a circuit of \(M\) if and only if it is balanced. Each matroid in \(\mathcal{M}_{n,\Gamma}\) is simple, since each \(2\)-element cycle of \(M(K_{n}^{\Gamma})\) is unbalanced. We now generalize Theorem 5 in the case that \(n\geq 4\). **Theorem 17**.: _Let \(n\geq 4\) be an integer, let \(\Gamma\) be a finite group, and let \(M\in\mathcal{M}_{n,\Gamma}\). If \(r(M)-r(M(K_{n}^{\Gamma}))>1\), then there is a prime \(p\) and an integer \(j\geq 2\) so that \(\Gamma\cong\mathbb{Z}_{p}^{j}\)._ Proof.: Let \(\epsilon\) denote the identity element of \(\Gamma\). It was proved in [12, Lemma 19] that each \(\alpha\in\Gamma-\{\epsilon\}\) satisfies \(r_{M}(E_{\{\alpha,\epsilon\}})=n\). For \(\alpha,\beta\in\Gamma-\{\epsilon\}\), we write \(\alpha\sim\beta\) if \(r_{M}(E_{\{\alpha,\beta,\epsilon\}})=n\). By [12, Lemma 21], \(\sim\) is an equivalence relation, and for each equivalence class \(A\) we have that \(r_{M}(E_{A\cup\epsilon})=n\) and \(A\cup\epsilon\) is a subgroup of \(\Gamma\). Let \(\mathcal{A}\) denote the set of equivalence classes under \(\sim\). Then \(\mathcal{A}\) is a group partition of \(\Gamma\), and \(|\mathcal{A}|\geq 2\) because \(r(M)-r(M(K_{n}^{\Gamma}))>1\). **Claim 17.1**.: _If any two elements \(\alpha,\beta\in\Gamma\) lying in different sets in \(\mathcal{A}\) commute, then \(\Gamma\) is abelian and the theorem holds by Theorem 5._ Proof.: Let \(\alpha\in\Gamma\), and let \(A\in\mathcal{A}\) contain \(\alpha\). Since \(A\cup\epsilon\) is a proper subgroup of \(\Gamma\), we have \(|A|<|\Gamma|/2\), and so \(|\Gamma-A|>|\Gamma|/2\). The centralizer of \(\alpha\) contains \(\Gamma-A\), and thus contains more than \(|\Gamma|/2\) elements. Since the centralizer of \(\alpha\) is a subgroup of \(\Gamma\), it follows that it is equal to \(\Gamma\). Thus, \(\alpha\) commutes with every element of \(\Gamma\). Since the same argument applies to every element of \(\Gamma\), it follows that \(\Gamma\) is abelian. Since \(\Gamma\) is abelian, the theorem statement follows from Theorem 5. Now let \(\Gamma\) be a counterexample to the theorem with \(|\Gamma|\) minimal. We first show that \(\Gamma\) has a \(2\)-element generating set. We may assume that there are elements \(a_{1}\in A_{1}\) and \(a_{2}\in A_{2}\), for some distinct \(A_{1},A_{2}\in\mathcal{A}\), that do not commute, or else the theorem statement holds by Claim 17.1. Let \(\Gamma^{\prime}\) be the subgroup of \(\Gamma\) generated by \(\{a_{1},a_{2}\}\), and let \(M^{\prime}=M|E_{\Gamma^{\prime}}\). Since \(a_{1}\nsim a_{2}\), \(r(M^{\prime})-r(M(K_{n}^{\Gamma^{\prime}}))>1\). Since \(M^{\prime}\) is a restriction of \(M\), a cycle of \(K_{n}^{\Gamma^{\prime}}\) is a circuit of \(M^{\prime}\) if and only if it is balanced. If \(\Gamma^{\prime}\neq\Gamma\), then the minimality of \(|\Gamma|\) implies that \(\Gamma^{\prime}\) is abelian. But then \(a_{1}\) and \(a_{2}\) commute, a contradiction. So \(\{a_{1},a_{2}\}\) is a \(2\)-element generating set of \(\Gamma\). 
It follows from [12, Lemma 20] that \(E_{\{a_{1},a_{2},\epsilon\}}\) spans \(M\), and it follows from the submodularity of \(r_{M}\) that \(r_{M}(E_{\{a_{1},a_{2},\epsilon\}})=n+1\). So \(M\) is a rank-2 lift of \(M(K_{n}^{\Gamma})\). Suppose for a contradiction that there is some \(A\in\mathcal{A}\) so that \(A\cup\epsilon\) is not a normal subgroup of \(\Gamma\). We will define a pair of hyperplanes of \(M\) that violate the hyperplane axioms. Let \(H_{1}=E_{A\cup\epsilon}\). Note that \(H_{1}\) is a hyperplane of \(M\), by the definition of \(\sim\). Let \(\{V,V^{\prime}\}\) be a partition of \([n]\) with \(|V|,|V^{\prime}|\geq 2\), and let \(H_{2}\) be the set of edges with both ends in the same part. Then \(H_{2}\) is a flat of \(M(K_{n}^{\Gamma})\), and is therefore a flat of \(M\) because \(M\) is a lift of \(M(K_{n}^{\Gamma})\) (see [9, Prop. 7.3.6]). By adding an edge labeled by \(\epsilon\) with one end in \(V\) and the other in \(V^{\prime}\) and repeatedly adding the third edge of a balanced cycle we obtain all edges. So \(r_{M}(H_{2})\geq r(M)-1\) and therefore \(H_{2}\) is a hyperplane of \(M\). Since \(A\cup\epsilon\) is not normal, there is some element \(b\in\Gamma\) so that \(b^{-1}\cdot A\cdot b\neq A\). Let \(a\in A\) so that \(b^{-1}\cdot a\cdot b=c\) for some \(c\notin A\cup\epsilon\). Consider the set \(B=(H_{1}\cap H_{2})\cup e\), where \(e\) is an edge with one end in \(V\) and the other in \(V^{\prime}\), with label \(b\), and directed from \(V\) to \(V^{\prime}\). By the hyperplane axioms, \(B\) is contained in a hyperplane of \(M\). However, we claim that \(B\) spans \(M\). Let \(B^{\prime}\) be obtained from \(B\) by switching with value \(b\) at each vertex in \(V\). In \(B^{\prime}\), the edge \(e\) is labeled by \(\epsilon\), and the set of labels of edges between each pair of vertices in \(V\) is \(b^{-1}\cdot(A\cup\epsilon)\cdot b\). Note that \(B^{\prime}\) has a spanning tree of edges all labeled by \(\epsilon\). Then the set obtained from \(B^{\prime}\) by taking closure under balanced cycles contains \(E_{\{\epsilon,a,c\}}\), which spans \(M\) because \(a\nsim c\). Since switching preserves balanced cycles, this implies that \(B\) also spans \(M\), a contradiction. We have shown that \(A\cup\epsilon\) is a normal subgroup of \(\Gamma\) for each \(A\in\mathcal{A}\). Since normal subgroups that intersect in the identity commute, it follows that each \(\alpha\in A\) commutes with every element in \(\Gamma-A\). Thus, the theorem holds by Claim 17.1. Surprisingly, Theorem 5 does not generalize to non-abelian groups when \(n=3\). Walsh showed that if there exists \(M\in\mathcal{M}_{n,\Gamma}\) whose rank is at least two greater than that of \(M(K_{n}^{\Gamma})\), then \(\Gamma\) has a nontrivial partition [12, Lemma 21]. Theorem 19 establishes the converse for \(n=3\). Recall that a _nontrivial partition_ of a group \(\Gamma\) with identity \(\epsilon\) is a partition \(\mathcal{A}\) of \(\Gamma-\{\epsilon\}\) so that \(A\cup\epsilon\) is a subgroup of \(\Gamma\) for all \(A\in\mathcal{A}\), and \(|\mathcal{A}|\geq 2\). For example, \(\mathbb{Z}_{p}^{j}\) has a nontrivial partition into \(\frac{p^{j}-1}{p-1}\) copies of the cyclic group \(\mathbb{Z}_{p}\), and the dihedral group has a nontrivial partition where each reflection is a part and the nontrivial rotations form a part. We direct the reader to [14] for background on group partitions. A group may have multiple nontrivial partitions. 
That said, each finite group has a canonical partition called the _primitive partition_, first described in [13]. It is universal in the following sense: if \(\mathcal{A}\) is the primitive partition of a finite group \(G\) and \(\mathcal{B}\) is another partition, then for each \(B\in\mathcal{B}\), the following is a partition of \(B\cup\epsilon\) \[\{A\in\mathcal{A}:A\cup\epsilon\text{ is a subgroup of }B\cup\epsilon\}.\] For our purposes, the most important property of the primitive partition is the following. **Proposition 18** ([1, 14]).: _Let \(\mathcal{A}\) be the primitive partition of a group \(\Gamma\). If \(A\in\mathcal{A}\) and \(\gamma\in\Gamma\), then \(\gamma\cdot A\cdot\gamma^{-1}\in\mathcal{A}\)._ We next prove a partial generalization of Theorem 4 to non-abelian groups in the case that \(n=3\). **Theorem 19**.: _Let \(\Gamma\) be a finite group with a nontrivial partition. Then there is a rank-\(2\) lift \(M\) of \(M(K_{3}^{\Gamma})\) so that a cycle of \(K_{3}^{\Gamma}\) is a circuit of \(M\) if and only if it is balanced._ Proof.: Let \(\epsilon\) be the identity element of \(\Gamma\), and let \(\mathcal{A}\) be the primitive partition of \(\Gamma\). We define a collection \(\mathcal{H}\) of subsets of the edges of \(K_{3}^{\Gamma}\) where \(H\in\mathcal{H}\) if 1. \(H\) is switching equivalent to \(E_{A\cup\{\epsilon\}}\) for some \(A\in\mathcal{A}\), or 2. \(H\) consists of all edges between some pair \(i,j\in\{1,2,3\}\). Note that \(\mathcal{H}\) is invariant under switching, and also relabeling vertices. We need the following fact about nontrivial partitions. **Claim 19.1**.: _Let \(A\in\mathcal{A}\) and let \(\alpha,\beta\in\Gamma\) so that \(\epsilon\in\alpha\cdot(A\cup\epsilon)\cdot\beta\). Then \(\alpha\cdot(A\cup\epsilon)\cdot\beta=A^{\prime}\cup\epsilon\) for some \(A^{\prime}\in\mathcal{A}\)._ Proof.: There is some \(\gamma\in A\cup\epsilon\) so that \(\alpha\cdot\gamma\cdot\beta=\epsilon\), i.e. so that \(\beta^{-1}=\alpha\cdot\gamma\). Since \(A\cup\epsilon\) is a subgroup of \(\Gamma\), \(\gamma\cdot(A\cup\epsilon)=A\cup\epsilon\) and therefore \(\alpha\cdot(A\cup\epsilon)\cdot\beta=\beta^{-1}\cdot(A\cup\epsilon)\cdot\beta\). Proposition 18 then implies the claim. We use the following to show that certain edge sets are not contained in a set in \(\mathcal{H}\). **Claim 19.2**.: _Let \(X\) be a set of edges that contains an \(\epsilon\)-labeled spanning tree of \(K_{3}^{\Gamma}\), and an edge labeled by \(\alpha\in\Gamma-\{\epsilon\}\). Suppose \(X\) is contained in a set in \(H\in\mathcal{H}\). Then \(H=E_{A\cup\epsilon}\), where \(A\) is the set in \(\mathcal{A}\) that contains \(\alpha\)._ Proof.: If \(H\) contains a path, then \(H\) also contains the balanced cycle obtained by completing that path to a cycle. Therefore we may assume that \(X\) is closed under completing paths to balanced cycles. Therefore \(E_{\epsilon}\subseteq X\). By symmetry we may assume that \(\alpha_{12}\in X\). Since \(X\) is closed under completing paths to balanced cycles, either \(\alpha_{ij}\) or \(\alpha_{ij}^{-1}\) is in \(X\) for all \(i,j\in\{1,2,3\}\). Let \(A\in\mathcal{A}\) so that \(\alpha\in A\). Say \(H\) is obtained from \(E_{A^{\prime}\cup\epsilon}\) for some \(A^{\prime}\in\mathcal{A}\) by switching at each vertex \(i\) with some \(a_{i}\in\Gamma\). Since \(H\) contains \(X\) which contains \(\epsilon_{12}\), we have that \(a_{1}^{-1}\cdot(A^{\prime}\cup\epsilon)\cdot a_{2}\) contains \(\epsilon\). 
Claim 19.1 implies that \(a_{1}^{-1}\cdot(A^{\prime}\cup\epsilon)\cdot a_{2}=A_{12}\cup\epsilon\) for some \(A_{12}\in\mathcal{A}\). So the labels on the edges of \(H\) between \(1\) and \(2\) are the elements of \(A_{12}\cup\epsilon\). Similarly, we can argue that between \(i\) and \(j\), the edge labels are the elements of \(A_{ij}\cup\epsilon\) for some \(A_{ij}\in\mathcal{A}\). But since \(\alpha_{ij}\) or \(\alpha_{ij}^{-1}\) is in \(X\) and \(X\subseteq H\) it follows that \(A_{ij}\) contains \(\alpha_{ij}\) or \(\alpha_{ij}^{-1}\), so \(A_{ij}=A\) for all \(i,j\in\{1,2,3\}\), and \(H=E_{A\cup\epsilon}\). A set of edges of \(K_{3}^{\Gamma}\) is _balanced_ if it contains no unbalanced cycles. **Claim 19.3**.: _If a set \(X\) of edges has at most \(3\) edges or consists of a balanced cycle with an extra edge, then \(X\) is contained in some \(H\in\mathcal{H}\)._ Proof.: First suppose that \(X\) has at most \(3\) edges. If \(X\) does not contain a spanning tree of \(K_{3}^{\Gamma}\) then \(X\) is contained in a set in \(\mathcal{H}\) of type (b). So assume otherwise. If \(X\) is balanced then \(X\) is switching equivalent to a subset of \(E_{\epsilon}\) and thus lies in \(E_{A\cup\epsilon}\) for every \(A\in\mathcal{A}\). If \(X\) is not balanced, then \(X\) is switching equivalent to the graph obtained by adding a single edge of the form \(\alpha_{ij}\) to a spanning tree where every edge has label \(\epsilon\). Let \(A\in\mathcal{A}\) contain \(\alpha\). Then \(X\) is contained in a set switching equivalent to \(E_{A\cup\epsilon}\). It remains to consider the case that \(X\) consists of a balanced triangle with a doubled edge. By switching, we may assume that each edge of the triangle is \(\epsilon\). Without loss of generality assume the extra edge is \(\alpha_{12}\). Then \(X\) is contained in \(E_{A\cup\epsilon}\) where \(A\in\mathcal{A}\) contains \(\alpha\). We now show that \(\mathcal{H}\) is the collection of hyperplanes of a matroid \(M\). We must show that for all distinct \(H_{1},H_{2}\in\mathcal{H}\) and \(e\notin H_{1}\cup H_{2}\), there is a set in \(\mathcal{H}\) that contains \(H_{1}\cap H_{2}\) and \(e\). This is clearly true if \(H_{1}\) and \(H_{2}\) are both type (b). If \(H_{1}\) is type (a) and \(H_{2}\) is type (b) then, up to switching equivalence and relabeling vertices, \(H_{1}\cap H_{2}\) consists of all edges between vertices \(1\) and \(2\) with label in \(A\cup\epsilon\) for some \(A\in\mathcal{A}\). Then \(e\) has vertex \(3\) as an end, and by switching at vertex \(3\) we may assume that \(e\) is labeled by \(\epsilon\). Then \((H_{1}\cap H_{2})\cup e\) is contained in \(E_{A\cup\epsilon}\), which is in \(\mathcal{H}\). So we assume that \(H_{1}\) and \(H_{2}\) are both type (a). For \(i\in\{1,2\}\), say \(H_{i}\) is switching equivalent to \(E_{A_{i}\cup\epsilon}\) for \(A_{i}\in\mathcal{A}\). We may assume that \(H_{1}\cap H_{2}\) contains a spanning tree. Otherwise, if \(e\) has the same ends as each edge in \(H_{1}\cap H_{2}\), then \((H_{1}\cap H_{2})\cup e\) is contained in a set of type (b). If \(e\) has an end not incident to any edge in \(H_{1}\cap H_{2}\), then we can argue as in the case where \(H_{1}\) is of type (a) and \(H_{2}\) of type (b) that \((H_{1}\cap H_{2})\cup e\) is contained in a set of type (a). By switching equivalence, we may further assume that the edges of a spanning tree of \(H_{1}\cap H_{2}\) are all labeled by \(\epsilon\). 
If \(H_{1}\cap H_{2}\) contains an edge labeled by \(\alpha\in\Gamma-\{\epsilon\}\), then \(H_{1}=H_{2}\) by Claim 19.2, a contradiction. So \(H_{1}\cap H_{2}\) is a balanced cycle, and therefore \((H_{1}\cap H_{2})\cup e\) is contained in a set in \(\mathcal{H}\) by Claim 19.3. Thus, \(\mathcal{H}\) is the hyperplane set of a matroid \(M\). Next, \(M\) is in fact a rank-\(2\) lift of the graphic matroid \(M(K_{3}^{\Gamma})\). The only nonempty proper flats of \(M(K_{3}^{\Gamma})\) are the parallel classes of edges. All such sets are flats of \(M\) as well, so \(M\) is a lift of \(M(K_{3}^{\Gamma})\) by [9, Prop. 7.3.6]. \(M\) is not an elementary lift because every set of three edges is contained in a hyperplane by Claim 19.3. Given \(\alpha,\beta\in\Gamma-\{\epsilon\}\) such that no \(A\in\mathcal{A}\) contains both \(\alpha\) and \(\beta\), the set \(\{\epsilon_{12},\alpha_{12},\epsilon_{23},\beta_{23}\}\) is not contained in a set in \(\mathcal{H}\) by Claim 19.2. Therefore the rank of \(M\) is \(4\), i.e. \(M\) is a rank-\(2\) lift of \(M(K_{3}^{\Gamma})\). Finally, we show that a cycle of \(K_{3}^{\Gamma}\) is a circuit of \(M\) if and only if it is balanced. Claim 19.3 implies that no balanced cycle is contained in a basis. Now suppose \(C\) is an unbalanced cycle. Up to switching equivalence and relabeling vertices we may assume that \(C=\{\epsilon_{12},\epsilon_{23},\alpha_{13}\}\). Consider \(B=C\cup\beta_{12}\) where \(\beta\) and \(\alpha\) are not in the same set in \(\mathcal{A}\). Then \(B\) is not contained in a set in \(\mathcal{H}\) by Claim 19.2. Therefore \(B\) is a basis of \(M\) and thus \(C\) is independent in \(M\). We now prove Theorem 6. Proof of Theorem 6.: Let \(M\in\mathcal{M}_{n,\Gamma}\) and assume \(M\) has rank at least \(n+1\), i.e. is a non-elementary lift of \(M(K_{n}^{\Gamma})\). It follows from [12, Lemma 5.3] that \(\Gamma\) has a nontrivial partition. Theorem 17 implies that if \(n\geq 4\) then \(\Gamma=\mathbb{Z}_{p}^{j}\) for a prime \(p\) and integer \(j\geq 2\). Conversely, if \(\Gamma\) has a nontrivial partition and \(n=3\) then Theorem 19 implies that a desired lift exists. If \(n\geq 4\), then the desired lift is given by Theorem 4. We comment that while Theorem 19 constructs a rank-2 lift of \(M(K_{3}^{\Gamma})\) that respects balanced cycles, no such lift can be constructed using Theorem 2 and a matroid \(N\) on the cycles of \(K_{3}^{\Gamma}\) when \(\Gamma\) is non-abelian; this can be proved in a similar manner to Theorem 17. So the matroids constructed in the proof of Theorem 19 provide another family of examples for which Theorem 2 does not apply. In particular, if \(\Gamma\) is non-abelian and \(M\) is a 2-element extension of one of the matroids from Theorem 19 by a set \(X\) so that \(M/X=M(K_{3}^{\Gamma})\), then \(M\) non-representable by Proposition 9. Finally, we point out that Theorem 19 constructs a rank-2 lift of \(M(K_{3}^{\Gamma})\), while for certain abelian groups it is possible to construct higher-rank lifts, as shown by Theorem 4. Is this possible for non-abelian groups as well? We expect an affirmative answer when \(\Gamma\) has no two-element generating set.
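We close with a small computational illustration of the matroids \(K(r,t)\) from Definition 10. The following minimal Python sketch (illustrative only, with ad hoc function names) lists the \(r\)-element sets in \(\mathcal{C}^{\prime}(r,t)\cup\mathcal{C}^{\prime\prime}(r,t)\) and checks the condition verified in the proof of Proposition 11, namely that no two of them intersect in \(r-1\) elements; for \((r,t)=(4,3)\) it returns exactly the five circuit-hyperplanes of the Vamos matroid.

```python
from itertools import combinations

def circuit_hyperplanes(r, t):
    """The r-element sets in C'(r,t) and C''(r,t) of Definition 10, as frozensets of [2t+2]."""
    assert r >= 4 and t >= 3 and r <= 2 * t - 2
    def C(i):  # C_i = {1 + 2(i-1), ..., (r-2) + 2(i-1)}, reduced cyclically into [2t]
        return frozenset((2 * (i - 1) + k) % (2 * t) + 1 for k in range(r - 2))
    X = frozenset({2 * t + 1, 2 * t + 2})
    return [C(i) | X for i in range(1, t + 1)] + [C(i) | C(i + 1) for i in range(1, t)]

def no_two_meet_in_r_minus_one(r, t):
    """The condition checked in the proof of Proposition 11."""
    H = circuit_hyperplanes(r, t)
    return all(len(A & B) <= r - 2 for A, B in combinations(H, 2))

# (r,t) = (4,3): the five circuit-hyperplanes of the Vamos matroid.
print(sorted(sorted(S) for S in circuit_hyperplanes(4, 3)))
# [[1, 2, 3, 4], [1, 2, 7, 8], [3, 4, 5, 6], [3, 4, 7, 8], [5, 6, 7, 8]]
print(all(no_two_meet_in_r_minus_one(r, t) for t in range(3, 8) for r in range(4, 2 * t - 1)))
# True
```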
2305.10604
Topological realization of algebras of quasi-invariants, I
This is the first in a series of papers, where we introduce and study topological spaces that realize the algebras of quasi-invariants of finite reflection groups. Our result can be viewed as a generalization of a well-known theorem of A. Borel that realizes the ring of invariant polynomials of a Weyl group $W$ as the cohomology ring of the classifying space $BG$ of the associated Lie group $G$. In the present paper, we state our realization problem for the algebras of quasi-invariants of Weyl groups and give its solution in the rank one case (for $G = SU(2)$). We call the resulting $G$-spaces $ F_m(G,T) $ the $m$-quasi-flag manifolds and their Borel homotopy quotients $ X_m(G,T) $ the spaces of $m$-quasi-invariants. We compute the equivariant $K$-theory and the equivariant (complex analytic) elliptic cohomology of these spaces and identify them with exponential and elliptic quasi-invariants of $W$. We also extend our construction of spaces of quasi-invariants to a certain class of finite loop spaces $ \Omega B $ of homotopy type of $ S^3 $ originally introduced by D. L. Rector. We study the cochain spectra $ C^*(X_m,k) $ associated to the spaces of quasi-invariants and show that these are Gorenstein commutative ring spectra in the sense of Dwyer, Greenlees and Iyengar.
Yuri Berest, Ajay C. Ramadoss
2023-05-17T23:20:57Z
http://arxiv.org/abs/2305.10604v1
# Topological realization of algebras of quasi-invariants, I ###### Abstract. This is the first in a series of papers, where we introduce and study topological spaces that realize the algebras of quasi-invariants of finite reflection groups. Our result can be viewed as a generalization of a well-known theorem of A. Borel that realizes the ring of invariant polynomials of a Weyl group \(W\) as the cohomology ring of the classifying space \(BG\) of the associated Lie group \(G\). In the present paper, we state our realization problem for the algebras of quasi-invariants of Weyl groups and give its solution in the rank one case (for \(G=\operatorname{SU}(2)\)). We call the resulting \(G\)-spaces \(F_{m}(G,T)\) the \(m\)-quasi-flag manifolds and their Borel homotopy quotients \(X_{m}(G,T)\) the spaces of \(m\)-quasi-invariants. We compute the equivariant \(K\)-theory and the equivariant (complex analytic) elliptic cohomology of these spaces and identify them with exponential and elliptic quasi-invariants of \(W\). We also extend our construction of spaces of quasi-invariants to a certain class of finite loop spaces \(\Omega B\) of homotopy type of \(\mathbb{S}^{3}\) originally introduced by D. L. Rector [10]. We study the cochain spectra \(C^{*}(X_{m},k)\) associated to the spaces of quasi-invariants and show that these are Gorenstein commutative ring spectra in the sense of Dwyer, Greenlees and Iyengar [1]. ###### Contents * 1 Introduction * 2 Realization problem * 3 Spaces of quasi-invariants * 4 'Fake' spaces of quasi-invariants * 5 Equivariant \(K\)-theory * 6 Elliptic cohomology * 7 Topological Gorenstein duality * A Milnor bundles * B Duality of commutative ring spectra ## 1. Introduction Quasi-invariants are natural generalizations of classical invariant polynomials of finite reflection groups. In the case of Coxeter groups, they first appeared in mathematical physics -- in the work of O. Chalykh and A. Veselov [14, 15] in the early 1990s, and since then have found applications in many other areas: most notably, representation theory, algebraic geometry and combinatorics (see [13], [15], [16], [17], [18], [19], [20], [21]). For arbitrary (complex) reflection groups, quasi-invariants were introduced in [1]. This last paper developed a general approach to quasi-invariants in the context of representation theory of rational double affine Hecke algebras, extending and refining the earlier results of [1] in the Coxeter case. We will use [1] as our main reference on algebras of quasi-invariants; in particular, we will follow the notation and conventions of that paper in the present work. We begin by recalling the definition of quasi-invariants in the Coxeter case. Let \(W\) be a finite real reflection group acting in its reflection representation \(V\). Denote by \(\mathcal{A}:=\{H\}\) the set of reflection hyperplanes of \(W\) in \(V\) and write \(s_{H}\in W\) for the reflection operator in \(H\). The group \(W\) acts naturally on the polynomial algebra \(\mathbb{C}[V]\) and, since the \(s_{H}\)'s generate \(W\), the invariant polynomials \(p\in\mathbb{C}[V]^{W}\) are determined by the equations \[s_{H}(p)=p \tag{1.1}\] for all \(H\in\mathcal{A}\). To define quasi-invariants we modify ('weaken') the equations (1.1) in the following way. 
For each reflection hyperplane \(H\in\mathcal{A}\), we choose a linear form \(\alpha_{H}\in V^{*}\) such that \(H=\operatorname{Ker}(\alpha_{H})\) and fix a non-negative integer \(m_{H}\in\mathbb{Z}_{+}\), assuming that \(m_{w(H)}=m_{H}\) for all \(w\in W\). In other words, we choose a system of roots of \(W\) in \(V^{*}\), which (abusing notation) we still denote by \(\mathcal{A}\), and fix a \(W\)-invariant function \(m:\mathcal{A}\to\mathbb{Z}_{+},\,H\mapsto m_{H}\), which values we will refer to as _multiplicities_ of hyperplanes (or roots) in \(\mathcal{A}\). Now, with these extra data in hand, we replace the equations (1.1) by the following congruences in \(\mathbb{C}[V]\): \[s_{H}(p)\,\equiv\,p\,\bmod\langle\alpha_{H}\rangle^{2m_{H}} \tag{1.2}\] where \(\langle\alpha_{H}\rangle\) denotes the principal ideal in \(\mathbb{C}[V]\) generated by the form \(\alpha_{H}\). For each \(H\in\mathcal{A}\), the congruence (1.2) simply means that the polynomial \(s_{H}(p)-p\) is divisible in \(\mathbb{C}[V]\) by the power of the linear form \(\alpha_{H}\) determined by the value of the multiplicity function \(m\). It is easy to see that the set of all polynomials satisfying (1.2) (for \(m\) fixed) forms a graded subalgebra in \(\mathbb{C}[V]\), which we denote \(Q_{m}(W)\). Following [10], we call \(Q_{m}(W)\) the algebra \(W\)_-quasi-invariant polynomials of multiplicity \(m\)_. Note that, for \(m=0\), we have \(Q_{0}(W)=\mathbb{C}[V]\), while \(\mathbb{C}[V]^{W}\subseteq Q_{m}(W)\subseteq\mathbb{C}[V]\) in general. Thus, for varying \(m\), the quasi-invariants interpolate between the \(W\)-invariants and all polynomials. Despite its simple definition, the algebras \(Q_{m}(W)\) have a complicated structure: they do not seem to admit a good combinatorial description, nor do they have a natural presentation in terms of generators and relations. Nevertheless, these algebras possess many remarkable properties, such as Gorenstein duality (see Theorem 2.3), and are closely related to some fundamental objects in representation theory, such as Dunkl operators and double affine Hecke algebras (see [1, 2]). The goal of the present work is to give a topological realization of the algebras of quasi-invariants as (equivariant) cohomology rings of certain spaces naturally attached to compact connected Lie groups. Our main result can be viewed as a generalization of a well-known theorem of A. Borel [1] that realizes the algebra of invariant polynomials of a Weyl group \(W\) as the cohomology ring of the classifying space \(BG\) of the associated Lie group \(G\). As the algebras \(Q_{m}(W)\) are defined over \(\mathbb{C}\), we should clarify what we really mean by "topological realization". It is a fundamental consequence of Quillen's rational homotopy theory [11] that every reduced, locally finite, graded commutative algebra \(A\) defined over a field \(k\) of characteristic zero is topologically realizable, i.e. \(A\cong H^{*}(X,k)\) for some (simply-connected) space \(X\). When equipped with cohomological grading, the algebras \(Q_{m}(W)\) have all the above-listed properties (_cf._ Lemma 2.2); hence, the natural question: For which values of \(m\) the \(Q_{m}(W)\)'s are realizable, has an immediate answer: for all \(m\). A more interesting (and much less obvious) question is whether one can realize quasi-invariants topologically as _a diagram of algebras_\(\{Q_{m}(W)\}\) (indexed by \(m\)) together with natural structure that these algebras carry (e.g., \(W\)-action). 
It is one of the objectives of this work to formulate a realization problem for the algebras of quasi-invariants in a precise (axiomatic) form by selecting a list of the desired properties. In the present paper, we state this problem for the classical Weyl groups (i.e., the crystallographic Coxeter groups over \(\mathbb{R}\) or \(\mathbb{C}\)) in terms of classifying spaces of compact Lie groups (see Section 2.4); in our subsequent paper, we will try to formulate a \(p\)-local version of the realization problem for algebras of quasi-invariants of non-crystallographic (in fact, non-Coxeter) groups defined over the \(p\)-adic numbers in terms of \(p\)-compact groups. We now give a general overview of our work, our problems and motivation. ### Quasi-invariants and cohomology theories In mathematical physics, quasi-invariants naturally arise in three different flavors: rational (polynomial), trigonometric (exponential) and elliptic. Having in hand topological spaces \(X_{m}(G,T)\) that realize the algebras \(Q_{m}(W)\), it is natural to expect that the above three types of quasi-invariants correspond to three basic cohomology theories evaluated at \(X_{m}(G,T)\): namely, the ordinary (singular) cohomology, topological \(K\)-theory and elliptic cohomology. We will show that this is indeed the case: in fact, quasi-invariants can be defined for an arbitrary (complex-oriented generalized) cohomology theory, though in general their properties have yet to be studied. ### Quasi-flag manifolds For a compact connected Lie group \(G\), our spaces of quasi-invariants can be naturally realized as Borel homotopy quotients of certain \(G\)-spaces \(F_{m}(G,T)\): \[X_{m}(G,T)=EG\times_{G}F_{m}(G,T)\] We call \(F_{m}(G,T)\) the \(m\)_-quasi-flag manifold of \(G\)_ as in the special case \(m=0\), we have \(F_{0}(G,T)=G/T\), the classical flag manifold. We remark that, in general, the spaces \(F_{m}(G,T)\) are defined only as \(G\)-equivariant homotopy types, although our construction provides some natural models for them as finite \(G\)-CW complexes. By restricting the action of the Lie group \(G\) on \(F_{m}(G,T)\) to its maximal torus \(T\subseteq G\), it is natural to ask for \(T\)_-equivariant_ cohomology (resp., \(T\)-equivariant \(K\)-theory, elliptic cohomology,...) of \(F_{m}(G,T)\). The \(T\)-equivariant cohomology is related to the \(G\)-equivariant one by the well-known general formula \[H^{*}_{G}(F_{m},\mathbb{C})\cong H^{*}_{T}(F_{m},\mathbb{C})^{W}, \tag{1.3}\] where \(W\) is the Weyl group associated to \((G,T)\). Since \(H^{*}_{G}(F_{m},\mathbb{C})=H^{*}(X_{m},\mathbb{C})\cong Q_{m}(W)\), formula (1.3) shows that the \(W\)-quasi-invariants can be, in fact, realized as \(W\)-invariants: \(Q_{m}(W)\cong H^{*}_{T}(F_{m},\mathbb{C})^{W}\) in the graded commutative algebras \(H^{*}_{T}(F_{m},\mathbb{C})\). The latter algebras come equipped with natural \(H^{*}_{T}(\operatorname{pt},\mathbb{C})\)-module structure induced by the canonical map \(F_{m}\to\operatorname{pt}\). Identifying \(H^{*}_{T}(\operatorname{pt},\mathbb{C})\cong\mathbb{C}[V]\) and taking onto account the \(W\)-action, we can view \(H^{*}_{T}(F_{m},\mathbb{C})\) as modules over the crossed product algebra \(\mathbb{C}[V]\rtimes W\). We will show that these \(\mathbb{C}[V]\rtimes W\)-modules coincide -- up to a half-integer shift of multiplicities -- with the modules of \(\mathbb{C}W\)_-valued quasi-invariants_, \(\mathbf{Q}_{m+\frac{1}{2}}(W)\), introduced and studied in [1]. 
As observed in [1], for _integer_\(m\), the action of \(\mathbb{C}[V]\rtimes W\) on \(\mathbf{Q}_{m}(W)\) naturally extends to the rational double affine Hecke (a.k.a. Cherednik) algebra \(\mathcal{H}_{m}(W)\) associated to \((W,m)\). We will show that the topological construction of the quasi-flag manifolds \(F_{m}(G,T)\) generalizes to half-integer values of \(m\), although at the expense of producing spaces equipped only with \(T\)-action. By [1], we get then an action of \(\mathcal{H}_{m+1}(W)\) on the \(T\)-equivariant cohomology of \(F_{m+\frac{1}{2}}(G,T)\). This phenomenon seems to generalize to other cohomology theories, defining, in particular, an action of trigonometric (resp., non-degenerate) DAHA on \(T\)-equivariant \(K\)-theoretic (resp, elliptic) quasi-invariants. Constructing these actions algebraically and giving them a topological explanation is an interesting problem that we leave for the future. ### Topological refinements The realization of algebras of quasi-invariants raises many natural questions regarding topological analogues ('refinements') of basic properties that these algebras possess. A general framework to deal with such questions is provided by stable homotopy theory. Indeed, our spaces of quasi-invariants \(X_{m}(G,T)\) are closely related to the classifying spaces of compact Lie groups, and the latter have been studied extensively in recent years by means of stable homotopy theory (see, e.g., [11], [12], [13], [14], [15], [16]). From this perspective, the main object of study is the mapping spectrum \[C^{*}(X,k):=\operatorname{Map}\left(\Sigma^{\infty}X_{+},\,Hk\right) \tag{1.4}\] called the _cochain spectrum_ of a topological space \(X\). As its notation suggests, \(C^{*}(X,k)\) is a commutative ring spectrum that -- for an arbitrary commutative ring \(k\) -- plays the same role as the usual (differential graded) \(k\)-algebra of cochains on \(X\) in the case when \(k\) is a field of characteristic zero. In particular, the (stable) homotopy groups of the spectrum (1.4) are isomorphic to the singular cohomology groups of the space \(X\): \[\pi_{-i}[C^{*}(X,k)]\,\cong\,H^{i}(X,k)\] The ring spectrum (1.4) thus refines (in a homotopy-theoretic sense) the cohomology ring \(H^{*}(X,k)\). For example, if \(G\) is a compact connected Lie group and \(k\) is a field of characteristic \(0\), the Borel Theorem mentioned above identifies \(H^{*}(BG,k)\) with the algebra \(k[V]^{W}\) of invariant polynomials of \(W\). The cochain spectrum \(C^{*}(BG,k)\) of the classifying space \(BG\) can thus be viewed as a refinement of the algebra \(k[V]^{W}\). In the same manner, we will regard the cochain spectra \(C^{*}(X_{m}(G,T),k)\) of our spaces \(X_{m}(G,T)\) as homotopy-theoretic refinements of the algebras of quasi-invariants \(Q_{m}(W)\). The point is that the known algebraic properties of \(Q_{m}(W)\) should have topological analogues for \(C^{*}(X_{m}(G,T),k)\). For example, one of the main theorems about quasi-invariants (see Theorem 2.3) says that the (graded) algebras \(Q_{m}(W)\) defined over \(\mathbb{C}\) are Gorenstein if \(W\) is a Coxeter group. It is therefore natural to expect that the corresponding ring spectra \(C^{*}(X_{m}(G,T),k)\) are also Gorenstein -- but now in a _topological_ sense [11] and over an arbitrary field \(k\). We will show that this expectation is indeed correct, at least in the rank one case (see Theorem 7.1 and Theorem 7.2), and the spectra of quasi-invariants have a number of other interesting properties. 
Our results are only first steps in this direction, and many natural questions motivated, in particular, by representation theory have yet to be answered. ### Homotopy Lie groups The spaces of quasi-invariants of compact Lie groups, \(X_{m}(G,T)\), can be constructed functorially in a purely homotopy-theoretic way. In the rank one case, we use to this end the so-called _fibre-cofibre construction_ -- a classical (though not very well-known) construction in homotopy theory introduced by T. Ganea [10]. A generalization of Ganea's construction allows us to define the analogues of \(X_{m}(G,T)\) for certain finite loop spaces closely related to compact Lie groups, and perhaps most interestingly, for \(p\)-compact groups -- \(p\)-local analogues of finite loop spaces also known as _homotopy Lie groups_. In this last case, the classical Weyl groups are replaced by pseudo-reflection groups defined over the field \(\mathbb{Q}_{p}\) of \(p\)-adic numbers. It is well known that all such pseudo-reflection groups can be realized as complex reflection groups (see [12]), and we thus provide realizations of algebras of quasi-invariants of complex reflection groups defined in [11], albeit in a \(p\)-local setting. The simplest exotic examples are the rank one \(p\)-compact groups \(\hat{\mathbb{S}}_{p}^{2n-1}\), called the _Sullivan spheres_, whose 'Weyl groups' are the cyclic groups \(W=\mathbb{Z}/n\) of order \(n>2\) such that \(\,n\,|\,(p-1)\). These examples are already quite rich: we will treat them in a separate paper. We divide our work into three parts. The present paper (Part I) focuses entirely on the 'global' rank one case: here, we define and study the spaces of quasi-invariants for the Lie group \(G=SU(2)\) and for a certain class of finite loop spaces \(\Omega B\) of homotopy type of \(\,\mathbb{S}^{3}\) known as _Rector spaces_. In Part II, we formulate a \(p\)-local version of the realization problem for algebras of quasi-invariants defined over \(\mathbb{Q}_{p}\) and give its solution in the 'local' rank one case: namely, for the \(p\)-compact groups associated with Sullivan spheres \(\hat{\mathbb{S}}_{p}^{2n-1}\). In Part III, we then use the spaces introduced in Part I and Part II as 'building blocks' for constructing spaces of quasi-invariants for arbitrary compact connected Lie groups and for 'generic' \(p\)-compact groups related to Clark-Ewing spaces. ### Contents of the present paper We now describe in more detail the results of the present paper. In Section 2, after reviewing basic facts about quasi-invariants, we state our realization problem for Weyl groups in the classical framework of compact connected Lie groups. As mentioned above, we take an axiomatic approach: the properties that we choose to characterize the topological spaces of quasi-invariants are modeled on properties of algebraic varieties of quasi-invariants introduced and studied in [1]. In fact, our main axioms (\(\mathrm{QI}_{1}\))-(\(\mathrm{QI}_{5}\)) in Section 2.4 are natural homotopy-theoretic analogues of basic geometric properties of the varieties of quasi-invariants listed in Section 2.2. In Section 3, we give a solution of our realization problem for \(G=SU(2)\) (see Theorem 3.9). To this end, as mentioned above, we employ the Ganea fibre-cofibre construction. This construction plays an important role in abstract homotopy theory (specifically, in the theory of LS-categories and related work on the celebrated Ganea Conjecture in algebraic topology, see e.g. [1] and Example 3.2 below). 
However, we could not find any applications of it in Lie theory or classical homotopy theory of compact Lie groups (perhaps, with the exception of the simple (folkore) Example 3.3). We therefore regard Proposition 3.7 and Theorem 3.9 that describe the Ganea tower of the Borel maximal torus fibration of a compact connected Lie group as original contributions of the present paper. The \(G\)-spaces \(F_{m}(G,T)\) that we call the \(m\)_-quasi-flag manifolds of \(G\)_ are defined to be the homotopy fibres of iterated (level \(m\)) fibrations in this Ganea tower (see Definition 3.10). In Section 3.4 and Section 3.5, we describe some basic properties of the \(G\)-spaces \(F_{m}(G,T)\). First, we compute the \(T\)-equivariant cohomology of \(F_{m}(G,T)\) (see Proposition 3.14) and identify it with a module of 'nonsymmetric' (\(\mathbb{C}W\)-valued) quasi-invariants (see Corollary 3.16). In this way, we provide a topological interpretation of generalized quasi-invariants introduced in [1]. Then, in Section 3.5, we define natural analogues of the classical Demazure (divided difference) operators for our quasi-flag manifolds \(F_{m}(G,T)\). Our construction is purely topological (see Proposition 3.19): it generalizes the Bressler-Evans construction of the divided difference operators for the classical flag manifolds \(F_{0}(G,T)\) given in [1]. In Section 4, we extend our topological construction of spaces of quasi-invariants to a large class of finite loop spaces \(\Omega B\) called the _Rector spaces_ (or fake Lie groups of type \(SU(2)\)). These remarkable loop spaces were originally constructed in [11] as examples of nonstandard ('exotic') deloopings of \(\mathbb{S}^{3}\). Our construction does not apply to all Rector spaces, but only to those that accept homotopically nontrivial maps from \(\mathbb{C}\mathbb{P}^{\infty}\). These last spaces admit a beautiful arithmetic characterization discovered by D. Yau in [20]. We show that the 'fake' spaces of quasi-invariants, \(X_{m}(\Omega B,T)\), associated to the Rector-Yau spaces have the same _rational_ cohomology as our 'genuine' spaces of quasi-invariants, \(X_{m}(G,T)\), constructed in Section 3 (see Theorem 4.7); however, in general, they are homotopically non-equivalent (see Corollary 5.12). In Section 5, we compute the \(G\)-equivariant (topological) \(K\)-theory \(K^{*}_{G}(F_{m})\) of the spaces \(F_{m}=F_{m}(G,T)\) and identify it with \(\mathcal{Q}_{m}(W)\), the _exponential quasi-invariants_ of the Weyl group \(W=\mathbb{Z}/2\mathbb{Z}\) (see Theorem 5.6). Then, we relate \(K^{*}_{G}(F_{m})\) to the (completed) \(G\)-equivariant cohomology \(\widehat{H}^{*}_{G}(F_{m},\mathbb{Q})\,:=\,\prod_{k=0}^{\infty}H^{k}_{G}(F_{m },\mathbb{Q})\,\) by constructing explicitly the \(G\)-equivariant Chern character map \[\mathrm{ch}_{G}(F):\ K^{*}_{G}(F_{m})\,\to\,\widehat{H}^{*}_{G}(F_{m}, \mathbb{Q}) \tag{1.5}\] We show that (1.5) factors through the natural map \(K^{*}_{G}(F_{m})\to K^{*}(X_{m})\) to the Borel \(G\)-equivariant \(K\)-theory \(K^{*}(X_{m})=K^{*}(EG\times_{G}F_{m})\) of \(F_{m}\), inducing an isomorphism upon rationalization (see Proposition 5.8): \(K^{*}(X_{m})_{\mathbb{Q}}\cong\widehat{H}^{*}_{G}(F_{m},\mathbb{Q})\cong \widehat{Q}_{m}(W)\,\). In this way, we link topologically the exponential and the usual (polynomial) quasi-invariants of \(W\). In Section 5, we also compute the \(K\)-theory of 'fake' spaces of quasi-invariants associated to the Rector-Yau loop spaces \(\Omega B\) (see Theorem 5.10). 
The result of this computation has an important consequence -- Corollary 5.12 -- that provides a numerical \(K\)-theoretic invariant \(N_{B}\) distinguishing the spaces \(X_{m}(\Omega B,T)\) up to homotopy equivalence for different \(B\)'s. In Section 6, we compute the \(T\)-equivariant \(\mathcal{E}ll^{*}_{T}(F_{m})\) and \(G\)-equivariant \(\mathcal{E}ll^{*}_{G}(F_{m})\) complex analytic elliptic cohomology of \(F_{m}\) (see Theorem 6.3 and Theorem 6.6, respectively). We express the result in two ways: geometrically (as coherent sheaves on a given Tate elliptic curve \(E\)) and analytically (in terms of \(\Theta\)-functions and \(q\)-difference equations). We also compute the spaces (graded modules) of global sections of the elliptic cohomology sheaves of \(F_{m}\) with _twisted_ coefficients: \[\operatorname{Ell}^{*}_{T}(E,\mathcal{L}):=\bigoplus_{n=0}^{\infty}\,H^{0}_{ \operatorname{an}}(E,\,\mathcal{E}ll^{*}_{T}(F_{m})\otimes\mathcal{L}^{n}) \quad\text{and}\quad\operatorname{Ell}^{*}_{G}(E,\mathcal{L}):=\operatorname{ Ell}^{*}_{T}(E,\mathcal{L})^{W}\,\] where \(\mathcal{L}^{n}\) stands for the \(n\)-th tensor power of the _Looijenga bundle_\(\mathcal{L}\) on the elliptic curve \(E\), a canonical \(W\)-equivariant line bundle originally introduced and studied in [10]. This computation (see Theorem 6.7) is inspired by results of [12], and technically, it is perhaps the most interesting cohomological computation of the paper. Finally, in Section 7, we prove that our spaces of quasi-invariants \(X_{m}(G,T)\) are Gorenstein in the sense of stable homotopy theory: more precisely, the associated commutative ring spectra \(C^{*}(X_{m},k)\) (see (1.4)) are orientable Gorenstein (relative to \(k\)) and satisfies the Gorenstein duality of shift \(a=1-4m\) (see Theorem 7.1). This result should be viewed as a homotopy-theoretic analogue of Theorem 2.3 on Gorenstein property of algebras of quasi-invariants. We also prove the analogous result (see Theorem 7.2) for the 'fake' spaces of quasi-invariants \(X_{m}(\Omega B,T)\), although under the additional assumption that \(k=\mathbb{F}_{p}\) for some prime \(p\). This work brings together ideas and techniques from parts of algebra and topology that are (still) fairly distant from each other. To make it accessible to readers with different background we included two appendices. In Appendix A, we briefly review Milnor's classical construction of classifying spaces of topological groups in terms of iterated joins. As it should be clear from results of Section 3, our construction of spaces of quasi-invariants can be viewed as a generalization of Milnor's construction. In Appendix B, we collect basic definitions from stable homotopy theory concerning regularity and duality properties of commutative ring spectra. This material is needed to understand our motivation and results in Section 7 that were greatly inspired by the beautiful paper [1]. All in all, we tried to give references to all essential facts that we are using, even when these facts are considered to be obvious or well known by experts. ### Acknowledgements We would like to thank Oleg Chalykh and Pavel Etingof for many interesting discussions, questions and comments related to the subject of this paper. We are particularly grateful to O. Chalykh for clarifying to us his definition of quasi-invariants in the elliptic case (see Remark 6.8). The work of the first author was partially supported by NSF grant DMS 1702372 and the Simons Collaboration Grant 712995. 
The second author was partially supported by NSF grant DMS 1702323. ## 2. Realization problem In this section, we state our topological realization problem for algebras of quasi-invariant polynomials of Weyl groups in terms of classifying spaces of compact connected Lie groups. ### Quasi-invariants of finite reflection groups We recall the general definition of quasi-invariants from [1]. Let \(V\) be a finite-dimensional vector space over \(\mathbb{C}\), and let \(W\) be a finite subgroup of \(\operatorname{GL}(V)\) generated by pseudoreflections. We recall that an element \(s\in\operatorname{GL}(V)\) is a _pseudoreflection_ if it has finite order \(n_{s}>1\) and acts as the identity on some hyperplane \(H_{s}\) in \(V\). We let \(\,\mathcal{A}=\{H_{s}\}\,\) denote the set of all hyperplanes corresponding to the pseudoreflections of \(W\) and observe that \(W\) acts naturally on \(\mathcal{A}\) by permutations. The (pointwise) stabilizer \(W_{H}\) of each \(\,H\in\mathcal{A}\,\) in \(W\) is a cyclic subgroup of order \(n_{H}\geq 2\) that depends only on the orbit of \(H\) in \(\mathcal{A}\). The characters of \(W_{H}\) then also form a cyclic group of order \(n_{H}\) generated by the determinant character \(\det:\,\operatorname{GL}(V)\to\mathbb{C}^{*}\) of \(\operatorname{GL}(V)\) restricted to \(W_{H}\). We write \[\boldsymbol{e}_{H,\,i}:=\frac{1}{n_{H}}\,\sum_{w\in W_{H}}(\det w)^{-i}\,w\,\quad i=0,\,1,\,\ldots,\,n_{H}-1\,\] for the corresponding idempotents in the group algebra \(\mathbb{C}W_{H}\subseteq\mathbb{C}W\). Now, let \(\mathbb{C}[V]=\operatorname{Sym}_{\mathbb{C}}(V^{*})\) denote the polynomial algebra of \(V\). This algebra carries a natural \(W\)-action (extending the linear action of \(W\) on \(V^{*}\)) and can thus be viewed as a \(\mathbb{C}W\)-module. We can then characterize the invariant polynomials \(p\in\mathbb{C}[V]^{W}\) by the equations \[\boldsymbol{e}_{H,-i}(p)\,=\,0\,\quad i=1,\,\ldots,\,n_{H}-1\, \tag{2.1}\] which hold for all hyperplanes \(H\in\mathcal{A}\). To define quasi-invariants we relax the equations (2.1) in the following way (_cf._ (1.2)). For each hyperplane \(H\in\mathcal{A}\), we fix a linear form \(\,\alpha_{H}\in V^{*}\), such that \(\,H=\operatorname{Ker}(\alpha_{H})\,\), and choose \(\,n_{H}-1\,\) positive integers \(\,\{m_{H,i}\}_{i=1,\ldots,\,n_{H}-1}\) which we refer to as _multiplicities_ of \(H\). We assume that \(\,m_{H,i}=m_{H^{\prime},i}\,\) for each \(i\) whenever \(H\) and \(H^{\prime}\) are in the same orbit of \(W\) in \(\mathcal{A}\). We write \(\mathcal{M}(W):=\{m_{H,i}\in\mathbb{Z}_{+}\,:\,i=1,\ldots,\,n_{H}-1\}_{[H]\in \mathcal{A}/W}\) for the set of all such multiplicities regarding them as functions on the set \(\mathcal{A}/W\) of \(W\)-orbits in \(\mathcal{A}\). **Definition 2.1** ([1]).: A polynomial \(p\in\mathbb{C}[V]\) is called a \(W\)_-quasi-invariant of multiplicity \(m=\{m_{H,i}\}\in\mathcal{M}(W)\)_ if it satisfies the conditions \[\boldsymbol{e}_{H,-i}(p)\,\equiv 0\ \operatorname{mod}\,\langle\alpha_{H} \rangle^{n_{H}m_{H,i}}\,\quad i=1,\,\ldots,\,n_{H}-1\, \tag{2.2}\] for all \(H\in\mathcal{A}\). We write \(Q_{m}(W)\) for the subspace of all such polynomials in \(\mathbb{C}[V]\). In general, \(Q(W)\) is _not_ an algebra: for arbitrary \(W\) and \(m\in\mathcal{M}(W)\), the subspace of quasi-invariant polynomials may not be closed under multiplication in \(\mathbb{C}[V]\). 
In Part II of our work, we will give necessary and sufficient conditions (on \(W\) and \(m\)) that ensure the multiplicativity property of \(Q_{m}(W)\). In the present paper, we simply restrict our attention to _Coxeter groups_, i.e. the finite subgroups \(W\) of \(\operatorname{GL}(V)\) generated by real reflections. In this case the conditions (2.2) are equivalent to (1.2) and the above definition of quasi-invariants reduces to the original definition of Chalykh and Veselov [11] given in the Introduction. _Thus, from now on, we assume that \(W\) is a real finite reflection group, \(V\) being its_ (_complexified_) _reflection representation._ The next lemma collects some elementary properties of quasi-invariants that follow easily from the definition (see, e.g., [1]). **Lemma 2.2**.: _Let \(W\) be an arbitrary Coxeter group. Then, for any \(\,m\in\mathcal{M}(W)\,\),_ 1. \(\,\mathbb{C}[V]^{W}\subset Q_{m}(W)\subseteq\mathbb{C}[V]\,\) _with_ \(\,Q_{0}(W)=\mathbb{C}[V]\,\) _and_ \(\,\cap_{m}Q_{m}(W)=\mathbb{C}[V]^{W}\)_._ 2. \(Q_{m}(W)\) _is a graded subalgebra of_ \(\mathbb{C}[V]\) _stable under the action of_ \(W\)_._ 3. \(Q_{m}(W)\) _is a finite module over_ \(\mathbb{C}[V]^{W}\) _and hence a finitely generated_ \(\mathbb{C}\)_-subalgebra of_ \(\mathbb{C}[V]\) We may think of quasi-invariants of \(W\) as a family of subalgebras of \(\mathbb{C}[V]\) interpolating between the \(W\)-invariants and all polynomials. To make this more precise we will identify the set \(\mathcal{M}(W)\) of multiplicities on \(\mathcal{A}\) with the set of \(W\)-invariant functions \(m:\mathcal{A}\to\mathbb{Z}_{+}\) and put on this set the following natural partial order1: Footnote 1: Abusing notation, in the Coxeter case, we will often write \(\alpha\in\mathcal{A}\) instead of \(H\in\mathcal{A}\) for \(H=\operatorname{Ker}(\alpha)\). \[m^{\prime}\geq m\quad\stackrel{{\mathrm{def}}}{{ \Longleftrightarrow}}\quad m^{\prime}_{\alpha}\geq m_{\alpha}\,\ \forall\,\alpha\in\mathcal{A}\,,\] The algebras of \(W\)-quasi-invariants of varying multiplicities then form a contravariant diagram of shape \(\mathcal{M}(W)\) -- a functor \(\,\mathcal{M}(W)^{\mathrm{op}}\to\mathtt{CommAlg}_{\mathbb{C}}\) with values in the category of commutative algebras -- that we simply depict as a filtration on \(\mathbb{C}[V]\): \[\mathbb{C}[V]=Q_{0}(W)\,\supseteq\,\ldots\,\supseteq\,Q_{m}(W)\,\supseteq\,Q_ {m^{\prime}}(W)\,\supseteq\,\ldots\,\supseteq\,\mathbb{C}[V]^{W} \tag{2.3}\] The most interesting algebraic property of quasi-invariants is given by the following theorem, the proof of which (unlike the proof of Lemma 2.2) is not elementary. **Theorem 2.3** (see [1], [1], [2]).: _For any Coxeter group \(W\) and any multiplicity \(m\in\mathcal{M}(W)\), \(\,Q_{m}(W)\) is a free module over \(\mathbb{C}[V]^{W}\) of rank \(|W|\). Moreover, \(\,Q_{m}(W)\) is a graded Gorenstein algebra with Gorenstein shift \(\,a=\dim(V)-2\sum_{\alpha\in\mathcal{A}}m_{\alpha}\,\)._ _Remark 2.4_.: For \(m=0\) (i.e., for the polynomial ring \(\,Q_{0}(W)=\mathbb{C}[V]\)), Theorem 2.3 is a well-known result due to C. Chevalley [10]. For \(m\neq 0\), it was first proven in the case of dihedral groups (i.e. Coxeter groups of rank \(2\)) in [11]. For arbitrary Coxeter \(W\), Theorem 2.3 was proven (by different methods) in [1] and [1]. It is worth mentioning that the classical arguments of [10] do not work for nonzero \(m\)'s. 
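For illustration (and because the rank one case is the one realized topologically in this paper), consider \(W=\mathbb{Z}/2\mathbb{Z}\) acting on \(V=\mathbb{C}\) by the sign involution \(s(x)=-x\), so that \(\mathcal{A}\) consists of the single hyperplane \(\{0\}\) with \(\alpha=x\) and a single multiplicity \(m\in\mathbb{Z}_{+}\). Writing \(p=p_{+}+p_{-}\) with \(p_{+}\) even and \(p_{-}\) odd, the condition (1.2) reads \(s(p)-p=-2\,p_{-}\equiv 0\ \mathrm{mod}\ \langle x\rangle^{2m}\); since \(p_{-}\) is odd, this forces \(p_{-}\in x^{2m+1}\,\mathbb{C}[x^{2}]\). Hence \[Q_{m}(\mathbb{Z}/2\mathbb{Z})\,=\,\mathbb{C}[x^{2}]\,\oplus\,x^{2m+1}\,\mathbb{C}[x^{2}]\,,\] a free \(\mathbb{C}[V]^{W}=\mathbb{C}[x^{2}]\)-module with basis \(\{1,\,x^{2m+1}\}\) of rank \(|W|=2\), in agreement with Theorem 2.3; here \(\dim(V)=1\) and \(\sum_{\alpha\in\mathcal{A}}m_{\alpha}=m\), so the Gorenstein shift equals \(1-2m\).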
_Remark 2.5_.: The first statement of Theorem 2.3 makes sense and holds true for an arbitrary finite pseudoreflection group \(W\) and for all multiplicities. In this generality, Theorem 2.3 was proven in [1] (see, _loc. cit._, Theorem 1.1). However, for \(W\) non-Coxeter, the module \(Q_{m}(W)\) may not be Gorenstein even when it is an algebra. ### Varieties of quasi-invariants The algebraic properties of quasi-invariants can be recast geometrically. To this end, following [1], we introduce the affine schemes \(\,V_{m}(W):=\operatorname{Spec}Q_{m}(W)\,\) called the _varieties of quasi-invariants_ of \(W\). The schemes \(V_{m}(W)\) come equipped with natural projections \(p_{m}:V_{m}(W)\to V/\!/W\) and form a covariant diagram (tower) over the poset \(\mathcal{M}(W)\): \[V=V_{0}(W)\to\ldots\to V_{m}(W)\xrightarrow{\pi_{m,m^{\prime}}}V_{m^{\prime}} (W)\to\ldots \tag{2.4}\] that is dual to (2.3). The following formal properties of (2.4) hold: 1. Each \(V_{m}(W)\) is a reduced irreducible scheme (of finite type over \(\mathbb{C}\)) equipped with an algebraic \(W\)-action, all morphisms in (2.4) being \(W\)-equivariant. The morphism \(p_{0}:V_{0}(W)\to V/\!/W\) coincides with the canonical projection \(p:V\to V/\!/W\), and the triangles commute for all \(m^{\prime}\geq m\). Thus, (2.4) is a diagram of \(W\)-schemes over \(V/\!/W\). 2. The diagram (2.4) 'converges' to \(V/\!/W\) in the sense that the maps \(p_{m}\) induce \[\operatorname{colim}_{\mathcal{M}^{\operatorname{alg}}(W)}[V_{m}(W)]\,\stackrel{{ \sim}}{{\to}}\,V/\!/W\,.\] 3. Each projection \(p_{m}:V_{m}\to V/\!/W\) factors naturally (in \(m\)) through \(V_{m}/\!/W\), inducing isomorphisms of schemes \(V_{m}/\!/W\cong V/\!/W\) for all \(m\in\mathcal{M}(W)\). 4. Each map \(\pi_{m,m^{\prime}}:V_{m}\to V_{m^{\prime}}\) in (2.4) is a universal homeomorphism: i.e., a finite morphism of schemes that is surjective and set-theoretically injective on closed points. _Remark 2.6_.: The first three properties in the above list are formal consequences of Lemma 2.2. In contrast, Property (4) is a nontrivial geometric fact that does not follow immediately from definitions (see [1, Lemma 7.3]). We recall that a morphism of schemes \(f:S\to T\) is called a _universal homeomorphism_ if for every morphism \(T^{\prime}\to T\) the pullback map \(T^{\prime}\times_{T}S\to T^{\prime}\) is a homeomorphism in the category of schemes. For a map of algebraic varieties \(f:S\to T\) defined over \(\mathbb{C}\), this categorical property is known to be equivalent to the geometric property (4). We will construct a topological analogue of the diagram (2.4), where the schemes \(V_{m}(W)\) are replaced by topological spaces \(X_{m}(G,T)\), with Properties (1)-(3) holding in a homotopy meaningful (i.e. homotopy invariant) way. The universal homeomorphisms in the category of schemes will be modeled homotopy theoretically by the classical fibre-cofibre construction. ### Borel Theorem Next, we recall a fundamental result of A. Borel on cohomology of classifying spaces of compact Lie groups [10]. Let \(G\) be a compact connected Lie group. Fix a maximal torus \(T\subseteq G\) and write \(N=N_{G}(T)\) for its normalizer in \(G\). Let \(W:=N/T\) be the associated Weyl group. 
The group \(W\) acts naturally on \(T\) by conjugation: \(W\times T\to T\), \(w\cdot t=ntn^{-1}\), and on the classifying space \(BT=EG/T\) via the right action of \(G\) on \(EG\): \(W\times BT\to BT\), \(w\cdot[x]_{T}=[xn^{-1}]_{T}\), where \(w=nT\in W\) and \([x]_{T}\) denotes the \(T\)-orbit of \(x\) in \(EG\). Let \(p:BT\twoheadrightarrow BG\) denote the natural fibration, i.e. the quotient map induced by the inclusion \(T\hookrightarrow G\).

**Theorem 2.7** (Borel).: _The map \(p^{*}:H^{*}(BG,\mathbb{Q})\to H^{*}(BT,\mathbb{Q})\) induced by \(p\) on rational cohomology is an injective ring homomorphism whose image is precisely the subring of \(W\)-invariants in \(H^{*}(BT,\mathbb{Q})\):_

\[H^{*}(BG,\mathbb{Q})\cong H^{*}(BT,\mathbb{Q})^{W}. \tag{2.5}\]

In fact, more is true. Let \(V:=\pi_{1}(T)\otimes\mathbb{Q}\), which is a \(\mathbb{Q}\)-vector space of dimension \(n=\operatorname{rank}(G)\). The natural action of \(W\) on \(T\) induces a group homomorphism \(W\to\operatorname{Aut}[\pi_{1}(T)]\) that extends by linearity to a group homomorphism

\[\varrho:\,W\to\operatorname{GL}_{\mathbb{Q}}(V). \tag{2.6}\]

The latter is known to be faithful, with image being a reflection subgroup of \(\operatorname{GL}_{\mathbb{Q}}(V)\) (see, e.g., [11, Theorem 5.16]). Furthermore, since \(T\) is a connected topological group, there is a natural isomorphism \(\pi_{1}(T)\cong\pi_{2}(BT)\) induced by the homotopy equivalence \(T\stackrel{{\sim}}{{\to}}\Omega BT\); combining this with the rational Hurewicz isomorphism \(\pi_{2}(BT)\otimes\mathbb{Q}\cong H_{2}(BT,\mathbb{Q})\), we get a natural isomorphism of \(\mathbb{Q}\)-vector spaces

\[V\cong H_{2}(BT,\mathbb{Q}) \tag{2.7}\]

which shows that \(H_{2}(BT,\mathbb{Q})\) carries a reflection representation of \(W\) as a Coxeter group. Dualizing (2.7) gives an isomorphism

\[H^{2}(BT,\mathbb{Q})\cong V^{*} \tag{2.8}\]

which extends to an isomorphism of graded \(\mathbb{Q}\)-algebras

\[H^{*}(BT,\mathbb{Q})\cong\operatorname{Sym}_{\mathbb{Q}}(V^{*})=\mathbb{Q}[V] \tag{2.9}\]

where the linear forms on \(V\) (covectors in \(V^{*}\)) are given cohomological degree \(2\) (in agreement with (2.8)). Borel's Theorem 2.7 thus identifies \(H^{*}(BG,\mathbb{Q})\) with the ring \(\mathbb{Q}[V]^{W}\) of polynomial invariants on the (rational) reflection representation of \(W\). We are now in a position to state our main problem -- the realization problem for algebras of quasi-invariants of Weyl groups -- in an axiomatic way.

### Realization problem

Given a compact connected Lie group \(G\) with maximal torus \(T\subseteq G\) and associated Weyl group \(W=W_{G}(T)\), construct a diagram of spaces \(X_{m}(G,T)\) over the poset \(\mathcal{M}(W)\):

\[BT=X_{0}(G,T)\to\ldots\to X_{m}(G,T)\xrightarrow{\pi_{m,m^{\prime}}}X_{m^{\prime}}(G,T)\to\ldots \tag{2.10}\]

together with natural maps \(p_{m}:X_{m}(G,T)\to BG\), one for each \(m\in\mathcal{M}(W)\), such that

1. Each \(X_{m}(G,T)\) is a \(W\)-space (i.e., a CW complex equipped with an action of \(W\)), and all maps are \(W\)-equivariant.
2. The map \(p_{0}:X_{0}(G,T)\to BG\) coincides with the canonical map \(p:BT\to BG\), and for all \(m^{\prime}\geq m\) the triangles over \(BG\) commute up to homotopy, i.e. \(p_{m^{\prime}}\circ\pi_{m,m^{\prime}}\simeq p_{m}\); thus, (2.10) is a diagram of \(W\)-spaces over \(BG\). Moreover, the maps \(p_{m}\) induce a weak homotopy equivalence

\[\operatorname{hocolim}_{\mathcal{M}(W)}[X_{m}(G,T)]\,\stackrel{{\sim}}{{\to}}\,BG\,.\]

3. For each \(m\in\mathcal{M}(W)\), there is an isomorphism \(H^{*}_{W}(X_{m}(G,T),\,\mathbb{Q})\cong H^{*}(BG,\mathbb{Q})\) of \(W\)-equivariant rational cohomology, the homotopy-theoretic analogue of the isomorphisms \(V_{m}/\!/W\cong V/\!/W\) of Section 2.2.

4. For all \(m^{\prime}\geq m\), the maps \(\pi_{m,m^{\prime}}\) induce injective homomorphisms of graded algebras \(\pi^{*}_{m,m^{\prime}}:H^{*}(X_{m^{\prime}}(G,T),\mathbb{Q})\to H^{*}(X_{m}(G,T),\mathbb{Q})\) on rational cohomology; in particular, their composites give embeddings \(\pi^{*}_{0,m}:H^{*}(X_{m}(G,T),\mathbb{Q})\hookrightarrow H^{*}(BT,\mathbb{Q})\).
5. With the natural identification \(H^{*}(BT,\mathbb{Q})=\mathbb{Q}[V]\) (see (2.9)), the maps \(\pi^{*}_{0,m}:H^{*}(X_{m},\mathbb{Q})\to H^{*}(BT,\mathbb{Q})\) in (QI\({}_{4}\)) induce isomorphisms

\[H^{*}(X_{m},\mathbb{Q})\otimes\mathbb{C}\cong Q_{m}(W)\]

where \(Q_{m}(W)\) are the subalgebras of quasi-invariants in \(\mathbb{C}[V]\).

In what follows, we refer to the above five conditions as (QI\({}_{1}\))-(QI\({}_{5}\)).

_Remark 2.8_.: The first three properties of the spaces \(X_{m}(G,T)\) are homotopy-theoretic analogues of the corresponding geometric properties of the varieties \(V_{m}(W)\) listed in Section 2.2. Properties (QI\({}_{4}\)) and (QI\({}_{5}\)) reflect the fact that the diagram (2.10) is a topological realization of the diagram of algebras (2.3): in particular, the maps \(\pi^{*}_{m,m^{\prime}}\) in (QI\({}_{4}\)) induced by the cohomology functor correspond to the natural inclusions (2.3) of algebras \(Q_{m}(W)\) determined by their multiplicities.

_Remark 2.9_.: The spaces \(X_{m}(G,T)\) will arise naturally as homotopy \(G\)-orbit spaces

\[X_{m}(G,T)=EG\times_{G}F_{m}(G,T)\,,\]

where \(F_{m}(G,T)\) are the homotopy fibres of the maps \(\,p_{m}:X_{m}(G,T)\to BG\,\) (see Theorem 3.9). These homotopy fibres form a diagram of \(G\)-spaces

\[G/T=F_{0}(G,T)\to\ldots\to F_{m}(G,T)\to F_{m^{\prime}}(G,T)\to\ldots\]

that induces the diagram (2.10). We will call \(F_{m}(G,T)\) the _quasi-flag manifolds_ of the group \(G\).

## 3. Spaces of quasi-invariants

In this section, we give a solution of our Realization Problem (see Section 2.4) in the rank one case. Our main observation (see Proposition 3.7 and Theorem 3.9) is that, for \(G=SU(2)\), the diagram of spaces (2.10) satisfying all five axioms (QI\({}_{1}\))-(QI\({}_{5}\)) can be obtained inductively, using the so-called 'fibre-cofibre construction' introduced in homotopy theory by T. Ganea [10].

### Ganea construction

First, we recall some basic definitions from topology.
If \(f:\,X\to Y\) is a map of (well) pointed spaces, its _homotopy fibre_ is defined by \[\operatorname{hofib}_{*}(f):=X\times_{Y}P_{*}(Y)=\{(x,\gamma)\,:\,\gamma(0)= *\,\gamma(1)=f(x)\}\,\] where \(P_{*}(Y):=\operatorname{Map}_{*}(I,Y)=\{\gamma:\,I\to Y\,,\ \gamma(0)=*\}\) is the (based) path space over \(Y\). Any map \(f:\,X\to Y\) can be replaced by a fibration in the sense that it admits a factorization \(X\stackrel{{\sim}}{{\to}}X^{\prime}\ \stackrel{{ p}}{{\twoheadrightarrow}}\ Y\) in \(\operatorname{\mathtt{Top}}_{*}\), where the first arrow is a weak homotopy equivalence and the second is a (Serre) fibration. The homotopy fibre is a homotopy invariant in \(\operatorname{\mathtt{Top}}_{*}\) so that the pullback along a weak equivalence \(X\stackrel{{\sim}}{{\to}}X^{\prime}\) induces a weak equivalence: \(\operatorname{hofib}_{*}(f)\stackrel{{\sim}}{{\to}} \operatorname{hofib}_{*}(p)\,\). On the other hand, for any fibration \(p:X^{\prime}\ \twoheadrightarrow\ Y\), the natural inclusion map \[p^{-1}(*)\stackrel{{\sim}}{{\to}}\operatorname{hofib}_{*}(p)\,\quad x \mapsto(x,*)\] is a (based) homotopy equivalence. Thus, the homotopy fibres of fibrations can be represented in \(\operatorname{Ho}(\operatorname{\mathtt{Top}}_{*})\) by usual (set-theoretic) fibres. Dually, the _homotopy cofibre_ of a map \(f:\,X\to Y\) is defined by \[\operatorname{hocof}_{*}(f):=Y\cup_{X}C_{*}(X)\,,\] where \(C_{*}(X):=(X\times I)/(\{*\}\times I)\cup(X\times\{1\})\) is the reduced cone on \(X\). Any map \(f:\,X\to Y\) can be replaced by a cofibration in the sense that it admits a factorization \(X\stackrel{{ j}}{{\hookrightarrow}}Y^{\prime}\stackrel{{ \sim}}{{\to}}Y\) in \(\operatorname{\mathtt{Top}}_{*}\), where the first arrow is a cofibration (i.e., an injective map) in \(\operatorname{\mathtt{Top}}_{*}\) and the second is (weak) homotopy equivalence. The homotopy cofibre is a homotopy invariant so that the pushout along the homotopy equivalence \(Y^{\prime}\stackrel{{\sim}}{{\to}}Y\) induces an equivalence: \(\operatorname{hocof}_{*}(j)\stackrel{{\sim}}{{\to}} \operatorname{hocof}_{*}(f)\,\). On the other hand, for a cofibration \(j:X\hookrightarrow Y^{\prime}\), the homotopy cofibre \(\,\operatorname{hocol}_{*}(j)\,\) is simply obtained by erecting the cone \(C_{*}(\operatorname{Im}j)\) on the image of \(j\) in \(Y^{\prime}\). The natural map collapsing this cone to the basepoint gives then a natural map \[\operatorname{hocol}_{*}(j)\cong Y^{\prime}\cup C_{*}(\operatorname{Im}j)\, \stackrel{{\sim}}{{\to}}\,Y^{\prime}/X\] which is a (based) homotopy equivalence. Thus, the homotopy cofibres of cofibrations can be represented in \(\operatorname{Ho}(\mathtt{Top}_{*})\) by usual (set-theoretic) cofibres. Formally, \(\operatorname{hofib}_{*}(f)\) and \(\operatorname{hocol}_{*}(f)\) can be defined in \(\operatorname{Ho}(\mathtt{Top}_{*})\) by the following homotopy limit and homotopy colimit: \[\operatorname{hofib}_{*}(f)=\operatorname{holim}\{*\to Y\stackrel{{ f}}{{\leftarrow}}X\}\quad,\quad\operatorname{hocol}_{*}(f)= \operatorname{hocolim}\{*\gets X\stackrel{{ f}}{{\to}}Y\} \tag{3.1}\] The advantage of these formal definitions is that they make sense in any homotopical context: in particular, in an arbitrary pointed model category or \(\infty\)-category. 
Now, the Ganea construction starts with a homotopy fibration sequence with a well-pointed base \[F\,\stackrel{{ j}}{{\to}}\,X\stackrel{{ p}}{{\to}}\,B \tag{3.2}\] and produces another homotopy fibration sequence on the same base: \[F_{1}\,\stackrel{{ j_{1}}}{{\to}}\,X_{1}\stackrel{{ p_{1}}}{{\to}}\,B \tag{3.3}\] The space \(X_{1}\) in (3.3) is defined to be the homotopy cofibre of the fibre inclusion in (3.2): \(X_{1}:=\operatorname{hocol}_{*}(j)\,\). The map \(p_{1}\) -- called the (first) _whisker map_ -- is obtained by extending \(p:X\to B\) to \(X_{1}=X\cup C_{*}(F)\) so that \(C_{*}(F)\) maps to the basepoint of \(B\). The \(F_{1}\) is then defined to be the homotopy fibre of \(p_{1}\,\colon\,\,F_{1}:=\operatorname{hofib}_{*}(p_{1})\). The above construction can be iterated _ad infinitum_; as a result, one gets a tower of fibration sequences over \(B\): \[\begin{CD}F@>{j}>{}>X@>{p}>{}>B\\ @V{}V{\pi_{0}}V@V{}V{}V\\ F_{1}@>{j_{1}}>{}>X_{1}@>{p_{1}}>{}>B\\ @V{}V{\pi_{1}}V@V{}V{}V\\ F_{2}@>{j_{2}}>{}>X_{2}@>{p_{2}}>{}>B\\ \vdots\qquad\qquad\vdots\qquad\qquad\vdots\qquad\end{CD} \tag{3.4}\] where \(X_{m}\) and \(F_{m}\) are defined by \[X_{m}:=\operatorname{hocol}_{*}(j_{m-1})\quad,\quad F_{m}:=\operatorname{ hofib}_{*}(p_{m})\,\quad\forall\,m\geq 1\,. \tag{3.5}\] Note that the horizontal arrows \(p_{m}\) in (3.4) are whisker maps making each row \(\,F_{m}\,\stackrel{{ j_{m}}}{{\to}}\,X_{m}\,\stackrel{{ p_{m}}}{{\to}}\,B\) of the above diagram a homotopy fibration sequence. On the other hand, the vertical arrows \(\pi_{m}\) are canonical maps making each triple \(\,F_{m}\,\stackrel{{ j_{m}}}{{\to}}\,X_{m}\,\stackrel{{ \pi_{m}}}{{\to}}\,X_{m+1}\,\) a homotopy cofibration sequence. The main observation of [1] is that the homotopy fibres in (3.4) can be described explicitly in terms of iterated joins2 of based loop spaces \(\Omega B\). More precisely, we have **Theorem 3.1** (Ganea).: (1) _For all \(m\geq 1\), there are natural homotopy equivalences_ \[F_{m}\,\simeq\,F\ast\,\Omega B\,\ast\,\dots\,\ast\,\Omega B\quad(m\text{-fold join})\] _compatible with the fibre inclusions \(F_{m}\to F_{m+1}\) in (3.4)._ (2) _The whisker maps \(p_{m}:X_{m}\to B\) induce a weak homotopy equivalence_ \[\operatorname{hocolim}\,\{X\xrightarrow{\pi_{0}}X_{1}\xrightarrow{\pi_{1}}X_{ 2}\to\dots\to X_{m}\to\dots\}\xrightarrow{\sim}\,B\] _where the homotopy colimit is taken over the telescope diagram in the middle of (3.4)._ Note that the second claim of Theorem 3.1 follows from the first by Milnor's Lemma (see A.2). **Example 3.2** (LS-categories).: Recall that the _LS-category_ of a topological space \(B\) is defined to be \(\operatorname{cat}(B):=n-1\), where \(n\) is the least cardinality of an open cover \(\{U_{1},\dots,U_{n}\}\) of \(B\) such that each \(U_{i}\) is contractible as a subspace in \(B\). Given a pointed connected space \(B\), one applies the fibre-cofibre construction to the canonical path fibration \(\,\Omega B\to P_{\ast}B\xrightarrow{p}B\,\). The result is the sequence of spaces \[P_{\ast}B\xrightarrow{\pi_{0}}(P_{\ast}B)_{1}\xrightarrow{\pi_{1}}(P_{\ast}B )_{2}\xrightarrow{\pi_{2}}(P_{\ast}B)_{3}\to\dots\] called the _Ganea tower_ of the space \(B\). The main theorem of [10] asserts that if \(B\) is a normal space, its LS category \(\,\operatorname{cat}(B)\leq m\,\) if and only if the \(m\)-th whisker map \(\,p_{m}:(P_{\ast}B)_{m}\to B\) associated to the above tower splits (i.e., admits a section). 
Most applications of Ganea's construction in topology are related to or inspired by this observation (see, e.g., [11]). **Example 3.3** (Milnor bundles).: If \(G\) is a topological group, we can apply the Ganea construction to the universal principal \(G\)-fibration \(\,G\to EG\to BG\,\). In this case, the diagram (3.4) reads where \(\,E_{n}G:=G^{\ast(n+1)}\,\) is the join of \((n+1)\) copies of the group \(G\). The group \(G\) acts freely on \(E_{n}G\) and hence \(B_{n}G\simeq E_{n}G/G\). The induced fibration \(\,\Omega BG\to E_{n}G\to B_{n}G\,\) associated to the Ganea fibration at the \(n\)-th step of the above tower is thus equivalent to Milnor's \(n\)_-universal_ principal \(G\)-bundle \(\,G\to E_{n}G\to B_{n}G\,\). We review the properties of such bundles in Appendix A Note that this example can be viewed as a special case of Example 3.2 if we take \(B=BG\). ### Derived schemes of quasi-invariants The fibre-cofibre construction is essentially formal: it can be performed in an arbitrary (pointed) model category or \(\infty\)-category. To see why this construction is relevant to our problem we will apply it first in a simple algebraic model category: the category \(\mathtt{dAff}_{k,\ast}\) of pointed derived affine schemes over a field \(k\) of characteristic \(0\). As a model for \(\mathtt{dAff}_{k,\ast}\), we take the category \((\mathtt{DGCA}_{k}\downarrow k)^{\mathrm{op}}\) dual to the category of non-negatively graded commutative DG \(k\)-algebras \(A\) equipped with augmentation map \(A\to k\). Extending the standard algebro-geometric notation, we write \(\,\mathrm{Spec}(A)\,\) for the object (affine DG scheme) in \(\mathtt{dAff}_{k}\) corresponding to the DG algebra \(A\) in \(\mathtt{DGCA}_{k}\). Since we assume that \(\operatorname{char}(k)=0\), the category \(\mathtt{DGCA}_{k}\) carries a natural (projective) model structure, where weak equivalences are the quasi-isomorphisms of DG algebras and fibrations are the DG algebra maps which are surjective in positive (homological) degrees (see, e.g., [1, Appendix B]). The category \(\mathtt{dAff}_{k}=\mathtt{DGCA}_{k}^{\mathrm{op}}\) is equipped with the dual (injective) model structure. The homotopy (co)fibres of morphisms in \(\mathtt{dAff}_{k}\) are defined in terms of homotopy (co)limits, using formulas (3.1). Explicitly, given a morphism of pointed affine DG schemes \(f:\operatorname{Spec}(A)\to\operatorname{Spec}(B)\) corresponding to a DG algebra homomorphism \(f^{*}:B\to A\), its homotopy fibre and homotopy cofibre are given by \[\operatorname{hofib}_{*}(f)\cong\operatorname{Spec}\left(A\otimes_{B}^{ \boldsymbol{L}}k\right)\,\quad\operatorname{hocol}_{*}(f)\cong\operatorname{Spec}\left(B\times_{A}^ {\boldsymbol{R}}k\right) \tag{3.6}\] where \(\otimes_{B}^{\boldsymbol{L}}\) and \(\times_{A}^{\boldsymbol{R}}\) denote the derived tensor product (homotopy pushout) and the derived direct product (homotopy pullback) in the model category \(\mathtt{DGCA}_{k}\). We apply the fibre-cofibre construction in the category \(\mathtt{dAff}_{k,*}\) to the canonical (algebro-geometric) quotient map \(p:V\ \twoheadrightarrow\ V/\!/W\) in the situation of the following simple example. **Example 3.4**.: Let \(W=\mathbb{Z}/2\mathbb{Z}\), acting in its one-dimensional reflection representation \(V\). Choosing a basis vector in \(V\), we can identify \(V=\mathbb{C}\) and \(k[V]=k[x]\), with \(W\) acting on \(k[x]\) by the rule \(s(p)(x)=p(-x)\). In this case, \(\mathcal{A}=\{0\}\) and \(m\) is a non-negative integer. 
Condition (1.2) says that \(p(x)\) is a quasi-invariant of multiplicity \(m\) iff \(p(x)-p(-x)\) is divisible by \(x^{2m}\). Hence \(Q_{m}(W)\) is spanned by the monomials \(\{x^{2i}\,:\,i\geq 0\}\) and \(\{x^{2i+1}\,:\,i\geq m\}\), or equivalently \[Q_{m}(W)=k[x^{2}]\oplus x^{2m+1}k[x^{2}]=k[x^{2},x^{2m+1}]\.\] Thus, we take \(V\) to be the affine line acted upon by \(W=\mathbb{Z}/2\mathbb{Z}\) via the reflection at \(0\). Regarding \(V\cong\operatorname{Spec}k[x]\) and \(V/\!/W\cong\operatorname{Spec}k[x^{2}]\) as affine (DG) schemes pointed at \(0\), we can compute the homotopy fibre \(F:=\operatorname{hofib}_{*}(p)\) in \(\mathtt{dAff}_{k,*}\), using formula (3.6): \[F\cong\operatorname{Spec}\left(k[x]\otimes_{k[x^{2}]}^{\boldsymbol{L}}k \right)\cong\operatorname{Spec}\left(k[x]\otimes_{k[x^{2}]}k\right)\cong \operatorname{Spec}(k[x]/x^{2}). \tag{3.7}\] Note that the second isomorphism in (3.7) is due to the fact that \(k[x]\) is a free module (and hence, a flat algebra) over \(k[x^{2}]\). Thus, in \(\mathtt{dAff}_{k,*}\), we have the fibration sequence \[F\xrightarrow{j}V\xrightarrow{p}V/\!/W \tag{3.8}\] where \(F\) is given by (3.7). The following simple observation, which was the starting point of the present paper, provides a motivation for our topological results in the next section. **Proposition 3.5**.: _The fibre-cofibre construction in \(\,\mathrm{dAff}_{k,*}\) applied to the fibration (3.8) produces the tower (2.4) of varieties of quasi-invariants for the reflection representation of \(W=\mathbb{Z}/2\mathbb{Z}\) :_ (3.9) _Thus, for all \(\,m\geq 0\), we have_ \[V_{m}\cong\operatorname{Spec}(Q_{m})\,\quad F_{m}\cong\operatorname{Spec} \left[Q_{m}/(x^{2})\right]\,, \tag{3.10}\] _where \(\,Q_{m}=k[x^{2},x^{2m+1}]\) and the maps \(\pi_{m},\,p_{m}\) and \(j_{m}\) in (3.9) correspond to the natural inclusions \(\,Q_{m+1}\hookrightarrow Q_{m}\), \(\,k[x^{2}]\hookrightarrow Q_{m}\), and the projection \(Q_{m}\ \twoheadrightarrow\ Q_{m}/(x^{2})\), respectively._ Proof.: The proof is an easy induction in \(m\). For \(m=0\), we have already shown in (3.7) that \(F=F_{0}\), with (3.8) corresponding (i.e. dual) to the natural algebra maps \(k[x^{2}]\hookrightarrow k[x]\ \twoheadrightarrow\ k[x]/(x^{2})\). Now, assuming that \(V_{m}\) is given by (3.10) together with \(p_{m}:V_{m}\ \twoheadrightarrow\ V/W\) corresponding to the inclusion \(\,k[x^{2}]\hookrightarrow Q_{m}\), we compute the fibre \(F_{m}\) in the same way as in (3.7), using formula (3.6): \[F_{m}\,:=\,\operatorname{hofib}_{*}(p_{m})\,\cong\,\operatorname{Spec}\left(Q _{m}\otimes_{k[x^{2}]}^{\boldsymbol{L}}k\right)\,\cong\,\operatorname{Spec} \left(Q_{m}\otimes_{k[x^{2}]}k\right)\,\cong\,\operatorname{Spec}\left[Q_{m}/ (x^{2})\right]\] Again, crucial here is the fact that \(Q_{m}\) is a free module (and hence, a flat algebra) over \(k[x]^{W}\), which is a general property of quasi-invariants (see Theorem 2.3). Next, we have \[V_{m+1}:=\operatorname{hocof}_{*}(j_{m})\cong\operatorname{Spec}\left(Q_{m} \times_{Q_{m}/(x^{2})}^{\boldsymbol{R}}k\right)\cong\operatorname{Spec}\left( Q_{m}\times_{Q_{m}/(x^{2})}k\right)\cong\operatorname{Spec}(Q_{m+1}) \tag{3.11}\] The first isomorphism in (3.11) is the result of formula (3.6) for homotopy cofibres in \(\mathtt{dAff}_{k,*}\). 
The second isomorphism is due to the fact that the canonical map \(Q_{m}\ \twoheadrightarrow\ Q_{m}/(x^{2})\) is surjective, and hence a fibration in the standard model structure on \(\mathtt{DGCA}_{k}\) (this implies that \(\operatorname{hocof}_{*}(j_{m})\) coincides with the usual cofibre of \(j_{m}\) in the category of affine \(k\)-schemes). Finally, the last isomorphism in (3.11) is given by the composition of canonical algebra maps \[Q_{m}\times_{Q_{m}/(x^{2})}k\,\hookrightarrow\,Q_{m}\times k\ \twoheadrightarrow\ Q_{m} \tag{3.12}\] It is easy to see that the map (3.12) is injective, and its image is precisely \(Q_{m+1}=k[x^{2},x^{2m+3}]\). This gives an identification \(\,Q_{m}\times_{Q_{m}/(x^{2})}k\cong Q_{m+1}\,\) together with the inclusion \(Q_{m+1}\hookrightarrow Q_{m}\) that defines the morphism of schemes \(\pi_{m}:V_{m}\to V_{m+1}\). _Remark 3.6_.: Proposition 3.5 does not extend directly to higher rank groups: the standard fibre-cofibre construction in \(\mathtt{dAff}_{k,*}\) does _not_ produce the tower of varieties of quasi-invariants, (2.4), for an arbitrary Coxeter group \(W\) (_cf._ Proposition 3.7 below). ### Spaces of quasi-invariants of \(Su(2)\) Let \(G\) be a compact connected Lie group with a fixed maximal torus \(T\) and Weyl group \(W=W_{G}(T)\). Associated to \((G,T)\) there is a natural fibration sequence3 Footnote 3: If we choose a model for the universal \(G\)-bundle \(EG\) (for example, the Milnor model described in Section A) and let \(BG=EG/G\) and \(BT=EG/T\), then (3.13) is represented by a canonical locally trivial fibre bundle \(G/T\to EG/T\ \twoheadrightarrow\ EG/G\) (see, e.g., [10], Chap 4, Sect. 7). \[G/T\xrightarrow{j}BT\xrightarrow{p}BG\, \tag{3.13}\] where \(p\) is the map induced by the inclusion \(T\hookrightarrow G\) and \(\,j\,\) is the classifying map for the principal \(T\)-bundle \(G\to G/T\). **Proposition 3.7**.: _Assume that \(W\) is simply-laced (i.e., of ADE type). Then the fibre-cofibre construction applied to (3.13) produces a tower of fibrations_ (3.14) _where the diagram of spaces_ \[BT\xrightarrow{\pi_{0}}X_{1}(G,T)\xrightarrow{\pi_{1}}X_{2}(G,T)\xrightarrow{\pi _{2}}\ldots\to X_{m}(G,T)\xrightarrow{\pi_{m}}\ldots \tag{3.15}\] _together with maps \(p_{m}:X_{m}(G,T)\to BG\) satisfy the first three properties_ (QI\({}_{1}\)), (QI\({}_{2}\)) _and_ (QI\({}_{3}\)) _of Section 2.4._ Proof.: If \(W\) is simply-laced, all reflection hyperplanes of \(W\) are in the same orbit, and the poset \(\mathcal{M}(W)\) consists only of constant multiplicities which we identify with \(\mathbb{Z}_{+}\). By Ganea's Theorem 3.1, the homotopy fibre \(F_{m}=F_{m}(G,T)\) at stage \(m\) in (3.14) can be represented by the iterated join \[F_{m}\,=\,G/T*\Omega BG*\,\,.\,\,\overset{m}{\text{.}}\,*\Omega BG\simeq G/T\,* \,G*\,.\,\,\overset{m}{\text{.}}\,*G\,=\,G/T\,*\,E_{m-1}G\,\,, \tag{3.16}\] where \(E_{m-1}G\) is Milnor's model for the \((m-1)\)-universal \(G\)-bundle (see Section A). The fibre (3.16) carries a natural left (holonomy) action \(\,\Omega BG\times F_{m}\to F_{m}\,\) that under the identification (3.16), corresponds to the diagonal action of \(G\,\): \[G\times F_{m}\to F_{m}\,\,,\quad g\cdot(t_{0}(g_{0}T)+t_{1}g_{1}+\ldots+t_{m}g _{m})=t_{0}(gg_{0}T)+t_{1}gg_{1}+\ldots+t_{m}gg_{m} \tag{3.17}\] where \(g\), \(g_{0}\), \(g_{1}\),..., \(g_{m}\in G\) and \((t_{0},\ldots,t_{m})\in\Delta^{m}\), see (A.2). 
The space \(X_{m}=X_{m}(G,T)\) can then be represented as the homotopy quotient

\[X_{m}=(F_{m})_{hG}=EG\times_{G}(G/T\,*\,E_{m-1}G) \tag{3.18}\]

and the fibration \(F_{m}\to X_{m}\to BG\) in (3.14) is identified with the Borel fibration

\[F_{m}\to(F_{m})_{hG}\to BG \tag{3.19}\]

Now, the Weyl group \(W=N_{G}(T)/T\) acts on \(G/T\) by \(\,w\cdot(gT)=gn^{-1}T\,\), where \(w=nT\in W\). With identification (3.16), this action naturally induces a \(W\)-action on \(F_{m}=G/T*E_{m-1}G\). The latter commutes with the \(G\)-action (3.17), and hence extends to the space \(X_{m}\) of homotopy \(G\)-orbits in \(F_{m}\). Explicitly, with identification (3.18), the action of \(W\) on \(X_{m}=EG\times_{G}(G/T*E_{m-1}G)\) is given by

\[w\cdot(x,\,t_{0}(g_{0}T)+t_{1}g_{1}+\ldots+t_{m}g_{m})=(x,\,t_{0}(g_{0}n^{-1}T)+t_{1}g_{1}+\ldots+t_{m}g_{m}) \tag{3.20}\]

where \(x\in EG\) and \(w=nT\in W\,\). The inclusions \(F_{m}\hookrightarrow F_{m+1}\) defined by

\[t_{0}(g_{0}T)+t_{1}g_{1}+\ldots+t_{m}g_{m}\ \mapsto\ t_{0}(g_{0}T)+t_{1}g_{1}+\ldots+t_{m}g_{m}+0\,e\]

are obviously \((G\times W)\)-equivariant, hence induce \(W\)-equivariant maps on homotopy \(G\)-quotients: \(\pi_{m}:X_{m}\to X_{m+1}\,\). The whisker maps \(\,p_{m}:X_{m}\to BG\,\) are induced by the trivial maps \(F_{m}\to\operatorname{pt}\) and hence are \(W\)-invariant. Thus, we have established property (QI\({}_{1}\)) for the tower (3.15). Property (QI\({}_{2}\)) follows directly from part (2) of Theorem 3.1. For (QI\({}_{3}\)), it suffices to show that

\[H^{*}_{W}(F_{m},\mathbb{Q})\cong\mathbb{Q} \tag{3.21}\]

Indeed, since the actions of \(G\) and \(W\) on \(F_{m}\) commute, we have

\[(X_{m})_{hW}=EW\times_{W}(EG\times_{G}F_{m})\simeq EG\times_{G}(EW\times_{W}F_{m})=EG\times_{G}(F_{m})_{hW}\]

Whence

\[H^{*}_{W}(X_{m},\,\mathbb{Q})\cong H^{*}_{G}((F_{m})_{hW},\mathbb{Q}) \tag{3.22}\]

On the other hand, if (3.21) holds, the Serre spectral sequence of the Borel fibration

\[(F_{m})_{hW}\to EG\times_{G}(F_{m})_{hW}\to BG\]

degenerates, giving an isomorphism \(\,H^{*}_{G}((F_{m})_{hW},\mathbb{Q})\cong H^{*}(BG,\mathbb{Q})\,\). Combining this last isomorphism with (3.22) yields \(\,H^{*}_{W}(X_{m},\,\mathbb{Q})\cong H^{*}(BG,\mathbb{Q})\,\), as required by (QI\({}_{3}\)). Now, since \(F_{m}\) is connected, (3.21) is equivalent to vanishing of higher cohomology:

\[H^{n}_{W}(F_{m},\,\mathbb{Q})=0\quad\forall\,n>0. \tag{3.23}\]

We prove (3.23) by induction on \(m\). For \(m=0\), we have \(\,F_{0}=G/T\) and \(\,(G/T)_{hW}\simeq(G/T)/W\cong G/N\,\), since the action of \(W\) on \(G/T\) is free. It follows that \(H^{n}_{W}(F_{0},\mathbb{Q})\cong H^{n}(G/N,\,\mathbb{Q})=0\) for all \(n>0\), as it is well known that the space \(G/N\) is rationally acyclic for any compact connected Lie group (see [1, Theorem 20.3]). Now, assume that (3.23) holds for some \(m\geq 0\) and consider \((F_{m+1})_{hW}=(F_{m}\ast G)_{hW}\).
Representing this space by homotopy colimits (see (A.3)) and using the fact that the homotopy colimits commute, we have

\[(F_{m+1})_{hW} \simeq \operatorname{hocolim}_{W}\operatorname{hocolim}\,[\,F_{m}\gets F_{m}\times G\to G\,]\]
\[\simeq \operatorname{hocolim}\operatorname{hocolim}_{W}\,[\,F_{m}\gets F_{m}\times G\to G\,]\]
\[\simeq \operatorname{hocolim}\,[\,(F_{m})_{hW}\leftarrow(F_{m}\times G)_{hW}\rightarrow(G)_{hW}\,]\]
\[\simeq \operatorname{hocolim}\,[\,(F_{m})_{hW}\leftarrow(F_{m})_{hW}\times G\to BW\times G\,]\]

This homotopy decomposition implies that the cohomology groups of \((F_{m+1})_{hW}\) and \((F_{m})_{hW}\) are related by the following Mayer-Vietoris type long exact sequence:

\[H^{n-1}[(F_{m})_{hW}\times G]\,\rightarrow\,H^{n}[(F_{m+1})_{hW}]\,\to\,H^{n}[(F_{m})_{hW}]\oplus H^{n}[BW\times G]\,\rightarrow\,H^{n}[(F_{m})_{hW}\times G]\]

Since \(W\) is finite, its rational cohomology vanishes in positive degrees. Hence, by the Künneth Theorem, we have \(H^{*}(BW\times G,\mathbb{Q})\cong H^{*}(G,\mathbb{Q})\). Furthermore, our induction assumption (3.23) implies that \(H^{*}((F_{m})_{hW}\times G,\mathbb{Q})\cong H^{*}(G,\mathbb{Q})\), and for each \(n\geq 1\), the last map in the above exact sequence is an isomorphism. Thus, for \(n\geq 2\), the above sequence breaks up into short exact sequences

\[0\to H^{n}((F_{m+1})_{hW},\mathbb{Q})\,\rightarrow\,H^{n}(G,\mathbb{Q})\xrightarrow{\sim}H^{n}(G,\mathbb{Q})\to 0\]

which show that \(H^{n}_{W}(F_{m+1},\mathbb{Q})=0\) for all \(n\geq 2\). On the other hand, in dimensions \(0\) and \(1\), the above long exact sequence reads

\[H^{0}((F_{m})_{hW},\mathbb{Q})\oplus H^{0}(G,\mathbb{Q})\,\twoheadrightarrow\,H^{0}(G,\mathbb{Q})\to H^{1}((F_{m+1})_{hW},\mathbb{Q})\,\to H^{1}(G,\mathbb{Q})\xrightarrow{\sim}H^{1}(G,\mathbb{Q})\]

where the first arrow is surjective and the last is an isomorphism. This shows that \(H^{1}_{W}(F_{m+1},\mathbb{Q})\) also vanishes, thus finishing the induction and the proof of (QI\({}_{3}\)).

**Example 3.8**.: Let us describe the cohomology \(H^{*}(X_{1},\mathbb{Q})\) of the first space \(X_{1}=X_{1}(G,T)\) in the diagram (3.15) explicitly. By general properties of the Ganea construction (see Section 3.1), this space fits in the homotopy cofibration sequence

\[G/T\xrightarrow{j}BT\xrightarrow{\pi_{0}}X_{1} \tag{3.24}\]

Since both \(BT\) and \(G/T\) have no cohomology classes in odd dimensions and the natural map \(j^{*}:H^{*}(BT,\mathbb{Q})\to H^{*}(G/T,\mathbb{Q})\) is surjective, the long cohomology sequence associated to (3.24) reduces to the short exact sequence

\[0\to\tilde{H}^{*}(X_{1},\mathbb{Q})\xrightarrow{\pi_{0}^{*}}\tilde{H}^{*}(BT,\mathbb{Q})\xrightarrow{j^{*}}\tilde{H}^{*}(G/T,\mathbb{Q})\to 0 \tag{3.25}\]

where \(\tilde{H}^{*}\) stands for the reduced cohomology. Since \(X_{1}\) is connected, (3.25) shows that the algebra map \(\pi_{0}^{*}:H^{*}(X_{1},\mathbb{Q})\to H^{*}(BT,\mathbb{Q})\) is injective, and with the identification \(H^{*}(BT,\mathbb{Q})\cong\mathbb{Q}[V]\) (as in (2.9)), its image is

\[H^{*}(X_{1},\mathbb{Q})\cong\mathbb{Q}+\langle\mathbb{Q}[V]_{+}^{W}\rangle\,\subset\,\mathbb{Q}[V]\, \tag{3.26}\]

where \(\langle\mathbb{Q}[V]_{+}^{W}\rangle\) is the ideal in \(\mathbb{Q}[V]\) generated by the \(W\)-invariant polynomials of positive degrees.
Formula (3.26) shows that \(X_{1}\) has no odd cohomology; moreover, the map \(p_{1}^{*}:H^{*}(BG,\mathbb{Q})\to H^{*}(X_{1},\mathbb{Q})\) induced by the first whisker map in (3.14) is injective, and \(H^{*}(X_{1},\mathbb{Q})\) is a finite module over \(H^{*}(BG,\mathbb{Q})\cong\mathbb{Q}[V]^{W}\) via \(p_{1}^{*}\). By the Hilbert-Noether Theorem, this implies that \(H^{*}(X_{1},\mathbb{Q})\) is a finitely generated graded \(\mathbb{Q}\)-algebra; however, it is _not_ Cohen-Macaulay (and hence _not_ Gorenstein) when \(\dim_{\mathbb{Q}}(V)\geq 2\). To see this we set \(R:=H^{*}(X_{1},\mathbb{Q}),\ S:=H^{*}(BT,\mathbb{Q})\) and \(S^{W}=H^{*}(BG,\mathbb{Q})\) to simplify the notation. Since \(S\) is a free \(S^{W}\)-module, the long exact sequence obtained by dualizing the short exact sequence \(\,0\to R\to S\to S/R\to 0\,\) over \(S^{W}\) yields

\[\operatorname{Ext}^{i}_{S^{W}}(R,S^{W})\,\cong\,\operatorname{Ext}^{i+1}_{S^{W}}(S/R,S^{W})\,\quad\forall\,i\geq 1\]

Since \(S/R\cong\tilde{H}^{*}(G/T,\mathbb{Q})\) by (3.25), \(\,\dim_{\mathbb{Q}}(S/R)=|W|-1<\infty\,\). Hence \(\operatorname{Ext}^{n}_{S^{W}}(S/R,S^{W})\neq 0\,\) and therefore \(\,\operatorname{Ext}^{n-1}_{S^{W}}(R,S^{W})\neq 0\,\), where \(n:=\dim_{\mathbb{Q}}(V)\). It follows that when \(n>1\,\), \(R\) is not free as a graded module over \(S^{W}\), and hence not Cohen-Macaulay as a graded algebra (see, e.g., [13, Prop. 6.8]).

Example 3.8 shows that, unfortunately, the tower of spaces (3.15) constructed in Proposition 3.7 cannot satisfy all five axioms of our realization problem for an arbitrary compact Lie group. Indeed, if \(\,\operatorname{rk}(G)=n\geq 2\), then (\(\operatorname{QI}_{5}\)) already fails for \(H^{*}(X_{1}(G,T),\mathbb{Q})\), since \(H^{*}(X_{1}(G,T),\mathbb{Q})\) is not a Gorenstein algebra, while \(Q_{1}(W)\) is (see Theorem 2.3). Note, however, that in the rank one case, for \(G=SU(2)\), we still have \(\,H^{*}(X_{1}(G,T),\mathbb{Q})\cong Q_{1}(\mathbb{Z}/2\mathbb{Z})\) by formula (3.26). The next theorem shows that this is not a coincidence.

**Theorem 3.9**.: _Assume that \(G=SU(2)\) and \(W=\mathbb{Z}/2\mathbb{Z}\). Then the diagram of spaces (3.15) together with whisker maps \(p_{m}\) produced by the fibre-cofibre construction satisfies all five properties_ (\(\operatorname{QI}_{1}\))-(\(\operatorname{QI}_{5}\)) _of Section 2.4. In particular, for all \(m\geq 0\), there are isomorphisms of graded commutative algebras_

\[H^{*}(X_{m}(G,T),\,\mathbb{Q})\,\cong\,Q_{m}(W)\, \tag{3.27}\]

_where \(Q_{m}(W)\) is the subring of \(W\)-quasi-invariants of multiplicity \(m\) in \(\mathbb{Q}[V]\). Moreover, the \(X_{m}(G,T)\) are, up to rational homotopy equivalence, the unique topological spaces realizing the algebras \(Q_{m}(W)\)._

Proof.: Properties (\(\operatorname{QI}_{1}\))-(\(\operatorname{QI}_{3}\)) have already been established in Proposition 3.7; we only need to check (\(\operatorname{QI}_{4}\)) and (\(\operatorname{QI}_{5}\)). As a topological space, \(SU(2)\) is homeomorphic to \(\mathbb{S}^{3}\) and \(G/T=\mathbb{CP}^{1}\cong\mathbb{S}^{2}\). Hence, applying a well-known formula for the join of spheres, we can identify the fibre (3.16):

\[F_{m}=G/T\,*\,G^{*\,m}\,\cong\,\mathbb{S}^{2}\,*\,(\mathbb{S}^{3})^{*m}\,\cong\,\mathbb{S}^{4m+2}. \tag{3.28}\]

Thus, for \(G=SU(2)\), (3.19) is equivalent to the sphere fibration: \(\,\mathbb{S}^{4m+2}\to X_{m}\to B\mathbb{S}^{3}\). We will look at the Serre spectral sequence of this fibration and apply the Leray-Hirsch Theorem.
Since both the basespace and the fibre of (3.19) have no cohomology in odd dimensions, the Serre spectral sequence collapses, giving an isomorphism of graded vector spaces (see, e.g., [10, Lemma III.4.5(1)]) \[H^{*}(X_{m},\mathbb{Q})\cong H^{*}(BG,\mathbb{Q})\otimes H^{*}(F_{m},\mathbb{Q})\] Then, the Leray-Hirsch Theorem (see, e.g., [10, Theorem III.4.2]) implies that \(H^{*}(X_{m},\mathbb{Q})\) is a free graded module over the algebra \(H^{*}(BG,\mathbb{Q})=H^{*}(BSU(2),\,\mathbb{Q})\), which is the rational polynomial algebra \(\mathbb{Q}[c_{2}]\) generated by the second Chern class \(c_{2}\in H^{4}(BSU(2),\mathbb{Q})\,\). This graded module has rank two, with \(H^{*}(BG,\mathbb{Q})\) identified with a direct summand in \(H^{*}(X_{m},\mathbb{Q})\) under the whisker map \(p_{m}^{*}:H^{*}(BG,\mathbb{Q})\hookrightarrow H^{*}(X_{m},\mathbb{Q})\). The complement of \(H^{*}(BG,\mathbb{Q})\) in \(H^{*}(X_{m},\mathbb{Q})\) is generated by a cohomology class \(\xi\) of dimension \(4m+2\) whose image under the projection \(j_{m}^{*}:H^{*}(X_{m},\mathbb{Q})\to H^{*}(F_{m},\mathbb{Q})\cong H^{*}( \mathbb{S}^{4m+2},\mathbb{Q})\) is the fundamental cohomology class of \(\mathbb{S}^{4m+2}\). Thus, we have \[H^{*}(X_{m},\mathbb{Q})\cong\mathbb{Q}[c_{2}]\oplus\mathbb{Q}[c_{2}]\xi \tag{3.29}\] where \(|c_{2}|=4\) and \(|\xi|=4m+2\,\). Next, we look at the homotopy cofibration sequence in (3.14) \[F_{m}\xrightarrow{j_{m}}X_{m}\xrightarrow{\pi_{m}}X_{m+1} \tag{3.30}\] arising from the Ganea construction. This gives a long exact sequence on (reduced) cohomology: \[\ldots\,\to\,\tilde{H}^{n-1}(F_{m},\mathbb{Q})\,\to\,\tilde{H}^{n}(X_{m+1}, \mathbb{Q})\,\xrightarrow{\pi_{m}^{*}}\,\tilde{H}^{n}(X_{m},\mathbb{Q})\, \xrightarrow{j_{m}^{*}}\,\tilde{H}^{n}(F_{m},\mathbb{Q})\,\to\,\ldots \tag{3.31}\] Since neither \(F_{m}\) nor \(X_{m}\) (by (3.29)) have odd cohomology, we see immediately from (3.31) that all algebra maps \(\pi_{m}^{*}\) must be injective, i.e. property (QI\({}_{4}\)) holds for (3.15). For each \(m\geq 0\), the composition of these maps then gives an embedding \[\pi_{0}^{*}\,\pi_{1}^{*}\,\ldots\,\pi_{m-1}^{*}:\ H^{*}(X_{m},\mathbb{Q}) \hookrightarrow H^{*}(X_{m-1},\mathbb{Q})\hookrightarrow\ldots\hookrightarrow H ^{*}(BT,\mathbb{Q}) \tag{3.32}\] If we identify \(H^{*}(BT,\mathbb{Q})=\mathbb{Q}[x]\) by choosing \(x\in H^{2}(BT,\mathbb{Q})=H^{2}(B\mathbb{S}^{1},\mathbb{Q})\) to be the universal Euler class, which is the image of the canonical generator of \(H^{2}(B\mathbb{S}^{1},\mathbb{Z})=H^{2}(K(\mathbb{Z},2),\mathbb{Z})\), then the Chern class \(c_{2}\in H^{4}(BG,\mathbb{Q})\) maps by (3.32) to \(x^{2}\in H^{*}(BT,\mathbb{Q})\). Then, for degree reasons, the generator \(\xi\in H^{4m+2}(X_{m},\mathbb{Q})\) in (3.29) should map to (a scalar multiple of) \(x^{2m+1}\in\mathbb{Q}[x]\). Thus the algebra homomorphism (3.32) identifies \(H^{*}(X_{m},\mathbb{Q})\cong\mathbb{Q}[x^{2},x^{2m+1}]\), which is precisely the subring \(Q_{m}\) of \(W\)-quasi-invariants in \(H^{*}(BT,\mathbb{Q})=\mathbb{Q}[x]\). This gives property (QI\({}_{5}\)) and completes the proof of the first part of the theorem. The last claim of the theorem follows from Sullivan's formality theorem [11]. Indeed, the algebras \(Q_{m}(W)\) have the presentation \(\mathbb{Q}[\xi,\eta]/(\xi^{2}-\eta^{2m+1})\), where \(|\eta|=4\) and \(|\xi|=4m+2\) (see Example 3.4). Hence, by [11, Remark (v), p. 317], they are _intrinsically_ formal. This means that, for each \(m\geq 0\), there is only one rational homotopy type that realizes \(Q_{m}\,\). 
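The identifications made in the above proof are easy to confirm symbolically. The following SymPy sketch (an illustration only; the value \(m=3\) is arbitrary) verifies that the substitution \(\eta\mapsto x^{2}\), \(\xi\mapsto x^{2m+1}\) kills the relation \(\xi^{2}-\eta^{2m+1}\), and that the Poincaré series of \(H^{*}(X_{m},\mathbb{Q})\) read off from \(Q_{m}=\mathbb{Q}[x^{2},x^{2m+1}]\) (with \(x\) placed in cohomological degree \(2\)) agrees with \((1+t^{4m+2})/(1-t^{4})\), the series of the rank-two free \(\mathbb{Q}[c_{2}]\)-module in (3.29).

```python
import sympy as sp

m = 3  # arbitrary multiplicity used for the check
xi, eta, x, t = sp.symbols('xi eta x t')

# The relation xi^2 - eta^(2m+1) of the presentation Q_m = Q[xi, eta]/(xi^2 - eta^(2m+1))
# is killed by eta -> x^2, xi -> x^(2m+1), the generators of Q[x^2, x^(2m+1)].
rel = xi**2 - eta**(2*m + 1)
assert rel.subs({eta: x**2, xi: x**(2*m + 1)}).expand() == 0

# Poincare series of H^*(X_m, Q): a free Q[c_2]-module on generators 1 and xi,
# with |c_2| = 4 and |xi| = 4m + 2, as in (3.29).
N = 40
lhs = sp.series((1 + t**(4*m + 2)) / (1 - t**4), t, 0, N).removeO()

# The same series computed from the monomial basis {x^(2a), x^(2a + 2m + 1)} of
# Q_m = Q[x^2, x^(2m+1)], where x sits in cohomological degree 2.
degrees = {4*a + b*(4*m + 2) for a in range(N) for b in (0, 1) if 4*a + b*(4*m + 2) < N}
rhs = sum(t**d for d in sorted(degrees))
assert sp.expand(lhs - rhs) == 0
```

The same check with \(m=0\) recovers the Poincaré series \((1+t^{2})/(1-t^{4})=1/(1-t^{2})\) of \(BT\), consistent with \(X_{0}(G,T)=BT\).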
From now on, we will assume that \(G=SU(2)\) and \(T=U(1)\) embedded in \(SU(2)\) in the standard way as a maximal torus. **Definition 3.10**.: We call the \(G\)-space \(\,F_{m}(G,T):=G/T\,*\,E_{m-1}G\,\) the \(m\)_-quasi-flag manifold_ and the associated homotopy quotient \[X_{m}(G,T):=F_{m}(G,T)_{hG}=EG\times_{G}(G/T\,*\,E_{m-1}G)\] the _space of \(m\)-quasi-invariants_ for \(G=SU(2)\). These spaces fit in the Borel fibration sequence \[F_{m}(G,T)\,\xrightarrow{j_{m}}X_{m}(G,T)\xrightarrow{p_{m}}BG \tag{3.33}\] that generalizes the fundamental sequence (3.13). _Remark 3.11_.: By definition, \(\,H^{*}(X_{m}(G,T),\,\mathbb{Q})=H^{*}_{G}(F_{m}(G,T),\,\mathbb{Q})\,\) for all \(m\geq 0\). With this identification, the algebra homomorphisms \(\,H^{*}(X_{m},\mathbb{Q})\to H^{*}(BT,\mathbb{Q})\,\) constructed in Theorem 3.9 (see (3.32)) are induced (on \(G\)-equivariant cohomology) by the natural inclusion maps \[i_{0}:\;G/T\hookrightarrow F_{m}(G,T)\,\quad gT\mapsto 1\cdot(gT)+0\cdot x\, \tag{3.34}\] where \(x\in E_{m-1}G\,\). Note that the maps (3.34) are null-homotopic in the category Top of ordinary spaces, the null homotopy being \(\,i_{t}:\,gT\mapsto(1-t)\cdot(gT)+t\cdot x\,\); however, they are _not_ null-homotopic in the category of \(G\)-spaces and \(G\)-equivariant maps. In fact, the proof of Theorem 3.9 shows that the maps induced by (3.34) on \(G\)-equivariant cohomology are injective and hence nontrivial. ### \(T\)-equivariant cohomology Our next goal is to compute the \(T\)-equivariant cohomology of the \(G\)-spaces \(F_{m}(G,T)\) by restricting the \(G\)-action to the maximal torus \(T\subset G\). The computation is based on the following simple observations. **Lemma 3.12**.: _For all \(m\geq 0\), there is a natural \(T\)-equivariant homeomorphism_ \[F_{m}(G,T)\,\cong\,\Sigma\,E_{2m}(T)\, \tag{3.35}\] _where \(\Sigma\) stands for the unreduced suspension in_ Top_._ Proof.: First, note that \(G\) is \(T\)-equivariantly homeomorphic to the (unreduced) join of two copies of \(T\): the required homeomorphism \[T*T\,\cong\,G \tag{3.36}\] can be explicitly written as \(\,t\lambda+(1-t)\mu\,\mapsto\,t^{1/2}\,\lambda+(1-t)^{1/2}\,\mu j\,\), where \(G=SU(2)\) is identified with the group of unit quaternions in \(\mathbb{H}=\mathbb{C}\oplus\mathbb{C}j\) and \(T=U(1)\) with unit complex numbers. Similarly, we can define a \(T\)-equivariant homeomorphism \[(G/T)^{T}*T\,\cong\,G/T \tag{3.37}\] where \((G/T)^{T}\) denotes the set of \(T\)-fixed points in \(G/T\). Combining (3.36) and (3.37) with natural associativity isomorphisms for joins, we get \[F_{m}(G,T)=(G/T)*G^{\,*\,m}\,\cong\,(G/T)^{T}*T^{\,*\,(2m+1)}=\mathbb{S}^{0}* E_{2m}(T) \tag{3.38}\] which is equivalent to formula (3.35). **Lemma 3.13**.: _For all \(n\geq 0\), there are natural algebra isomorphisms_ \[H^{*}_{T}(\Sigma E_{n}(T)\,,\,\mathbb{Q})\,\cong\,\mathbb{Q}[x]\,\times_{ \mathbb{Q}[x]/(x^{n+1})}\,\mathbb{Q}[x]\,. \tag{3.39}\] Proof.: We compute \[\begin{array}{rcl}[\Sigma E_{n}(T)]_{hT}&\simeq&[\mathrm{hocolim}(\,\mathrm{ pt}\gets E_{n}(T)\to\mathrm{pt})]_{hT}\\ &\simeq&\mathrm{hocolim}(BT\gets E_{n}(T)_{hT}\to BT)\\ &\simeq&\mathrm{hocolim}(BT\gets B_{n}(T)\to BT)\end{array} \tag{3.40}\] where the last equivalence follows from the fact that \(E_{n}(T)\) is an \(n\)-universal \(T\)-bundle, so that the \(T\)-action on \(E_{n}(T)\) is free and hence \(E_{n}(T)_{hT}\simeq E_{n}(T)/T=B_{n}(T)\) (see Section A). 
To complete the proof it remains to note that \(\,BT\simeq\mathbb{C}\mathbb{P}^{\infty}\) and \(\,B_{n}(T)\cong\mathbb{C}\mathbb{P}^{n}\) for \(T=U(1)\), with the natural map \(B_{n}T\to BT\) represented by the inclusion \(\mathbb{C}\mathbb{P}^{n}\hookrightarrow\mathbb{C}\mathbb{P}^{\infty}\) (see, e.g., [20, Example 9.2.3]). Hence, (3.40) shows that \(\,[\Sigma\,E_{n}(T)]_{hT}\simeq\mathbb{CP}^{\infty}\bigvee_{\mathbb{CP}^{n}}\mathbb{CP}^{\infty}\), which, by the Mayer-Vietoris sequence, yields the isomorphism (3.39).

As a consequence of Lemma 3.12 and Lemma 3.13, we get

**Proposition 3.14**.: _For all multiplicities \(m\geq 0\), there are natural algebra isomorphisms_

\[H^{*}_{T}(F_{m}(G,T),\,\mathbb{Q})\,\cong\,\mathbb{Q}[x]\,\times_{\mathbb{Q}[x]/(x^{2m+1})}\,\mathbb{Q}[x]\,, \tag{3.41}\]

_where \(x\in H^{2}(BT,\mathbb{Q})\) is the universal \((\)rational\()\) Euler class._

_Remark 3.15_.: For \(m=0\), formula (3.41) is well known: it follows, for example, from a general combinatorial description of \(T\)-equivariant cohomology of equivariantly formal spaces in terms of moment graphs (see [6]). In our subsequent paper, we will generalize the main localization theorem of [6] to moment graphs with multiplicities, and as an application, extend the result of Proposition 3.14 to quasi-flag manifolds for an arbitrary compact connected Lie group.

Next, we recall the modules of \(\mathbb{C}W\)-valued quasi-invariants, \(\mathbf{Q}_{k}(W)\), introduced in [1]. In [1, Section 3.2], these modules are considered only for integral multiplicities \(k\in\mathbb{Z}_{+}\); however, their definition makes sense -- in the Coxeter case -- for all \(k\in\frac{1}{2}\,\mathbb{Z}_{+}\) (_cf._[1, (3.8)]). We provide a natural topological interpretation of these modules.

**Corollary 3.16**.: _For all \(n\geq 0\), there are natural isomorphisms of \(\mathbb{Q}[x]\rtimes W\)-modules_

\[H^{*}_{T}(\Sigma E_{n}(T)\,,\,\mathbb{C})\,\cong\,\mathbf{Q}_{\frac{n+1}{2}}(W)\,. \tag{3.42}\]

_In particular, \(\,H^{*}_{T}(F_{m}(G,T),\,\mathbb{C})\,\cong\,\mathbf{Q}_{m+\frac{1}{2}}(W)\,\) for all \(\,m\geq 0\)._

Proof.: Under the isomorphism (3.39), the geometric action of \(W=\mathbb{Z}/2\mathbb{Z}\) on \(H^{*}_{T}(\Sigma E_{n}(T)\,,\,\mathbb{Q})\) corresponds to the action \((p,\,q)\mapsto(s(q),\,s(p))\) on the fiber product. Relative to this action, we can then define the \(W\)-equivariant map

\[f:\,\mathbb{Q}[x]\,\times_{\mathbb{Q}[x]/(x^{n+1})}\mathbb{Q}[x]\,\to\,\mathbb{Q}[x]\otimes\mathbb{Q}W\,\quad(p,\,q)\mapsto\frac{1}{2}(p+qs)\]

This map is obviously injective, and it is easy to see that its image is \(\,\mathbb{Q}[x]e_{0}+\mathbb{Q}[x]\,x^{n+1}e_{1}\,\), where \(e_{0}=(1+s)/2\) and \(e_{1}=(1-s)/2\) are the idempotents in \(\mathbb{Q}W\) corresponding to the trivial and sign representations of \(W\). Example 3.9 of [1] shows that \(\operatorname{Im}(f)\) is precisely \(\mathbf{Q}_{\frac{n+1}{2}}(W)\); thus, combining \(f\) with the isomorphism of Lemma 3.13 gives the required isomorphism (3.42). The last statement then follows from Proposition 3.14.

_Remark 3.17_.: Recall that, for any compact connected Lie group \(G\), there is a natural isomorphism

\[H^{*}_{G}(X,\mathbb{Q})\,\cong\,H^{*}_{T}(X,\mathbb{Q})^{W} \tag{3.43}\]

that extends the result of Borel's Theorem 2.7 to an arbitrary \(G\)-space \(X\) (see, e.g., [10, Chap III, Prop. 1]).
For \(X=F_{m}(G,T)\), it follows from Corollary 3.16 that \[H^{*}_{T}(F_{m}(G,T),\,\mathbb{C})^{W}\,\cong\,e_{0}\mathbf{Q}_{m+\frac{1}{2} }(W)\,\cong\,Q_{m}(W)\,.\] Thus the isomorphism (3.27) of Theorem 3.9 can be deduced from (3.41) by (3.43). ### Divided difference operators As an application of Theorem 3.9, we give a topological construction of generalized divided difference operators associated with quasi-invariants. Recall that the classical divided difference operators \(\,\Delta_{\alpha}:\mathbb{Q}[V]\to\mathbb{Q}[V]\,\) are attached to reflections \(s_{\alpha}\in W\) of a Coxeter group \(W\) by the rule (_cf._[13, 13]): \[(1-s_{\alpha})p\,=\,\Delta_{\alpha}(p)\cdot\alpha_{H} \tag{3.44}\] where \(\alpha_{H}\subset V^{*}\) is a linear form vanishing on the reflection hyperplane \(H=H_{\alpha}\). Note that (3.44) defines \(\Delta_{\alpha}\) uniquely up to a nonzero constant factor. The definition of quasi-invariants of Coxeter groups suggests the following natural generalization of (3.44): \[(1-s_{\alpha})p\,=\,\Delta_{\alpha}^{(m_{\alpha})}(p)\cdot\alpha_{H}^{2m_{ \alpha}+1} \tag{3.45}\] To be precise, given a \(W\)-invariant multiplicity function \(m:\mathcal{A}\to\mathbb{Z}_{+},\ \alpha\mapsto m_{\alpha}\), formulas (3.45) define unique (up to nonzero constants) linear maps \[\Delta_{\alpha}^{(m_{\alpha})}:Q_{m}(W)\to Q_{0}(W) \tag{3.46}\] one for each reflection \(s_{\alpha}\in W\). Note that \(Q_{0}(W)=\mathbb{Q}[V]\), and for \(m=0\), the maps (3.46) coincide with the classical divided difference operators: \(\,\Delta_{\alpha}^{(0)}=\Delta_{\alpha}\). **Definition 3.18**.: We call (3.46) the _divided difference operators of \(W\) of multiplicity \(m\)_. When \(W\) has rank one, i.e. \(W\) is generated by a single reflection \(s\), the corresponding map \(\Delta_{s}^{(m)}\) takes values in \(\mathbb{Q}[V]^{W}\) thus defining a linear operator on \(W\)-quasi-invariants: \[\Delta_{s}^{(m)}:\,Q_{m}(W)\to Q_{m}(W). \tag{3.47}\] The operator (3.47) has a natural topological interpretation in terms of our spaces of quasi-invariants. The proof of Theorem 3.9 shows that the basic fibration (3.33) is equivalent to a sphere fibration with fibre \(F_{m}\simeq\mathbb{S}^{4m+2}\). Hence, associated to (3.33) there is a Gysin long exact sequence of the form (see, e.g., [16, Example II.5.C]): \[\,\ldots\,\to H^{n}(BG,\mathbb{Q})\xrightarrow{p_{m}^{*}}H^{n}(X_{m}, \mathbb{Q})\xrightarrow{(p_{m})_{*}}H^{n-4m-2}(BG,\mathbb{Q})\to H^{n+1}(BG, \mathbb{Q})\to\,\ldots \tag{3.48}\] where \(p_{m}^{*}\) is the natural pullback map induced on cohomology by the \(m\)-th whisker map \(p_{m}:X_{m}\to BG\) and \((p_{m})_{*}\) is a 'wrong way' pushforward map called the Gysin homomorphism. Combining these last two maps, we get the graded linear endomorphism on \(H^{*}(X_{m},\mathbb{Q})\) of degree \(-(4m+2)\) : \[p_{m}^{*}\circ(p_{m})_{*}:\ H^{*}(X_{m},\mathbb{Q})\,\to\,H^{*}(X_{m},\, \mathbb{Q}) \tag{3.49}\] The next proposition generalizes a well-known formula for the classical divided difference operators \(\Delta_{\alpha}\) (proven, for example, in [1]). 
**Proposition 3.19**.: _Under the isomorphism of Theorem 3.9, the operator (3.49) coincides with the divided difference operator (3.47) of multiplicity \(m\): i.e.,_ \[\Delta_{s}^{(m)}=p_{m}^{*}\circ(p_{m})_{*} \tag{3.50}\] Proof.: Since the algebra homomorphism \(p_{m}^{*}:H^{*}(BG,\mathbb{Q})\to H^{*}(X_{m},\mathbb{Q})\) is injective (for all \(m\)), the Gysin sequence (3.48) breaks up into short exact sequences \[0\to H^{*}(BG,\mathbb{Q})\xrightarrow{p_{m}^{*}}H^{*}(X_{m},\mathbb{Q}) \xrightarrow{(p_{m})_{*}}H^{*-4m-2}(BG,\mathbb{Q})\to 0 \tag{3.51}\] Now, if we identify \(\,H^{*}(BG,\mathbb{Q})=\mathbb{Q}[c_{2}]\,\) and \(\,H^{*}(X_{m},\mathbb{Q})=\mathbb{Q}[x^{2},x^{2m+1}]\) as in (the proof of) Theorem 3.9, the map \(p_{m}^{*}\) takes \(c_{2}\) to \(x^{2}\) and hence \(c_{2}^{k}\) to \(x^{2k}\) for all \(k\geq 0\). By exactness of (3.51), we then conclude that \((p_{m})_{*}(x^{2k})=0\), while \((p_{m})_{*}(x^{2m+1})=\kappa_{m}\), where \(\kappa_{m}\in\mathbb{Q}^{\times}\) is a nonzero constant. Hence, \(p_{m}^{*}(p_{m})_{*}(x^{2k})=0\) for all \(k\geq 0\); on the other hand, by projection formula, \[p_{m}^{*}(p_{m})_{*}(x^{2k+2m+1}) = p_{m}^{*}(p_{m})_{*}(x^{2k}\cdot x^{2m+1})\] \[= p_{m}^{*}(p_{m})_{*}(p_{m}^{*}(c_{2}^{k})\cdot x^{2m+1})\] \[= p_{m}^{*}(c_{2}^{k})\cdot(p_{m})_{*}(x^{2m+1})\] \[= \kappa_{m}\,x^{2k}\] Thus, up to a nonzero constant factor, we have \[p_{m}^{*}(p_{m})_{*}(x^{N})=\left\{\begin{array}{ll}0\;,&\mbox{if}\;\;N=2k\\ x^{2k}\;,&\mbox{if}\;\;N=2k+2m+1\end{array}\right.\] which agrees with the action of \(\Delta_{s}^{(m)}=\frac{1}{x^{2m+1}}(1-s)\) on \(Q_{m}(W)=\mathbb{Q}[x^{2},x^{2m+1}]\). ## 4. 'Fake' spaces of quasi-invariants By Theorem 3.9, the spaces \(X_{m}(G,T)\) provide topological realizations for the algebras \(Q_{m}(W)\) that are unique up to _rational_ equivalence. This raises the question whether the \(X_{m}(G,T)\)'s are actually unique up to homotopy equivalence. In this section, we answer the above question in the negative by constructing a natural class of counterexamples related to finite loop spaces. These remarkable loop spaces -- sometimes referred to as _fake Lie groups_ -- were originally constructed by D. L. Rector [11] as examples of nonstandard ('exotic') deloopings of \(\mathbb{S}^{3}\). We will show that the rational cohomology rings of the spaces of quasi-invariants associated to Rector's spaces are isomorphic to the 'genuine' spaces of quasi-invariants \(X_{m}(G,T)\); however, the spaces themselves are _not_ homotopy equivalent (in fact, as we will see in Section 5, they can be distinguished \(K\)-theoretically). Thus, we get many different topological realizations of \(Q_{m}(W)\), but among these only the 'genuine' spaces of quasi-invariants \(X_{m}(G,T)\) satisfy all properties (\(\mathrm{QI}_{1}\))-(\(\mathrm{QI}_{5}\)). ### Finite loop spaces We recall the definition of a finite loop space which is a natural homotopy-theoretic generalization of a compact Lie group. An exposition of classical results as well as many interesting examples of finite loop spaces can be found in the monograph [10]; for more recent developments, we refer to the survey papers [14], [15], and [16]. **Definition 4.1**.: A _finite loop space_ is a pointed connected space \(B\) such that \(\Omega B\) is homotopy equivalent to a finite CW-complex. 
It is convenient to represent a finite loop space as a triple \((X,B,e)\), where \(X\) is a finite CW-complex, \(B\) is a pointed connected space, and \(e:X\stackrel{{\sim}}{{\to}}\Omega B\) is a homotopy equivalence. A prototypical example is \((G,BG,e)\), where \(G\) is a compact Lie group, \(BG\) its classifying space, and \(e:G\stackrel{{\sim}}{{\to}}\Omega BG\) is a canonical equivalence. In general, finite loop spaces have many properties in common with compact Lie groups; however, the class of such spaces is much larger. In fact, if \(G\) is a compact connected non-abelian Lie group, there exist uncountably many homotopically distinct spaces \(B\) such that \(\Omega B\simeq G\); thus the underlying topological space of \(G\) carries uncountably many finite loop structures (see [17]). In the case \(G=SU(2)\), this striking phenomenon was originally discovered by Rector [11] (see Theorem 4.2 below). ### Fake Lie groups of type \(Su(2)\) We will work with localizations of topological spaces in the sense of D. Sullivan. A modern exposition of this classical construction can be found in [12]. Given a space \(X\) and a prime number \(p\), we denote the localization of \(X\) at \(p\) by \(X_{(p)}\). Recall (_cf._[12, 8.5.1]) that two (nilpotent, finite type) spaces \(X\) and \(Y\) are said to be _in the same genus_ if \(X_{(p)}\simeq Y_{(p)}\) for every prime \(p\). We are interested in finite loop spaces \(B\) (see Definition 4.1) that are in the same genus as \(BG\) for some compact connected Lie group \(G\). Such spaces (called fake Lie groups) have been studied extensively in the literature (see, e.g., [20]), since their original discovery in [13]. This last paper gave a complete homotopy classification of spaces in the genus of \(BG\) for \(G=SU(2)\), and proposed a simple criterion to distinguish the genuine \(BSU(2)\) among these spaces: more precisely, **Theorem 4.2** (Rector).: _Let \(G=SU(2)\), and let \(B\) be a space in the genus of \(BG\). Then, for each prime \(p\), there is a homotopy invariant \(\,(B/p)\in\{\pm 1\}\,\) called the_ Rector invariant _of \(B\) at \(p\), such that_ \((1)\) _The set \(\{(B/p)\}\), where \(p\) runs over all primes, is a complete set of invariants of \(B\) in the genus of \(BG\)._ \((2)\) _Every combination of values of \((B/p)\) can occur for some \(B\). In particular, the genus of \(BG\) consists of uncountably many distinct homotopy types._ \((3)\) _The Rector invariant of \(B=BG\) equals \(1\) at all primes \(p\)._ \((4)\) _The space \(B\) admits a maximal torus4 if and only if \(B\) is homotopy equivalent to \(BG\)._ Footnote 4: We say that a finite loop space \(B\) admits a maximal torus if there is a map \(p:BT_{n}\to B\) from the classifying space of a finite-dimensional torus with homotopy fibre being a finite CW-complex (see [13]). _Remark 4.3_.: Each space \(B\) in the genus of \(BSU(2)\) defines a loop structure on \(\mathbb{S}^{3}\), i.e. \(\Omega B\simeq\mathbb{S}^{3}\). Conversely, a uniqueness theorem of Dwyer, Miller and Wilkerson [12] implies that every loop structure on \(\mathbb{S}^{3}\) belongs to the genus of \(BSU(2)\). Thus, Theorem 4.2 combined with results of [12] provides a complete classification of finite loop spaces of type \(SU(2)\). _Remark 4.4_.: It was a long-standing conjecture in homotopy theory (motivated in part by Theorem 4.2(4), _cf._[14]) that a finite loop space with a maximal torus is homotopy equivalent to the classifying space of a compact Lie group. 
This conjecture was eventually proved by Anderson and Grodal using the Classification Theorem of \(p\)-compact groups (see [1]). Thus, the existence of maximal tori provides a purely homotopy-theoretic characterization of compact Lie groups among finite loop spaces. Even though the spaces \(B\not\simeq BG\) do not admit maximal tori, this does not rule out the possibility that there could exist interesting maps \(f:BT\to B\) whose homotopy fibres are _not_ finite CW complexes. In his thesis (see [21]), D. Yau refined Rector's classification by describing the spaces \(B\) in the genus of \(BSU(2)\) that can occur as targets of essential (i.e., non-nullhomotopic) maps from \(BT\). Such spaces admit a beautiful arithmetic characterization: **Theorem 4.5** (Yau).: _Let \(G=SU(2)\), and let \(B\) be a space in the genus of \(BG\). Then_ \((1)\)_\(B\) admits an essential map \(f:BT\to B\) if and only if there is an integer \(k\neq 0\) such that \((B/p)=(k/p)\) for all but finitely many primes \(p\), where \((k/p)\) denotes the Legendre symbol5 of \(k\)._ Footnote 5: Recall that, for a prime \(p\), the _Legendre symbol_\((k/p)\) of an integer \(k\) is defined whenever \(\,p\nmid k\,\): for \(p\) odd, we have \((k/p)=1\) (resp., \(-1\)) if \(k\) is a quadratic residue (resp., nonresidue) mod \(p\), while for \(p=2,\ (k/2)=1\) (resp., \(-1\)) if \(k\) is quadratic residue (resp. nonresidue) mod \(8\). \((2)\) _If \(B\) satisfies condition \((1)\), then there exists a unique \((\)up to homotopy\()\) map \(p_{B}:BT\to B\) such that every essential map \(\,f:BT\to B\,\) is homotopic to \(g\circ p_{B}\) for some self-map \(g\) of \(B\)._ \((3)\) _For \(B=BG\), the map \(p_{BG}:BT\to BG\) is induced by the maximal torus inclusion._ ### 'Fake' spaces of quasi-invariants Let \(B\) be a space in the genus of \(BG\) (for \(G=SU(2)\)) that admits an essential map from \(BT\). Theorem 4.5 shows that, for such a space, there is a natural generalization of the maximal torus: namely, the'maximal' essential map \(p_{B}:BT\to B\). We let \(F(\Omega B,T)\) denote the homotopy fibre of this map and apply the Ganea construction to the associated fibration sequence: (4.1) As a result, we construct a tower of spaces \(X_{m}(\Omega B,T)\) which we will refer to as the _'fake' spaces of quasi-invariants_ associated to the Rector space \(B\). Note, if \(B=BG\), then \(\,\Omega B\simeq G\,\), and by Theorem 4.5(3), the map \(\,p_{B}:BT\to BG\,\) is the maximal torus inclusion; hence, in this case, \(\,X_{m}(\Omega B,T)\) are equivalent to the 'genuine' spaces \(X_{m}(G,T)\) of quasi-invariants (see Definition 3.10). To compute the cohomology of \(X_{m}(\Omega B,T)\) we recall (_cf._[11]) that any space \(B\) in the genus of \(BG\) can be represented as a (generalized) homotopy pullback: \[B=\operatorname{holim}_{\{p\}}\{BG_{(p)}\xrightarrow{r_{p}}\,BG_{(0)} \xrightarrow{n_{p}}\,BG_{(0)}\}\, \tag{4.2}\] where the indexing set \(\{p\}\) runs over all primes, \(\,r_{p}\) denotes the natural map from the \(p\)-localization to the rationalization of \(BG\), and the map \(n_{p}\) is induced by multiplication by an integer \(n_{p}\) which is relatively prime to \(p\) and such that \((n_{p}/p)=(B/p)\) for every \(p\) (for \(p=2\), one requires, in addition, that \(\,n_{p}\equiv 1(\operatorname{mod}4)\)). Now, if a space \(B\) admits an essential map from \(BT\), part (1) of Theorem 4.5 implies that the set of integers \(\{n_{p}\in\mathbb{Z}\,:\,p\text{ prime}\}\) appearing in (4.2) can be chosen to be finite. 
Hence, for such spaces, we can define the natural number \[N_{B}\,:=\,\min\{\operatorname{lcm}(n_{p})\in\mathbb{N}\,:\,B=\operatorname{ holim}_{\{p\}}(n_{p}\circ r_{p})\}\,, \tag{4.3}\] which is clearly a homotopy invariant of \(B\). Note that \(N_{B}=1\) iff \(B=BG\); however, in general, \(N_{B}\) does not determine the homotopy type of \(B\) (see [21, (1.8)] for a counterexample). **Lemma 4.6**.: _For any space \(B\) in the genus of \(BG\), \(\,H^{*}(B,\mathbb{Z})\cong\mathbb{Z}[u]\,\), where \(|u|=4\). If \(B\) admits an essential map from \(BT\), then, with natural identification \(H^{*}(BT,\mathbb{Z})\cong\mathbb{Z}[x]\) as in Theorem 3.9, the map \(p_{B}^{*}:H^{*}(B,\mathbb{Z})\to H^{*}(BT,\mathbb{Z})\) is given by \(\,p_{B}^{*}(u)=N_{B}\,x^{2}\,\), where \(N_{B}\) is defined by (4.3)._ Proof.: The first claim can be deduced easily from the fact that \(\,\Omega B\simeq\mathbb{S}^{3}\,\) by looking at the Serre spectral sequence of the path fibration \(\,\Omega B\to P_{*}B\to B\,\) (_cf._[11, SS4]). The second claim is a consequence of the last part of [21, Theorem 1.7], which shows that (4.3) equals (up to sign) the degree of the map \(\,p_{B}^{*}\,\) on \(K\)-theory with coefficients in \(\mathbb{Z}\) and hence on cohomology. **Theorem 4.7**.: _Let \(B\) be a space in the genus of \(BG\) admitting an essential map from \(BT\)._ \((i)\)_All maps \(\pi_{m}\) in (4.1) are injective on rational cohomology. For each \(m\geq 0\), the composite map \(\tilde{\pi}_{m}=\pi_{m-1}\dots\pi_{1}\pi_{0}\) induces an embedding \(H^{*}(X_{m}(\Omega B,T),\mathbb{Q})\hookrightarrow H^{*}(BT,\mathbb{Q})= \mathbb{Q}[x]\) with image \(Q_{m}(W)\subseteq\mathbb{Q}[x]\). Thus, \(\,H^{*}(X_{m}(\Omega B,T),\mathbb{Q})\cong Q_{m}(W)\,\) for all \(m\geq 0\)._ \((ii)\) _For each \(m\geq 0\), there is an algebra isomorphism_ \[H^{*}(X_{m}(\Omega B,T),\mathbb{Q})\stackrel{{\sim}}{{\to}}H^{* }(X_{m}(G,T),\mathbb{Q})\] _making commutative the diagram_ \[\begin{CD}H^{*}(B,\mathbb{Q})@>{p_{m,B}^{*}}>{}>H^{*}(X_{m}(\Omega B,T), \mathbb{Q})@>{\tilde{\pi}_{m}^{*}}>{}>H^{*}(BT,\mathbb{Q})\\ @V{}V{}V@V{}V{}V\\ H^{*}(BG,\mathbb{Q})@>{p_{m}^{*}}>{}>H^{*}(X_{m}(G,T),\mathbb{Q})@>{\tilde{ \pi}_{m}^{*}}>{}>H^{*}(BT,\mathbb{Q})\end{CD}\] _where the map \((p_{BG}^{*})^{-1}p_{B}^{*}\) is given explicitly by \(\,u\mapsto N_{B}x^{2}\)\((\)see Lemma 4.6\()\)._ Proof.: We prove part \((i)\) by induction on \(m\). First, note that for \(m=0\), \((i)\) as well as \((ii)\) follow from Lemma 4.6. To perform the induction we define the subalgebras \(\,Q_{m}^{\prime}\subseteq\mathbb{Q}[x]\,\) for \(m>0\) by \[Q_{0}^{\prime}:=Q[x]\,,\qquad Q_{m}^{\prime}:=\mathbb{Q}+N_{B}x^{2}\cdot Q_{m- 1}^{\prime}\,,\ m>0\.\] Clearly, \[Q_{m}^{\prime}=\mathbb{Q}+\mathbb{Q}\cdot N_{B}x^{2}+\dots+\mathbb{Q}\cdot(N_ {B}x^{2})^{m-1}+(N_{B}x^{2})^{m}\mathbb{Q}[x]\.\] It follows that \(Q_{m}^{\prime}=Q_{m}\) as subrings of \(\mathbb{Q}[x]\) for all \(m\). Now assume that \[H^{*}(X_{m}(\Omega B,T),\mathbb{Q})\cong Q_{m}^{\prime},,\] and that \(\tilde{\pi}_{m}^{*}\) is the inclusion \(Q_{m}^{\prime}\hookrightarrow\mathbb{Q}[x]\). 
To compute the cohomology of the fibre \(F_{m}(\Omega B,T)\), we use the Eilenberg-Moore spectral sequence for the fibration sequence \(F_{m}(\Omega B,T)\to X_{m}(\Omega B,T)\to B\), whose \(E_{2}\)-term is \[E_{2}^{*,*}\,=\,\operatorname{Tor}_{*,*}^{H^{*}(B)}(H^{*}(\operatorname{pt}),H^{*}(X_{m}(\Omega B,T)))\,\cong\,\operatorname{Tor}_{\mathbb{Q}[u]}^{*,*}(\mathbb{Q},Q_{m}^{\prime})\] By Lemma 4.6, \(\operatorname{Tor}_{*,*}^{\mathbb{Q}[u]}(\mathbb{Q},Q_{m}^{\prime})\) is the (co)homology of the two-term complex \[0\to Q_{m}^{\prime}\xrightarrow{\,\cdot\,N_{B}x^{2}\,}Q_{m}^{\prime}\to 0\,.\] Since \(Q_{m}^{\prime}\subseteq\mathbb{Q}[x]\) is an integral domain, \(\operatorname{Tor}_{i}^{\mathbb{Q}[u]}(\mathbb{Q},Q_{m}^{\prime})=0\) for \(i>0\). The Eilenberg-Moore spectral sequence therefore collapses to give \[H^{*}(F_{m}(\Omega B,T),\mathbb{Q})\cong\,Q_{m}^{\prime}/(N_{B}x^{2})\,.\] Further, since the Eilenberg-Moore spectral sequence is multiplicative, \(j_{m,B}^{*}\) is the canonical quotient map. In particular, note that the cohomology of \(F_{m}(\Omega B,T)\) is concentrated in even degree. The long exact sequence of cohomologies associated with the cofibration sequence \(F_{m}(\Omega B,T)\to X_{m}(\Omega B,T)\to X_{m+1}(\Omega B,T)\) yields (for \(n\) even) \[\tilde{H}^{n}(X_{m+1}(\Omega B,T))\stackrel{{\pi_{m}^{*}}}{{\longrightarrow}}\tilde{H}^{n}(X_{m}(\Omega B,T))\stackrel{{j_{m,B}^{*}}}{{\longrightarrow}}\tilde{H}^{n}(F_{m}(\Omega B,T))\stackrel{{\partial}}{{\longrightarrow}}\tilde{H}^{n+1}(X_{m+1}(\Omega B,T))\] Since \(j_{m,B}^{*}\) is surjective and \(\tilde{H}^{n+1}(X_{m}(\Omega B,T),\mathbb{Q})=0\) for \(n\) even (the ring \(Q_{m}^{\prime}\) being concentrated in even degrees), we have \[\tilde{H}^{n+1}(X_{m+1}(\Omega B,T),\mathbb{Q})=0\quad\text{for $n$ even}\,.\] Hence, \[H^{*}(X_{m+1}(\Omega B,T),\mathbb{Q})\cong\mathbb{Q}+\operatorname{Ker}(j_{m,B}^{*})\,=\,\mathbb{Q}+(N_{B}x^{2})\cdot Q_{m}^{\prime}\,=\,Q_{m+1}^{\prime}\,,\] with \(\pi_{m}^{*}\) being the inclusion \(Q^{\prime}_{m+1}\hookrightarrow Q^{\prime}_{m}\). This completes the induction step, proving part \((i)\). Part \((ii)\) follows immediately from \((i)\) combined with Lemma 4.6 (since \(p_{B}=p_{m,B}\circ\tilde{\pi}_{m}\)).
We find that \(K_{G}(F_{m})\) is isomorphic to the ring \(\mathcal{Q}_{m}(W)\) of _exponential_ quasi-invariants of \(W\). By the Atiyah-Segal Theorem, the (ordinary) \(K\)-theory of \(X_{m}(G,T)\) is then isomorphic to the completion \(\widehat{\mathcal{Q}}_{m}(W)\) of \(\mathcal{Q}_{m}(W)\) with respect to the canonical augmentation ideal of \(R(G)\). For the 'fake' spaces of quasi-invariants, \(X_{m}(\Omega B,T)\), associated to Rector spaces, the \(K\)-theory rings \(K[X_{m}(\Omega B,T)]\) are new invariants that are not isomorphic to \(\widehat{\mathcal{Q}}_{m}(W)\) in general and are strong enough to distinguish the \(X_{m}(\Omega B,T)\) up to homotopy equivalence. ### Equivariant \(K\)-theory Recall that, for a compact Lie group \(G\) acting continuously on a compact topological space \(X\), the group \(K_{G}(X)\) is defined to be the Grothendieck group of \(G\)-equivariant (complex topological) vector bundles on \(X\). As shown in [21], this construction extends to a \(\mathbb{Z}/2\)-graded multiplicative generalized cohomology theory \(K_{G}^{*}\) on the category of (locally compact) \(G\)-spaces that is called the \(G\)_-equivariant \(K\)-theory_. We write \(K_{G}^{*}(X):=K_{G}^{0}(X)\oplus K_{G}^{1}(X)\), with the understanding that \(K_{G}^{0}(X)\cong K_{G}^{2n}(X)\) and \(K_{G}^{1}(X)\cong K_{G}^{2n+1}(X)\) for all \(n\in\mathbb{Z}\). When \(G\) is trivial, \(K_{G}^{*}(X)\) coincides with the ordinary complex \(K\)-theory \(K^{*}(X)\), while for \(X=\operatorname{pt}\), \(K_{G}^{*}(\operatorname{pt})\) is the representation ring \(R(G)\) of \(G\) (in particular, we have \(K_{G}^{1}(\operatorname{pt})=0\)). In general, by functoriality of \(K_{G}^{*}\), the trivial map \(X\to\operatorname{pt}\) gives a canonical \(R(G)\)-module structure on the ring \(K_{G}^{*}(X)\) for any \(G\)-space \(X\). The ring \(K_{G}^{*}(X)\) has nice properties for which we refer the reader to [21]. Here we only mention two technical results needed for our computations. The first result is a well-known Künneth-type formula for equivariant \(K\)-theory first studied by Hodgkin (see, e.g., [1, Theorem 2.3]). **Theorem 5.1** (Hodgkin).: _Let \(G\) be a compact connected Lie group, such that \(\pi_{1}(G)\) is torsion-free. Then, for any two \(G\)-spaces \(X\) and \(Y\), there is a spectral sequence with \(E^{2}\)-term_ \[E^{2}_{*,*}=\operatorname{Tor}^{R(G)}_{*,*}(K_{G}^{*}(X),K_{G}^{*}(Y))\] _that converges to \(K_{G}^{*}(X\times Y)\), where \(X\times Y\) is viewed as a \(G\)-space with the diagonal action._ The second result is the following Mayer-Vietoris type formula, which is also well known to experts in one form or another. **Lemma 5.2**.: _Let \(f:U\to X\) and \(g:U\to Y\) be proper equivariant maps of \(G\)-spaces. Let \(Z=\operatorname{hocolim}(X\stackrel{{f}}{{\leftarrow}}U\stackrel{{g}}{{\to}}Y)\), where the \(\operatorname{hocolim}\) is taken in the category of \(G\)-spaces. Then, the abelian groups \(K_{G}^{*}(X)\), \(K_{G}^{*}(Y)\), \(K_{G}^{*}(U)\) and \(K_{G}^{*}(Z)\) are related by the six-term exact sequence_ \[\begin{CD}K^{0}_{G}(Z)@>>>K^{0}_{G}(X)\oplus K^{0}_{G}(Y)@>{f^{*}-g^{*}}>>K^{0}_{G}(U)\\ @AAA @. @VVV\\ K^{1}_{G}(U)@<{f^{*}-g^{*}}<<K^{1}_{G}(X)\oplus K^{1}_{G}(Y)@<<<K^{1}_{G}(Z)\end{CD}\] The proof of Lemma 5.2 can be found, for example, in [10]. ### \(K\)-theory of quasi-flag manifolds We first introduce rings \(\mathcal{Q}_{m}(W)\) of _exponential quasi-invariants_ of a Weyl group \(W\). Let \(G\) be a compact connected Lie group with maximal torus \(T\) and associated Weyl group \(W\). Let \(\hat{T}:=\operatorname{Hom}(T,U(1))\) denote the character lattice and \(R(T)\) the representation ring of \(T\).
It is well known that \(R(T)\cong\mathbb{Z}[\hat{T}]\) via the canonical map induced by taking characters of representations, and \(R(T)^{W}\cong R(G)\) via the restriction map \(i^{*}:R(G)\to R(T)\) induced by the inclusion \(i:T\hookrightarrow G\) (see, e.g., [1, Chap. IX, Sect. 3]). Using the first isomorphism we identify \(R(T)=\mathbb{Z}[\hat{T}]\) and write \(e^{\lambda}\) for the elements of \(R(T)\) corresponding to characters \(\lambda\in\hat{T}\). Next, we let \(\mathcal{R}\subseteq\hat{T}\) denote the root system of \(W\) determined by \((G,T)\) and choose a subset \(\mathcal{R}_{+}\subset\mathcal{R}\) of positive roots in \(\mathcal{R}\). If \(s_{\alpha}\in W\) is the reflection in \(W\) corresponding to \(\alpha\in\mathcal{R}_{+}\), then the difference \(e^{\lambda}-e^{s_{\alpha}(\lambda)}\) in \(R(T)\) is uniquely divisible by \(1-e^{\alpha}\) for any \(\lambda\in\hat{T}\). Following [10], we define a linear endomorphism \(\,\Lambda_{\alpha}:R(T)\to R(T)\,\) for each \(\alpha\in\mathcal{R}_{+}\), such that \[(1-s_{\alpha})f=\Lambda_{\alpha}(f)\cdot(1-e^{\alpha}). \tag{5.1}\] The operator \(\Lambda_{\alpha}\) is an exponential analogue of the divided difference operator \(\Delta_{\alpha}\) introduced in Section 3.5 (see (3.44)). Note that the conditions (1.2) defining the usual quasi-invariant polynomials can be written in terms of the divided difference operators as \(\Delta_{\alpha}(p)\equiv 0\,\operatorname{mod}\,(\alpha)^{2m_{\alpha}}\). This motivates the following definition of quasi-invariants in the exponential case. **Definition 5.3**.: An element \(f\in R(T)\) is called an _exponential quasi-invariant of \(W\) of multiplicity \(m\in\mathcal{M}(W)\)_ if \[\Lambda_{\alpha}(f)\,\equiv\,0\,\,\operatorname{mod}\,(1-e^{\frac{\alpha}{2}})^{2m_{\alpha}}\,\quad\forall\,\alpha\in\mathcal{R}_{+}. \tag{5.2}\] _Remark 5.4_.: In general, it may happen that \(\frac{\alpha}{2}\not\in\hat{T}\) for some \(\alpha\in\mathcal{R}_{+}\), so that \(\,e^{\frac{\alpha}{2}}\not\in R(T)\). We view (5.2) as a congruence in the extended group ring \(\mathbb{Z}[\frac{1}{2}\hat{T}]\) that naturally contains \(R(T)\). We write \(\mathcal{Q}_{m}(W)\) for the set of all \(f\in R(T)\) satisfying (5.2) for a fixed multiplicity \(m\). This set is closed under addition and multiplication in \(R(T)\), i.e. \(\mathcal{Q}_{m}(W)\) is a commutative subring of \(R(T)\). (The latter can be easily seen from the twisted derivation property of Demazure operators, \(\Lambda_{\alpha}(f_{1}f_{2})=\Lambda_{\alpha}(f_{1})\cdot f_{2}+s_{\alpha}(f_{1})\cdot\Lambda_{\alpha}(f_{2})\), which holds for all \(\alpha\in\mathcal{R}\), see [10, Sect. 5.5].) **Example 5.5**.: We describe \(\mathcal{Q}_{m}(W)\) explicitly in the case of \(G=SU(2)\) and \(T=U(1)\) the diagonal torus. In this case \(\hat{T}\) coincides with the weight lattice \(P(\mathcal{R})\) which is generated by the fundamental weight \(\varpi:T\to U(1)\) defined by \(\varpi\left(\begin{array}{cc}t&0\\ 0&t^{-1}\end{array}\right)=t\). The corresponding (simple) root is \(\alpha=2\varpi\), and the Weyl group \(W=\langle s_{\alpha}\rangle\cong\mathbb{Z}/2\mathbb{Z}\) acts on \(\hat{T}\) by \(s_{\alpha}(\varpi)=-\varpi\). We have \[R(T)\cong\mathbb{Z}[z,z^{-1}]\,\quad R(G)=R(T)^{W}\cong\mathbb{Z}[z+z^{-1}] \tag{5.3}\] where \(z=e^{\varpi}=e^{\frac{\alpha}{2}}\).
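In this rank-one case condition (5.2) becomes completely explicit and can be tested by direct computation. The following SymPy snippet is an illustrative aside (not part of the argument; the sample element \(f=(z^{1/2}-z^{-1/2})^{2m}\,z\) and the value \(m=2\) are chosen only for demonstration): it checks that \(f\) satisfies (5.2).

```python
import sympy as sp

z = sp.symbols('z')
m = 2  # multiplicity; any m >= 0 works the same way

# sample element of R(T) = Z[z, z^-1]
f = sp.expand((sp.sqrt(z) - 1/sp.sqrt(z))**(2*m) * z)

# operator (5.1): (1 - s_alpha) f = Lambda_alpha(f) * (1 - e^alpha), with e^alpha = z^2
Lam = sp.cancel((f - f.subs(z, 1/z)) / (1 - z**2))

# condition (5.2): Lambda_alpha(f) must be divisible by (1 - e^{alpha/2})^{2m} = (1 - z)^{2m}
quotient = sp.cancel(Lam / (1 - z)**(2*m))
assert sp.simplify(quotient + z**(-m - 1)) == 0   # quotient equals -z^{-(m+1)}
```

The quotient \(-z^{-(m+1)}\) is a unit in \(\mathbb{Z}[z,z^{-1}]\), so \(\Lambda_{\alpha}(f)\) is indeed divisible by \((1-z)^{2m}\) and \(f\in\mathcal{Q}_{m}(W)\).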
Now, with these identifications, we claim that \[\mathcal{Q}_{m}(W)=\mathbb{Z}\oplus\mathbb{Z}\cdot(z^{1/2}-z^{-1/2})^{2}\oplus\mathbb{Z}\cdot(z^{1/2}-z^{-1/2})^{4}\oplus\ldots\oplus(z^{1/2}-z^{-1/2})^{2m}\cdot\mathbb{Z}[z,z^{-1}]. \tag{5.4}\] Indeed, if \(f\in\mathbb{Z}[z,z^{-1}]\) can be written in the form (5.4), then \[f-s_{\alpha}(f)\in(z^{1/2}-z^{-1/2})^{2m}(1-s_{\alpha})\,\mathbb{Z}[z,z^{-1}]=(z^{1/2}-z^{-1/2})^{2m}\,(z-z^{-1})\,\mathbb{Z}[z,z^{-1}]\,\] which shows that \(\Lambda_{\alpha}(f)=(1-z^{2})^{-1}(f-s_{\alpha}f)\) is divisible by \((1-z)^{2m}=(1-e^{\frac{\alpha}{2}})^{2m}\) in \(\mathbb{Z}[z,z^{-1}]\). Thus \(f\in\mathcal{Q}_{m}\). To see the converse, denote the right-hand side of (5.4) by \(\tilde{\mathcal{Q}}_{m}\). Note that there is a natural \(\mathbb{Q}[z+z^{-1}]\)-module decomposition \[\mathbb{Q}[z,z^{-1}]\cong\mathbb{Q}[z+z^{-1}]\oplus\mathbb{Q}[z+z^{-1}]\cdot\delta\,,\] where \(\delta:=z-z^{-1}\). Writing \(f=p+q\cdot\delta\) with \(p,q\in\mathbb{Q}[z+z^{-1}]\), we find that \(f-s_{\alpha}(f)=2q\delta\). Thus, if \(f\in\mathcal{Q}_{m}\) then \(\,f-s_{\alpha}(f)\in(z^{1/2}-z^{-1/2})^{2m}\,(z-z^{-1})\,\mathbb{Z}[z,z^{-1}]\,\) and hence \(q\in(z^{1/2}-z^{-1/2})^{2m}\,\mathbb{Q}[z,z^{-1}]\). It follows that \(f\in\tilde{\mathcal{Q}}_{m}\otimes\mathbb{Q}\). On the other hand, \((\tilde{\mathcal{Q}}_{m}\otimes\mathbb{Q})\cap\mathbb{Z}[z,z^{-1}]=\tilde{\mathcal{Q}}_{m}\), which implies that \(\mathcal{Q}_{m}\subseteq\tilde{\mathcal{Q}}_{m}\). Let \(F_{m}=F_{m}(G,T)\) be the \(m\)-quasi-flag manifold of \(G=SU(2)\) introduced in Section 3.3 (see Definition 3.10). Recall that \(F_{m}\) is a \(G\)-space of the homotopy type of a finite CW-complex. The next theorem computes the \(G\)-equivariant \(K\)-theory of \(F_{m}\), which is the main result of this section. **Theorem 5.6**.: _There is a natural isomorphism of \(\mathbb{Z}/2\)-graded commutative rings_ \[K_{G}^{*}(F_{m})\cong\mathcal{Q}_{m}(W)\] _Thus \(\,K_{G}^{0}(F_{m})\cong\mathcal{Q}_{m}(W)\,\) and \(\,K_{G}^{1}(F_{m})=0\,\) for all \(m\in\mathbb{Z}_{+}\)._ Proof.: Recall that \(K_{G}^{*}(\operatorname{pt})=R(G)\cong\mathbb{Z}[t]\), where \(t\) corresponds to the \(2\)-dimensional irreducible representation of \(G=SU(2)\). The natural map \(\,K_{G}^{*}(\operatorname{pt})\to K_{G}^{*}(G)\cong K^{*}(\operatorname{pt})\,\) is then identified with the projection \(\,\mathbb{Z}[t]\to\mathbb{Z}\,\) taking \(\,t\mapsto 2\,\). For \(m=0\), by definition, we have \(\,F_{0}=G/T\,\), and hence (_cf._ Example 5.5) \[K_{G}^{*}(G/T)\cong K_{T}^{*}(\operatorname{pt})=R(T)\cong\mathbb{Z}[z,z^{-1}]. \tag{5.5}\] Thus \(K_{G}^{0}(F_{0})\cong\mathbb{Z}[z,z^{-1}]=\mathcal{Q}_{0}(W)\) and \(K_{G}^{1}(F_{0})=0\) as is well known. Further, the map \(\,R(G)\to R(T)\) induced on \(G\)-equivariant \(K\)-theory by \(\,G/T\to\operatorname{pt}\,\) is identified with \(\,\mathbb{Z}[t]\to\mathbb{Z}[z,z^{-1}]\,\), \(\,t\mapsto z+z^{-1}\). Now, recall that \(F_{m+1}=F_{m}*G\), which means \[F_{m+1}\simeq\operatorname{hocolim}[F_{m}\gets F_{m}\times G\to G]. \tag{5.6}\] There is a canonical \(G\)-equivariant map \(F_{m}\to F_{m+1}\), denoted \(i_{m,m+1}\), which is nontrivial (not null-homotopic) in the homotopy category of \(G\)-spaces (see Remark 3.11). Let \(i_{m,n}:F_{m}\to F_{n}\) denote the composite map \(i_{m,n}:=i_{n-1,n}\circ\ldots\circ i_{m,m+1}\) for \(n>m\).
We claim that the map \(\,i_{0,m}^{*}:K_{G}^{*}(F_{m})\to K_{G}^{*}(G/T)\,\) induced by \(i_{0,m}:G/T\to F_{m}\) is injective, and under the isomorphism (5.5), it is identified with the inclusion of \(\mathcal{Q}_{m}(W)\) in \(\mathbb{Z}[z,z^{-1}]\). We prove our claim by induction on \(m\). For \(m=0\), this is just (5.5). Assume, for some \(m\geq 0\), that \(\,K_{G}^{*}(F_{m})\cong\mathcal{Q}_{m}(W)\,\) and that the map \(i_{0,m}^{*}:K_{G}^{*}(F_{m})\to K_{G}^{*}(G/T)\) is identified with the inclusion of \(\mathcal{Q}_{m}(W)\) in \(\mathbb{Z}[z,z^{-1}]\) as a subring. Then the image of \(\,t\in K_{G}^{*}(\operatorname{pt})\,\) in \(\,K_{G}^{*}(F_{m})\cong\mathcal{Q}_{m}(W)\,\) is \(\,z+z^{-1}\). Since \(K_{G}^{*}(G)\cong\mathbb{Z}\) has the free \(K_{G}^{*}(\operatorname{pt})\cong\mathbb{Z}[t]\)-module resolution \(\,0\to\mathbb{Z}[t]\xrightarrow{\,\cdot(t-2)\,}\mathbb{Z}[t]\to\mathbb{Z}\to 0\,\), the Tor-group \[\operatorname{Tor}_{*}^{R(G)}(K_{G}^{*}(F_{m}),K_{G}^{*}(G))\ \cong\ \operatorname{Tor}_{*}^{\mathbb{Z}[t]}(\mathcal{Q}_{m},\mathbb{Z})\] is identified with the homology of the two-term complex \(\,0\to\mathcal{Q}_{m}(W)\stackrel{{\cdot(z+z^{-1}-2)}}{{\longrightarrow}}\mathcal{Q}_{m}(W)\to 0\,\), whose first homology vanishes since \(\mathcal{Q}_{m}(W)\) is an integral domain. It follows from Hodgkin's spectral sequence (see Theorem 5.1) that \[K^{*}_{G}(F_{m}\times G)\cong\mathcal{Q}_{m}(W)/(z+z^{-1}-2)\,,\] and that the map \(K^{*}_{G}(F_{m})\to K^{*}_{G}(F_{m}\times G)\) induced by the projection \(F_{m}\times G\to F_{m}\) is the canonical quotient map \(\pi:\mathcal{Q}_{m}(W)\to\mathcal{Q}_{m}(W)/(z+z^{-1}-2)\). Next, applying Lemma 5.2 to the homotopy pushout (5.6), we obtain the four-term exact sequence \[0\to K^{0}_{G}(F_{m+1})\stackrel{{(i_{m,m+1},f)^{*}}}{{\longrightarrow}}\mathcal{Q}_{m}(W)\oplus\mathbb{Z}\stackrel{{i^{*}-\pi^{*}}}{{\longrightarrow}}\mathcal{Q}_{m}(W)/(z+z^{-1}-2)\stackrel{{\partial}}{{\to}}K^{1}_{G}(F_{m+1})\to 0\, \tag{5.7}\] where \(\,i:\mathbb{Z}\to\mathcal{Q}_{m}(W)\,\) is the structure map of the ring \(\mathcal{Q}_{m}(W)\) and \(\,f:G\to F_{m+1}\,\) is the natural map associated to (5.6). It follows from (5.7) that \(K^{1}_{G}(F_{m+1})=0\), and \[K^{0}_{G}(F_{m+1})\cong\operatorname{Ker}(i^{*}-\pi^{*})=\mathbb{Z}+(z+z^{-1}-2)\cdot\mathcal{Q}_{m}(W)=\mathcal{Q}_{m+1}(W)\,.\] Furthermore, the inclusion \(\mathcal{Q}_{m+1}(W)\hookrightarrow\mathcal{Q}_{m}(W)\) is identified with the map \(i^{*}_{m,m+1}\). This completes the induction step and finishes the proof of the theorem. ### The equivariant Chern character Recall that the space \(X_{m}=X_{m}(G,T)\) of \(m\)-quasi-invariants is defined as the homotopy \(G\)-quotient \(X_{m}:=EG\times_{G}F_{m}\). The Borel construction yields a natural map \[\alpha:\,K^{*}_{G}(F_{m})\to K^{*}(X_{m}) \tag{5.8}\] where \(K^{*}(X)=K^{0}(X)\oplus K^{1}(X)\) is the (complex) topological \(K\)-theory defined by \(\,K^{0}(X)=[X,\,BU]\,\) and \(K^{1}(X)=[X,\,U]\,\). Theorem 5.6 shows that \(K^{*}_{G}(F_{m})\) is a finitely generated \(R(G)\)-module for all \(m\in\mathbb{Z}_{+}\). Hence, by the Atiyah-Segal Completion Theorem [1], the map (5.8) extends to an isomorphism \[\widehat{K}^{*}_{G}(F_{m})_{I_{G}}\cong K^{*}(X_{m}) \tag{5.9}\] where \(\widehat{K}^{*}_{G}(F)_{I_{G}}\) denotes the (adic) completion of \(K^{*}_{G}(F)\) (as an \(R(G)\)-module) with respect to the augmentation ideal of \(R(G)\), defined as the kernel of the dimension function: \(\,I_{G}:=\operatorname{Ker}[\dim:R(G)\to\mathbb{Z}]\,\).
If we identify \(R(G)\cong\mathbb{Z}[z+z^{-1}]\) as the invariant subring of \(R(T)\cong\mathbb{Z}[z,z^{-1}]\) as in the proof of Theorem 5.6, then \(I_{G}=(z+z^{-1}-2)\). Thus, as a consequence of (5.9), we get **Corollary 5.7**.: _For all \(m\geq 0\), there is an isomorphism_ \[K^{*}(X_{m})\cong\widehat{\mathcal{Q}}_{m}(W)_{I}\] _where \((\widehat{\mathcal{Q}}_{m})_{I}\) denotes the completion of (5.4) with respect to the ideal \(I=(z+z^{-1}-2)\subset\mathbb{Z}[z+z^{-1}]\)._ Next, we compute a Chern character map relating equivariant \(K\)-theory to equivariant cohomology. Recall that the Chern character of an equivariant vector bundle on a \(G\)-space \(F\) is defined as the (non-equivariant) Chern character of the associated vector bundle on \(EG\times_{G}F\). This gives a natural map \[\operatorname{ch}_{G}(F):\ K^{*}_{G}(F)\,\to\,\widehat{H}^{*}_{G}(F,\mathbb{Q}) \tag{5.10}\] where \(\,\widehat{H}^{*}_{G}(F,\mathbb{Q}):=\prod_{k=0}^{\infty}H^{k}_{G}(F,\mathbb{Q })\,\). The following proposition describes the map (5.10) for \(F=F_{m}(G,T)\) explicitly, using the identifications of Theorem 3.9 and Theorem 5.6. **Proposition 5.8**.: (1) _The Chern character map \(\operatorname{ch}_{G}(F_{m}):K^{*}_{G}(F_{m})\to\widehat{H}^{*}(X_{m},\mathbb{Q})\) is given by_ \[\exp:\ \mathcal{Q}_{m}(W)\,\to\,\widehat{Q}_{m}(W)\,\quad z\,\mapsto\,\sum_{n=0}^{ \infty}\frac{x^{n}}{n!}\, \tag{5.11}\] _where \(\,\widehat{Q}_{m}(W):=Q_{m}(W)\otimes_{\mathbb{Q}[x^{2}]}\mathbb{Q}[[x^{2}]]\,\) is the completed ring of quasi-invariants of \(W=\mathbb{Z}/2\mathbb{Z}\)._ (2) _The map \(\,\operatorname{ch}_{G}(F_{m})\,\) factors through (5.8) inducing an isomorphism on rational \(K\)-theory_ \[K(X_{m})_{\mathbb{Q}}\,\cong\,\widehat{H}^{*}(X_{m},\mathbb{Q})\,\cong\, \widehat{Q}_{m}(W)\] Proof.: For \(F_{0}=G/T\), we can identify \(K^{*}_{G}(G/T)\cong R(T)\cong\mathbb{Z}[z,z^{-1}]\) and \(\widehat{H}^{*}_{G}(G/T,\mathbb{Q})=\widehat{H}^{*}(BT,\mathbb{Q})\cong \mathbb{Q}[[x]]\) as in (the proofs of) Theorem 3.9 and Theorem 5.6. With these identifications, it is well known that the equivariant Chern character is given by exponentiation (see, e.g., [10, Example A.5]): \[\operatorname{ch}_{G}(G/T):\,K^{*}_{G}(G/T)\,\to\,\widehat{H}^{*}(BT,\mathbb{Q })\,,\qquad z\mapsto\exp(x). \tag{5.12}\] Now, by functoriality of the Chern character, the maps \(\,G/T\xrightarrow{i_{0,m}}F_{m}\to\operatorname{pt}\,\) give a commutative diagram of ring homomorphisms (5.13) where the vertical maps as well as the top and the bottom horizontal maps are injective. Hence, the map in the middle, \(\operatorname{ch}_{G}(F_{m}):K^{*}_{G}(F_{m})\to\widehat{H}^{*}(F_{m},\mathbb{Q})\), is also injective, and it is given by restriction of the exponential map (5.12). This proves the first claim of the proposition. The second claim follows from the first and Corollary 5.7. ### \(K\)-theory of 'fake' spaces of quasi-invariants In this section, we compute the \(K\)-theory of 'fake' spaces of quasi-invariants \(X_{m}(\Omega B,T)\) constructed in Section 4.3. We will keep the notation \(G=SU(2)\) and \(\,T=U(1)\) and use the identification \(K^{*}(BT)\cong\mathbb{Z}[[t]]\) as in the previous section. Let \(B\) be a space in the genus of \(BG\) that admits an essential map from \(BT\). 
By [20, Proposition 2.1], there is an isomorphism of rings \(K^{*}(B)\cong\mathbb{Z}[[u]]\,\), such that for any essential map \(\,f:BT\to B\), the induced map \(\,f^{*}:\,K^{*}(B)\to K^{*}(BT)\,\) is given by \[f^{*}(u)=\deg(f)t^{2}+\text{higher order terms in $t$ },\] where the integer \(\deg(f)\) coincides (up to sign) with the degree of \(f\) in integral (co)homology in dimension \(4\) (_cf._ Lemma 4.6). In fact, by a general result of Notbohm and Smith (see [21, Theorem 5.2]), the assignment \(f\mapsto f^{*}\) gives a bijection between the homotopy classes of maps from \(BT\) to \(B\) and the \(\lambda\)-ring homomorphisms from \(K^{*}(B)\) to \(K^{*}(BT)\): \[[BT,B]_{*}\cong\operatorname{Hom}_{\lambda}(K^{*}(B),K^{*}(BT))\,.\] Next, recall that, by Theorem 4.5, among all essential maps \(BT\to B\), there is a 'maximal' one \(p_{B}:BT\to B\), for which \(\deg(p_{B})=N_{B}\), where \(N_{B}\) is the integer defined by (4.3): the corresponding power series \[p_{B}^{*}(u)=N_{B}t^{2}+\text{higher order terms in $t$} \tag{5.14}\] is a useful \(K\)-theoretic invariant of \(B\) that depends on the Rector invariants \((B/p)\) (see [30]). Using (5.14), we define a sequence of subrings \(\mathcal{Q}_{m}(B)\) in \(\mathbb{Z}[[t]]\) inductively by the rule: \[\mathcal{Q}_{0}(B):=\mathbb{Z}[[t]]\,\qquad\mathcal{Q}_{m}(B):=\mathbb{Z}+p_{B}^{*}(u)\mathcal{Q}_{m-1}(B)\,\quad m\geq 1. \tag{5.15}\] Note that there are natural inclusions \[\mathcal{Q}_{0}(B)\supseteq\mathcal{Q}_{1}(B)\supseteq\ldots\supseteq\mathcal{Q}_{m}(B)\supseteq\mathcal{Q}_{m+1}(B)\supseteq\ldots\] which are all ring homomorphisms. **Example 5.9**.: For \(B=BG\), one can easily compute the power series \(p_{B}^{*}(u)\) in an explicit form. Recall that the Atiyah-Segal completion theorem gives an isomorphism \(K^{*}(BG)\cong\widehat{K}^{*}_{G}(\mathrm{pt})_{I}\), where \(I=I_{G}\) is the ideal of virtual representations in \(K^{*}_{G}(\mathrm{pt})\cong R(G)\) of dimension \(0\). If we identify \(K^{*}_{G}(\mathrm{pt})\cong\mathbb{Z}[v]\), where \(v\) is the standard \(2\)-dimensional representation of \(G\), then \(I=(v-2)\), and \(K^{*}(BG)\cong\mathbb{Z}[[u]]\), where \(u=v-2\). Similarly, \(K^{*}(BT)\cong\mathbb{Z}[[t]]\), where \(t=z-1\), with \(z\) standing for the generating character of \(T\). The naturality of (5.8) (with respect to the \(G\)-equivariant map \(p:G/T\to\mathrm{pt}\)) yields the commutative diagram \[\begin{CD}K^{*}_{G}(\mathrm{pt})@>{p^{*}}>>K^{*}_{G}(G/T)\\ @VVV @VVV\\ K^{*}(BG)@>{p_{B}^{*}}>>K^{*}(BT)\end{CD}\] where the vertical maps are the completion maps (5.8). Since \(p^{*}(v)\) is the restriction of \(v\) to \(T\), we have \(p^{*}(v)=z+z^{-1}\). Hence, \[p_{B}^{*}(u)\,=\,p_{B}^{*}(v-2)\,=\,z+z^{-1}-2\,=\,(1+t)+\frac{1}{1+t}-2\,=\,\frac{t^{2}}{1+t}\] It follows that \(\mathcal{Q}_{m}(BG)\cong\widehat{\mathcal{Q}}_{m}(W)\), where the right-hand side is the completion of \(\mathcal{Q}_{m}(W)\) with respect to the ideal generated by \(z+z^{-1}-2\) (_cf._ Corollary 5.7). Now, we state the main result of this section. **Theorem 5.10**.: _There are isomorphisms of rings_ \[K^{*}[X_{m}(\Omega B,T)]\cong\mathcal{Q}_{m}(B)\,\ \forall\,m\geq 0\,. \tag{5.16}\] _In particular, \(K^{1}[X_{m}(\Omega B,T)]=0\) for all \(m\geq 0\). The maps \(\pi_{m}^{*}:K^{*}[X_{m+1}(\Omega B,T)]\to K^{*}[X_{m}(\Omega B,T)]\) induced by the Ganea maps \(\pi_{m}\) in (4.1) correspond under (5.16) to the natural inclusions \(\mathcal{Q}_{m+1}(B)\hookrightarrow\mathcal{Q}_{m}(B)\), and hence are all injective._ To prove Theorem 5.10 we will use an Eilenberg-Moore spectral sequence for \(K\)-theory in the following form.
**Lemma 5.11**.: _Let \(F\to E\to B\) be a \((\)homotopy\()\) fibration sequence over a base \(B\) such that \(K^{*}(\Omega B)\) is an exterior algebra in a finite number of generators of odd degrees. Then there is a multiplicative spectral sequence with \(\,E_{2}^{i,*}\cong\mathrm{Tor}_{i}^{K^{*}(B)}(\mathbb{Z},K^{*}(E))\,\) that strongly converges to \(K^{*}(F)\)._ The proof of Lemma 5.11 can be found, for example, in [10] (see Main Theorem, Part 3). Proof of Theorem 5.10.: We further claim that the ring homomorphisms \[j^{*}_{m,B}:\ K^{*}[X_{m}(\Omega B,T)]\to K^{*}[F_{m}(\Omega B,T)]\] induced by the fibre maps \(j_{m,B}\) in (4.1) are surjective, and with (5.16), they induce isomorphisms \[K^{*}[F_{m}(\Omega B,T)]\cong\mathcal{Q}_{m}(B)/(p^{*}_{B}(u))\] We prove these facts together with the claims of Theorem 5.10 by induction on \(m\). For \(m=0\), we only need to compute \(K^{*}[F_{0}(\Omega B,T)]\). This can be done using Lemma 5.11. Note that \(K(\operatorname{pt})\cong\mathbb{Z}\) has the obvious free resolution over \(K^{*}(B)\cong\mathbb{Z}[[u]]\): \[0\to\mathbb{Z}[[u]]\overset{\cdot u}{\to}\mathbb{Z}[[u]]\to\mathbb{Z}\to 0 \tag{5.17}\] Hence \(\operatorname{Tor}^{K^{*}(B)}_{*}(\mathbb{Z},K^{*}(BT))\) can be identified with the homology of the two-term complex \(0\to\mathbb{Z}[[t]]\overset{p^{*}_{B}(u)}{\longrightarrow}\mathbb{Z}[[t]]\to 0\), where the map in the middle is given by the power series (5.14). Since \(\mathbb{Z}[[t]]\) is an integral domain, it follows that \(\operatorname{Tor}^{K^{*}(B)}_{i}(\mathbb{Z},K^{*}(BT))=0\) for \(i>0\). The Eilenberg-Moore spectral sequence of Lemma 5.11 therefore collapses, giving an isomorphism \[K^{*}[F_{0}(\Omega B,T)]\cong\mathbb{Z}[[t]]/(p^{*}_{B}(u))\] Next, assume that \(K^{*}[X_{m}(\Omega B,T)]\cong\mathcal{Q}_{m}(B)\) and that \(K^{*}[F_{m}(\Omega B,T)]\cong\mathcal{Q}_{m}(B)/(p^{*}_{B}(u))\), with \(j^{*}_{m,B}\) being the canonical quotient map. Since \[X_{m+1}(\Omega B,T)\,\simeq\,\text{hocolim}\,[\,\operatorname{pt}\overset{i_{m}}{\longleftarrow}F_{m}(\Omega B,T)\overset{j_{m,B}}{\longrightarrow}X_{m}(\Omega B,T)\,]\,,\] and since \(K^{1}(\operatorname{pt})=K^{1}[F_{m}(\Omega B,T)]=K^{1}[X_{m}(\Omega B,T)]=0\), Lemma 5.2 (with \(G\) the trivial group) yields the four-term exact sequence \[0\to K^{0}[X_{m+1}(\Omega B,T)]\overset{(i^{*}_{m},\pi^{*}_{m})}{\longrightarrow}\mathbb{Z}\oplus\mathcal{Q}_{m}(B)\overset{p^{*}_{m}-j^{*}_{m,B}}{\longrightarrow}\mathcal{Q}_{m}(B)/(p^{*}_{B}(u))\overset{\partial}{\to}K^{1}[X_{m+1}(\Omega B,T)]\to 0\,.\] Here \(p_{m}\) is the trivial map from \(F_{m}(\Omega B,T)\) to the point. Since \(j^{*}_{m,B}\) is surjective, \(K^{1}[X_{m+1}(\Omega B,T)]=0\). The above exact sequence also shows that \(K^{0}[X_{m+1}(\Omega B,T)]\cong\operatorname{Ker}(p^{*}_{m}-j^{*}_{m,B})\subseteq\mathbb{Z}\oplus\mathcal{Q}_{m}(B)\) (with the isomorphism given by the map \((i^{*}_{m},\,\pi^{*}_{m})\)). Projection to \(\mathcal{Q}_{m}(B)\) identifies this kernel with \(\mathcal{Q}_{m+1}(B)=\mathbb{Z}+p^{*}_{B}(u)\mathcal{Q}_{m}(B)\subset\mathcal{Q}_{m}(B)\). It follows that \(K^{*}[X_{m+1}(\Omega B,T)]\cong\mathcal{Q}_{m+1}(B)\), with \(\pi^{*}_{m}\) being the inclusion of \(\mathcal{Q}_{m+1}(B)\) into \(\mathcal{Q}_{m}(B)\).
Finally, by taking the (completed) tensor product of the resolution (5.17) with \(\mathcal{Q}_{m+1}(B)\), we see that \(\operatorname{Tor}^{K^{*}(B)}_{i}(\mathbb{Z},\mathcal{Q}_{m+1}(B))\) is the homology of the complex \[0\to\mathcal{Q}_{m+1}(B)\xrightarrow{p^{*}_{B}(u)}{\longrightarrow}\mathcal{ Q}_{m+1}(B)\to 0\] where the map in the middle is given by multiplication by the formal power series (5.14). Since \(\mathcal{Q}_{m+1}(B)\subseteq\mathbb{Z}[[t]]\) is an integral domain, \(\operatorname{Tor}^{K^{*}(B)}_{i}(\mathbb{Z},K^{*}(X_{m+1}))=0\) for \(i>0\). The spectral sequence of Lemma 5.11 associated with the fibration sequence \(F_{m+1}\to X_{m+1}\to B\) therefore collapses, giving \[K^{*}[F_{m+1}(\Omega B,T)]\cong\mathcal{Q}_{m+1}(B)/(p^{*}_{B}(u))\,\] with \(j^{*}_{m+1,B}\) being the canonical quotient map. This completes the induction step and thus finishes the proof of the theorem. Theorem 5.10 allows one to distinguish spaces of quasi-invariants of the same multiplicity associated to homotopically distinct spaces in the genus of \(BG\). First, we recall that the topological \(K\)-theory \(K^{*}(X)\) of any space \(X\) of homotopy type of a CW complex carries a natural filtration \[F^{0}K^{*}(X)\supseteq F^{1}K^{*}(X)\supseteq\ldots\supseteq F^{n}K^{*}(X) \supseteq F^{n+1}K^{*}(X)\supseteq\ldots\] where \(F^{n}K^{*}(X)\) is defined to be the kernel of the restriction map \(\,K^{*}(X)\to K^{*}(X_{n-1})\,\) corresponding to the \((n-1)\)-skeleton of \(X\). This filtration is functorial: any map \(f:X\to X^{\prime}\) of spaces, each of which has homotopy type of a CW complex, induces a morphism of filtered rings \(\,f^{*}:K^{*}(X^{\prime})\to K^{*}(X)\). Moreover, by Cellular Approximation Theorem, it is independent of the CW structure in the sense that using a different CW structure on \(X\) will not change the isomorphism type of \(K^{*}(X)\) as a filtered ring. **Corollary 5.12**.: _Let \(B\) and \(B^{\prime}\) be two spaces in the genus of \(BG\) admitting essential maps from \(BT\). Assume that \(\,N_{B}\neq N_{B^{\prime}}\,\). Then \(\,X_{m}(\Omega B,T)\not\simeq X_{m}(\Omega B^{\prime},T)\,\) for any \(m\geq 0\). In particular, \(\,X_{m}(\Omega B,T)\) is not homotopy equivalent to \(X_{m}(G,T)\) for any \(B\not\simeq BG\)._ Proof.: Let \(\,\tilde{\pi}_{m}:BT\to X_{m}(\Omega B,T)\) denote the composite map \(\pi_{m-1}\circ\ldots\circ\pi_{0}\) in (4.1). By Theorem 5.10, this map induces an embedding \[\tilde{\pi}_{m}^{*}:K^{*}[X_{m}(\Omega B,T)]\cong\mathcal{Q}_{m}(B)\hookrightarrow \mathbb{Z}[[t]]\cong K^{*}(BT)\] which is a morphism of filtered rings. Now, recall that \(BT\simeq\mathbb{CP}^{\infty}\); the generator \(t\) in \(\,K^{*}(BT)\cong K^{*}(\mathbb{CP}^{\infty})=\mathbb{Z}[[t]]\,\) can be taken in the form \(t=b\xi\), where \(\xi\in F^{2}K^{2}(BT)\) and \(b\in K^{-2}(\mathrm{pt})\) is the Bott element (see [20, Sect. 3]). Hence \(\,t\in F^{2}K^{0}(BT)\), and therefore, by (5.14), we have \[p_{B}^{*}(u)\equiv N_{B}t^{2}\,(\mathrm{mod}\,F^{5}K^{*}(BT))\quad\text{in } \mathbb{Z}[[t]]\] Now, by Theorem 5.10, \[K^{*}[X_{m}(\Omega B,T)]\cong\mathcal{Q}_{m}(B)=\mathbb{Z}+\mathbb{Z}\cdot p_{ B}^{*}(u)+\ldots+\mathbb{Z}\cdot p_{B}^{*}(u)^{m-1}+p_{B}^{*}(u)\cdot\mathbb{Z}[[t ]]\,.\] Hence \[K^{*}[X_{m}(\Omega B,T)]/F^{5}K^{*}[X_{m}(\Omega B,T)]\cong\mathbb{Z}+\mathbb{ Z}\cdot N_{B}t^{2}\,\] where the generator \(\,N_{B}t^{2}\,\) is square zero. 
It follows that if \(p\) is a prime then \[K^{*}[X_{m}(\Omega B,T)]/(p,\,F^{5}K^{*}[X_{m}(\Omega B,T)])\,\cong\,\begin{cases} \left(\mathbb{Z}/p\mathbb{Z}\right)+\left(\mathbb{Z}/p\mathbb{Z}\right)\cdot \bar{N}_{B}t^{2}&\text{if }\,\,p\nmid N_{B}\\ \left(\mathbb{Z}/p\mathbb{Z}\right)&\text{if }\,\,p\mid N_{B}\end{cases}\] where \(\,(p,\,F^{5}K^{*}(X_{m}))\,\) denotes the ideal in \(K^{*}(X_{m})\) generated by \(p\in\mathbb{Z}\) and \(F^{5}K^{*}(X_{m})\). This shows that \(X_{m}(\Omega B,T)\) is not homotopy equivalent to \(X_{m}(\Omega B^{\prime},T)\) unless \(\,N_{B}=N_{B^{\prime}}\,\). _Remark 5.13_.: The converse of Corollary 5.12 also holds true in the following sense: if two spaces \(B\) and \(B^{\prime}\) in the genus of \(BG\) have homotopy equivalent towers of spaces of quasi-invariants \(\{X_{m}(\Omega B,T),\pi_{m}\}_{m\geq 0}\) and \(\{X_{m}(\Omega B^{\prime},T),\pi_{m}^{\prime}\}_{m\geq 0}\), then \(B\simeq B^{\prime}\,\). This simply follows from the fact that \[\mathrm{hocolim}_{m\in\mathbb{Z}_{+}}X_{m}(\Omega B,T)\simeq B\,\] which is a consequence of Ganea's Theorem 3.1. ## 6. Elliptic cohomology In this section, we compute complex analytic \(T\)- and \(G\)-equivariant elliptic cohomology of the quasi-flag manifolds \(F_{m}(G,T)\). We express the result in two ways: geometrically (in terms of coherent sheaves on a given elliptic curve \(E\)) and analytically (in terms of \(\Theta\)-functions and \(q\)-difference equations). We also compute the spaces of global sections of the elliptic cohomology sheaves of \(F_{m}(G,T)\) with coefficients twisted by tensor powers of the Looijenga line bundle on \(E\). This last computation provides a motivation for our definition of _elliptic quasi-invariants_ of \(W\). ### Equivariant elliptic cohomology Complex analytic elliptic cohomology was introduced by I. Grojnowski (see [10]). We will follow the approach of [1] that relies on earlier topological results of [1] and [11]. We begin by briefly recalling the main definitions. Let \(E\) be an elliptic curve defined analytically over \(\mathbb{C}\). Given a compact connected abelian Lie group \(T\) (i.e., \(T\cong U(1)^{n}\)), we write \(\check{T}:=\operatorname{Hom}(U(1),\,T)\) for its cocharacter lattice and set \[\mathcal{M}_{T}:=\check{T}\otimes_{\mathbb{Z}}E\,,\] which is an abelian variety of rank \(n=\operatorname{rk}(T)\) defined over \(\mathbb{C}\). The _\(T\)-equivariant elliptic cohomology_ is defined as a functor on the (homotopy) category of finite \(T\)-CW complexes with values in the category of (complex-analytic) coherent sheaves on \(\mathcal{M}_{T}\): \[\mathcal{E}ll_{T}^{*}:\;\operatorname{Ho}(\operatorname{Top}_{T}^{\text{fin} })\,\to\,\operatorname{Coh}(\mathcal{M}_{T})\,. \tag{6.1}\] This functor has the basic property that \(\,\mathcal{E}ll_{T}^{*}(T/T^{\prime})\cong\mathcal{O}_{\mathcal{M}_{T^{ \prime}}}\) for any closed subgroup \(T^{\prime}\subseteq T\), where \(\mathcal{O}_{\mathcal{M}_{T^{\prime}}}=i^{*}\mathcal{O}_{\mathcal{M}_{T}}\) is the restriction of the structure sheaf of \(\mathcal{M}_{T}\) to \(\mathcal{M}_{T^{\prime}}\) (see [1, 2.1(1)]). 
In particular, we have \[\mathcal{E}ll_{T}^{*}(\operatorname{pt})\cong\mathcal{O}_{\mathcal{M}_{T}} \tag{6.2}\] Now, if \(G\) is a compact connected Lie group with maximal torus \(T\) and Weyl group \(W\), we define the _\(G\)-equivariant_ elliptic cohomology of a \(G\)-space \(X\) by \[\mathcal{E}ll_{G}^{*}(X):=\mathcal{E}ll_{T}^{*}(X)^{W}\,, \tag{6.3}\] where \(X\) is viewed as a \(T\)-space by restricting \(G\)-action (see [1, 3.4]). To compute the \(G\)-equivariant elliptic cohomology we thus need to compute the \(T\)-equivariant elliptic cohomology of a \(G\)-space \(X\) and take its \(W\)-invariant sections. The coherent sheaves \(\mathcal{E}ll_{T}^{*}(X)\) do not have usually many interesting global sections. To remedy this one considers a twisted version of elliptic cohomology, where the sheaves \(\mathcal{E}ll_{T}^{*}(X)\) are tensored with a certain ample line bundle on \(\mathcal{M}_{T}\). Recall that, if \(G\) is a simple, simply connected compact Lie group with a root system \(\mathcal{R}\), there is a canonical \(W\)-equivariant line bundle \(\mathcal{L}\) on \(\mathcal{M}_{T}\) associated to the invariant symmetric bilinear form \(I\) on the coroot lattice of \(\mathcal{R}\) such that \(I(\alpha,\alpha)=2\) for all roots of smallest length in \(\mathcal{R}\); this line bundle has \(I\) as its Chern class and has degree equal to the order of the center of \(G\) (see [1]). Following [1, 1], we will refer to \(\mathcal{L}\) as the _Looijenga bundle_ on \(\mathcal{M}_{T}\) and define the \(T\)- and \(G\)-equivariant elliptic cohomology of \(X\) with coefficients twisted by \(\mathcal{L}\) by \[\operatorname{Ell}_{T}^{*}(X,\mathcal{L}) := \bigoplus_{n=0}^{\infty}H^{0}_{\operatorname{an}}(\mathcal{M}_{T},\,\mathcal{E}ll_{T}^{*}(X)\otimes\mathcal{L}^{n}) \tag{6.5}\] \[\operatorname{Ell}_{G}^{*}(X,\mathcal{L}) := \bigoplus_{n=0}^{\infty}H^{0}_{\operatorname{an}}(\mathcal{M}_{T},\,\mathcal{E}ll_{T}^{*}(X)\otimes\mathcal{L}^{n})^{W} \tag{6.4}\] where \(H^{0}_{\operatorname{an}}\) stands for the global (holomorphic) sections of the coherent sheaf \(\mathcal{E}ll_{T}^{*}(X)\) twisted by the tensor powers of \(\mathcal{L}\). Note that (6.4) and (6.5) are naturally graded modules over the graded commutative rings \[\operatorname{Ell}_{T}^{*}(\operatorname{pt},\mathcal{L})=\bigoplus_{n=0}^{ \infty}H^{0}_{\operatorname{an}}(\mathcal{M}_{T},\,\mathcal{L}^{n})\quad \text{and}\quad\operatorname{Ell}_{G}^{*}(\operatorname{pt},\mathcal{L})= \bigoplus_{n=0}^{\infty}H^{0}_{\operatorname{an}}(\mathcal{M}_{T},\,\mathcal{L} ^{n})^{W} \tag{6.6}\] which we denote by \(S(E)\) and \(S(E)^{W}\), respectively. Following [1], we also write \(S(E)^{-W}\) for the subspace of \(S(E)\) consisting of all \(W\)-anti-invariant sections. The main theorem of [1] asserts that \(S(E)^{W}\) is a graded polynomial algebra generated freely by \(\,l{+}1\,\) homogeneous elements, while \(S(E)^{-W}\) is a free module over \(S(E)^{W}\) of rank one (see [10, (3.4)]). The generators of \(S(E)^{W}\) are called the _Looijenga theta functions_ on \(\mathcal{M}_{T}\). ### Elliptic cohomology of quasi-flag manifolds In the rank one case (\(T=U(1)\)), we can identify \(\mathcal{M}_{T}=E\) and take for a model of \(E\) the _Tate curve_\(E_{q}:=\mathbb{C}^{*}/q^{\mathbb{Z}}\) with \(0<|q|<1\). 
The latter is defined as the quotient of the punctured line \(\mathbb{C}^{*}=\mathbb{C}\setminus\{0\}\) (viewed as a complex analytic manifold) by the free action of the infinite cyclic group \(\mathbb{Z}\,\): \[\mathbb{C}^{*}\to\mathbb{C}^{*}\,\ z\mapsto q^{n}z\,. \tag{6.7}\] We write \(A:=\mathcal{O}_{\mathrm{an}}(\mathbb{C}^{*})\) for the ring of global analytic functions on \(\mathbb{C}^{*}\) and define \(\mathcal{A}_{q}:=A\rtimes_{q}\mathbb{Z}\) to be the crossed product algebra for the action of \(\mathbb{Z}\) on \(A\) induced by (6.7). Letting \(\xi\) be the (multiplicative) generator of \(\mathbb{Z}\), we can present \(\mathcal{A}_{q}\) as a skew-polynomial algebra \(A[\xi,\xi^{-1}]\) with multiplication determined by the commutation relation \(\,\xi\cdot a(z)=a(qz)\cdot\xi\,\) for \(a(z)\in A\). The left modules over \(\mathcal{A}_{q}\) can be identified with spaces of global sections of \(\mathbb{Z}\)-equivariant quasi-coherent sheaves on \(\mathbb{C}^{*}\). The natural projection \(\pi:\mathbb{C}^{*}\to E_{q}\) induces then the additive functor \[\mathrm{Coh}(E_{q})\to\mathrm{Mod}_{A}^{\mathrm{f.p.}}(\mathcal{A}_{q})\,\quad \mathcal{F}\mapsto\widetilde{\mathcal{F}}:=H^{0}_{\mathrm{an}}(\mathbb{C}^{*}, \,\pi^{*}\mathcal{F})\,, \tag{6.8}\] that maps the coherent sheaves on the analytic curve \(E_{q}\) to left \(\mathcal{A}_{q}\)-modules admitting a finite presentation \(\,A^{\oplus m}\to A^{\oplus n}\to M\to 0\,\) over the subalgebra \(A\subset\mathcal{A}_{q}\). The following proposition is a well-known result that provides a convenient algebraic description of the category \(\mathrm{Coh}(E_{q})\); its proof can be found in various places (see, for example, [22, Thm 2.2] or [23, Sect. 2]). **Proposition 6.1**.: _The functor (6.8) is an exact equivalence of abelian tensor categories._ We remark that the tensor structure on \(\mathrm{Coh}(E_{q})\) is the standard geometric one (defined by tensor product of sheaves of \(\mathcal{O}_{E_{q}}\)-modules), while the tensor structure on \(\mathrm{Mod}_{A}^{\mathrm{f.p.}}(\mathcal{A}_{q})\) is defined by tensoring \(\mathcal{A}_{q}\)-modules over the subalgebra \(A\) with the action of \(\mathcal{A}_{q}\) on \(M_{1}\otimes_{A}M_{2}\) given by \(\xi\cdot(m_{1}\otimes m_{2})=(\xi\cdot m_{1})\otimes(\xi\cdot m_{2})\). The vector bundles on \(E_{q}\) correspond under (6.8) to \(\mathcal{A}_{q}\)-modules that are free of finite rank over \(A\); such modules form a full subcategory of \(\mathrm{Mod}_{A}^{\mathrm{f.p.}}(\mathcal{A}_{q})\) closed under the tensor product. The cohomology of a coherent sheaf \(\mathcal{F}\) on \(E_{q}\) can be computed algebraically in terms of \(\mathcal{A}_{q}\)-modules as invariants and coinvariants of the induced action of \(\mathbb{Z}\) on the corresponding \(A\)-module \(\widetilde{\mathcal{F}}\) (see [23, Lemma 2.1]): \[H^{0}_{\mathrm{an}}(E_{q},\,\mathcal{F})\,\cong\,\mathrm{Ker}\,(\xi-\mathrm{ id}\,:\,\widetilde{\mathcal{F}})\,\quad H^{1}_{\mathrm{an}}(E_{q},\,\mathcal{F})\,\cong\,\mathrm{Coker}\,(\xi- \mathrm{id}\,:\,\widetilde{\mathcal{F}})\,. \tag{6.9}\] where \(\,\xi\,\) is the multiplicative generator of the copy of \(\mathbb{Z}\) in \(\mathcal{A}_{q}\) acting on the \(\mathcal{A}_{q}\)-module \(\widetilde{\mathcal{F}}\). 
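As a quick illustration of (6.9) (an aside that is not needed in the sequel): taking \(\mathcal{F}=\mathcal{O}_{E_{q}}\), so that \(\widetilde{\mathcal{F}}=A\) with \(\xi\) acting by \(f(z)\mapsto f(qz)\), formula (6.9) gives \[H^{0}_{\mathrm{an}}(E_{q},\,\mathcal{O}_{E_{q}})\,\cong\,\{f\in A\,:\,f(qz)=f(z)\}\,\cong\,\mathbb{C}\,,\] since a \(q\)-invariant analytic function on \(\mathbb{C}^{*}\) is bounded on the annulus \(|q|\leq|z|\leq 1\), hence bounded on all of \(\mathbb{C}^{*}\), and therefore constant (its singularities at \(0\) and \(\infty\) being removable).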
**Example 6.2**.: The structure sheaf \(\mathcal{O}_{E_{q}}\) of \(E_{q}\) corresponds under (6.8) to the cyclic module \(\widetilde{\mathcal{O}}_{E_{q}}=\mathcal{A}_{q}/\mathcal{A}_{q}(\xi-1)\,,\) which can be identified as \(\widetilde{\mathcal{O}}_{E_{q}}\cong Ae\) with generator \(e\) satisfying the relation \(\xi e=e\). The line bundle \(\mathcal{O}_{E_{q}}([1])\) corresponds to \(\,\mathcal{A}_{q}/\mathcal{A}_{q}(\xi+z)\cong Ae\,,\) with \(e\) satisfying \(\xi e=-ze\). More generally, any line bundle on \(E_{q}\) of degree \(d\) corresponds to a cyclic \(\mathcal{A}_{q}\)-module \(Ae\), where the generator \(e\) satisfies the relation \(\xi e=cz^{d}e\) for some \(c\in\mathbb{C}^{*}\) (see [23, Example 2.2]). We now proceed with computing elliptic cohomology of the spaces \(F_{m}=F_{m}(G,T)\). For a fixed Tate curve \(E_{q}\), we first describe the \(T\)-equivariant cohomology, presenting the answer in two ways: in terms of coherent sheaves on \(E_{q}\) and in terms of \(\mathcal{A}_{q}\)-modules via the equivalence (6.8). **Theorem 6.3**.: _For all \(m\geq 0\), there are natural isomorphisms of coherent sheaves in \(\mathrm{Coh}(E_{q})\)_ \[\mathcal{E}ll^{*}_{T}(F_{m})\,\cong\,\mathcal{O}_{E_{q}}\times_{\mathcal{O}_{E_{q}}/\mathcal{J}^{2m+1}}\mathcal{O}_{E_{q}}\, \tag{6.10}\] _where \(\mathcal{J}\) is the subsheaf of ideals in the structure sheaf \(\mathcal{O}_{E_{q}}\) vanishing at the identity of \(E_{q}\). Under the equivalence (6.8), the coherent sheaf (6.10) corresponds to the \(\mathcal{A}_{q}\)-module_ \[\widetilde{\mathcal{E}ll}^{*}_{T}(F_{m})\cong A\times_{A/\langle\Theta\rangle^{2m+1}}A\,, \tag{6.11}\] _where the action of \(\mathcal{A}_{q}\) on the fibre product is induced by the natural action of \(\mathcal{A}_{q}\) on \(A\) and \(\langle\Theta\rangle\) denotes the \((\)principal\()\) ideal in \(A=\mathcal{O}_{\mathrm{an}}(\mathbb{C}^{*})\) generated by the classical theta function_ \[\Theta(z):=(1-z)\,\prod_{n>0}(1-q^{n}z)(1-q^{n}z^{-1})\,=\,\frac{1}{(q;q)_{\infty}}\,\,\sum_{n\in\mathbb{Z}}\,q^{\frac{n(n-1)}{2}}(-z)^{n}\,. \tag{6.12}\] Proof.: Recall that, by Lemma 3.12, there is a \(T\)-equivariant homeomorphism \[F_{m}\,\cong\,\Sigma\,E_{2m}(T)=\mathrm{hocolim}\,(\mathrm{pt}\gets E_{2m}(T)\to\mathrm{pt})\,,\] where \(E_{2m}(T)=T^{*(2m+1)}\) is Milnor's \(2m\)-universal \(T\)-bundle. As with equivariant \(K\)-theory, the \(T\)-equivariant elliptic cohomology is known to satisfy the Mayer-Vietoris property (see, e.g., [10, Theorem 3.8]). Hence, as in Lemma 5.2, there is a six-term exact sequence of sheaves on \(E_{q}\) \[\begin{CD}\mathcal{E}ll^{0}_{T}(F_{m})@>>>\mathcal{E}ll^{0}_{T}(\mathrm{pt})\oplus\mathcal{E}ll^{0}_{T}(\mathrm{pt})@>{p_{1}^{*}-p_{2}^{*}}>>\mathcal{E}ll^{0}_{T}(E_{2m}T)\\ @AAA @. @VVV\\ \mathcal{E}ll^{1}_{T}(E_{2m}T)@<<<\mathcal{E}ll^{1}_{T}(\mathrm{pt})\oplus\mathcal{E}ll^{1}_{T}(\mathrm{pt})@<<<\mathcal{E}ll^{1}_{T}(F_{m})\end{CD}\] where the arrow on top of the diagram is given on sections by \((x_{1},x_{2})\mapsto p_{1}^{*}(x_{1})-p_{2}^{*}(x_{2})\), with \(p_{1}\) and \(p_{2}\) representing two copies of the canonical map \(E_{2m}(T)\to\mathrm{pt}\). By (6.2), we know that \(\mathcal{E}ll^{*}_{T}(\mathrm{pt})\cong\mathcal{O}_{E_{q}}\); on the other hand, by Lemma 6.4 (see below), \[\mathcal{E}ll^{*}_{T}(E_{2m}T)\cong\mathcal{O}_{E_{q}}/\mathcal{J}^{2m+1},\] where \(\mathcal{J}\subset\mathcal{O}_{E_{q}}\) is the subsheaf of sections vanishing at \(1\in E_{q}\). Hence, by exactness, the above commutative diagram shows that \(\,\mathcal{E}ll^{1}_{T}(F_{m}(G,T))=0\,\) and \[\mathcal{E}ll^{0}_{T}(F_{m})=\mathrm{Ker}(p_{1}^{*}-p_{2}^{*})\cong\mathcal{O}_{E_{q}}\times_{\mathcal{O}_{E_{q}}/\mathcal{J}^{2m+1}}\mathcal{O}_{E_{q}}\,.\] This proves the first claim of the theorem.
Now, to prove the second claim we observe that the skyscraper sheaf \(\mathcal{F}:=i_{1,*}\mathbb{C}\) on \(E_{q}\) (with stalk \(\mathbb{C}\) supported at \(\{1\}\)) corresponds under (6.8) to the quotient module \(\widetilde{\mathcal{F}}\cong A/\langle\Theta\rangle\), where the action of \(\mathcal{A}_{q}\) is induced by the natural action of \(\mathcal{A}_{q}\) on \(A\). Indeed, \(\widetilde{\mathcal{F}}\) is isomorphic to the cokernel of the map \(\mathcal{O}_{E_{q}}\to\mathcal{O}_{E_{q}}([1])\), which is given (with the identifications of Example 6.2) by \(e\mapsto\Theta e\). This follows from the fact that, as a global analytic function on \(\mathbb{C}^{*}\), \(\Theta=\Theta(z)\) has simple zeroes exactly at the points \(z=q^{n}\,(n\in\mathbb{Z})\). Hence the ideal sheaf \(\mathcal{J}\subset\mathcal{O}_{E_{q}}\) corresponds to the ideal \(\langle\Theta\rangle=A\Theta\) in \(A\), and more generally, since (6.8) is a tensor functor, \(\mathcal{J}^{2m+1}\) corresponds to \(\langle\Theta\rangle^{2m+1}=A\Theta^{2m+1}\) for all \(m\geq 0\). Now, since (6.8) is an exact additive functor, it takes the fibre product \(\mathcal{O}_{E_{q}}\times_{\mathcal{O}_{E_{q}}/\mathcal{J}^{2m+1}}\mathcal{O}_{E_{q}}\) in \(\mathrm{Coh}(E_{q})\) to the module \(A\times_{A/\langle\Theta\rangle^{2m+1}}A\) in \(\mathrm{Mod}^{\mathrm{f.p.}}_{A}(\mathcal{A}_{q})\), thus completing the proof of the theorem. **Lemma 6.4**.: _There are isomorphisms of sheaves \(\,\mathcal{E}ll^{*}_{T}(E_{n}T)\cong\mathcal{O}_{E_{q}}/\mathcal{J}^{n+1}\) for all \(\,n\geq 0\,\)._ Proof.: Note that \(T\) acts freely on \(E_{n}(T):=T^{*(n+1)}\). Recall (see [10, Sect. 3.2]) that if \(X\) is a finite \(T\)-space, the stalk at \(a\in E\) of \(\mathcal{E}ll^{*}_{T}(X)\) is isomorphic to \(H^{*}_{T}(X^{a};\mathbb{C})\otimes_{\mathbb{C}[x]}\mathcal{O}_{E_{q},1}\), where \(\mathcal{O}_{E_{q},1}\) stands for the ring of germs of analytic functions at \(1\in E_{q}\). Here, \(X^{a}\) stands for the fixed point space \(X^{T_{a}}\), where \(T_{a}=\mathbb{Z}/k\mathbb{Z}\subset T\) if \(a\) is of finite order \(k\) in \(E\), and \(T_{a}=T\) if \(a\) is not of finite order in \(E\). It follows that if \(T\) acts freely on \(X\), the stalk \(\mathcal{E}ll^{*}_{T}(X)_{a}\) of \(\mathcal{E}ll^{*}_{T}(X)\) at \(a\) vanishes for \(a\neq 1\). Hence, \(\mathcal{E}ll_{T}(E_{n}T)_{a}=0\) for \(a\neq 1\), and for \(U\) a small neighborhood of \(1\) in \(E_{q}\), \(\mathcal{E}ll_{T}^{*}(E_{n}T)|_{U}\cong H_{T}^{*}(E_{n}T;\mathbb{C})\otimes_{\mathbb{C}[x]}\mathcal{O}_{E_{q}}|_{U}\), where \(\mathcal{O}_{E_{q}}|_{U}\) acquires the structure of a sheaf of \(\mathbb{C}[x]\)-modules via the map \(\mathbb{C}[x]\to\mathcal{O}_{E_{q}}(U),x\mapsto\theta\), where \(\theta\) is a generator of the maximal ideal of the local ring \(\mathcal{O}_{E_{q},1}\). The desired lemma therefore follows from the fact that \(H_{T}^{*}(E_{n}T;\mathbb{C})\cong H^{*}(B_{n}T;\mathbb{C})\cong\mathbb{C}[x]/x^{n+1}\) (see the proof of Lemma 3.13 above). To compute the \(G\)-equivariant elliptic cohomology of \(F_{m}\) we need to refine the result of Theorem 6.3 by taking into account the action of \(W=\mathbb{Z}/2\mathbb{Z}\) on \(\,\mathcal{E}ll_{T}^{*}(F_{m})\). To this end, we first refine Proposition 6.1 to the \(W\)-equivariant setting.
Observe that the equivalence (6.8) extends to the category of \(W\)-equivariant coherent sheaves on \(E_{q}\): \[\operatorname{Coh}_{W}(E_{q})\,\xrightarrow{\sim}\,\operatorname{Mod}_{A}^{ \operatorname{f.p.}}(\mathcal{A}_{q}\rtimes W)\, \tag{6.13}\] where the category of \(\mathcal{A}_{q}\)-modules finitely presented over \(A\) is replaced by a similar category of modules over the crossed product algebra \(\mathcal{A}_{q}\rtimes W\) associated to the geometric action of \(W\) on \(\mathbb{C}^{*}\). The algebra \(\mathcal{A}_{q}\rtimes W\) has the canonical presentation \(A\langle\xi,\xi^{-1},s\rangle\), where the generators \(\xi\), \(s\) and \(a(z)\in A\) are subject to the relations \[s\cdot a(z)=a(z^{-1})\cdot s\,\quad s\cdot\xi=\xi^{-1}\cdot s\,\quad\xi\cdot a (z)=a(qz)\cdot\xi\,\quad s^{2}=1\] We let \(e_{+}:=(1+s)/2\) denote the symmetrizing idempotent in \(\mathcal{A}_{q}\rtimes W\) and consider the subalgebra \(e_{+}(\mathcal{A}_{q}\rtimes W)e_{+}\) of \(\mathcal{A}_{q}\rtimes W\) (with identity element \(e_{+}\)). This subalgebra can be naturally identified with the invariant subalgebra \(\mathcal{A}_{q}^{W}\) of \(\mathcal{A}_{q}\) via the isomorphism: \(\,\mathcal{A}_{q}^{W}\xrightarrow{\sim}e_{+}(\mathcal{A}_{q}\rtimes W)e_{+}\), \(a\mapsto e_{+}a\,e_{+}\,\). With this identification, we can define the additive functor \[\operatorname{Mod}(\mathcal{A}_{q}\rtimes W)\,\to\,\operatorname{Mod}( \mathcal{A}_{q}^{W})\,\quad M\mapsto e_{+}M\, \tag{6.14}\] that assigns to a \(W\)-equivariant \(\mathcal{A}_{q}\)-module its subspace of \(W\)-invariant elements viewed as a module over \(\mathcal{A}_{q}^{W}\). The next result is well known for the algebra \(\mathcal{A}_{q}^{\operatorname{alg}}:=\mathcal{O}_{\operatorname{alg}}( \mathbb{C}^{*})\rtimes_{q}\mathbb{Z}\) which is an algebraic (polynomial) analogue7 of \(\mathcal{A}_{q}=\mathcal{O}_{\operatorname{an}}(\mathbb{C}^{*})\rtimes_{q} \mathbb{Z}\). The analytic case easily reduces to the algebraic one as \(\mathcal{A}_{q}^{\operatorname{alg}}\) is naturally a subalgebra of \(\mathcal{A}_{q}\). Footnote 7: The algebra \(\mathcal{A}_{q}^{\operatorname{alg}}\) is usually referred to as a quantum Weyl algebra. **Lemma 6.5**.: _The functor (6.14) is an equivalence of categories, its inverse being given by_ \[\mathcal{A}_{q}\otimes_{\mathcal{A}_{q}^{W}}(\,\text{--}):\,\operatorname{Mod }(\mathcal{A}_{q}^{W})\,\to\,\operatorname{Mod}(\mathcal{A}_{q}\rtimes W)\] Proof.: Lemma can be restated by saying that the algebra \(\mathcal{A}_{q}\rtimes W\) is Morita equivalent to \(\mathcal{A}_{q}^{W}\). To prove this, by standard Morita theory (see [13, 3.5.6]), it suffices to check that the idempotent \(e_{+}\) generates the whole \(\mathcal{A}_{q}\rtimes W\) as its two-sided ideal. This last condition holds for \(\mathcal{A}_{q}^{\operatorname{alg}}\rtimes W\), since \(\mathcal{A}_{q}^{\operatorname{alg}}\rtimes W\) is a simple algebra (has no proper two-sided ideals), if \(q\) is not a root of unity. But then it also holds for \(\mathcal{A}_{q}\rtimes W\), since \(\mathcal{A}_{q}^{\operatorname{alg}}\rtimes W\) is a unital subalgebra of \(\mathcal{A}_{q}\rtimes W\) containing \(e_{+}\). 
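It may help to keep in mind the concrete realization of these generators as operators on \(A=\mathcal{O}_{\mathrm{an}}(\mathbb{C}^{*})\): \(\xi\) acts by \(f(z)\mapsto f(qz)\) and \(s\) by \(f(z)\mapsto f(z^{-1})\) (_cf._ the formulas (6.20) below). The following SymPy snippet is a small sanity check of the defining relations \(s^{2}=1\) and \(s\cdot\xi=\xi^{-1}\cdot s\) in this realization (an illustrative aside, assuming SymPy; the sample function is arbitrary):

```python
import sympy as sp

z, q = sp.symbols('z q')

xi     = lambda f: f.subs(z, q*z)   # xi      : f(z) |-> f(qz)
xi_inv = lambda f: f.subs(z, z/q)   # xi^{-1} : f(z) |-> f(q^{-1} z)
s      = lambda f: f.subs(z, 1/z)   # s       : f(z) |-> f(z^{-1})

f = z**3 + 5/z - 7                  # an arbitrary sample Laurent polynomial in A

assert sp.simplify(s(s(f)) - f) == 0               # s^2 = 1
assert sp.simplify(s(xi(f)) - xi_inv(s(f))) == 0   # s . xi = xi^{-1} . s
```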
Now, combining (6.13) with Morita equivalence (6.14), we get the equivalence \[\operatorname{Coh}_{W}(E_{q})\,\xrightarrow{\sim}\,\operatorname{Mod}_{A^{W}} ^{\operatorname{f.p.}}(\mathcal{A}_{q}^{W})\,\quad\mathcal{F}\mapsto H_{\operatorname{an}}^{0}(\mathbb{C}^{*},\,\pi^{*} \mathcal{F})^{W}\,, \tag{6.15}\] that allows us to describe the \(W\)-equivariant coherent sheaves on \(E_{q}\) in terms of \(\mathcal{A}_{q}^{W}\)-modules. Recall that \(\mathcal{E}ll_{G}^{*}(F_{m})\) is defined to be the subsheaf of \(W\)-invariant sections of the coherent sheaf \(\mathcal{E}ll_{T}^{*}(F_{m})\) (see (6.3)). In the next theorem, we describe \(\mathcal{E}ll_{G}^{*}(F_{m})\) explicitly as an \(\mathcal{A}_{q}^{W}\)-submodule of \(A\), where the action of \(\mathcal{A}_{q}^{W}\) on \(A\) is obtained by restricting the natural action of \(\mathcal{A}_{q}\). **Theorem 6.6**.: _Under the equivalence (6.15), the \(W\)-equivariant sheaf \(\,\mathcal{E}ll^{*}_{T}(F_{m})\,\) maps to the \(\mathcal{A}^{W}_{q}\)-module representing the \(G\)-equivariant elliptic cohomology of \(F_{m}:\)_ \[\widetilde{\mathcal{E}ll}^{*}_{G}(F_{m})\,\cong\,A^{W}+\,A^{W}(\Theta(z)- \Theta(z^{-1}))\,\vartheta(z)^{2m}\ \subseteq\ A\,, \tag{6.16}\] _where \(A^{W}\) is the subspace of \(W\)-invariant functions in \(A=\mathcal{O}_{\mathrm{an}}(\mathbb{C}^{*})\) and \(\vartheta(z)\in A[z^{\pm 1/2}]\) is the Jacobi theta function_ \[\vartheta(z):=(z^{1/2}-z^{-1/2})\,\prod_{n>0}(1-q^{n}z)(1-q^{n}z^{-1}) \tag{6.17}\] Proof.: Observe that the \(T\)-space \(F_{m}\) comes together with a natural \(T\)-equivariant map \[(G/T)^{T}\,\hookrightarrow\,(G/T)^{T}*E_{2m}(T)\,\cong\,F_{m}(G,T)\,, \tag{6.18}\] where \((G/T)^{T}\subset G/T\) is the set of \(T\)-fixed points in \(G/T\) (see (3.38)). On \(T\)-equivariant elliptic cohomology, the map (6.18) induces an injective map \(\,\mathcal{E}ll^{*}_{T}(F_{m})\hookrightarrow\mathcal{E}ll^{*}_{T}[(G/T)^{T}]\,\), which under the isomorphism (6.10) of Theorem 6.3, corresponds to the canonical inclusion \[\mathcal{O}_{E_{q}}\times_{\mathcal{O}_{E_{q}}/\mathcal{J}^{2m+1}}\mathcal{O} _{E_{q}}\,\hookrightarrow\,\mathcal{O}_{E_{q}}\times\mathcal{O}_{E_{q}} \tag{6.19}\] Now, the map (6.18) is also equivariant under the action of \(W\) which is given on \((G/T)^{T}=\mathbb{S}^{0}\) simply by transposition of points. It follows that (6.19) is a morphism of \(W\)-equivariant sheaves on \(E_{q}\) that, under equivalence (6.13), corresponds to the \(W\)-equivariant inclusion \(\,A\times_{A/\langle\Theta\rangle^{2m+1}}A\hookrightarrow A\times A\,\), where \(W\) acts on \(A\times A\) by \(s\cdot(f(z),g(z))=(g(z^{-1}),f(z^{-1}))\). As a \((\mathcal{A}_{q}\rtimes W)\)-module, the product \(A\times A\) is thus isomorphic to \(A[W]:=A\otimes\mathbb{C}W\), where the action of \(\mathcal{A}_{q}\rtimes W\) is given by \[a\cdot(f(z)\otimes w) = a(z)f(z)\otimes w\,\] \[\xi\cdot(f(z)\otimes w) = f(qz)\otimes w\,\] \[s\cdot(f(z)\otimes w) = f(z^{-1})\otimes sw\,. \tag{6.20}\] Choosing a basis in \(\mathbb{C}W\) consisting of the idempotents \(\{e_{+},\,e_{-}\}\), we can describe \(\widetilde{\mathcal{E}ll}^{*}_{T}(F_{m})\) as the \((\mathcal{A}_{q}\rtimes W)\)-submodule of \(A[W]\) \[\widetilde{\mathcal{E}ll}^{*}_{T}(F_{m})\,\cong\,A\,e_{+}+A\,\Theta(z)^{2m+1} \,e_{-}\,, \tag{6.21}\] where the isomorphism is explicitly given by \(\,(f,g)\mapsto(f+g)e_{+}+(f-g)e_{-}\,\). 
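The next step of the proof uses two elementary identities relating (6.12) and (6.17), namely \(\vartheta(z)=-z^{-1/2}\Theta(z)\) and \(\Theta(z^{-1})=-z^{-1}\Theta(z)\); both follow by comparing the prefactors of the common product \(\prod_{n>0}(1-q^{n}z)(1-q^{n}z^{-1})\). The following SymPy snippet is an optional sanity check (an illustrative aside, assuming SymPy; the truncation order \(N\) is arbitrary, and the identities hold exactly at every truncation):

```python
import sympy as sp

z, q = sp.symbols('z q')
N = 3  # truncation order of the infinite products

P = sp.Integer(1)
for n in range(1, N + 1):
    P *= (1 - q**n * z) * (1 - q**n / z)

Theta = (1 - z) * P                        # truncation of (6.12)
theta = (sp.sqrt(z) - 1/sp.sqrt(z)) * P    # truncation of (6.17)

assert sp.expand(sp.sqrt(z)*theta + Theta) == 0      # theta(z)   = -z^{-1/2} Theta(z)
assert sp.expand(z*Theta.subs(z, 1/z) + Theta) == 0  # Theta(1/z) = -z^{-1}   Theta(z)
```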
Now, applying to (6.21) the restriction functor (6.14) and using the (obvious) algebraic identities for theta functions \(\vartheta(z)=-z^{-1/2}\Theta(z)\) and \(\Theta(z^{-1})=-z^{-1}\Theta(z)\), we get \[\widetilde{\mathcal{E}ll}^{*}_{T}(F_{m})^{W} \cong e_{+}A\,e_{+}+e_{+}A\,\Theta(z)^{2m+1}\,e_{-}\] \[= e_{+}A^{W}+e_{+}A\,\Theta(z)\vartheta(z)^{2m}e_{-}\] \[= e_{+}A^{W}+e_{+}A\,\Theta(z)e_{-}\vartheta(z)^{2m}\] \[= e_{+}A^{W}+e_{+}A\,e_{+}\left(\Theta(z)-\Theta(z^{-1})\right) \vartheta(z)^{2m}\] \[= e_{+}\left(A^{W}+A^{W}\left(\Theta(z)-\Theta(z^{-1})\right) \vartheta(z)^{2m}\right)\,,\] which, with our identifications \(\widetilde{\mathcal{E}ll}^{*}_{G}(F_{m})=\widetilde{\mathcal{E}ll}^{*}_{T}(F_ {m})^{W}\) (see (6.3)) and \(\,e_{+}(\mathcal{A}_{q}\rtimes W)e_{+}=\mathcal{A}_{q}^{W}\), is precisely the isomorphism (6.16). ### Elliptic cohomology with twisted coefficients The coherent sheaves \(\mathcal{E}ll^{*}_{T}(F_{m})\) (and a fortiori \(\,\mathcal{E}ll^{*}_{G}(F_{m})\)) do not have nontrivial global sections. Indeed, by Theorem 6.3, \(\,\mathcal{E}ll^{*}_{T}(F_{m})\) fits in the short exact sequence in \(\operatorname{Coh}(E_{q})\): \[0\to\mathcal{E}ll^{*}_{T}(F_{m})\,\to\,\mathcal{O}_{E_{q}}\oplus\mathcal{O}_{ E_{q}}\,\to\,\mathcal{O}_{E_{q}}/\mathcal{J}^{2m+1}\to 0 \tag{6.22}\] that shows at once that \(H^{0}_{\operatorname{an}}(E_{q},\mathcal{E}ll^{*}_{T}(F_{m}))\cong\mathbb{C}\,\) for all \(m\geq 0\). With a little more work, using the long exact cohomology sequence associated to (6.22) we can also find that \(\,H^{1}_{\operatorname{an}}(E_{q},\mathcal{E}ll^{*}_{T}(F_{m}))\cong\mathbb{C }^{2m+2}\), which -- as a \(W\)-module -- admits decomposition \[H^{1}_{\operatorname{an}}(E_{q},\mathcal{E}ll^{*}_{T}(F_{m}))\,\cong\, \mathbb{C}^{\oplus(m+1)}_{+}\,\oplus\,\mathbb{C}^{\oplus(m+1)}_{-}\,, \tag{6.23}\] where '\(\mathbb{C}_{+}\)' and '\(\mathbb{C}_{-}\)' denote the trivial and the sign representations of \(W\), respectively. A much richer picture emerges if we twist the elliptic cohomology sheaves \(\mathcal{E}ll^{*}_{T}(F_{m})\) with the Looijenga line bundle \(\mathcal{L}\) on \(E_{q}\) (see definitions (6.4) and (6.5)). Under the equivalence (6.8), this line bundle corresponds to the rank one free \(A\)-module \(\tilde{\mathcal{L}}=A\,v\), where the action of \(\mathcal{A}_{q}\) and \(W\) are determined by the relations \(\,\xi\cdot v=q\,z^{2}\,v\) and \(s\cdot v=v\) (_cf._ Example 6.2). Since (6.8) preserves tensor products, the tensor powers \(\mathcal{L}^{n}=\mathcal{L}^{\otimes n}\) of \(\mathcal{L}\) in \(\operatorname{Coh}(E_{q})\) correspond to the \(\mathcal{A}_{q}\)-modules \(\widetilde{\mathcal{L}}^{n}=A\,v_{n}\) with \(\xi\cdot v_{n}=q^{n}\,z^{2n}\,v_{n}\) and \(s\cdot v_{n}=v_{n}\). By (6.9), we can then identify the spaces of global sections of these line bundles as \[H^{0}_{\operatorname{an}}(E_{q},\,\mathcal{L}^{n})\,\cong\,\{f(z)\in A\ :\ f(qz)=q^{-n}\,z^{-2n}\,f(z)\}\,\quad\forall\,n\geq 0\,. \tag{6.24}\] Following [11], we set \[S(E):=\bigoplus_{n\geq 0}\,H^{0}_{\operatorname{an}}(E_{q},\,\mathcal{L}^{n})\,, \tag{6.25}\] which, with identifications (6.24), is a graded subalgebra of \(A\) stable under the action of \(W\). To describe this subalgebra we decompose it as the direct sum of \(W\)-invariants and anti-invariants: \[S(E)\,=\,S(E)^{W}\oplus\,S(E)^{-W} \tag{6.26}\] Then, by Looijenga Theorem (see[11, (3.4)]), we know that \(S(E)^{W}\) is a free polynomial algebra on \(2\) generators, while \(S(E)^{-W}\) is a free module over \(S(E)^{W}\) of rank one. 
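It is worth recording here, as a short aside, the classical functional equation satisfied by (6.17), which is the basic computational fact behind (6.24)-(6.26): reindexing the product in (6.17) gives \[\vartheta(qz)\,=\,-\,q^{-1/2}z^{-1}\,\vartheta(z)\,.\] In particular, \(\vartheta^{2}(qz)=q^{-1}z^{-2}\,\vartheta^{2}(z)\) and \(\vartheta(q^{2}z^{2})=q^{-2}z^{-4}\,\vartheta(z^{2})\), so that \(\vartheta^{2}(z)\) satisfies (6.24) with \(n=1\) and \(\vartheta(z^{2})\) satisfies it with \(n=2\); these functions therefore define elements of \(S(E)\) of degrees \(1\) and \(2\), respectively.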
The generators of \(S(E)^{W}\) and \(S(E)^{-W}\) can be explicitly given in terms of the Jacobi theta function (6.17): namely, \(S(E)^{W}\) is generated (as an algebra) by \(\vartheta^{2}(z)\) and \(\vartheta^{2}(-z)\), which are both invariant functions in \(S(E)\) of degree \(1\), while \(S(E)^{-W}\) is generated (as a module) by the function \(\vartheta(z^{2})\) which is an anti-invariant in \(S(E)\) of degree \(2\). Now to state our last result in this section we recall the definitions of equivariant elliptic cohomology with twisted coefficients: see formulas (6.4) and (6.5) (with \(\mathcal{M}_{T}=E_{q}\)). For \(X=G/T\), it is well known that (see, e.g., [10]): \[\operatorname{Ell}^{*}_{G}(G/T,\mathcal{L})\cong\operatorname{Ell}^{*}_{T}( \operatorname{pt},\mathcal{L})=S(E) \tag{6.27}\] We extend this result to the quasi-flag manifolds \(F_{m}=F_{m}(G,T)\). **Theorem 6.7**.: _The natural maps_ \[G/T=F_{0}(G,T)\to F_{1}(G,T)\to\ldots\to F_{m-1}(G,T)\to F_{m}(G,T)\to\ldots\] _induce injective homomorphisms on twisted elliptic cohomology:_ \[\ldots\hookrightarrow\operatorname{Ell}^{*}_{G}(F_{m},\mathcal{L}) \hookrightarrow\operatorname{Ell}^{*}_{G}(F_{m-1},\mathcal{L})\hookrightarrow \ldots\hookrightarrow\operatorname{Ell}^{*}_{G}(G/T,\mathcal{L})\,.\] _Under the identification (6.27), the composite map \(\operatorname{Ell}^{*}_{G}(F_{m},\mathcal{L})\hookrightarrow\operatorname{Ell}^{* }_{G}(G/T,\mathcal{L})\) corresponds to the inclusion \(\,S(E)^{W}\,\oplus\,\vartheta^{2m}(z)\,S(E)^{-W}\hookrightarrow S(E)\,\), so that_ \[\operatorname{Ell}^{*}_{G}(F_{m},\mathcal{L})\,\cong\,S(E)^{W}\,\oplus\, \vartheta^{2m}(z)\,S(E)^{-W}\,, \tag{6.28}\] _where \(\,S(E)^{W}=\mathbb{C}[\vartheta^{2}(z),\vartheta^{2}(-z)]\,\) and \(\,S(E)^{-W}=\mathbb{C}[\vartheta^{2}(z),\vartheta^{2}(-z)]\,\vartheta(z^{2})\,\)._ Proof.: We use the description of \(\widetilde{\mathcal{Ell}}^{*}_{T}(F_{m})\) given in the proof of Theorem 6.6: namely, \(\,\widetilde{\mathcal{Ell}}^{*}_{T}(F_{m})=A\,e_{+}\,+\,A\,\Theta^{2m+1}\,e_{-}\,\) as an \((\mathcal{A}_{q}\rtimes W)\)-submodule of \(A[W]=A\,e_{+}+A\,e_{-}\,\). Under the equivalence (6.8), the twisted sheaves \(\,\mathcal{Ell}^{*}_{T}(F_{m})\otimes\mathcal{L}^{n}\,\) can then be described by \[\widetilde{\mathcal{Ell}}^{*}_{T}(F_{m})\otimes_{A}\widetilde{\mathcal{L}}^{ n}\,=\,A\,v_{n}\otimes e_{+}\,\oplus\,A\,\Theta^{2m+1}\,v_{n}\otimes e_{-} \tag{6.29}\] and we can compute their global sections using formula (6.9): \[H^{0}_{\operatorname{an}}(E_{q},\,\mathcal{Ell}^{*}_{T}(F_{m}) \otimes\mathcal{L}^{n}) \cong \operatorname{Ker}(\xi-\operatorname{id}:\,\widetilde{\mathcal{Ell }}^{*}_{T}(F_{m})\otimes_{A}\widetilde{\mathcal{L}}^{n}\,)\] \[\cong \operatorname{Ker}(\xi-\operatorname{id}:\,Av_{n}\otimes e_{+}) \oplus\operatorname{Ker}(\xi-\operatorname{id}:\,A\Theta^{2m+1}v_{n}\otimes e _{-})\] \[\cong H^{0}_{\operatorname{an}}(E_{q},\mathcal{L}^{n})\,e_{+}\,\oplus \,(H^{0}_{\operatorname{an}}(E_{q},\mathcal{L}^{n})\,|\,\Theta^{2m+1})\,e_{- }\,,\] where \((H^{0}_{\operatorname{an}}(E_{q},\mathcal{L}^{n})\,|\,\Theta^{2m+1})\) denotes the subspace of all sections in \(H^{0}_{\operatorname{an}}(E_{q},\mathcal{L}^{n})\) that are divisible by \(\Theta^{2m+1}\) under the identification (6.24). Summing up over all \(n\geq 0\), we find \[\operatorname{Ell}^{*}_{T}(F_{m},\mathcal{L})\,\cong\,S(E)\,e_{+}\,\oplus\,(S (E)\,|\,\Theta^{2m+1})\,e_{-}\,,\] where \(S(E)\) is the Looijenga ring (6.25). 
To compute \((S(E)\,|\,\Theta^{2m+1})\) we observe that an element of \(S(E)\) is divisible by \(\Theta^{2m+1}\) in \(A\) if and only if its invariant and anti-invariant parts in \(S(E)^{W}\) and \(S(E)^{-W}\) are both divisible by \(\Theta^{2m+1}\). Now, for \(f(z)\in S(E)^{W}\), \(\Theta^{2m+1}(z)\) divides \(f(z)\) if and only if \(\vartheta^{2m+2}(z)\) divides \(f(z)\), while for \(f(z)\in S(E)^{-W}\), \(\Theta^{2m+1}(z)\) divides \(f(z)\) if and only if \(\vartheta^{2m}(z)\,\vartheta(z^{2})\) divides \(f(z)\). Thus \[\operatorname{Ell}^{*}_{T}(F_{m},\mathcal{L})\,\cong\,S(E)\,e_{+}\,\oplus\,( \vartheta^{2m+2}(z)\,S(E)^{W}+\vartheta^{2m}(z)\,S(E)^{-W})\,e_{-} \tag{6.30}\] Now, applying to (6.30) the restriction functor (6.14), we get \[\operatorname{Ell}^{*}_{T}(F_{m},\mathcal{L})^{W} \cong e_{+}\,S(E)\,e_{+}\,\oplus\,e_{+}\,\left(\vartheta^{2m+2}(z)\,S (E)^{W}+\vartheta^{2m}(z)\,S(E)^{-W}\right)e_{-}\] \[\cong S(E)^{W}\,\oplus\,\vartheta^{2m}(z)\,S(E)^{-W}\] which gives (6.28) since \(\operatorname{Ell}^{*}_{T}(F_{m},\mathcal{L})^{W}=\operatorname{Ell}^{*}_{G}(F _{m},\mathcal{L})\). To complete the proof it suffices to note that the map of spaces \(G/T\to F_{m}\) induces the natural inclusion \[S(E)\,e_{+}\,\oplus\,(\vartheta^{2m+2}(z)\,S(E)^{W}+\vartheta^{2m}(z)\,S(E)^{ -W})\,e_{-}\,\hookrightarrow\,S(E)\,e_{+}\,\oplus\,(\vartheta^{2}(z)\,S(E)^{ W}+S(E)^{-W})\,e_{-}\] as a map representing \(\operatorname{Ell}^{*}_{T}(F_{m},\mathcal{L})\to\operatorname{Ell}^{*}_{T}(G/T,\mathcal{L})\) under the isomorphism (6.30). When restricted to \(W\)-invariants this yields the inclusion \[S(E)^{W}\oplus\vartheta^{2m}(z)\,S(E)^{-W}\hookrightarrow S(E)^{W}\oplus S(E) ^{-W}=S(E)\] that represents \(\operatorname{Ell}^{*}_{G}(F_{m},\mathcal{L})\hookrightarrow\operatorname{Ell }^{*}_{G}(G/T,\mathcal{L})\). _Remark 6.8_.: The above calculation of elliptic cohomology suggests a natural algebraic definition of quasi-invariants in the elliptic case (_cf._ (6.28)). This differs, however, from the definition of elliptic quasi-invariants that has already been used in the literature (see, e.g., the beautiful work of O. Chalykh on Macdonald's conjectures [10]). The difference seems to be an instance of 'elliptic-elliptic' duality studied in the theory of integrable systems (see, e.g., [11]). ## 7. Topological Gorenstein duality The realization of algebras of quasi-invariants raises natural questions about homotopy-theoretic analogues (refinements) of basic properties and structures associated with these algebras. In this section, we make first steps in this direction by showing that the spaces of quasi-invariants \(X_{m}(G,T)\) satisfy Gorenstein duality in the sense of stable homotopy theory. Our main result -- Theorem 7.1 -- should be viewed as a topological analogue of Theorem 2.3 on Gorensteinness of rings of quasi-invariants. For reader's convenience, we collect basic definitions from stable homotopy theory concerning duality and regularity properties of commutative ring spectra, in Appendix B. We refer to Appendix B for all unexplained notation used in this section. ### Gorenstein duality of spaces of quasi-invariants It is well known that, if \(X\) is a pointed connected topological space, the singular cochain complex \(C^{*}(X,\mathbb{Q})\), computing cohomology of \(X\) with coefficients in \(\mathbb{Q}\), admits a commutative DG algebra model8. 
When \(\mathbb{Q}\) is replaced by an arbitrary field \(k\), this last fact is no longer true: in general, the cochain complex \(C^{*}(X,k)\) is not quasi-isomorphic to any commutative DG algebra over \(k\) if \(\operatorname{char}(k)\neq 0\). A natural way to remedy this problem is to use commutative ring spectra - instead of DGAs - as models for \(C^{*}(X,k)\). Specifically, for any commutative ring \(k\), the _cochain spectrum_ of the space \(X\) with coefficients in \(k\) is defined by (_cf._[10]) Footnote 8: Such a model can be constructed in a functorial way, using, for example, piecewise polynomial differential forms on \(X\) defined over \(\mathbb{Q}\) (see [1]). \[C^{*}(X,k):=\operatorname{Map}_{\mathbb{S}}(\Sigma^{\infty}X_{+},\,Hk) \tag{7.1}\] where \(\Sigma^{\infty}X_{+}\) is the suspension spectrum associated to \(X\), \(Hk\) is the Eilenberg-MacLane spectrum of \(k\), and \(\operatorname{Map}_{\mathbb{S}}\) denotes the mapping spectrum in the category of (symmetric) spectra. By definition, (7.1) is a commutative ring spectrum with multiplication induced by the multiplication map on \(Hk\) and the diagonal map on \(X\). In addition, following [1], we introduce the _chain spectrum_ of \(X\): \[C_{*}(\Omega X,k):=Hk\wedge\Sigma^{\infty}(\Omega X)_{+} \tag{7.2}\] which is a noncommutative ring spectrum that models the singular chain complex of the based loop space of \(X\). Both \(C^{*}(X,k)\) and \(C_{*}(\Omega X,k)\) are augmented \(k\)-algebras, with augmentation on \(C^{*}(X,k)\) induced by the basepoint inclusion \(\operatorname{pt}\to X\) and on \(C_{*}(\Omega X,k)\) by the trivial map \(\Omega\to\operatorname{pt}\). For all \(i\in\mathbb{Z}\), there are natural isomorphisms \[\pi_{i}\left[C^{*}(X,k)\right]\cong H^{-i}(X,k)\,\quad\pi_{i}\left[C_{*}( \Omega X,k)\right]\cong H_{i}(\Omega X,k) \tag{7.3}\] which show that \(C^{*}(X,k)\) and \(C_{*}(\Omega X,k)\) are coconnective and connective spectra, respectively. We are now in position to state and prove the main theorem of this section. **Theorem 7.1**.: _Let \(X_{m}=X_{m}(G,T)\) be the space of \(m\)-quasi-invariants associated to \(G=SU(2)\). Let \(R_{m}:=C^{*}(X_{m},k)\) and \(E_{m}:=C_{*}(\Omega X_{m},k)\) denote the cochain and chain spectra of \(X_{m}\) with coefficients in an arbitrary field \(k\). Then, for any \(m\geq 0\),_ 1. \(R_{m}\) _and_ \(E_{m}\) _are proxy-regular_ (_Definition B.1_) _and dc-complete_ (_Definition B.4_) _with_ \[\operatorname{Map}_{R_{m}}(k,k)\simeq E_{m}\quad\text{and}\quad\operatorname{ Map}_{E_{m}}(k,k)\simeq R_{m}\] 2. \(R_{m}\) _is orientable Gorenstein of shift_ \(a=1-4m\)__(_Definition B.2_)_ 3. \(R_{m}\) _satisfies Gorenstein duality of shift_ \(a=1-4m\)__(_Definition B.3_)_ Proof.: (1) We start with Borel fibration sequence that comes from the Ganea construction of spaces of quasi-invariants (see (3.33)): \[F_{m}(G,T)\to X_{m}(G,T)\xrightarrow{p_{m}}BG \tag{7.4}\] To simplify the notation we set \[Q_{m}:=C^{*}(F_{m},k)\,\quad R_{m}:=C^{*}(X_{m},k)\,\quad S:=C^{*}(BG,k)\.\] Since \(F_{m}\) is a finite connected complex (see (3.28)), by [10, Prop. 5.3], the augmentation morphism \(Q_{m}\to k\) is cosmall (see Definition B.1). Since \(G\) is connected, the classifying space \(BG\) is simply-connected; moreover, the cohomology of \(BG\) is free, of finite type over \(\mathbb{Z}\), and hence, a fortiori, over any field \(k\) (see, e.g., [11, III.3.17]). Therefore, in terminology of [10, Sect. 4.22], the pair \((BG,k)\) is of Eilenberg-Moore type. 
Since \(H_{*}(\Omega BG,k)\cong H_{*}(G,k)\) is finite-dimensional over \(k\), it follows from [10, 5.5(3)] that \(S\to k\) is a regular morphism, i.e. \(k\) is small as an \(S\)-module. Next, since \((BG,k)\) is of Eilenberg-Moore type, the fibration sequence (7.4) gives an equivalence of cochain spectra (see, e.g., [1, Lemma 3.7]) \[Q_{m}\simeq k\wedge_{S}R_{m} \tag{7.5}\] Now, by [10, Prop. 4.18(1)], we conclude from (7.5) together with our earlier observations that \(S\to k\) is regular and \(Q_{m}\to k\) is cosmall that \(R_{m}\to k\) is proxy-regular. To complete the proof of part (1) it remains to note that the pair \((X_{m},k)\) is of Eilenberg-Moore type for any field \(k\). Indeed, from the fibration sequence (7.4) it follows that \(X_{m}\) is simply-connected (since so are \(F_{m}\) and \(BG\)); on the other hand, from the homotopy cofibration sequences (see (3.30)) \[F_{m}\to X_{m}\xrightarrow{\pi_{m}}X_{m+1}\] it follows (by induction) that \(X_{m}\) is of finite type over \(k\) for any \(m\geq 0\). By construction of the Eilenberg-Moore spectral sequence, for \(E_{m}=C_{*}(\Omega X_{m},k)\), we have \(E_{m}\simeq\operatorname{Map}_{R_{m}}(k,k)\), while the equivalence \(R_{m}\simeq\operatorname{Map}_{E_{m}}(k,k)\) holds in general (see remarks in [10, Sect. 4.22]). It follows that the augmented \(k\)-algebras \(R_{m}\) and \(E_{m}\) are both dc-complete, and then, by [10, Prop. 4.17], \(E_{m}\) is proxy-regular (since so is \(R_{m}\)). (2) By the proof of Theorem 3.9, we know that (7.4) is a sphere fibration with \(F_{m}\simeq\mathbb{S}^{4m+2}\). Hence, \(F_{m}\) is a Poincare duality space of dimension \(4m+2\), then its cochain spectrum \(Q_{m}=C^{*}(F_{m},k)\) satisfies Poincare duality of dimension \(a=-4m-2\) (in the sense of [10, 8.11]). Since \(Q_{m}\) is cosmall, by [10, Prop. 8.12], we conclude that \(Q_{m}\) is Gorenstein of shift \(a=-4m-2\). Further, by [10, 10.2], we also know that \(S=C^{*}(BG,k)\) is Gorenstein of dimension \(a=\dim(G)=3\). Now, consider the morphism of cochain spectra \(p_{m}^{*}:S\to R_{m}\) induced by the whisker map \(p_{m}:X_{m}\to BG\) in (7.4). We claim that \(R_{m}\) is finitely built from \(S\) via \(p_{m}^{*}\). To see this denote by \(\mathcal{E}:=C_{*}(\Omega BG,k)\cong C_{*}(G,k)\) the chain spectrum of \(BG\). Since \(G\) is simply-connected, \(\mathcal{E}\) is a connective \(k\)-algebra with \(\pi_{0}(\mathcal{E})\cong k[\pi_{1}(G)]=k\) (see (7.3)). Since \(S\) is of Eilenberg-Moore type, there is an equivalence \(S\simeq\operatorname{Map}_{\mathcal{E}}(k,k)\). Furthermore, if we set \(M_{m}:=C_{*}(F_{m},k)\), the action of \(G\) on \(F_{m}\) induces a left \(\mathcal{E}\)-module structure on \(M_{m}\), and by a standard Eilenberg-Moore spectral sequence argument there is an equivalence \(R_{m}\simeq\operatorname{Map}_{\mathcal{E}}(M_{m},k)\). Since \(\pi_{*}(M_{m})\cong H_{*}(F_{m},k)\) is finite-dimensional over \(k\), the \(\mathcal{E}\)-module \(M_{m}\) is finitely built from \(k\). Now, Proposition 3.18 of [10] implies that \(R_{m}\simeq\operatorname{Map}_{\mathcal{E}}(M_{m},k)\) is finitely built from \(S\simeq\operatorname{Map}_{\mathcal{E}}(k,k)\) as we claimed. Since \(R_{m}\) is proxy-regular and both \(S\) and \(Q_{m}\) are Gorenstein, it follows from [10, Prop. 8.10] that \(R_{m}\) is Gorenstein as well. The Gorenstein shift of \(R_{m}\) can be computed from the following equivalence of \(k\)-modules induced by (7.5) (see [1, Prop. 
8.6]): \[\operatorname{Map}_{R_{m}}(k,\,R_{m}) \simeq \operatorname{Map}_{Q_{m}}(k,\,\operatorname{Map}_{S}(k,S)\wedge_{ k}Q_{m})\] \[\simeq \operatorname{Map}_{Q_{m}}(k,\,(\Sigma^{3}k)\wedge_{k}Q_{m})\] \[\simeq \Sigma^{3}\operatorname{Map}_{Q_{m}}(k,\,Q_{m})\] \[\simeq \Sigma^{3}(\Sigma^{-4m-2}k)\] \[\simeq \Sigma^{1-4m}k\] To complete part (2) it remains to note that, for a simply-connected space \(X\) of finite type over \(k\), the cochain spectrum \(C^{*}(X,k)\) is automatically orientable Gorenstein whenever it is Gorenstein. This follows from the fact that, under the above assumptions, \(k\) carries a unique action of \(E=\operatorname{Map}_{C^{*}(X,k)}(k,k)\simeq C_{*}(\Omega X,k)\) (see [1, Sect. 18.3] and also the proof of [1, Lemma 3.8]). (3) follows from (2) by a standard argument. If an augmented \(k\)-algebra \(R\) is orientable Gorenstein of shift \(a\), then \[\operatorname{Cell}_{k}(R)\,\simeq\,\operatorname{Map}_{R}(k,R)\wedge_{E}k\, \simeq\,\Sigma^{a}\operatorname{Map}_{R}(k,\,\operatorname{Map}_{k}(R,k)) \wedge_{E}k\,\simeq\,\Sigma^{a}\operatorname{Cell}_{k}[\operatorname{Map}_{k} (R,k)] \tag{7.6}\] where the first and the last equivalences are given by (B.2) and the one in the middle is induced by (B.5). For \(\,R=C^{*}(X,k)\,\) with \(\pi_{0}(R)\cong H^{0}(X,k)\cong k\), we have \(\pi_{i}\operatorname{Map}_{k}(R,k)=0\) for \(i\ll 0\). By [1, Remark 3.17], the \(R\)-module \(\operatorname{Map}_{k}(R,k)\) is then built from \(k\) and therefore \(k\)-cellular in \(\operatorname{Mod}_{R}\). Condition B.8 thus follows from (7.6). This completes the proof of the theorem. ### Generalized spaces of quasi-invariants It is natural to ask whether the result of Theorem 7.1, i.e. the topological Gorenstein property, holds for generalized ('fake') spaces of quasi-invariants introduced in Section 4. In view of Corollary 4.8, the answer is obviously affirmative when \(k\) is a field of characteristic \(0\). The next theorem shows that this is also true when \(k=\mathbb{F}_{p}\). We keep the notation \(G=SU(2)\) and \(T=U(1)\); however, as in Section 4, we do not identify \(T\) as a maximal torus in \(G\). **Theorem 7.2**.: _Let \(B\) be a space in the genus of \(BG\) that admits an essential map from \(BT\), and let \(X_{m}=X_{m}(\Omega B,T)\) be the space of \(m\)-quasi-invariants associated to \(B\). Then, for any prime \(p\), the morphism \(C^{*}(X_{m},\mathbb{F}_{p})\to\mathbb{F}_{p}\) is Gorenstein of shift \(a=1-4m\)._ Proof.: We give the part of the proof that differs from that of Theorem 7.1. First, observe that, for any space \(B\) in the genus of \(BG\), we have equivalences of cochain spectra \[C^{*}(B,\,\mathbb{F}_{p})\,\simeq\,C^{*}(B^{\wedge}_{p},\,\mathbb{F}_{p})\, \simeq\,C^{*}((BG)^{\wedge}_{p},\,\mathbb{F}_{p})\,\simeq\,C^{*}(BG,\, \mathbb{F}_{p})\,\,,\] where \((\,-\,)^{\wedge}_{p}\) denotes the \(\mathbb{F}_{p}\)-completion functor on pointed spaces. This follows from the fact that both \(B\) and \(BG\) are \(\mathbb{F}_{p}\)-good spaces (in the sense of [1]) and \(B^{\wedge}_{p}\simeq(BG)^{\wedge}_{p}\) for any prime \(p\). The above equivalences are compatible with augmentation; hence, by [1, 10.2], we conclude that \(C^{*}(B,\mathbb{F}_{p})\to\mathbb{F}_{p}\) is a regular map, Gorenstein of shift \(\dim(G)=3\). Now, assume that \(B\) satisfies the conditions of Theorem 4.5. Let \(F=F(\Omega B,T)\) denote the homotopy fibre of the maximal essential map \(p_{B}:BT\to B\). 
Recall that this last space is not equivalent to a finite CW complex (unless \(B\simeq BG\)), and hence its cochain spectrum \(C^{*}(F,\mathbb{F}_{p})\) need not be cosmall (as in the case of \(BG\)). Nevertheless, we claim that \(C^{*}(F,\mathbb{F}_{p})\to\mathbb{F}_{p}\) is always proxy-regular and satisfies the Gorenstein property of shift \((-2)\). To see this consider the homotopy fibration sequence \(\,\Omega B\to F\to BT\,\) associated to the map \(p_{B}:BT\to B\). Since \(BT\simeq\mathbb{C}\mathbb{P}^{\infty}\) is of Eilenberg-Moore type (see [1, 4.22]), we have \[C^{*}(\Omega B,\mathbb{F}_{p})\simeq C^{*}(F,\mathbb{F}_{p})\wedge_{C^{*}(BT, \,\mathbb{F}_{p})}\mathbb{F}_{p}\] In view of the fact that \(\Omega B\simeq\mathbb{S}^{3}\), the map \(C^{*}(\Omega B,\mathbb{F}_{p})\to\mathbb{F}_{p}\) is cosmall, and hence, by [1, Prop. 4.18], \(C^{*}(F,\mathbb{F}_{p})\to\mathbb{F}_{p}\) is proxy-regular. Furthermore, since \(\mathbb{F}_{p}\) is small over \(C^{*}(BT)=C^{*}(BT,\mathbb{F}_{p})\), we have a natural equivalence of \(C^{*}(BT)\)-modules \[\operatorname{Map}_{C^{*}(BT)}(\mathbb{F}_{p},\,C^{*}(BT))\,\wedge_{C^{*}(BT)} \,C^{*}(F)\,\xrightarrow{\sim}\,\operatorname{Map}_{C^{*}(BT)}(\mathbb{F}_{p },\,C^{*}(F))\,,\] which, by the proof of [1, Prop. 8.6], implies that \(C^{*}(F,\mathbb{F}_{p})\to\mathbb{F}_{p}\) is Gorenstein of shift \(a=1+(-3)=-2\). The rest of the proof is parallel to that of Theorem 7.1. In brief, by Theorem 3.1, the fibre of the \(m\)-th Ganea fibration \(F_{m}\to X_{m}\to B\) defining the space \(X_{m}=X_{m}(\Omega B,T)\) has the homotopy type of \(\,\Sigma^{4m}F\). Hence its cochain spectrum \(C^{*}(F_{m},\mathbb{F}_{p})\) is Gorenstein of shift \(a=-2-4m\). By induction, each space \(X_{m}\) is of finite type over \(\mathbb{F}_{p}\). Since \(C^{*}(B,\mathbb{F}_{p})\to\mathbb{F}_{p}\) is a regular Gorenstein map of shift \(3\), it follows from the above fibration sequence that \(C^{*}(X_{m},\mathbb{F}_{p})\to\mathbb{F}_{p}\) is Gorenstein of shift \(a=-2-4m+3=1-4m\). _Remark 7.3_.: We point out that the topological Gorenstein shifts \(a\) of Theorem 7.1 and Theorem 7.2 agree with the algebraic one of Theorem 2.3: to see this it suffices to change the standard polynomial grading on \(Q_{m}(W)\) to the cohomological one (by 'doubling' degrees of the generators). ## Appendix A Milnor bundles Recall that, if \(G\) is a topological group, its _classifying space_\(BG\) is defined to be the basespace of a principal \(G\)-bundle \(EG\to BG\) that is universal among all (numerable) principal \(G\)-bundles over pointed spaces. This universal property determines the space \(BG\) uniquely up to homotopy, i.e. as a unique (up to unique isomorphism) object in the homotopy category \(\operatorname{Ho}(\operatorname{\texttt{Top}}_{*})\) of pointed spaces. For a general9\(G\), there are two classical models for the classifying space: the Milgram-Segal model [10, Seg68] that defines \(BG\) as the geometric realization \(|B_{*}G|\) of a simplicial space \(\,B_{*}G\,\) (topological bar construction) and the Milnor model [10] that represents \(BG\) as a quotient of an infinite join of spaces homeomorphic to \(G\). The Milnor model will play a key role in our construction of spaces of quasi-invariants; we therefore review it in some detail beginning with the classical topological operation of a join. Footnote 9: For special groups (for example, classical Lie groups), there are also nice geometric models representing \(BG\) as infinite-dimensional manifolds (Grassmannians). 
Recall that the _join_\(X*Y\) of two spaces is defined to be the space of all line segments joining points in \(X\) to points in \(Y\): i.e., \(X*Y\) is the quotient space of \(X\times I\times Y\) under the identifications \((x,0,y)\sim(x^{\prime},0,y)\,\) and \((x,1,y)\sim(x,1,y^{\prime})\) for all \(x,x^{\prime}\in X\) and \(y,y^{\prime}\in Y\). If \(X\) and \(Y\) are both (well) pointed, it is convenient to work with a _reduced_ version of the join obtained by collapsing to a point the line segment joining the basepoints in \(X\) and \(Y\) (i.e., by imposing on \(\,X*Y\,\) the extra identification \((*,t,*)\sim(*,t^{\prime},*)\) for all \(t,t^{\prime}\in I\)). Note that inside \(X*Y\), there are two cones \(CX\) and \(CY\) embedded via the canonical maps \(\,CX\hookrightarrow X*Y\,,\,(x,t)\mapsto(x,t,*)\), and \(CY\hookrightarrow X*Y\,,\,(y,t)\mapsto(*,1-t,y)\). Collapsing these cones converts \(X*Y\) into the suspension of the smash product of spaces: \(\Sigma(X\wedge Y)=(X*Y)/(CX\lor CY)\). Since \(CX\) and \(CY\) are both contractible in \(X*Y\), the quotient map \(X*Y\to\Sigma(X\wedge Y)\) is a homotopy equivalence. Thus, in the homotopy category \(\operatorname{Ho}(\operatorname{\texttt{Top}}_{*})\) of pointed spaces, we have natural isomorphisms (A.1) \[X*Y\,\cong\,\Sigma(X\wedge Y)\,\cong\,(\Sigma X)\wedge Y\,\cong\,X\wedge( \Sigma Y)\] These are useful in practice for computing the homotopy types of joins. Using standard notation, we will write the points of \(X*Y\) as formal linear combinations \(t_{0}x+t_{1}y\), where \(x\in X\), \(y\in Y\) and \((t_{0},t_{1})\in\Delta^{1}:=\{(t_{0},t_{1})\in\mathbb{R}^{2}:\,t_{0}+t_{1}=1,\,t _{0},t_{1}\geq 0\}\). The identification with topological presentation is given by \((x,t,y)=tx+(1-t)y\). The advantage of this notation is that it naturally extends to 'higher dimensions': the _iterated joins_ of spaces (A.2) \[X_{0}*X_{1}*\ldots*X_{n}=\{t_{0}x_{0}+t_{1}x_{1}+\ldots+t_{n}x_{n}\,:\,(t_{0}, \ldots,t_{n})\in\Delta^{n},\,x_{i}\in X_{i}\}/\sim\] where the equivalence relation is defined by \(\sum_{i=0}^{n}t_{i}x_{i}\sim\sum_{i=0}^{n}t_{i}^{\prime}x_{i}^{\prime}\) if and only if \(t_{i}=t_{i}^{\prime}\) (for all \(i\)) and \(x_{i}=x_{i}^{\prime}\) whenever \(\,t_{i}=t_{i}^{\prime}>0\). Note that, under this equivalence relation, if \(t_{i}=0\) for some \(i\), the point \(x_{i}\) in \(\,t_{0}x_{0}+\ldots+0x_{i}+\ldots+t_{n}x_{n}\in X_{0}*\ldots*X_{n}\) can be chosen arbitrarily (or simply omitted). There is also a convenient way to represent joins by homotopy colimits. For example, it is well-known that the join of two spaces is represented by the homotopy pushout (A.3) \[X*Y=\operatorname{hocolim}[X\gets X\times Y\to Y]\] where the maps are canonical projections and the "hocolim" is taken either in the category of pointed or unpointed spaces depending on whether we consider reduced or unreduced joins. Formula (A.3) generalizes to iterated joins (see, e.g. [10, Prop. 
5.1]) (A.4) \[X_{0}*X_{1}*\ldots*X_{n}=\operatorname{hocolim}_{\mathcal{P}(\Delta^{n})}(F_{X})\] where \(\mathcal{P}(\Delta^{n})\) is the poset of all non-empty faces of the \(n\)-simplex \(\Delta^{n}\) (ordered by reversed inclusions) and the diagram \(F_{X}:\mathcal{P}(\Delta^{n})\to\mathtt{Top}\) is defined by assigning to a face \(\Delta_{I}\in\mathcal{P}(\Delta^{n})\) the product of spaces \(\prod_{i\in I}X_{i}\) (with indices corresponding to the vertices of \(\Delta_{I}\)) and to an inclusion of faces \(\Delta_{J}\subset\Delta_{I}\) the canonical projection \(\prod_{i\in I}X_{i}\to\prod_{j\in J}X_{j}\). It is easy to see that formula (A.4) boils down to (A.3) in case of two spaces. Now, we can describe the Milnor model. For integer \(n\geqslant 0\), we define a sequence of spaces \(E_{n}G\) by taking the (unreduced) iterated joins of copies of \(G\): (A.5) \[E_{n}G:=G*G*\ldots*G\qquad(n+1\,\text{ times})\.\] Each space \(E_{n}G\) carries natural (diagonal) left and right \(G\)-actions each of which is free. We will use the right \(G\)-action \(E_{n}G\times G\to E_{n}G\) that can be written explicitly (with notation (A.2)) as (A.6) \[(t_{0}g_{0}+t_{1}g_{1}+\ldots+t_{n}g_{n})\cdot g=t_{0}g_{0}g+t_{1}g_{1}g+ \ldots+t_{n}g_{n}g\] where \(g_{0},\,\ldots\,,g_{n},\,g\in G\,\). Moreover, there are natural \(G\)-equivariant maps \(E_{n}G\hookrightarrow E_{n+1}G\): \[t_{0}g_{0}+\ldots+t_{n}g_{n}\mapsto t_{0}g_{0}+\ldots+t_{n}g_{n}+0\cdot e\] making \(\{E_{n}G\}_{n\geqslant 0}\) into a direct system of (right) \(G\)-spaces. We set \(B_{n}G=E_{n}G/G\) and define (A.7) \[EG:=\varinjlim E_{n}G\quad\text{and}\quad BG:=\varinjlim B_{n}G\.\] By construction, the spaces \(EG\) and \(BG\) come equipped with canonical filtrations (A.8) \[E_{0}G\hookrightarrow\ldots\hookrightarrow E_{n}G\hookrightarrow E_{n+1}G \hookrightarrow\ldots\hookrightarrow EG\] (A.9) \[B_{0}G\hookrightarrow\ldots\hookrightarrow B_{n}G\hookrightarrow B_{n+1}G \hookrightarrow\ldots\hookrightarrow BG\] with consecutive terms (at each level \(n\)) forming the principal \(G\)-bundles (A.10) \[G\to E_{n}G\to B_{n}G\.\] The main observation of [14] (see _loc. cit._, Theorem 3.1) is that the principal \(G\)-bundle (A.10) is _n-universal_ in the sense that its total space is \((n-1)\)-connected (i.e., \(\pi_{i}(E_{n}G)=0\) for all \(i<n\)). In the inductive limit, this gives **Theorem A.1** (Milnor).: _For any topological group \(G\) the natural (quotient) map \(EG\to BG\) is a numerable principal \(G\)-bundle, which is universal among all such \(G\)-bundles._ A detailed proof of Theorem A.1 can be found in [10] (see Chap. 4, Theorem 11.2). We only recall one basic topological fact behind this proof that we will use repeatedly in this paper. **Lemma A.2** ([12], Lemma 2.3).: _If each space \(X_{i}\) in the iterated join (A.4) is \((c_{i}-1)\)-connected, then the space \(X_{0}*X_{1}*\ldots*X_{n}\) is \((\sum c_{i}+n-1)\)-connected._ ## Appendix B Duality of commutative ring spectra In this Appendix, we collect basic definitions from stable homotopy theory concerning duality and regularity properties of commutative ring spectra. Our main references are the paper [1] by Dwyer, Greenlees and Iyengar, where many concepts that we need were originally introduced, and the lecture notes of Greenlees [11] that supplement [1] with motivation and examples. 
As in [1], we will work in the (stable model) category of _symmetric spectra_, which can be succinctly described as the category \(\operatorname{Mod}_{\mathbb{S}}\) of modules10 over the symmetric sphere spectrum \(\mathbb{S}=((S^{1})^{\wedge n})_{n\geq 0}\) (see [10]). The category \(\operatorname{Mod}_{\mathbb{S}}\) is equipped with a symmetric monoidal product which is denoted as a smash \(A\wedge B\) or tensor product \(A\otimes_{\mathbb{S}}B\) (depending on the context). A _ring spectrum_ is then, by definition, an \(\mathbb{S}\)-algebra, i.e. an \(\mathbb{S}\)-module \(R\) given with two structure maps \(\mathbb{S}\to R\) and \(R\wedge R\to R\) satisfying the usual unitality and associativity properties. We denote the category of ring spectra by \(\operatorname{Alg}_{\mathbb{S}}\). There is a natural (Eilenberg-MacLane) functor \(H:\operatorname{Alg}_{\mathbb{Z}}\to\operatorname{Alg}_{\mathbb{S}},\ k \mapsto Hk\) that embeds the category \(\operatorname{Alg}_{\mathbb{Z}}\) of usual (discrete) associative rings into \(\mathbb{S}\)-algebras by identifying a ring \(k\) with its symmetric Eilenberg-MacLane spectrum \(Hk=(K(k,n))_{n\geq 0}\) (see [10, 1.2.5]). The category \(\operatorname{Alg}_{\mathbb{S}}\) can be thought of as a homotopical refinement ('thickening') of \(\operatorname{Alg}_{\mathbb{Z}}\) in the same way as the category \(\operatorname{Mod}_{\mathbb{S}}\) is a homotopical refinement of the category \(\operatorname{Mod}_{\mathbb{Z}}\) of (discrete) abelian groups. Footnote 10: Unfortunately, the term ‘\(\mathbb{S}\)-module’ in reference to spectra is very ambiguous: apart from symmetric, other popular types of spectra (e.g., orthogonal and EKMM ones) are also \(\mathbb{S}\)-modules. A nice recent survey comparing properties and applications of different types of spectra can be found in [11]. For a ring spectrum \(R\in\operatorname{Alg}_{\mathbb{S}}\), we let \(\operatorname{Mod}_{R}\) denote the category of left module spectra over \(R\). This is a stable model category enriched over \(\operatorname{Mod}_{\mathbb{S}}\). The latter means that, for two \(R\)-modules \(A\) and \(B\), there is a mapping spectrum of \(R\)-module maps \(A\to B\) that we denote \(\operatorname{Map}_{R}(A,B)\). Moreover, if \(A\) is a right \(R\)-module and \(B\) is a left \(R\)-module, there is an associated smash product \(A\wedge_{R}B\) defined as the (homotopy) coequalizer \(A\wedge R\wedge B\rightrightarrows A\wedge B\) of structure maps \(A\wedge R\to A\) and \(R\wedge B\to B\) in \(\operatorname{Mod}_{\mathbb{S}}\). Note that both \(\operatorname{Map}_{R}(A,B)\) and \(A\wedge_{R}B\) are understood as 'derived' objects in the sense that their first arguments are (replaced by) cofibrant objects in \(\operatorname{Mod}_{R}\). In particular, if \(A\) and \(B\) are usual (discrete) modules over a usual (discrete) ring \(R\), viewed as symmetric spectra via the Eilenberg-MacLane functor, then \(\pi_{i}\operatorname{Map}_{R}(A,B)\cong\operatorname{Ext}_{R}^{-i}(A,B)\) and \(\pi_{i}(A\wedge_{R}B)\cong\operatorname{Tor}_{i}^{R}(A,B)\), where \(\pi_{i}\) stand for the (stable) homotopy groups of spectra. If \(R\) is a commutative ring spectrum, then both \(\operatorname{Map}_{R}(A,B)\) and \(A\wedge_{R}B\) are naturally \(R\)-modules, i.e. objects in \(\operatorname{Mod}_{R}\). Next, we recall that a subcategory of a (stable) model category \(\mathcal{M}\) is called _thick_ if it is closed under weak equivalences, cofibration sequences (distinguished triangles) and retracts in \(\mathcal{M}\). 
Further, a subcategory of \(\mathcal{M}\) is called _localizing_ if it is thick and, in addition, closed under arbitrary coproducts (and hence homotopy colimits) in \(\mathcal{M}\). Given two objects \(A\) and \(B\) in \(\mathcal{M}\), we say that \(B\) is _built_ from \(A\) if \(B\) belongs to the localizing subcategory of \(\mathcal{M}\) generated by \(A\), and \(B\) is _finitely built_ from \(A\) if it belongs to the thick subcategory generated by \(A\) ([1, 3.15]). Now, if \(\mathcal{M}=\operatorname{Mod}_{R}\), an \(R\)-module \(A\) is called _small_ if it is finitely built from \(R\) in \(\operatorname{Mod}_{R}\). This agrees with the usual definition of small (compact) objects in \(\operatorname{Mod}_{R}\): an \(R\)-module \(A\) is small iff \(\operatorname{Map}_{R}(A,\,-\,)\) commutes with arbitrary coproducts. The notion of a localizing subcategory is closely related to that of cellularization. For a fixed object \(A\in\operatorname{Mod}_{R}\), we say that a morphism \(f:M\to N\) in \(\operatorname{Mod}_{R}\) is an _\(A\)-cellular equivalence_ if \(f\) induces a (weak) equivalence on mapping spectra: \[f_{*}:\,\operatorname{Map}_{R}(A,M)\,\xrightarrow{\sim}\,\operatorname{Map}_{R} (A,N)\] Note that every equivalence in \(\operatorname{Mod}_{R}\) is automatically an \(A\)-cellular equivalence, but the converse, in general, is not true. Now, an \(R\)-module \(B\) is called _\(A\)-cellular_ if any \(A\)-cellular equivalence \(f:M\to N\) induces an equivalence \(\,\operatorname{Map}_{R}(B,M)\xrightarrow{\sim}\operatorname{Map}_{R}(B,N)\,\). This terminology is motivated by the fact that the \(A\)-cellular modules are precisely those objects of \(\operatorname{Mod}_{R}\) that are built from \(A\) (see [11, 5.1.15]). Moreover, for any \(R\)-module \(B\), there is a \(A\)-cellular module \(\operatorname{Cell}_{A}^{R}(B)\) together with a \(A\)-equivalence in \(\operatorname{Mod}_{R}\): \[\operatorname{Cell}_{A}^{R}(B)\to B\] called an _\(A\)-cellular approximation11_ of \(B\). Such an approximation is determined by \(B\) uniquely up to canonical equivalence; we will use the simpler notation \(\operatorname{Cell}_{A}(B)\) for \(\operatorname{Cell}_{A}^{R}(B)\) when the ring spectrum \(R\) is understood. Footnote 11: Cellularization is an example of a general model-categorical construction called right Bousfield localization (colocalization) with respect to an object \(A\). In this language, \(A\)-cellular equivalences are called \(A\)-colocal equivalences, \(A\)-cellular objects are \(A\)-colocal objects, and \(A\)-cellular approximations are functorial cofibrant replacements in the \(A\)-colocal model structure on \(\operatorname{Mod}_{R}\) (see [11, 3.1.19]). The above categorical notions can be used to impose some finiteness and regularity conditions on commutative ring spectra. First, we say that a morphism of commutative ring spectra \(R\to k\) is called _regular_ if \(k\) is small as an \(R\)-module. This definition is motivated by the fact that, in classical commutative algebra, a local Noetherian ring \(R\) with residue field \(k=R/\mathfrak{m}\) is regular iff \(k\) has a finite length resolution by f.g. free \(R\)-modules (see [20]); for the associated Eilenberg-MacLane spectra, the latter means that \(Hk\) is finitely built from \(HR\). A more flexible and technically useful condition is obtained by weakening the regularity assumption on \(R\to k\) in the following way. 
**Definition B.1** ([16], 4.6).: A morphism of commutative ring spectra \(R\to k\) is called _proxy-regular_ if \(k\) is a _proxy-small_ \(R\)-module via \(R\to k\) in the sense that there is a small \(R\)-module \(K\) that builds \(k\) and is finitely built from \(k\) in \(\operatorname{Mod}_{R}\). Note that if \(K=k\), then \(R\to k\) is _regular_. At the other extreme, if \(K=R\) then \(R\to k\) is called _cosmall_. Let \(E:=\operatorname{Map}_{R}(k,k)\) denote the endomorphism ring spectrum of \(k\) viewed as a left \(R\)-module via the morphism \(R\to k\). There is a standard Quillen adjunction relating right \(E\)-modules to left \(R\)-modules: (B.1) \[(\,\text{-}\,)\wedge_{E}k\,:\,\operatorname{Mod}_{E^{\text{op}}}\,\leftrightarrows\,\operatorname{Mod}_{R}\,:\,\operatorname{Map}_{R}(k,\,\text{-}\,)\] If \(R\to k\) is regular, the functors (B.1) induce an equivalence between \(\operatorname{Ho}(\operatorname{Mod}_{E^{\text{op}}})\) and the full subcategory of \(\operatorname{Ho}(\operatorname{Mod}_{R})\) consisting of \(k\)-cellular \(R\)-modules (see [12, Theorem 6.1]). If \(R\to k\) is proxy-regular, (B.1) does not induce an equivalence in general, but the counit of this adjunction still provides a \(k\)-cellular approximation for modules in \(\operatorname{Mod}_{R}\) (see [12, Lemma 6.3]): (B.2) \[\operatorname{Cell}_{k}(M)\simeq\operatorname{Map}_{R}(k,M)\wedge_{E}k\] Moreover, for all \(R\)-modules \(M\), there is a natural equivalence (see [12, Lemma 6.6]) (B.3) \[\operatorname{Cell}_{k}(M)\simeq\operatorname{Cell}_{k}(R)\wedge_{R}M\] Formula (B.2) shows that when \(R\to k\) is proxy-regular, the \(k\)-cellular approximation \(\operatorname{Cell}_{k}(M)\) is functorial and effectively constructible in \(\operatorname{Mod}_{R}\) (_cf._[16, Definition 4.3]). Now, we come to the key definition of a Gorenstein ring spectrum that we state under the regularity assumptions of Definition B.1 (which is a slightly less general form than in [1]): **Definition B.2** (_cf._[1], 8.1 and 8.4).: A morphism of commutative ring spectra \(R\to k\) is called _Gorenstein of shift \(a\in\mathbb{Z}\)_, if \(R\to k\) is proxy-regular and there is an equivalence of \(k\)-modules (B.4) \[\operatorname{Map}_{R}(k,R)\,\simeq\,\Sigma^{a}k\] where \(\Sigma\) denotes the suspension functor on \(\operatorname{Mod}_{k}\). We will be mostly interested in ring spectra \(R\) that are _augmented \(k\)-algebras_ over a field \(k\). For such algebras, we will always assume that \(R\to k\) is the given augmentation morphism on \(R\), and we will simply say that \(R\) is Gorenstein if so is \(R\to k\). The Gorenstein condition (B.4) can be slightly refined in this case. Note that, if \(R\) is a \(k\)-algebra, using the \(k\)-module structure on \(R\), we can rewrite (B.4) in the form (B.5) \[\operatorname{Map}_{R}(k,R)\,\simeq\,\Sigma^{a}\operatorname{Map}_{R}(k,\,\operatorname{Map}_{k}(R,k))\] Both sides of (B.5) have natural right module structures over the endomorphism ring \(E=\operatorname{Map}_{R}(k,k)\) but, in general, these module structures need not agree under the equivalence (B.5). Following [1] (see also [11, Section 18.2]), we say that an augmented \(k\)-algebra \(R\) is _orientable Gorenstein_ if (B.5) is an equivalence of right \(E\)-modules.
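As a basic example (the one underlying the proof of Theorem 7.1): if \(X\) is a finite complex satisfying Poincare duality in dimension \(n\) (for instance \(X=\mathbb{S}^{n}\)), then its cochain algebra \(C^{*}(X,k)\) is Gorenstein of shift \(a=-n\), that is, \(\operatorname{Map}_{C^{*}(X,k)}(k,\,C^{*}(X,k))\,\simeq\,\Sigma^{-n}k\); this is how the Gorenstein property of \(Q_{m}=C^{*}(F_{m},k)\) with \(F_{m}\simeq\mathbb{S}^{4m+2}\) was established in Section 7.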
If \(R\) is a local Noetherian ring of Krull dimension \(d\) with residue field \(k=R/\mathfrak{m}\), then \(R\) is Gorenstein (in the sense of commutative algebra) iff (B.6) \[\operatorname{Ext}_{R}^{i}(k,R)\cong\left\{\begin{array}{ll}k&i=d\\ 0&\text{otherwise}\end{array}\right.\] The isomorphism (B.6) can be written as an equivalence \(\operatorname{RHom}_{R}(k,R)\simeq\Sigma^{d}k\,\) in the derived category \(\mathcal{D}(R)\) and thus corresponds to the Gorenstein condition (B.4) of Definition B.2. In classical commutative algebra, there is another well-known characterization of Gorenstein rings in terms of local cohomology: (B.7) \[H_{\mathfrak{m}}^{i}(R)\cong\left\{\begin{array}{ll}\operatorname{Hom}_{k}(R,k)&i=d\\ 0&\text{otherwise}\end{array}\right.\] which can be viewed as a special case of Grothendieck's local duality theorem. The following definition is a topological analogue of (B.7). **Definition B.3**.: An augmented \(k\)-algebra \(R\) satisfies _Gorenstein duality of shift \(a\)_ if there is an equivalence of \(R\)-modules (B.8) \[\operatorname{Cell}_{k}(R)\,\simeq\,\Sigma^{a}\operatorname{Map}_{k}(R,k)\] While the algebraic conditions (B.6) and (B.7) are known to be equivalent, their topological analogues (B.4) and (B.8) are not, in general, equivalent (see, e.g., [1, Remark 2.11] for a counterexample). This necessitates two separate definitions for Gorensteinness of commutative ring spectra. The last property of ring spectra that we want to review is concerned with double centralizers. Recall that, for a morphism \(R\to k\), the _double centralizer of \(R\)_ is defined to be \(\hat{R}:=\operatorname{Map}_{E}(k,k)\), where \(E=\operatorname{Map}_{R}(k,k)\) is the endomorphism spectrum of \(k\) in \(\operatorname{Mod}_{R}\). The left multiplication on \(k\) gives a morphism of ring spectra \(\,R\to\hat{R}\,\), and following [1], we say **Definition B.4** ([1], 4.16).: \(R\to k\) is _dc-complete_ if the natural map \(R\to\hat{R}\) is an equivalence in \(\operatorname{Alg}_{\mathbb{S}}\). Note that, in algebra, a surjective homomorphism \(R\to k\) from a Noetherian commutative ring \(R\) to a field \(k\) is dc-complete iff \(R\cong\tilde{R}_{I}\), where \(\tilde{R}_{I}:=\varprojlim R/I^{n}\) is the \(I\)-adic completion of \(R\) with respect to the ideal \(I=\operatorname{Ker}(R\to k)\). This motivates the above terminology. One can show that if \(R\to k\) is dc-complete, the regularity properties of the ring spectra \(R\) and \(E\) are strongly connected (see, e.g., [1, Proposition 4.17]).
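The simplest classical illustration of Definition B.4: for the polynomial ring \(R=k[x]\) with augmentation \(R\to k\), \(x\mapsto 0\), the criterion above gives \(\tilde{R}_{I}=\varprojlim k[x]/(x^{n})=k[[x]]\neq k[x]\), so \(k[x]\to k\) is not dc-complete, whereas the completed algebra \(k[[x]]\to k\) is. Heuristically, dc-completeness requires \(R\) to be recoverable from the formal neighbourhood of its augmentation.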
2310.05901
Trans-Planckian censorship constraints on properties and cosmological applications of axion-like fields
We use the Trans-Planckian Censorship Conjecture (TCC) to constrain the decay constants $f$ characterizing a set of N identical axion-like fields with cosine potentials, improving upon the precision of other Swampland conjectures and existing string-theoretic arguments. We find that consistency with the TCC requires any such set of axion-like fields to satisfy $f\sqrt{N} \lesssim 0.6M_{pl}$, where $M_{pl}$ is the reduced Planck mass. We show that this bound makes models of axion-driven inflation incapable of simultaneously producing the required number of e-foldings and the observed scalar spectral tilt. In contrast, we find that models of axion quintessence can be simultaneously compatible with the TCC and observational data, provided that the axions' initial field values are set near the maxima of their potentials to within roughly $\pm \frac{\pi}{5}f$.
David Shlivko
2023-10-09T17:41:51Z
http://arxiv.org/abs/2310.05901v2
# Trans-Planckian censorship constraints on properties and ###### Abstract We use the Trans-Planckian Censorship Conjecture (TCC) to constrain the decay constants \(f\) characterizing a set of \(\mathcal{N}\) identical axion-like fields with cosine potentials, improving upon the precision of other Swampland conjectures and existing string-theoretic arguments. We find that consistency with the TCC requires any such set of axion-like fields to satisfy \(f\sqrt{\mathcal{N}}\lesssim 0.6M_{pl}\), where \(M_{pl}\) is the reduced Planck mass. We show that this bound makes models of axion-driven inflation incapable of simultaneously producing the required number of e-foldings and the observed scalar spectral tilt. In contrast, we find that models of axion quintessence can be simultaneously compatible with the TCC and observational data, provided that the axions' initial field values are set near the maxima of their potentials to within roughly \(\pm\frac{\pi}{5}f\). keywords: axion, swampland, inflation, dark energy + Footnote †: journal: Physics Letters B ## 1 Introduction In the context of effective field theories, an axion-like field (henceforth, "axion") is a pseudoscalar angular degree of freedom that emerges as the Nambu-Goldstone mode of a complex scalar field with a spontaneously broken chiral U(1) symmetry. The axion's continuous shift symmetry can be explicitly broken at sufficiently low energies by non-perturbative effects, such as instanton transitions within a non-Abelian gauge sector coupled to the axion. These effects give the axion \(\varphi\) a periodic effective potential that takes the form \(V_{\text{eff}}\propto\cos(\varphi/f)\) in the dilute instanton gas approximation, up to an additive constant [1]. Here the decay constant \(f\) is set by the scale of spontaneous symmetry breaking. The scope of this work will be restricted to potentials of this form. Except for a brief discussion related to axion quintessence, we will also assume that the net vacuum energy is negligible compared to the amplitude of the axion potential, setting \[V(\varphi)=m^{2}f^{2}[\cos(\varphi/f)+1], \tag{1}\] with \(m=|V^{\prime\prime}(n\pi f)|,\ n\in\mathbb{Z}\), denoting the axion's nominal mass. The mass is exponentially sensitive to the instanton action and can therefore take on a wide range of values depending on the gauge sector to which the axion is coupled [1]. Moreover, the axion's mass is shielded from large perturbative loop corrections due to its (weakly broken) shift symmetry [2]. These factors have motivated the use of axions as candidates for a wide range of cosmological models postulating the existence of spin-zero fields with low masses and flat potentials, including models of inflation and quintessence (see, e.g., [3] for a review). In addition to being phenomenologically interesting, axions are on strong theoretical footing due to their generic emergence in the low-energy limit of string theory [4]. These "string axions" can acquire a similarly wide range of masses from worldsheet or membrane instantons [4; 5], though contributions from other non-perturbative effects remain uncertain [6; 7]. Despite this uncertainty, attempts to model the accelerated expansion of the universe using string axions have continued for decades [8; 9; 10; 11; 12; 13; 14]. 
Independently, string-theoretic arguments (and considerations of quantum gravity more broadly) have led to the development of Swampland conjectures that place limits on viable effective field theories and on the possible dynamics of cosmic expansion [15; 16; 17; 18]. One such conjecture is the Trans-Planckian Censorship Conjecture (TCC), which states that a phase of accelerated expansion will never last long enough for sub-Planckian perturbation modes to be stretched to super-Hubble length scales [18]. In addition to being consistent with explicit constructions in string theory, the TCC has been supported by general arguments from holography and gravitational renormalizability, and it is connected to many of the other Swampland conjectures [19; 20; 21]. As a result, the scope of the TCC extends to systems containing axions of any origin compatible with quantum gravity, whether string-theoretic or otherwise. In this work, we analyze the implications of the TCC for axions in general and for models of axion-driven cosmic acceleration in particular. Our central goal will be to place constraints on the decay constants characterizing systems of \(\mathcal{N}\) identical axions with masses \(m\ll M_{pl}\), where \(M_{pl}=\sqrt{\hbar c/(8\pi G)}\) is the reduced Planck mass. axion systems are frequently used in models of cosmic acceleration [9; 14; 22; 23; 24; 25; 26], motivated by the failure of string theory to produce an axion with a sufficiently large decay constant (\(f\gtrsim M_{pl}\)) to drive single-field inflation [27; 28; 29] or quintessence [2; 9; 22]. The goal of these models is typically to use many axions with small decay constants to mimic a single axion with a large "effective" decay constant, \(f_{\rm eff}\equiv f\sqrt{\cal N}\), that could drive accelerated expansion [22; 23]. Later arguments from string theory and the Swampland program, however, have suggested that even \(f_{\rm eff}\) is bounded from above by the Planck scale [30; 31; 32; 33]. In Section 2, we improve upon the precision of these bounds by showing that \({\cal N}\)-axion systems are only compatible with the TCC if \(f_{\rm eff}\lesssim 0.6M_{pl}\). In Section 3, we show that this bound rules out models of \({\cal N}\)-axion inflation to a much greater degree than the existing \(2\sigma\) tensions between measurements of the cosmic microwave background (CMB) and predictions from axion-driven "natural inflation" scenarios [34]. On the other hand, we find in Section 4 that the TCC is compatible with \({\cal N}\)-axion quintessence models, as long as the initial field values lie sufficiently close to the maxima of the axions' cosine potentials. Finally, in Section 5, we summarize our results, elaborate on the different implications for quintessence models when the axions are string-theoretic vs. non-string-theoretic in origin, and comment on possible extensions of this study. Note that throughout the remainder of this work, we will work in units where \(M_{pl}=1\). ## 2 TCC Constraints on Axions The TCC restricts the duration of any phase of accelerated expansion of the universe by forbidding sub-Planckian perturbation modes from being stretched to super-Hubble length scales. Mathematically, this statement can be written as \[\frac{a_{\rm end}}{a_{0}}\leq\frac{M_{pl}}{H_{\rm end}}, \tag{2}\] where \(a_{0}\equiv 1\) is the scale factor at the onset of accelerated expansion, and \(a_{\rm end}\) and \(H_{\rm end}\) are the scale factor and Hubble parameter at its completion. 
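Written in terms of the number of e-folds \(N_{e}\equiv\ln(a_{\rm end}/a_{0})\), Eq. (2) is simply \(N_{e}\leq\ln(M_{pl}/H_{\rm end})\). As a rough orientation, a phase of acceleration ending at a dark-energy-like scale \(H_{\rm end}\sim 10^{-60}M_{pl}\) may last at most \(N_{e}\approx 140\) e-folds, whereas one ending at a high inflationary scale \(H_{\rm end}\sim 10^{-5}M_{pl}\) is limited to \(N_{e}\approx 12\).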
This restriction can be used to constrain the potential of any scalar field by requiring that Eq. (2) is obeyed throughout the field's evolution for _all_ physically allowed initial conditions [18]. Note that where quantum fluctuations or tunneling events are concerned, the TCC has been interpreted as a probabilistic statement, with Eq. (2) holding for the expected amplitude of fluctuations or expected tunneling time [18]. In this section, we will first develop an approximate analytic constraint on the effective decay constant \(f_{\rm eff}\equiv f\sqrt{\cal N}\) characterizing the potential of an \({\cal N}\)-axion system, and we will then tighten this constraint using more accurate numerical methods. For the analytic calculation, we choose to give each field \(\varphi_{n}\) (where \(n=1,\,...\,,\,{\cal N}\)) an initial value \[\varphi_{n}(0)=\frac{mf_{\rm eff}^{2}}{\pi\sqrt{6}} \tag{3}\] and initial velocity \[\dot{\varphi}_{n}(0)=\frac{m^{2}f_{\rm eff}}{6\pi} \tag{4}\] at some initial time \(t=0\). We also restrict our analysis to axions with masses satisfying \(m\ll 1/({\cal N}f)\) or \(m\ll 1\), whichever is stricter. This ensures that \(\varphi_{n}(0)\lesssim f\) and \(\dot{\varphi}_{n}(0)^{2}\ll V(\varphi_{n}(0))\), allowing us to approximate \[V^{\prime}(\varphi_{n})\approx-m^{2}\varphi_{n} \tag{5}\] and, per the Friedmann Equation, \[3H^{2}\approx\sum_{n=1}^{N}V(\varphi_{n})\approx 2m^{2}f_{\rm eff}^{2}. \tag{6}\] Our choice of initial conditions--which, no matter how contrived, must still obey the TCC--places the fields sufficiently close to the hilltop (\(\varphi_{n}=0\)) to drive accelerated expansion, but sufficiently far from the hilltop (and with sufficiently large velocities) to make quantum fluctuations subdominant to classical evolution. We derive this latter statement in A. Using the initial conditions (3-4) and the approximations (5-6), we solve the equations of motion \[\ddot{\varphi}_{n}+3H\dot{\varphi}_{n}+V^{\prime}(\varphi_{n})=0 \tag{7}\] to find an exponential growing mode \[\varphi_{+}(t)=Ae^{\omega_{+}t}, \tag{8}\] where \[A \equiv \varphi_{n}(0)\cdot\frac{3+f_{\rm eff}^{-2}+\sqrt{9+6f_{\rm eff}^ {-2}}}{2\sqrt{9+6f_{\rm eff}^{-2}}}, \tag{9}\] \[\omega_{+} \equiv \frac{H}{2}\left(\sqrt{9+6f_{\rm eff}^{-2}}-3\right). \tag{10}\] Note that \(A\approx\varphi_{n}(0)\) for any \(f_{\rm eff}\gtrsim 0.1\), and the inverse time constant is \(\omega_{+}\approx H/(2f_{\rm eff}^{2})\) in the limit \(f_{\rm eff}\gg 1\) or alternatively \(\omega_{+}\approx m\) in the limit \(f_{\rm eff}\ll 1\). Accelerated expansion continues at least until \(\varphi_{n}\sim f\), where one also has from Eq. (8) that \(\dot{\varphi}_{n}\sim\omega_{+}f\). This can be verified by computing the equation of state, \(\epsilon\equiv\frac{3}{2}(1+w)\equiv\frac{3}{2}(1+P/\rho)\), where \(P\) and \(\rho\) are respectively the total pressure and energy density. In particular, for a collection of identical scalar fields, we have that \[\epsilon=\frac{3\dot{\varphi}_{n}^{2}}{\dot{\varphi}_{n}^{2}+2V(\varphi_{n})} \approx\frac{3}{1+3(m/\omega_{+})^{2}(f/\varphi_{n})^{2}}. \tag{11}\] When \(f_{\rm eff}\ll 1\), we see that \(\epsilon\approx\frac{3}{1+3(f/\varphi_{n})^{2}}\) approaches \({\cal O}(1)\) as \(\varphi_{n}\to f\), signaling the end of acceleration of the scale factor (which obeys \(\ddot{a}\propto(1-\epsilon)\)). 
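(As a consistency check on the limiting forms of \(\omega_{+}\) quoted below Eq. (10), combining Eq. (10) with the Friedmann estimate (6) in the limit \(f_{\rm eff}\ll 1\) gives \[\omega_{+}\approx\frac{\sqrt{6}\,H}{2f_{\rm eff}}\approx\frac{\sqrt{6}}{2f_{\rm eff}}\sqrt{\frac{2}{3}}\,m\,f_{\rm eff}=m\,,\] so that near the hilltop each field grows on its natural time scale \(m^{-1}\), with Hubble friction playing a subdominant role since \(H\ll m\) in this limit.)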
In the opposite limit (\(f_{\rm eff}\gtrsim 1\)), the equation of state is still well below unity when \(\varphi_{n}\approx f\), meaning acceleration will go on for a bit longer--but it will certainly end before the fields reach the minima at \(\varphi_{n}=\pi f\), where \(V=0\) and \(\epsilon=3\). Therefore, in either case, we can conservatively assume that acceleration lasts from \(\varphi_{n}\sim\varphi_{n}(0)\) until \(\varphi_{n}\sim f\), and the scale factor at the end of acceleration satisfies \[\ln(a_{\rm end})=\int_{t=0}^{t_{\rm end}}H(t)dt\gtrsim H_{\rm end}t_{\rm end }\approx\frac{H_{\rm end}}{\omega_{+}}\ln\left(\frac{f}{A}\right). \tag{12}\] The Friedmann equation gives \[H_{\rm end}=\sqrt{\frac{\rho_{\rm end}}{3}}\geq\sqrt{\frac{V_{\rm end}}{3}} \sim\frac{mf_{\rm eff}}{\sqrt{2}}, \tag{13}\] and the TCC constraint (2) thus requires the parameters of this model to satisfy \[\frac{mf_{\rm eff}}{\sqrt{2}\omega_{+}}\ln\left(\frac{f}{A}\right)<\ln\left( \frac{\sqrt{2}}{mf_{\rm eff}}\right). \tag{14}\] It is straightforward to check that for a broad range of masses (any \(m\lesssim 0.01\)) and axion count (any \(\mathcal{N}\lesssim 1000\)), the constraint (14) places an upper bound on \(f_{\rm eff}\) that is at most \(\mathcal{O}(1)\) and asymptotes toward \(\sim 0.7\) as the mass decreases. This constraint in the low-mass regime is already somewhat tighter than other Swampland conjectures and string-theoretic arguments, which broadly disfavor scenarios with \(f_{\rm eff}\gtrsim 1\)[30; 31; 32; 33]. Importantly, factors of \(\mathcal{O}(1)\) in the logarithm on the right-hand side do not meaningfully affect this asymptotic behavior, so our results remain true under slight modifications to the cutoff scales in the TCC. Due to our choice of initial conditions and analytic approximations, the above constraint on \(f_{\rm eff}\) is a conservative one. We can produce an even tighter bound on \(f_{\rm eff}\) by numerically simulating \(\mathcal{N}\) axions beginning from rest at precisely \(\varphi_{n}=0\). To maximize the accuracy of these simulations, we replace the approximations from Eqs. (5-6) with the exact expressions \[V^{\prime}(\varphi_{n})=-m^{2}f\sin(\varphi_{n}/f) \tag{15}\] and \[3H^{2}=\sum_{n=1}^{N}\left[V(\varphi_{n})+\frac{1}{2}\dot{\varphi}_{n}^{2} \right]. \tag{16}\] We account for quantum fluctuations in the fields by supplementing their classical evolution (governed by Eq. 7) with stochastic jumps in the field values applied independently to each field (see A for details). Since the simulations are random in nature, we bound \(f_{\rm eff}\) by the highest value for which fewer than half of the simulations violate the TCC. This bound is illustrated as a function of \(m\) and \(\mathcal{N}\) in Fig. (1), and it closely matches the conclusions from our analytic calculation in the low-mass limit (relevant, e.g., for quintessence models), constraining \[f_{\rm eff}\lesssim 0.6\quad({\rm for}\ m\lesssim GeV). \tag{17}\] At higher masses (relevant, e.g., for inflationary models), the constraint on \(f_{\rm eff}\) is even tighter, reflecting the shrinking hierarchy between the Hubble scale and the Planck scale. The constraints are also generally tighter for systems with lower axion count \(\mathcal{N}\), as predicted by the analytic constraint (14). 
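To make the procedure behind Fig. (1) concrete, the following minimal Python sketch evolves \(\mathcal{N}\) identical axions from rest near the hilltop using Eqs. (7), (15), and (16) and compares the resulting expansion against the TCC bound (2). It is not the code used to produce the figure: the stochastic field jumps described in A are omitted (a small classical displacement stands in for them), units are chosen with \(M_{pl}=1\), and the function names, step sizes, and quoted parameter values are illustrative choices only.

```python
import numpy as np

# Minimal sketch (not the production code behind Fig. 1): classical evolution of
# N identical axions released near the top of V(phi) = m^2 f^2 [cos(phi/f) + 1],
# in reduced-Planck units (M_pl = 1). The stochastic "quantum kicks" of Appendix A
# are omitted; the small initial displacement below stands in for them.

def simulate(m, n_axions, f_eff, phi0_frac=1e-3, dt_frac=1e-3, max_steps=10**7):
    """Return (ln a, H) at the end of accelerated expansion (epsilon = 1)."""
    f = f_eff / np.sqrt(n_axions)
    phi = np.full(n_axions, phi0_frac * f)   # small displacement from the hilltop
    dphi = np.zeros(n_axions)                # released from rest
    ln_a, dt = 0.0, dt_frac / m              # time step well below the period 1/m

    for _ in range(max_steps):
        V = m**2 * f**2 * (np.cos(phi / f) + 1.0)
        rho = np.sum(0.5 * dphi**2 + V)      # Friedmann equation, Eq. (16)
        P = np.sum(0.5 * dphi**2 - V)
        H = np.sqrt(rho / 3.0)
        if 1.0 + P / rho > 2.0 / 3.0:        # epsilon > 1: acceleration has ended
            return ln_a, H
        # semi-implicit Euler step for Eq. (7), with V'(phi) taken from Eq. (15)
        dphi += (-3.0 * H * dphi + m**2 * f * np.sin(phi / f)) * dt
        phi += dphi * dt
        ln_a += H * dt
    raise RuntimeError("acceleration did not end within max_steps")

if __name__ == "__main__":
    m, n_axions, f_eff = 1e-6, 100, 0.6      # illustrative values only
    ln_a_end, H_end = simulate(m, n_axions, f_eff)
    tcc_budget = np.log(1.0 / H_end)         # ln(M_pl / H_end) with M_pl = 1, Eq. (2)
    verdict = "satisfied" if ln_a_end <= tcc_budget else "violated"
    print(f"ln(a_end) = {ln_a_end:.2f}, TCC budget = {tcc_budget:.2f} -> TCC {verdict}")
```

Scanning \(f_{\rm eff}\) at fixed \(m\) and \(\mathcal{N}\) for the largest value passing this check mirrors, in simplified form, the procedure used to obtain the bounds shown in Fig. (1).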
## 3 Constraints on Inflation Inflationary models require several features to be successful, including a period of accelerated expansion lasting sufficiently many e-folds to establish a Bunch-Davies vacuum and generate inhomogeneous modes that are just today re-entering the Hubble horizon and are observable in the CMB. These modes must also have an amplitude, tilt, and tensor-to-scalar ratio compatible with observational bounds. In this section, we show that models of axion inflation using the potential (1) and obeying the TCC constraint (17) cannot satisfy all of these criteria at once, regardless of initial conditions. The modes re-entering the Hubble horizon today were produced \(N_{e}^{*}\sim 30-60\) e-folds of the scale factor prior to the end of inflation, depending on the reheating temperature [35]. Note that even protracted reheating scenarios, which have previously been used to adjust \(N_{e}^{*}\) and improve compatibility between natural inflation and data [36], cannot push \(N_{e}^{*}\) below \(\mathcal{O}(30)\) without interfering with Big Bang Nucleosynthesis (BBN). This leads to a constraint on the total number of e-folds of accelerated expansion, \(N_{e}^{\rm tot}\equiv\ln(a_{\rm end})\gtrsim 30\). It is straightforward to show that this condition cannot be satisfied in \(\mathcal{N}\)-axion inflation models with \(f_{\rm eff}\lesssim 1\) when the fields begin from rest in the lower half of the cosine potential. In this case, the equation of motion (7) is solved by damped oscillations Figure 1: Maximum values of \(f_{\rm eff}\equiv f\sqrt{\mathcal{N}}\) allowed by the TCC as a function of axion mass \(m\) and axion count \(\mathcal{N}\), according to numerical simulations with \(\pm 0.01M_{pl}\) precision on \(f_{\rm eff}\). around the minimum scaling roughly as \[[\varphi_{n}(t)-f\pi]\sim e^{-mf_{\rm eff}t}\cos(mt)\quad(\text{trough}). \tag{18}\] It is clear to see that acceleration cannot last for time scales longer than \(t\sim m^{-1}\), leading to a severely limited \(N_{e}^{\rm tot}=\int Hdt\lesssim f_{\rm eff}\lesssim 1\). This rules out inflationary models with initial conditions in the lower half of the potential. On the other hand, when the axion field values begin near the maxima of their potentials, it is possible to achieve a large number of e-folds while satisfying the TCC bound on \(f_{\rm eff}\), as long as the axion mass \(m\) satisfies \[mf_{\rm eff}\lesssim 10^{-19}. \tag{19}\] This upper limit comes from a model-independent bound on the energy scale of TCC-compliant inflation [37; 38]. Even if this condition is satisfied, however, one still runs into severe inconsistencies between \(\mathcal{N}\)-axion inflation models and measurements of the spectral tilt. (For earlier constraints on inflation driven by a single axion, which had already begun to show tensions with observational data, see Refs. [39; 40; 34; 41].) We know from Eq. (8) and the following discussion that each field's trajectory near the hilltop scales roughly as \[\varphi_{n}(t)\propto e^{mt}\quad(\text{hilltop}). \tag{20}\] Since the scale factor grows approximately as \[a(t)\propto e^{Ht}\approx e^{\sqrt{2/3}f_{\rm eff}mt}, \tag{21}\] \(N_{e}^{*}\) e-folds of the scale factor correspond to about \(N_{e}^{*}/f_{\rm eff}\) e-folds of the field value, implying that \(\varphi_{n}\ll f\) at the time the large-scale CMB modes were being created. 
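(To attach numbers to this statement: taking \(f_{\rm eff}=0.6\) at the TCC bound and, say, \(N_{e}^{*}=40\), the field values at that time are suppressed by a factor of order \(e^{-N_{e}^{*}/f_{\rm eff}}\sim e^{-67}\) relative to their values at the end of inflation, so \(\varphi_{n}^{*}/f\) is utterly negligible.)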
Then, since the equation of state (11) evaluated at \(\varphi_{n}\ll f\) and \(f_{\rm eff}\lesssim 1\) is \[\epsilon\approx\frac{\varphi_{n}^{2}}{f^{2}}, \tag{22}\] the equation of state would have been \(\epsilon^{*}\approx e^{-2N_{e}^{*}/f_{\rm eff}}\lesssim e^{-60}\) when those CMB modes were created. The tensor-to-scalar ratio \(r^{*}=16\epsilon^{*}\) is therefore similarly small and consistent with the observational upper bound [42]. On the other hand, the scalar tilt is given by \[1-n_{s}=\frac{d\ln(\epsilon)}{d\ln(a)}+2\epsilon\approx\frac{d\ln(\epsilon)}{ d\ln(a)}, \tag{23}\] and since \(\epsilon\propto\varphi_{n}^{2}\), we have that \[1-n_{s}\approx 2\frac{d\ln(\varphi_{n})}{d\ln(a)}=2\frac{d\ln(\varphi_{n})}{ Hdt}\approx 2f_{\rm eff}^{-1}, \tag{24}\] up to multiplicative factors of order unity. Clearly, any \(f_{\rm eff}\lesssim 1\) will be inconsistent with the observed spectral tilt \((1-n_{s}\approx 0.03)\) by at least two orders of magnitude [34]. As a result, even when the number of e-folds is sufficient and consistent with the TCC, models of \(\mathcal{N}\)-axion inflation obeying Eq. (17) still fail. The example of axion inflation considered here illustrates a much more general tension between inflation and the TCC. It has been shown in Refs. [37; 38] that for TCC-compliant inflationary models to be consistent with the perturbation amplitude observed in the CMB, the equation of state \(N_{e}^{*}\) e-folds before the end of inflation must satisfy \(\ln(\epsilon^{*})\lesssim-71\). On the other hand, the observed tilt tells us that \(d\ln(\epsilon)/d\ln(a)\sim 0.03\) at that same time. In order to end inflation by achieving \(\epsilon=1\) within \(\mathcal{O}(30)\) e-folds of the scale factor, it is necessary for \(d\ln\epsilon/d\ln(a)\) to increase _rapidly_ to at least \(\mathcal{O}(1)\); however, as we have demonstrated, this does not occur naturally on cosine (or any approximately inverse-parabolic) potentials. This issue has been noticed already in previous proposals for TCC-compliant models of inflation [35; 37; 43]. In certain models, it was resolved by a sharp and finely tuned cliff in the inflaton's potential [37] or a finely tuned waterfall phase transition that ends acceleration as soon as the inflaton exits the slow-roll regime [35]. In any case, it is evident that the TCC requires inflationary models to be supplemented with a "kill switch" mechanism that ends acceleration at just the right time to match observational constraints. ## 4 Constraints on Quintessence Unlike inflationary models, a successful model of axion quintessence only needs to achieve \(\mathcal{O}(1)\) e-fold of accelerated expansion with a low (quasi-de Sitter) equation of state. Moreover, observational constraints on the time-dependence of the equation of state are significantly weaker than in the case of inflation. As a result, it is possible in principle for axion quintessence models to be simultaneously consistent with the TCC and with all available observational data. Indeed, it was shown in Refs. [2; 44; 22] that models of a single axion with \(f\gtrsim 0.5\) can successfully reproduce the behavior of dark energy for a broad range of initial conditions. The primary issue with these models, however, is that they are difficult to construct within string theory. 
This is because the existence of a supersymmetry breaking sector gives rise to instanton-generated potentials with \[V_{SSB}(\varphi)=m_{S}^{2}\cdot e^{-S_{inst}}\cos(\varphi/f), \tag{25}\] where \(m_{S}\) is the scale of supersymmetry breaking and \(S_{inst}\) is the instanton action [9; 45]. As a result, an axion whose potential is of the same order as the present-day critical energy density must have \[m_{S}^{2}\cdot e^{-S_{inst}}\lesssim H_{0}^{2}\implies S_{inst}\gtrsim 2\ln \left(\frac{m_{S}}{H_{0}}\right)\gtrsim 200, \tag{26}\] assuming \(m_{S}\gtrsim\) TeV. Because a string axion's decay constant is related to the instanton action via \[f\sim 1/S_{inst}\lesssim 0.005, \tag{27}\] any string axion sufficiently light to be quintessence will have a decay constant that falls far short of the \(f\sim 0.5\) threshold [9]. This line of reasoning suggests that if axions are string-theoretic in origin, one needs at least \(N\sim 10^{4}\) of them to produce a satisfactory effective decay constant, \(f_{\rm eff}\equiv f\sqrt{N}\gtrsim 0.5\). As it happens, depending on the particular Calabi-Yau compactification, models of string theory can contain up to \({\cal O}(10^{2}-10^{6})\) light axions in their low-energy limits [9; 46]. It is natural to ask whether a collection of this many axions can successfully mimic dark energy if their initial field values are spread in a random uniform distribution across the cosine potential. Unfortunately, numerical simulations show that this scenario would require \(f_{\rm eff}\gtrsim 1.4\), which violates the TCC by at least \({\cal O}(\ln(m^{-1}))\) e-folds of accelerated expansion. In Fig. (2), we compare the dark energy equation of state \(w_{\varphi}\equiv P_{\varphi}/\rho_{\varphi}\) as a function of redshift \(z\) for a scenario with a random uniform distribution of initial field values (red curve) versus one with identical field values near the hilltop (blue curve), each with \(f_{\rm eff}=0.6\). Only the model with initial field values near the hilltop is compatible with the observational upper limit on \(w_{\varphi}\). While requiring that each initial field value satisfies \(|\theta_{n}(0)|\equiv|\varphi_{n}(0)|/f\lesssim\pi/5\) would be a modest constraint for single-axion models, it is a much more demanding one for models with \(N\gtrsim 10^{4}\) axions. This result suggests that mechanisms for dynamically positioning axions near the hilltop, such as the maximal-misalignment mechanism used in _N-essence_[31], may be essential for models of string-axion quintessence. If the axions do not saturate the TCC bound and instead have a lower \(f_{\rm eff}\), it is still possible to comply with observational constraints, but only for more finely tuned initial values of \(\varphi_{n}\). We can estimate the necessary tuning by adapting the result from Eq. (12) to variable initial conditions \(\{\varphi_{n}(0),~{}\dot{\varphi}_{n}(0)\}\) and taking the limit \(f_{\rm eff}\ll 1\), finding that the number of e-folds of acceleration accrued during classical evolution is given by \[N_{\epsilon}^{\rm tot}\approx f_{\rm eff}\ln\left(\frac{2f}{\varphi_{n}(0)+ \dot{\varphi}_{n}(0)/m}\right). \tag{28}\] The argument of the logarithm depends on whether the axion begins in slow-roll1 (with \(\dot{\varphi}_{n}(0)/m\sim\varphi_{n}(0)/f_{\rm eff}\)) or with negligible field velocity (\(\dot{\varphi}_{n}(0)/m\ll\varphi_{n}(0)\)), though this difference will not be too important. 
In order to achieve at least \({\cal O}(1)\) e-fold of accelerated expansion, \(\varphi_{n}(0)\) must satisfy Footnote 1: We caution that even if the axion is slowly rolling at the onset of acceleration, much of its later trajectory will occur _outside_ of the slow-roll regime. Indeed, one can check using Eqs. (8) and (10) that a necessary condition for slow-roll, \(\ddot{\varphi}\ll H\dot{\varphi}\), breaks down when the growing mode is dominant in models with \(f_{\rm eff}\lesssim 1\). \[\varphi_{n}(0)/f\lesssim e^{-f_{\rm eff}^{-1}}, \tag{29}\] where we have neglected the sub-exponential scaling with factors dependent on the initial field velocity. Note that in the limit of a single axion, this upper bound reproduces the result from Ref. [22] and roughly represents the probability that an axion with a random initial field value can drive quintessence. This bound differs from the probability found in Ref. [13] for two important reasons. First, we used the full definition of the equation of state (Eq. 11) to determine when acceleration ends, rather than the approximate slow-roll expression \(\epsilon\approx\frac{1}{2}|\nabla V/V|^{2}\). Second, we required acceleration to continue for \({\cal O}(1)\) e-fold to satisfy observational constraints for quintessence, whereas the limits in Ref. [13] included cases where the duration of accelerated expansion is arbitrarily shorter. From Eq. (29), we see that extreme fine-tuning can classically compensate for arbitrarily low values of \(f_{\rm eff}\) in axion quintessence models. In practice, however, quantum fluctuations can destabilize extremely fine-tuned configurations. Whether fluctuations in the field are on the order of some TCC-compliant inflationary energy scale, the present Hubble scale, or even several orders of magnitude lower, they impose a lower bound on \(\varphi_{n}(0)/f\) and in turn constrain \[f_{\rm eff}\gtrsim 0.01. \tag{30}\] This bound is slightly looser than the analogous calculation for a single axion in Ref. [22] (which assumed a higher, TCC-violating inflationary energy scale), but it still disfavors models of quintessence driven by a single string axion with \(f\lesssim 0.005\). Figure 2: Comparison of theoretical predictions (solid/dotted curves) to the observational upper bound (dashed black curve, adapted from Ref. [47]) on the dark energy equation of state \(w_{\varphi}\) as a function of redshift \(z\). The solid-curve theoretical predictions are generated under the assumption of \(N\gtrsim 10^{4}\) identical axions with \(f_{\rm eff}=0.6\), while the dotted curves illustrate sensitivity to \(f_{\rm eff}\in\{0.55,0.65\}\). The blue curve, which is consistent with the observational constraints, assumes all of these axions have initial field values satisfying \(\theta_{n}(0)\equiv\varphi_{n}(0)/f=\pi/5\), while the red curve, which violates observational constraints, assumes the initial positions are randomly, uniformly distributed across their domain. Note that these theoretical predictions are generated by numerically simulating the axions’ classical evolution, starting from zero initial velocity at early times (\(z\gg 1\)), in the presence of ordinary (dust-like) matter. Present-day (\(z=0\)) is defined by reaching the fractional energy densities \(\Omega_{m}=0.3\) and \(\Omega_{\varphi}=0.7\). Finally, we comment on the possibility of a nonzero cosmological constant and its effect on axion quintessence models. 
Obviously, if the vacuum energy density is small and positive, there is no need for quintessence in the first place. However, vacua in string theory have a notorious preference for negative energy densities [48], and explicit models have been constructed where the negative vacuum energy density \(\rho_{\rm vac}\) is smaller in magnitude than the present dark energy density \(\rho_{DE}\sim H_{0}^{2}\)[49]. In the limit \(|\rho_{\rm vac}|\ll\rho_{DE}\), the phenomenology of axion quintessence models would be indistinguishable from that arising in the present work under the assumption \(\rho_{\rm vac}=0\). Moreover, numerical calculations show that assuming a more comparable vacuum energy density, such that \(V(\varphi_{n})=m^{2}f^{2}\cos(\varphi_{n}/f)\) with no constant offset, would only change the constraints on \(f_{\rm eff}\) (calculated in Section 2) and \(|\theta_{n}(0)|\) (calculated in this section) by \({\cal O}(10\%)\). These conclusions are consistent with Ref. [50], which found axion quintessence models to be compatible with the presence of a small cosmological constant of either sign. ## 5 Conclusions & Discussion The central finding in this work is that the TCC constrains any system of \({\cal N}\) identical axions with simple cosine potentials to have decay constants satisfying \(f_{\rm eff}\equiv f\sqrt{N}\lesssim 0.6\) in reduced Planck units. This bound is even tighter for axions with masses near the Planck scale (see Fig. 1). Because the TCC must hold for any physically allowed initial conditions (and not just the initial conditions in our own observable universe), these constraints apply to all such systems of axions, regardless of whether or not they are responsible for driving accelerated expansion. We have shown that this constraint rules out models of axion-driven inflation, as larger values of \(f_{\rm eff}\) are required to achieve sufficiently many e-folds of inflation and produce the correct spectral tilt. We have also argued that reconciling _any_ inflationary model with the TCC--axionic or otherwise--requires a mechanism for ending inflation via a sharp and sudden increase of the equation of state \(\epsilon\), rather than the traditional graceful exit. Some examples in the literature show that this can be accomplished in principle, but they rely on an extraordinary amount of fine-tuning [35; 37]. In contrast to axion-driven inflation, models of axion quintessence _can_ be simultaneously compatible with the TCC and observational data, as long as \(0.01\lesssim f_{\rm eff}\lesssim 0.6\) and the axions' initial field values are near the top of the potential at the level of tuning specified by Eq. (28). In the case of a single axion with \(f\approx 0.6\), the necessary tuning is relatively modest, requiring the initial dimensionless field value to satisfy \(|\theta|\equiv|\varphi|/f\lesssim\pi/5\). Ultralight string axions, however, typically have much lower decay constants \(f\lesssim 0.005\), and one would therefore need \({\cal N}\gtrsim 10^{4}\) of them, _each_ aligned to within \(\pm\pi/5\) radians of the maximum, in order to achieve the same effect. Alternatively, one could have fewer string axions (resulting in a lower \(f_{\rm eff}\)) with more finely tuned initial conditions, up to the limit set by quantum fluctuations. In either case, axions that are specifically string-theoretic in origin appear to require a mechanism to perch them near the hilltop. 
We emphasize that the bounds and constraints derived in this work apply specifically to models of \({\cal N}\) identical axions with cosine potentials. Models of inflation making use of multiple non-identical axions [51; 52] or axions coupled to a bath of radiation [53; 54; 55] are not directly constrained by the present work, but they are unlikely to overcome the general obstacles for TCC-compliant inflation outlined above. Models of quintessence using axions with a small range of masses and decay constants would likely still be feasible with some additional tuning, but more elaborate models of axion quintessence, which may incorporate non-trivial interactions between axions, monodromies, contributions from higher-order instanton corrections, or interactions with dynamical moduli, require their own independent analysis. ## 6 Acknowledgements I wish to thank Paul Steinhardt for providing guidance during this study and suggestions for the manuscript. I am also grateful to Alek Bedroya, Anna Ijjas, and Anirudh Prabhu for their helpful feedback on the manuscript. This work was supported in part by the DOE grant number DEFG02-91ER40671 and by the Simons Foundation grant number 654561. ## Appendix A Quantum Fluctuations While the axion fields are located at the hilltops of their potentials with sufficiently low classical velocities, their evolution may be dominated by quantum fluctuations. The initial conditions used for the analytic calculation in Section 2, namely \(\varphi_{n}(0)=\frac{mf_{\rm eff}^{2}}{\pi\sqrt{6}}\) and \(\dot{\varphi}_{n}(0)=\frac{m^{2}f_{\rm eff}}{6\pi}\), are specifically chosen to avoid this regime. To see this, we can consider the RMS fluctuation [56] of a massive field in a de Sitter background, \[(\Delta\varphi_{n})_{\rm rms}=\sqrt{\frac{H^{2}}{8\pi^{2}\eta}\left(e^{2\eta\ln(a)}-1\right)}, \tag{A.1}\] and compare its time-derivative \[\left(\Delta\dot{\varphi}_{n}\right)_{\rm rms}=\frac{H^{3}e^{2\eta\ln(a)}}{8\pi^{2}(\Delta\varphi_{n})_{\rm rms}} \tag{A.2}\] to the field's classical velocity. Here, \(a\) is the scale factor, \(\eta=-m^{2}/(3H^{2})\), and we have taken \(H\) to be approximately constant. Since \(\eta<0\), we have that \[\left(\Delta\dot{\varphi}_{n}\right)_{\rm rms}<\frac{H^{3}}{8\pi^{2}(\Delta\varphi_{n})_{\rm rms}}, \tag{A.3}\] so we can conservatively estimate that \(\left(\Delta\dot{\varphi}_{n}\right)_{\rm rms}\lesssim\dot{\varphi}_{n}(0)\) when \[\frac{H^{3}}{8\pi^{2}(\Delta\varphi_{n})_{\rm rms}}\lesssim\frac{m^{2}f_{\rm eff}}{6\pi}\iff\left(\Delta\varphi_{n}\right)_{\rm rms}\gtrsim\frac{mf_{\rm eff}^{2}}{\pi\sqrt{6}}. \tag{A.4}\] In other words, the initial conditions we chose ensure that quantum effects offset the initial field values by a factor less than \(\mathcal{O}(1)\). Additionally, the classical equations of motion (7) ensure that \(\ddot{\varphi}_{n}\approx m^{2}\varphi_{n}-3H\dot{\varphi}_{n}\) remains positive when starting from these initial conditions, amplifying the fields' classical velocities while \(\left(\Delta\dot{\varphi}_{n}\right)_{\rm rms}\) falls off. To simulate the effects of quantum fluctuations numerically, we employ a random walk with time step \(dt\), where at each time step, the field value of each axion changes by \[\delta\varphi_{n}=\pm\frac{H}{2\pi}\sqrt{Hdt}\cdot e^{\eta\ln(a)}. \tag{A.5}\] At each time step, the scale factor changes according to \(d\ln(a)=Hdt\), and so the variance of this random walk at some future time with scale factor \(a\) will be, indeed, \[\langle\varphi_{n}^{2}\rangle=\sum_{j=0}^{\frac{\ln(a)}{Hdt}}\frac{H^{3}}{4\pi^{2}}dt\cdot e^{2\eta(jHdt)} \tag{A.6}\] \[\approx\int_{j=0}^{\frac{\ln(a)}{Hdt}}\frac{H^{3}}{4\pi^{2}}dt\cdot e^{2\eta(jHdt)}\,dj \tag{A.7}\] \[=\frac{H^{2}}{8\pi^{2}\eta}\left(e^{2\eta\ln(a)}-1\right). \tag{A.8}\]
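The random-walk prescription above is easy to test directly. The following sketch is illustrative only, with arbitrary values of \(H\) and \(m\) and with \(H\) held fixed: it accumulates the kicks defined above for an ensemble of fields and compares the resulting variance with the closed form just derived.

```python
import numpy as np

# Minimal numerical check (not the authors' code) of the random-walk prescription:
# the variance of the accumulated kicks should match H^2/(8 pi^2 eta) (e^{2 eta ln a} - 1).
rng = np.random.default_rng(0)

H, m = 1e-5, 3e-6               # illustrative values; H treated as constant
eta = -m**2 / (3 * H**2)
dt = 0.01 / H                   # time step; d(ln a) = H dt per step
n_steps, n_fields = 400, 20000  # 400 steps -> ln(a) = 4

lna = 0.0
phi = np.zeros(n_fields)
for _ in range(n_steps):
    kick = (H / (2 * np.pi)) * np.sqrt(H * dt) * np.exp(eta * lna)
    phi += rng.choice([-1.0, 1.0], size=n_fields) * kick
    lna += H * dt

var_sim = phi.var()
var_analytic = H**2 / (8 * np.pi**2 * eta) * (np.exp(2 * eta * lna) - 1.0)
print(f"simulated <phi^2> = {var_sim:.3e}")
print(f"analytic  <phi^2> = {var_analytic:.3e}")
```

The two numbers agree to within a few percent; increasing the ensemble size tightens the agreement, which is limited only by the sampling noise of the variance estimate.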
2304.07051
The Second Monocular Depth Estimation Challenge
This paper discusses the results for the second edition of the Monocular Depth Estimation Challenge (MDEC). This edition was open to methods using any form of supervision, including fully-supervised, self-supervised, multi-task or proxy depth. The challenge was based around the SYNS-Patches dataset, which features a wide diversity of environments with high-quality dense ground-truth. This includes complex natural environments, e.g. forests or fields, which are greatly underrepresented in current benchmarks. The challenge received eight unique submissions that outperformed the provided SotA baseline on any of the pointcloud- or image-based metrics. The top supervised submission improved relative F-Score by 27.62%, while the top self-supervised improved it by 16.61%. Supervised submissions generally leveraged large collections of datasets to improve data diversity. Self-supervised submissions instead updated the network architecture and pretrained backbones. These results represent a significant progress in the field, while highlighting avenues for future research, such as reducing interpolation artifacts at depth boundaries, improving self-supervised indoor performance and overall natural image accuracy.
Jaime Spencer, C. Stella Qian, Michaela Trescakova, Chris Russell, Simon Hadfield, Erich W. Graf, Wendy J. Adams, Andrew J. Schofield, James Elder, Richard Bowden, Ali Anwar, Hao Chen, Xiaozhi Chen, Kai Cheng, Yuchao Dai, Huynh Thai Hoa, Sadat Hossain, Jianmian Huang, Mohan Jing, Bo Li, Chao Li, Baojun Li, Zhiwen Liu, Stefano Mattoccia, Siegfried Mercelis, Myungwoo Nam, Matteo Poggi, Xiaohua Qi, Jiahui Ren, Yang Tang, Fabio Tosi, Linh Trinh, S. M. Nadim Uddin, Khan Muhammad Umair, Kaixuan Wang, Yufei Wang, Yixing Wang, Mochu Xiang, Guangkai Xu, Wei Yin, Jun Yu, Qi Zhang, Chaoqiang Zhao
2023-04-14T11:10:07Z
http://arxiv.org/abs/2304.07051v3
# The Second Monocular Depth Estimation Challenge ###### Abstract This paper discusses the results for the second edition of the Monocular Depth Estimation Challenge (MDEC). This edition was open to methods using any form of supervision, including fully-supervised, self-supervised, multi-task or proxy depth. The challenge was based around the SYNS-Patches dataset, which features a wide diversity of environments with high-quality dense ground-truth. This includes complex natural environments, e.g. forests or fields, which are greatly underrepresented in current benchmarks. The challenge received eight unique submissions that outperformed the provided SotA baseline on any of the pointcloud- or image-based metrics. The top supervised submission improved relative F-Score by 27.62%, while the top self-supervised improved it by 16.61%. Supervised submissions generally leveraged large collections of datasets to improve data diversity. Self-supervised submissions instead updated the network architecture and pre-trained backbones. These results represent significant progress in the field, while highlighting avenues for future research, such as reducing interpolation artifacts at depth boundaries, improving self-supervised indoor performance and overall natural image accuracy. \({}^{1}\)University of Surrey \({}^{2}\)Aston University \({}^{3}\)University of Southampton \({}^{4}\)Amazon \({}^{5}\)York University \({}^{6}\)imec-University of Antwerp \({}^{7}\)Zhejiang University \({}^{8}\)DJI Technology \({}^{9}\)University of Science and Technology of China \({}^{10}\)Northwestern Polytechnical University \({}^{11}\)DeltaX \({}^{12}\)Independent \({}^{13}\)VIVO \({}^{14}\)University of Bologna \({}^{15}\)East China University of Science and Technology ## 1 Introduction Monocular depth estimation (MDE) refers to the task of predicting the distance from the camera to each image pixel. Unlike traditional geometric correspondence and triangulation techniques, this requires only a single image. Despite the ill-posed nature of the problem, deep learning has shown rapid improvements in this field. Unfortunately, many existing approaches have focused solely on training and evaluating in an automotive urban setting. This puts into question their ability to adapt to previously unseen environments. The proposed Monocular Depth Estimation Challenge (MDEC) aims to mitigate this by evaluating models on a complex dataset consisting of natural, agricultural, urban and indoor scenes. Furthermore, this is done in a zero-shot fashion, meaning that the models must be capable of generalizing. The first edition of MDEC [77] focused on benchmarking self-supervised approaches. The submissions outperformed the baseline [25, 78] in all image-based metrics (AbsRel, MAE, RMSE), but provided slightly inferior pointcloud reconstructions [62] (F-Score). The second edition of MDEC, detailed in this paper, ran in conjunction with CVPR2023. This edition was open to any form of supervision, supervised, self-supervised or multi-task. The aim was to evaluate the state of the field as a whole and determine the gap between different supervision strategies. The challenge was once again centered around SYNS-Patches [1, 78]. This dataset was chosen due to its diversity, which includes urban, residential, industrial, agricultural, natural and indoor scenes. Furthermore, SYNS-Patches contains dense high-quality LiDAR ground truth, which is exceedingly rare in outdoor environments.
This ensures that the evaluations accurately reflect the capabilities of each model. Eight teams out of the 28 final submissions outperformed the State-of-the-Art (SotA) baseline in either pointcloud- or image-based metrics. Half of these submission were supervised using ground-truth depths, while the remaining half were self-supervised with the photometric reconstruction loss [25, 28]. As expected, supervised submissions typically outperformed self-supervised ones. However, the novel self-supervised techniques generally outperformed the provided baseline, even in pointcloud reconstructions. The remainder of the paper will provide the technical details of each submission, analyze their results on SYNS-Patches and discuss potential directions for future research. ## 2 Related Work **Supervised.** Eigen _et al_. [22] introduced the first end-to-end CNN for MDE, which made use of a scale-invariant loss and a coarse-to-fine network. Further improvements to the network architecture included the use of CRFs [53, 100], regression forests [72], deeper architectures [67, 88], multi-scale prediction fusion [60] and transformer-based encoders [9, 15, 66]. Alternatively, depth estimation was formulated as a discrete classification problem [7, 8, 24, 49]. In parallel, novel losses were proposed in the form of gradient-based regression [51, 84], the berHu loss [47], an ordinal relationship loss [14] and scale/shift invariance [67]. Recent approaches focused on the generalization capabilities of MDE by training with collections of datasets [7, 66, 67, 68, 23, 69, 82]. This relied on the availability of ground-truth annotations, including automotive data LiDAR [27, 32, 38], RGB-D/Kinect [16, 61, 79], SfM reconstructions [50, 51], optical flow/disparity estimation [67, 88] or crowdsourced annotations [14]. These annotations varied in accuracy, which may have impacted the final model's performance. Furthermore, this increased the requirements for acquiring data from new sources, making it challenging to scale to larger amounts of data. **Self-Supervised.** Instead of relying on costly annotations, Garg _et al_. [25] proposed an algorithm based on view synthesis and the photometric consistency across stereo pairs. Monodepth [28] incorporated differentiable bilinear interpolation [42], virtual stereo prediction and a SSIM+L\({}_{1}\) reconstruction loss. SfM-Learner [108] required only monocular video supervision by replacing the known stereo transform with a pose estimation network. Artifacts due to dynamic objects were reduced by incorporating uncertainty [65, 93, 45], motion masks [31, 20, 12], optical flow [57, 68, 98] or the minimum reconstruction loss [29]. Meanwhile, robustness to unreliable photometric appearance was improved via feature-based reconstructions [76, 99, 105] and proxy-depth supervision [65, 73, 86]. Developments in network architecture design included 3D (un-)packing blocks [32], positional encoding [30], transformer-based encoders [106, 2], sub-pixel convolutions [64], progressive skip connections [58] and self-attention decoders [91, 107, 43]. **Challenges & Benchmarks.** The majority of MDE approaches have been centered around automotive data. This includes popular benchmarks such as Kitti [81, 27] or the Dense Depth for Autonomous Driving Challenge [32]. The Robust Vision Challenge series [104], while generalization across multiple datasets, has so far consisted only of automotive [27] and synthetic datasets [10, 70]. More recently, Ignatov _et al_. 
introduced the Mobile AI Challenge [40], investigating efficient MDE on mobile devices in urban settings. Finally, the NTIRE2023 [102] challenge, concurrent to ours, targeted high-resolution images of specular and non-lambertian surfaces. The Monocular Depth Estimation Challenge series [77]--the focus of this paper--is based on the MonoDepth Benchmark [78], which provided fair evaluations and implementations of recent SotA self-supervised MDE algorithms. Our focus lies on zero-shot generalization to a wide diversity of scenes. This includes common automotive and indoor scenes, but complements it with complex natural, industrial and agricultural environments. ## 3 The Monocular Depth Estimation Challenge The second edition of the Monocular Depth Estimation Challenge1 was organized on CodaLab [63] as part of a CVPR2023 workshop. The initial development phase lasted four weeks, using the SYNS-Patches validation split. The leaderboard for this phase was anonymous, where all method scores were publicly available, but usernames remained hidden. Each participant could see the metrics for their own submission. Footnote 1: [https://codalab.lisn.upsaclay.fr/competitions/10031](https://codalab.lisn.upsaclay.fr/competitions/10031) The final challenge stage was open for two weeks. In this case, the leaderboard was completely private and participants were unable to see their own scores. This encouraged evaluation on the validation split rather than the test split. Combined with the fact that all ground-truth depths were withheld, the possibility of overfitting due to repeated evaluations was severely limited. This edition of the challenge was extended to any form of supervision, with the objective of providing a more comprehensive overview of the field as a whole. This allowed us to determine the gap between different techniques and identify avenues for future research. We report results only for submissions that outperformed the baseline in any pointcloud-/image-based metric on the Overall dataset. **Dataset.** The challenge is based on the SYNS-Patches dataset [1, 78], chosen due to the diversity of scenes and environments. A breakdown of images per category and some representative examples are shown in Table 1 and Figure 2. SYNS-Patches also provides extremely high-quality dense ground-truth LiDAR, with an average coverage of 78.20% (including sky regions). Given the dense ground-truth, depth boundaries were obtained using Canny edge-detection on the log-depth maps. This allows us to compute additional fine-grained metrics for these challenging regions. As outlined in [78], the images are manually checked to remove dynamic object artifacts. **Evaluation.** Participants provided the unscaled disparity prediction for each dataset image. The evaluation server bilinearly upsampled the predictions to the target resolution and inverted them into depth maps. Self-supervised methods trained with stereo pairs and supervised methods using LiDAR or RGB-D data should be capable of predicting met \begin{table} \begin{tabular}{l c c c c c c c c|c} \hline \hline & Agriculture & Indoor & Industry & Misc & Natural & Recreation & Residential & Transport & Woodland & Total \\ \hline **Val** & 104 & 67 & 36 & 72 & 36 & 14 & 13 & 4 & 54 & 400 \\ **Test** & 211 & 81 & 71 & 0 & 147 & 48 & 110 & 17 & 90 & 775 \\ \hline **Total** & 315 & 148 & 107 & 72 & 183 & 62 & 123 & 21 & 144 & 1,175 \\ \hline \hline \end{tabular} \end{table} Table 1: SYNS-Patches. Distribution of images per category in the val/test splits. 
Figure 1: **Depth Distribution Per Scene Type.** Indoor scenes are limited to 20m, while outdoor scenes reach up to 120m. Natural and Agriculture scenes contain a larger percentage of long-range depths (20-80m), while urban scenes focus on the mid-range (20-40m). Figure 2: **SYNS-Patches.** Sample images from the diverse dataset scenes, including complex urban, natural and indoor settings. The dataset contains high-quality ground-truth with 78.20% coverage. Depth boundaries were computed as Canny edges in the log-depth maps. ric depth. Despite this, in order to ensure comparisons are as fair as possible, the evaluation aligned predictions with the ground-truth using the median depth. We set a maximum depth threshold of 100 meters. **Metrics.** We follow the metrics used in the first edition of the challenge [77], categorized as image-/pointcloud-/edge-based. Image-based metrics represent the most common metrics (MAE, RMSE, AbsRel) computed using pixel-wise comparisons between the predicted and ground-truth depth map. Pointcloud-based metrics [62] (F-Score, IoU, Chamfer distance) instead evaluate the reconstructed pointclouds as a whole. In this challenge, we report reconstruction F-Score as the leaderboard ranking metric. Finally, edge-based metrics are computed only at depth boundary pixels. This includes image-/pointcloud-based metrics and edge accuracy/completion metrics from IBims-1 [46]. ## 4 Challenge Submissions We outline the technical details for each submission, as provided by the authors. Each submission is labeled based on the supervision used, including ground-truth (**D**), proxy ground-truth (**D***) and monocular (**M**) or stereo (**S**) photometric support frames. The first half represent supervised methods, while the remaining half are self-supervised. **Baseline - S** _J. Spencer\({}^{1}\)_ [email protected]_ _C. Russell\({}^{4}\)_ [email protected]_ _S. Hadfield\({}^{1}\)_ [email protected]_ _R. Bowden\({}^{1}\)_ [email protected]_ Challenge organizers submission from the first edition. **Network.** ConvNeXt-B encoder [56] with a base Monodepth decoder [28, 59] from [78]. **Supervision.** Self-supervised with a stereo photometric loss [25] and edge-aware disparity smoothness [28]. **Training.** Trained for 30 epochs on Kitti Eigen-Zhou with an image resolution of \(192\times 640\). **Team 1: DJI&ZJU - D** _W. Yin\({}^{8}\)_ [email protected]_ _K. Cheng\({}^{9}\)_ [email protected]_ _G. Xu\({}^{9}\)_ [email protected]_ _H. Chen\({}^{7}\)_ [email protected]_ _B. Li\({}^{10}\)_ [email protected]_ _K. Wang\({}^{8}\)_ [email protected]_ _X. Chen\({}^{8}\)_ [email protected]_ **Network.** ConvNeXt-Large [56] encoder, pretrained on ImageNet-22k [21], and a LeReS decoder [97] with skip connections and a depth range of \([0.3,150]\) meters. **Supervision.** Supervised using ground-truth depths from a collection of datasets [103, 13, 6, 17, 36, 32, 36, 90, 10, 92]. The final loss is composed of the SILog loss [22], pairwise normal regression loss [97], virtual normal loss [95] and a random proposal normalization loss (RPNL). RPNL enhances the local contrast by randomly cropping patches from the predicted/ground-truth depth and applying median absolute deviation normalization [75]. **Training.** The network was trained using a resolution of \(512\times 1088\). 
In order to train on mixed datasets directly with metric depth, all ground-truth depths were rescaled as \(\hat{y}^{\prime}=\nicefrac{{bf_{c}}}{{f}}\), where \(f\) is the original focal length and \(f_{c}\) is an arbitrary focal length. This way, the network assumed all images were taken by the same pinhole camera, which improved convergence. **Team 2: Pokemon - D** _M. Xiang\({}^{10}\)_ [email protected]_ _J. Ren\({}^{10}\)_ [email protected]_ _Y. Wang\({}^{10}\)_ [email protected]_ _Y. Dai\({}^{10}\)_ [email protected]_ **Network.** Two-stage architecture. The first part was composed of a SwinV2 backbone [54] and a modified NeWCRFs decoder [100] with a larger attention window. The second stage used an EfficientNet [80] with 5 inputs (RGB, low-res depth and high-res depth) to refine the high-resolution depth. **Supervision.** Supervised training using LiDAR/synthetic depth and stereo disparities from a collection of datasets [5, 11, 16, 17, 18, 34, 37, 38, 39, 61, 83, 89, 92, 96, 94, 96]. Losses included the SILog loss [22] (\(\lambda=0.85\)) for metric datasets, SILog (\(\lambda=1\)) for scale-invariant training, the Huber disparity loss for Kitti disparities and an affine disparity loss [67] for datasets with affine ambiguities. **Training.** The final combination of losses depended on the ground-truth available from each dataset, automatically mixed by learning an uncertainty weight for each dataset [44]. Since each dataset contained differently-sized images, they were resized to have a shorter side of 352 and cropped into square patches. Some datasets used smaller crops of size \(96\times 352\), such that the deepest feature map fell entirely into the self-attention window (\(11\times 11\)). A fusion process based on [60] merged low-/high-resolution predictions into a consistent high-resolution prediction. **Team 3: cv-challenge - D** _C. Li\({}^{13}\)_ [email protected]_ _Q. Zhang\({}^{13}\)_ [email protected]_ _Z. Liu\({}^{13}\)_ [email protected]_ _Y. Wang\({}^{13}\)_ [email protected]_ **Network.** Based on ZoeDepth [9] with a BEiT384-L backbone [4]. **Supervision.** Supervised with ground-truth depth from Kitti and NYUD-v2 [61] using the SILog loss. **Training.** The original ZoeDepth [9] and DPT [66] were pretrained on a collection of 12 datasets. The models were then finetuned on Kitti (\(384\times 768\)) or NYUD-v2 (\(384\times 512\) for outdoor/indoor scenes, respectively. Different models were deployed on an automatic scene classifier. The fine-tuned models were combined with a content-adaptive multi-resolution merging method [60], where patches were combined based on the local depth cue density. Since the transformer-based backbone explicitly captured long-term structural information, the original double-estimation step was omitted. ### Team 4: DepthSquad - D \begin{tabular}{l l} _M. Nam\({}^{11}\)_ & [email protected]_ \\ _H. T. Hoa\({}^{11}\)_ & [email protected]_ \\ _K. M. Umair\({}^{11}\)_ & [email protected]_ \\ _S. Hossain\({}^{11}\)_ & [email protected]_ \\ _S. M. N. Uddin\({}^{11}\)_ & [email protected]_ \\ \end{tabular} **Network.** Based on the PixelFormer architecture [2] which used a Swin [55] encoder and self-attention decoder blocks with cross-attention skip connections. Disparity was predicted as a discrete volume [7], with the final depth map given as the weighted average using the bin probabilities. **Supervision.** Supervised using the SILog loss w.r.t. the LiDAR ground-truth. 
**Training.** The model was trained on the Kitti Eigen-Zhou (KEZ) split using images of size \(370\times 1224\) for 20 epochs. Additional augmentation was incorporated in the form of random cropping and rotation, left-right flipping and CutDepth [41]. When predicting on SYNS-Patches, images were zero-padded to \(384\times 1248\) to ensure the compatibility of the training resolution. These borders were remove prior to submission. ### Team 5: imec-IDLab-UAntwerp - MS \begin{tabular}{l l} _L. Trinh\({}^{6}\)_ & [email protected]_ \\ _A. Anwar\({}^{6}\)_ & [email protected]_ \\ _S. Mercelis\({}^{6}\)_ & [email protected]_ \\ \end{tabular} **Network.** Pretrained ConvNeXt-v2-Huge [87] encoder with an HR-Depth decoder [58], modified with deformable convolutions [19]. The pose network instead used ResNet-18 [35]. **Supervision.** Self-supervised using the photometric loss [29] and edge-aware smoothness. **Training.** Trained on the Kitti Eigen-Benchmark (KEB) split with images of size \(192\times 640\). The network was trained for a maximum of 30 epochs, with the encoder remaining frozen after 6 epochs. ### Team 6: Gmd - Ms \begin{tabular}{l l} _B. Li\({}^{12}\)_ & [email protected]_ \\ _J. Huang\({}^{12}\)_ & [email protected]_ \\ \end{tabular} **Network.** ConvNeXt-XLarge [56] backbone and an HR-Depth [58] decoder. **Supervision.** Self-supervised based on the photometric loss [29]. **Training.** Trained on KEZ using a resolution of \(192\times 640\). **Team 7: MonoViTear - MSD* \begin{tabular}{l l} _C. Zhao\({}^{15}\)_ & [email protected]_ \\ _M. Poggi\({}^{14}\)_ & [email protected]_ \\ _F. Tosi\({}^{14}\)_ & [email protected]_ \\ _Y. Tang\({}^{15}\)_ & [email protected]_ \\ _S. Mattoccia\({}^{14}\)_ & [email protected]_ \\ \end{tabular} **Network.** MonoViT [106] architecture, composed of MPViT [48] encoder blocks and a self-attention decoder. **Supervision.** Self-supervised on Kitti Eigen (KE) using the photometric loss [29] (stereo and monocular support frames) and proxy depth regression. Regularized using edge-aware disparity smoothness [28] and depth gradient consistency w.r.t. the proxy labels. **Training.** Proxy depths were obtained by training a self-supervised RAFT-Stereo network [52] on the trinocular Multiscopic [101] dataset. The stereo network was trained for 1000 epochs using \(256\times 480\) crops. The monocular network was trained on KE for 20 epochs using images of size \(320\times 1024\). ### Team 8: USTC-IAT-United - MS \begin{tabular}{l l} _J. Yu\({}^{9}\)_ & [email protected]_ \\ _M. Jing\({}^{9}\)_ & _jing\[email protected]_ \\ _X. Qi\({}^{9}\)_ & [email protected]_ \\ \end{tabular} **Network.** Predictions were obtained as a mixture of multiple networks: DiffNet [107], FeatDepth [74] and MonoDEVS-Net [33]. DiffNet and FeatDepth used a ResNet backbone, while MonoDEVSNet used DenseNet [38]. **Supervision.** Self-supervised using the photometric loss [29]. **Training.** The three models were trained with different resolutions: \(320\times 1024\), \(376\times 1242\), \(384\times 1248\), respectively. All predictions were interpolated to \(376\times 1242\) prior to ensembling using a weighted average with coefficients \(\{0.35,0.3,0.35\}\). ## 5 Results Participant submissions were evaluated on SYNS-Patches [1, 78]. As previously mentioned, this paper only discusses submissions that outperformed the baseline in any pointcloud-/image-based metric across the Overall dataset. 
Since both challenge phases ran independently and participants were responsible for generating the predictions, we cannot guarantee that the testing/validation metrics used the same model. We therefore report results only for the test split. All methods were median aligned w.r.t. the ground-truth, regardless of the supervision used. This ensures that the evaluations are identical and comparisons are fair. ### Quantitative Results Table 2 shows the overall performance for each submission across the whole dataset, as well as each category. Each subset is ordered using F-Score performance. We additionally show the ranking order based on Overall F-Score for ease of comparison across categories. The Overall top F-Score and AbsRel were obtained by Team DJI&ZJU, supervised using ground-truth depths from a collection of 10 datasets. This represents a relative improvement of 27.62% in F-Score (13.72% - Baseline) and 18% in AbsRel (29.66% - OPDAI) w.r.t. the first edition of the challenge [77]. The top-performing self-supervised method was Team imec-IDLab-UAntwerp, which leveraged improved pretrained encoders and deformable decoder convolutions. This submission provided relative improvements of 16.61% F-Score and 4.04% AbsRel over the first edition. As expected, supervised approaches using ground-truth depth generally outperformed self-supervised approaches based on the photometric error. However, it is interesting to note that supervising a model with only automotive data (Team DepthSquad, trained on KEZ) was not sufficient to guarantee generalization to other scene types. Figure 3: **SYNS-Patches Depth Visualization.** Best viewed in color and zoomed in. Most methods struggle with thin structures, such as branches and railings. Object boundaries are also characterized by “halos”, caused by interpolation between foreground and background objects. Notable improvements can be seen in Natural and Agricultural scenes, where the top submissions provide much higher levels of detail than the baseline. Meanwhile, as discussed in [78], improving the pre-trained backbone (Teams imec-IDLab-UAntwerp & GMD) is one of the most reliable ways of increasing performance. Alternative contributions, such as training with proxy depths (MonoViTeam) or ensembling different architectures (USTC-IAT-United), can improve traditional image-based results but typically result in slightly inferior reconstructions. The top submission (DJI&ZJU) consistently outperformed the other submissions across each scene category, demonstrating good generalization capabilities. However, Teams Pokemon & cv-challenge provided slightly better pointcloud reconstructions in Natural scenes. We theorize this might be due to the use of additional outdoor datasets, while DJI&ZJU primarily relies on automotive data. It is further interesting to note that self-supervised approaches such as Teams imec-IDLab-UAntwerp & GMD outperformed even some supervised methods in Urban reconstructions, despite training only on Kitti. Finally, supervised methods provided the largest improvement in Indoor scenes, since self-supervised approaches were limited to urban driving datasets. DJI&ZJU relied on Taskonomy and DIML, Pokemon on ScanNet, SceneNet, NYUD-v2 and more, and cv-challenge made use of ZoeDepth [9] pretrained on the DPT dataset collection [66]. This demonstrates the need for more varied training data in order to generalize across multiple scene types.
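For reference, the image-based part of the evaluation pipeline that produces the numbers above is simple enough to sketch in a few lines. The snippet below is an illustrative re-implementation rather than the official challenge code: it bilinearly upsamples an unscaled disparity map, inverts it to depth, applies the median alignment used for every submission and computes MAE, RMSE and AbsRel on valid ground-truth pixels up to the 100 m cap; the pointcloud- and edge-based metrics (F-Score, IoU, Chamfer, edge accuracy/completion) are omitted.

```python
import numpy as np
from scipy.ndimage import zoom

# Minimal sketch (not the official MDEC evaluation server code).
def evaluate(pred_disp, gt_depth, max_depth=100.0):
    # bilinear upsampling of the disparity to the ground-truth resolution
    sy = gt_depth.shape[0] / pred_disp.shape[0]
    sx = gt_depth.shape[1] / pred_disp.shape[1]
    disp = zoom(pred_disp, (sy, sx), order=1)

    depth = 1.0 / np.clip(disp, 1e-6, None)          # disparity -> depth
    valid = (gt_depth > 0) & (gt_depth < max_depth)  # pixels with LiDAR coverage

    # median alignment, applied to every method regardless of supervision
    depth *= np.median(gt_depth[valid]) / np.median(depth[valid])
    depth = np.clip(depth, 0.0, max_depth)

    err = depth[valid] - gt_depth[valid]
    return {
        "MAE":    float(np.abs(err).mean()),
        "RMSE":   float(np.sqrt((err ** 2).mean())),
        "AbsRel": float((np.abs(err) / gt_depth[valid]).mean()),
    }

# toy example with random inputs, just to show the expected shapes
metrics = evaluate(np.random.rand(192, 640) + 0.1, np.random.rand(376, 1242) * 80)
print(metrics)
```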
### Qualitative Results Figure 3 shows visualizations for each submission's predictions across varied scene categories. Generally, all approaches struggle with thin structures, such as the railings in images two and five or the branches in image four. Models vary between ignoring these thin objects (Baseline), treating them as solid objects (USTC-IAT-United) and producing inconsistent estimates (cv-challenge). Self-supervised methods are more sensitive to image artifacts (_e.g_. saturation or lens flare in images one and three) due to their reliance on the photometric loss. Meanwhile, supervised methods can be trained to be robust to the artifacts as long as the ground-truth is correct. Object boundaries still present challenging regions, as demonstrated by the halos produced by most approaches. Even Team DJI&ZJU, while reducing the intensity of these halos, can sometimes produce over-pixelated boundaries. However, it is worth pointing out that many submissions significantly improve over the Baseline predictions [78]. In particular, Teams cv-challenge, imec-IDLab-UAntwerp & GMD show much greater levels of detail in Urban and Agricultural scenes, reflected by the improved Edge-Completion metric in Table 2. This is particularly impressive given the self-supervised nature of some of these submissions. Unfortunately, self-supervised approaches show significantly inferior performance in Indoor settings, as they lack the data diversity to generalize. This can be seen by the fact that many self-supervised approaches produce incorrect scene geometry and instead predict ground-planes akin to outdoors scenes. Images six, thirteen and sixteen highlight some interesting complications for monocular depth estimation. Transparent surfaces, such as the glass, are not captured when using LiDAR or photometric constraints. As such, most approaches ignore them and instead predict the depth for the objects behind them. However, as humans, we know that these represent solid surfaces and obstacles that cannot be traversed. It is unclear how an accurate supervision signal could be generated for these cases. This calls for more flexible depth estimation algorithms, perhaps relying on multimodal distributions and discrete volumes. ## 6 Conclusions & Future Work This paper has summarized the results for the second edition of MDEC. Most submissions provided significant improvements over the challenge baseline. Supervised submissions typically focused on increasing the data diversity during training, while self-supervised submissions improved the network architecture. As expected, there is still a performance gap between these two styles of supervision. This is particularly the case in Indoor environments. This motivates the need for additional data sources to train self-supervised models, which are currently only trained on automotive data. Furthermore, accurate depth boundary prediction is still a highly challenging problem. Most methods frequently predicted "halos", representative of interpolation artifacts between the foreground and background. Future challenge editions may introduce additional tracks for metric _vs_. relative depth prediction, as predicting metric depth is even more challenging. We hope this competition will continue to bring researchers into this field and strongly encourage any interested parties to participate in future editions of the challenge. ## Acknowledgments This work was partially funded by the EPSRC under grant agreements EP/S016317/1, EP/S016368/1, EP/S016260/1, EP/S035761/1.
2308.16289
Time-Bin CKA as a tool for blockchain technology
We explore the potential of Time-Bin Conference Key Agreement (TB CKA) protocol as a means to achieve consensus among multiple parties. We provide an explanation of the underlying physical implementation, i.e. TB CKA fundamentals and illustrate how this process can be seen as a natural realization of the global common coin primitive. Next, we present how TB CKA could be embodied in classical consensus algorithms to create hybrid classical-quantum solutions to the Byzantine Agreement problem.
Marta Misiaszek-Schreyner, Miriam Kosik, Mirek Sopek
2023-08-30T19:36:50Z
http://arxiv.org/abs/2308.16289v1
# Time-Bin CKA as a tool for blockchain technology ###### Abstract We explore the potential of Time-Bin Conference Key Agreement (TB CKA) protocol as a means to achieve consensus among multiple parties. We provide an explanation of the underlying physical implementation, i.e. TB CKA fundamentals and illustrate how this process can be seen as a natural realization of the _global common coin_ primitive. Next, we present how TB CKA could be embodied in classical consensus algorithms to create hybrid classical-quantum solutions to the Byzantine Agreement problem. Quantum Blockchains Inc., Ogrodowa 8, 91-062 Lodz, Poland, [http://quantum.io/](http://quantum.io/) ## 1 Blockchain technology A blockchain is an architecture that enables data to be stored in a decentralized network [1]. The primary distinctions between conventional databases and blockchains encompass decentralization, distribution, the implementation of cryptographic protocols resulting in the linkage of data across blocks, and the inherent immutability of records. In a traditional database, data is stored in a centralized location controlled by a single entity or organization. This creates the need for trust - users must trust this central entity to maintain the data accurately and securely. In a blockchain, data is stored in a decentralized network of computers, referred to as nodes. Each node in the network holds a copy of the entire blockchain and participates in the validation and verification of transactions. There is no central authority, and consensus mechanisms ensure that all nodes agree on the state of the data. Data in a traditional database is typically linked through relationships established between tables using keys. In a blockchain, data is linked through blocks. Each block in a blockchain contains some data and a hash, which is a unique fingerprint that identifies the block. Any change inside the block will cause the hash to change. Each block also contains the hash of the previous block, which leads to a chain of blocks. This linking creates a chronological and immutable transaction history. The kind of data stored inside a block depends on the type of blockchain (for example, the Bitcoin blockchain stores transaction details, such as the sender, receiver, and number of coins). Blockchain blocks need not necessarily be in the form of uniform binary data blocks. Modern solutions allow for much richer data structures to be linked [2] to form the chain. What is essential is that the entire system represents a consistent generalized transaction history on which all nodes achieve eventual agreement about the linked data. The main components of blockchain software are consensus and validation algorithms that provide transparency and data security. Unlike ordinary databases, a public blockchain does not rely on a centralized model of trust because it is fully accessible to anyone who wants to participate as a node. Such node gets a full copy of the blockchain and can even use the copy of the blockchain to verify that everything is in order. Therefore the security of a blockchain comes not only from the creative use of encryption, hashing and consensus mechanisms, but also from being distributed and decentralized. Despite the common features of the blockchain software, there are various consensus mechanisms, for example: Proof of Work (PoW), Proof of Stake (PoS), Delegated Proof of Stake (DPoS), Proof of Authority (PoA), Proof of Capacity (PoC) and many others [3]. 
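The hash-linking described above is straightforward to illustrate. The toy snippet below is only a sketch (the field names and the use of SHA-256 over a JSON payload are illustrative, not the format of any particular blockchain): each block stores the hash of its predecessor, so tampering with any historical block breaks the verification of every later link.

```python
import hashlib, json, time

# Toy illustration of hash-linked blocks (not a production blockchain).
def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    block = {"index": len(chain), "timestamp": time.time(), "data": data,
             "prev_hash": block_hash(chain[-1]) if chain else "0" * 64}
    chain.append(block)
    return chain

def verify(chain):
    # every stored prev_hash must match a fresh hash of the preceding block
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
for tx in ["Alice->Bob: 5", "Bob->Carol: 2"]:
    add_block(chain, tx)
print(verify(chain))                    # True
chain[0]["data"] = "Alice->Bob: 500"    # tamper with history
print(verify(chain))                    # False: the chain no longer validates
```

Consensus mechanisms such as those listed above are what decide which node is allowed to append the next block to such a chain, and under what conditions the rest of the network accepts it.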
All of them are mathematical operations through which nodes from the network validate the creation of new blocks; however, they differ in the type of algorithm that is used. The most popular and most famous is PoW (used in Bitcoin, early Ethereum and other networks), despite the fact that it requires high computational effort, which results in high energy consumption. Despite the numerous advantages of blockchain, it also comes with significant drawbacks stemming from its distributed architecture. In theoretical considerations of distributed systems there are two fundamental theorems that limit the desirable properties of blockchain architecture. One of them is known as the **CAP theorem**[5], the other as the **FLP impossibility result**[6]. The CAP theorem [5] states that any distributed system can have at most two of the following three properties: * **consistency**: every read receives the most recent write; * **availability**: each request eventually receives a response; * **partition tolerance**: the system operates despite an arbitrary number of messages being dropped between nodes due to communication breakdowns or any other reasons. Unfortunately, this formulation oversimplifies the balance between these properties and, taken literally, is not strictly true. The CAP theorem only rules out perfect availability and consistency in the presence of partitions. Therefore the designers of distributed systems do not face an all-or-nothing choice between consistency and availability when partitions are present; the goal is rather to find a trade-off between them. The FLP impossibility result [6], named after its authors (Fischer, Lynch and Paterson), comes from considerations of achieving consensus in distributed systems. It shows that in an asynchronous setting, there is no deterministic distributed algorithm that always solves the consensus problem, even if only one node of the system is faulty. The limits ensuing from the CAP and FLP theorems translate into the phenomenon called the **blockchain trilemma**: _it is impossible for any classical blockchain to simultaneously guarantee security, scalability and decentralization_. Various blockchain consensus algorithms have attempted to find a balance between these three features, resembling the trade-offs made by the designers of standard distributed systems. One of the approaches to minimize the negative effects of the trilemma is to prioritize data availability (i.e. scalability) and accept that the data may not be consistent on all nodes at the same time, but to demand that it is eventually consistent, i.e. after some time in the life of the system. Since these problems are crucial for blockchain technology, it is important to analyse new proposals and test new algorithms. Luckily, recent works [7] show that the use of quantum mechanical laws can be beneficial for the reduction of negative consequences of the trilemma and could lead to an entirely new class of blockchain architectures. However, at the same time the emerging quantum computers pose a threat to the security of modern blockchains, which are built mostly as P2P networks and assume heavy use of classical asymmetric cryptography, with public-private keys playing a pivotal role. Therefore, it is wise to explore the possibilities of integrating quantum cryptography and quantum devices into the blockchain architecture.
Therefore, blockchain protection with quantum cryptography is a sensible step in further development of this technology. This development may take many different paths. * One of them is to simply use a quantum random number generator for creating the encryption keys. Since such keys have higher degree of randomness than keys generated using any available algorithms, or obtained using any classical physical processes, this way of communication is way more secure than the communication that is currently provided. * The second is the use of quantum key distribution (QKD) devices that are available on the market and setting quantum channels between each node of the blockchain network (one to one architecture). The use of these devices for obtaining consensus significantly increases the security and ensures the validity of a newly created blocks. Also, as shown in Ref. [4], another type of consensus mechanism can be used, which, by using one-time pad encryption keys further improves the security of the blockchain. * The third method is to explore different quantum communication protocols that enable the use of other network architectures and favorably affect scalability of such network. The third development path offers the most novelty, therefore, it is elaborated upon further in this work. ## 2 Quantum Key Distribution Similarly to classical data encryption, quantum cryptography is also based on key distribution. The difference is that in quantum cryptography the key is generated by a non-deterministic purely random process. This process, which occurs in accordance with quantum mechanical laws, ensures the security of a key itself and also the security of its distribution between involved parties. For example, since a quantum state collapses when measured, the eavesdropping of transmission can be easily detected. Also, due to the no-cloning theorem [9], it is impossible to copy the data that is encoded in a quantum state. All of this makes quantum key distribution (QKD) _an information-theoretically secure_ solution to the key exchange problem. There are many protocols used for QKD. One of the best known is BB84 [10], named after Charles Bennett and Gilles Brassard who presented it in 1984. In this protocol, a secret key is encoded in photons' polarization states, randomly chosen from two available bases. Each photon represents a single bit of data. Its value is established after the transmission of a photon through the quantum channel and the measurement of its polarization state. Since the measurement is done in two bases, that are also randomly chosen, the outcome of the measurements need to be reconciled by communicating parties. It is done through a classical channel (such as phone, mail, HTTP or any similar way of communication). Unfortunately, as a result of the reconciliation procedure, information is leaked and hence, up to a half of the sent bits may need to be removed from the key. Moreover, there are also other losses, decoherence and measurement imperfections that may influence the key generation rate. Furthermore, the standard QKD protocols such as that presented above require to set quantum channels between all communicating parties. It means that for N parties, \(\frac{N(N-1)}{2}\) connections are needed. All of this, combined with the high cost of available QKD systems, results in slowdown in development of the commercial use-cases of quantum cryptography. 
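To make the reconciliation overhead and the scaling argument concrete, here is a toy sketch (idealized: no channel loss, no noise and no eavesdropper) of BB84 basis sifting, followed by the link-count comparison for an N-party network.

```python
import random

# Idealized BB84 sifting: roughly half of the transmitted bits are discarded
# because Alice and Bob chose different measurement bases for them.
random.seed(1)
n = 10000
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("RD") for _ in range(n)]   # rectilinear / diagonal
bob_bases   = [random.choice("RD") for _ in range(n)]

# Bob's result is deterministic when the bases match, random otherwise
bob_bits = [b if ab == bb else random.randint(0, 1)
            for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# sifting: keep only rounds where the publicly compared bases agree
key_a = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
key_b = [b for b, ab, bb in zip(bob_bits,   alice_bases, bob_bases) if ab == bb]

print(f"sifted key length: {len(key_a)} of {n} bits "
      f"({100 * len(key_a) / n:.1f}%), keys identical: {key_a == key_b}")

# pairwise QKD links needed for N parties vs. a single conference-key server
N = 4
print(f"point-to-point links for {N} parties: {N * (N - 1) // 2}; CKA links: {N}")
```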
Fortunately, there is a novel QKD protocol that decreases the number of connections in the system to N for N communicating parties. It is called Quantum Conference Key Agreement (CKA). ### Quantum Conference Key Agreement The Quantum Conference Key Agreement is a protocol that enables multiparty quantum key exchange. Thanks to this protocol, it is possible to exchange a key between many parties in a secure manner. The infrastructure that allows such an agreement to be reached consists of two parts. One is a classical Internet cloud that connects all parties using classical communication in classical channels, the other is a quantum server that is responsible for preparing and distributing shared qubits between all parties at once. The scheme of such an infrastructure is presented in Fig. 1.

Figure 1: Schematic visualisation of a four-node network that allows the CKA protocol to be realized. The purple dashed lines indicate classical connections, green solid lines denote quantum connections.

CKA is based on sharing N qubits with N communicating parties. These qubits are in a specific entangled state called \(|GHZ\rangle\). The CKA protocol was experimentally demonstrated in a 4-node network in 2021 [12]. As can be seen in the figure, the nodes are connected to the quantum server. They are also connected with each other by classical channels, which is not presented in the figure. The quantum server is responsible for the distribution of an entangled state, which is here of the form \[|GHZ\rangle=\frac{1}{\sqrt{2}}\Big{(}|0000\rangle+|1111\rangle\Big{)}\;.\] Here, such a quantum state is generated using two SPDC sources in Sagnac mode. The single PPLN crystal, pumped by a pulsed Ti:Sapphire laser, generates a quantum state that can be written in the form \[|\psi\rangle=\frac{1}{\sqrt{2}}\big{(}|0\rangle+|1\rangle\big{)}\;, \tag{1}\] where \(|0\rangle\) and \(|1\rangle\) are orthogonal polarization states. The generated qubits are correlated with each other using a PBS (polarization beamsplitter) and distributed using long single-mode fibers. Then, each node measures its qubit similarly to the method used for the standard BB84 protocol (collecting detections for different settings of quarter- and half-wave plates). On the one hand, although the experimental realization was presented only for 4 nodes separated from each other by 50 km at most, due to the scalability of this method it is a promising solution worth considering in the further development of quantum consensus mechanisms in distributed systems. On the other hand, due to decoherence, encoding information in the photon polarization state is not the best choice for long-range communication over optical fiber links. Therefore, other implementations of the CKA protocol should be addressed. ### Time-bin CKA One of them could be the use of time-bin encoding. Such qubits may be created in a simple way by single photons traveling along paths of different lengths. As a result, the photons' arrival times, compared to an external clock, differ and may be assigned to the early and late time bins, as presented in Figure 2. Figure 3 shows the proposed experimental setup for implementing the CKA protocol with time-bin encoding in a four-node network. Let us follow the beam path inside the setup. At the beginning, the beam from a pulsed pump laser (PL) enters a set of beamsplitters and mirrors, resulting in the creation of time bins that may be denoted as "states" \(|0\rangle\) and \(|1\rangle\) (dark green line). 
Then, the beam is focused on the first SPDC nonlinear crystal (SPDC1), which creates a polarization qubit of the form \[|\phi\rangle=\frac{1}{\sqrt{2}}\big{(}|0_{\clubsuit}0_{\clubsuit}\rangle+|1_{\clubsuit}1_{\clubsuit}\rangle\big{)}\;, \tag{2}\] i.e. a pair of orthogonally polarized photons sharing the same time-bin superposition. The beam is subsequently directed onto the second nonlinear crystal (SPDC2), where another pair of orthogonally polarized photons is generated, so the light green beam here is converted into the orange one in such a way: \[\big{(}|0_{\clubsuit}\rangle+|1_{\clubsuit}\rangle\big{)}\,\big{(}|0_{\clubsuit}0_{\clubsuit}\rangle+|1_{\clubsuit}1_{\clubsuit}\rangle\big{)}. \tag{4}\] Then, the orange beam passes through a half-wave plate that rotates the polarization state of each photon, resulting in a similar state \(\big{(}|0_{\clubsuit}0_{\clubsuit}\rangle+|1_{\clubsuit}1_{\clubsuit}\rangle\big{)}\,\). This photon beam is then separated by the dichroic mirror (DM), passes through another PBS, and is detected in detectors D1 and D2. In the other arm, the beam with vertical polarization (\(\clubsuit\)) is reflected from the PBS and passes through a HWP that changes its polarization to the horizontal one (\(\clubsuit\)). This enables the photon beam (light green line) to be converted in the same manner in both arms of the Sagnac source, resulting in a state \(\big{(}|0_{\clubsuit}0_{\clubsuit}\rangle+|1_{\clubsuit}1_{\clubsuit}\rangle\big{)}\) (orange line). Later, the beams with different polarizations are separated at the PBS and detected in D3 and D4. It should be noted here that lines of different colors mark each stage of the beam conversion (wavelength change). Finally, taking into account both conversion processes, the overall photon state before the measurement can be written as \[|\Psi\rangle=\frac{1}{\sqrt{2}}\big{(}|0_{\clubsuit}0_{\clubsuit}0_{\clubsuit}0_{\clubsuit}\rangle+|1_{\clubsuit}1_{\clubsuit}1_{\clubsuit}1_{\clubsuit}\rangle\big{)}\;. \tag{5}\] Such a photon state exhibits so-called hyperentanglement, where several properties (degrees of freedom) of the photons are engaged in the correlations. Let us also check the kind of photon states that are prepared for detection by the respective detectors: \[\begin{split} D1:&\;\;|0_{\clubsuit}\rangle+|1_{\clubsuit}\rangle\;,\\ D2:&\;\;|0_{\clubsuit}\rangle+|1_{\clubsuit}\rangle\;,\\ D3:&\;\;|0_{\clubsuit}\rangle+|1_{\clubsuit}\rangle\;,\\ D4:&\;\;|0_{\clubsuit}\rangle+|1_{\clubsuit}\rangle\;.\end{split} \tag{6}\] Since single-photon detectors are not usually sensitive to the photon polarization state (they are more sensitive to the photon wavelength), it may be concluded that in all detectors the same shared time-bin qubit is measured. 
Thanks to this, it is possible to establish the key simultaneously with all four nodes.

Figure 3: Proposal of the experimental setup for implementation of the TB CKA protocol. Symbols: PL – pulsed laser, BS – beamsplitter, M – mirror, L – lens, F – filter, DM – dichroic mirror, PBS – polarization beamsplitter, SPDC1, SPDC2 – nonlinear crystals, D1, D2, D3, D4 – single-photon detectors. Converted photon beams are marked with different color lines.

## 3 Quantum distributed consensus algorithm and FLP impossibility As mentioned earlier, the **GHZ state** is one of the quantum mechanical tools that enables distributed consensus to be achieved [7, 8]. In general, a GHZ state is an ensemble of N entangled qubits, whose state may be mathematically written as \[|GHZ\rangle=\frac{1}{\sqrt{2}}\Big{(}|0\rangle^{\otimes N}+|1\rangle^{\otimes N}\Big{)}\;.\] Similarly to the experiment presented in the previous section, each node in a network receives a single qubit and measures it, choosing "0" if the measured state is \(|0\rangle\) and "1" otherwise. Because a single measurement causes the collapse of the qubit state to \(|0\rangle\) or \(|1\rangle\) for every participant of the communication, not only for the one who made the measurement, this allows a consensus on a single bit of information to be obtained between multiple nodes. A scheme of a protocol that allows consensus to be reached over a proposed block of data \(d_{1}\) may look as follows (a minimal sketch of the classical post-processing steps is given at the end of this section): 1. the CKA protocol provides each participating node with two random bits: \(b_{1}\) and \(b_{2}\); if the CKA protocol succeeded without errors, then every participating node should have the same values of \(b_{1}\) and \(b_{2}\) as the other nodes; 2. every node performs the calculation \(b_{1}\) XOR \(b_{2}=b_{3}\); this step ensures that in later communication no one will expose the bare value of \(b_{1}\) or \(b_{2}\); 3. every node shares \(b_{3}\) with all other nodes using an authenticated classical communication channel: * nodes with the same value of \(b_{3}\) (the majority) perform an operation on the data block \(d_{1}\): \(b_{1}\) XOR \(d_{1}=d_{2}\); * nodes with a different value of \(b_{3}\) do nothing (they are temporarily excluded from adding data to the blockchain); 4. every node shares \(d_{2}\) with the others, again _via_ the classical authenticated channel; 5. every node waits until it has received the value of \(d_{2}\) from all other nodes; if all the values of \(d_{2}\) which it has received are identical, then it accepts the block \(d_{1}\) as a new block in the blockchain. It should be noted here that only the first step of the protocol requires access to the quantum channel (bits \(b_{1}\) and \(b_{2}\) come from the measurement of a quantum state that is shared between the parties). All other steps are performed using classical communication layers. The presented method of obtaining consensus provides all properties of distributed consensus [3, 6]: * **agreement**: provided by quantum mechanics (the measurement of any entangled qubit causes all other qubits to collapse into an identical state); * **validity**: provided by proposing either "0" or "1" after the measurement done by the first node; * **termination**: provided by the entanglement, i.e. the quantum state of all qubits collapses simultaneously. It is worth mentioning that faulty nodes do not influence the achievement of the consensus, because the consensus is obtained after any measurement performed by any node. Summarizing, the use of the GHZ state enables consensus to be achieved in distributed systems and, what is more, the FLP impossibility result to be circumvented. 
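The classical post-processing in steps 2-5 is plain bitwise arithmetic. The following Python sketch is an illustrative toy written for this report (the function and class names are our own, the quantum CKA step is replaced by a stub that hands every node the same two random bits, and the block \(d_{1}\) is represented as a single bit for simplicity); it shows how a block is accepted when all nodes report the same masked value.

```python
import random
from dataclasses import dataclass

@dataclass
class Node:
    b1: int
    b2: int

    def masked_bit(self) -> int:
        # Step 2: b3 = b1 XOR b2, so neither b1 nor b2 is exposed directly.
        return self.b1 ^ self.b2

    def masked_block(self, d1: int) -> int:
        # Step 3: nodes in the majority mask the proposed block with b1.
        return self.b1 ^ d1

def cka_stub(n_nodes: int, rng: random.Random):
    """Stand-in for the quantum CKA step: every node receives the same b1, b2."""
    b1, b2 = rng.randint(0, 1), rng.randint(0, 1)
    return [Node(b1, b2) for _ in range(n_nodes)]

def reach_consensus(d1: int, n_nodes: int = 4, seed: int = 1) -> bool:
    rng = random.Random(seed)
    nodes = cka_stub(n_nodes, rng)                       # step 1
    b3_values = [n.masked_bit() for n in nodes]          # step 2
    majority_b3 = max(set(b3_values), key=b3_values.count)
    d2_values = [n.masked_block(d1) for n in nodes       # steps 3-4
                 if n.masked_bit() == majority_b3]
    # Step 5: accept d1 only if every received d2 is identical.
    return len(set(d2_values)) == 1

print(reach_consensus(d1=1))   # True when the CKA step succeeded without errors
```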
## 4 CKA as a common coin for randomized protocols As stated before, reaching consensus by means of a deterministic protocol is impossible [6]. However, several randomized protocols have been invented which allow the limitations of FLP to be overcome [15, 16, 17, 18]. The new element in randomized protocols is that they require nodes to perform a coin-toss operation. In other words, they assume that each node has access to a random number generator. This novelty, no matter how useful, also constitutes a big practical challenge. The first problem is that in a randomized protocol, all participating nodes require access to perfect randomness. In a classical setting, this requirement on its own is already a big problem. However, there is more. Randomized consensus protocols require a much stronger notion, namely that all participating nodes have simultaneous access to _shared_ random bits. The concept of a random bit which is shared among many parties is known as the _common coin_ [15, 19]. **Definition 1** (Ref. [19], p. 3).: _Let \(G\) be a protocol for \(n\) players (with no input) where each player \(P_{i}\) outputs a (classical) bit \(v_{i}\in\{0,1\}\). We say that the protocol \(G\) is a **t-resilient common coin** protocol with fairness \(p>0\), if in a system with no more than \(t\) faulty nodes, \(v_{i}=b\) for any value \(b\in\{0,1\}\), with probability at least \(p\), for all good players \(P_{i}\)._ _A common coin with fairness \(1/2\) is called a **strong** common coin._ The importance of the common coin primitive stems from the fact that the ability to establish a \(t\)-resilient weak common coin between many parties immediately implies the existence of a \(t\)-resilient Byzantine Agreement protocol (Theorem 2 in Ref. [20]). Recent results by Abraham _et al._ [23] state even stronger facts: not only does a common coin imply the existence of a consensus protocol, but the resulting protocols are also time efficient. Consider the problem of asynchronous Binary Agreement with adaptive security, optimal resilience, and asymptotically optimal message complexity for a network of N parties, where \(t<\frac{N}{2}\) parties may be faulty. Then, given a strong \(t\)-resilient common coin, there exists a protocol that reaches termination in 7 communication rounds in expectation (Theorem 1.1 in Ref. [23]). However, the creation of a shared common coin (especially a strong one!) is not a trivial task. Classical methods include the use of Shamir's secret sharing scheme [15] or a Verifiable Secret Sharing scheme [20, 21]. These are complicated protocols and, moreover, they require an additional randomness source to guarantee perfect fairness of the resulting coin. Quantum physics provides a natural solution to this problem. The quantum CKA protocol is a straightforward realization of the _common coin_ concept. Moreover, due to the inherent randomness of quantum mechanics, it immediately provides the strong version of a common coin. ## 5 Other quantum protocols There have been several notable ideas for quantum consensus protocols in recent years [19, 24, 25, 26, 27]. Below, we briefly outline their main ideas, indicating the differences between the existing works and our approach. * Ben-Or, Hassidim (2005) [19]: This was the first quantum approach to the consensus problem. It requires all-to-all communication, which may not be practical for large-scale systems. Also, the paper does not discuss the practical realization of the proposed method. 
* Rahaman, Wiesniak, Zukowski (2015) [25]: This interesting paper provides a solution for the pure version of the Byzantine Agreement problem. However, the protocol is only presented for three parties and lacks a generalization to more participants. Moreover, there is no mention of an experimental realization of the proposed solution. * Luo, Feng, Zheng (2019) [26]: This approach uses \(d\)-dimensionally entangled states to reach consensus. The list of states to entangle is defined before the protocol begins. When new nodes are added, longer lists need to be generated, potentially impacting the scalability of the method. It is capable of achieving detectable Byzantine Agreement, meaning that an abort action may be required in some steps. The protocol works for up to \(N/3\) dishonest parties but, again, the paper lacks details on practical realization. Moreover, it requires the use of a trusted third party (quantum server). * Cholvi (2022) [27]: Similarly to the previous paper, this approach achieves detectable Byzantine Agreement using Q-correlated lists. This protocol works for an arbitrary number of dishonest parties, which is a strong result. However, it also lacks details on the practical realization of the method and involves the use of a quantum TTP or quantum server. In summary, the works outlined above focus primarily on theoretical aspects of reaching consensus but tend to overlook experimental realization details. The experimental part is covered in [24]; however, that paper only describes a protocol for three parties and offers no generalization to a setting with more participants. Our work, on the other hand, aims to address both the theoretical and the experimental realization aspects in order to bridge the gap between theory and practical implementation in achieving consensus. ## 6 Further work and summary All the information presented above strongly suggests that the advancement of blockchain technology can greatly benefit from leveraging the laws of quantum mechanics. Therefore, it becomes important not only to switch the type of communication from classical to quantum, but also to investigate novel models of consensus algorithms which reflect the unique challenges posed by quantum networks' topologies and architectures. In light of this, although certain amendments are necessary for its application in commercial products, the CKA protocol emerges as an interesting solution to the consensus problem in distributed systems. This is particularly true given its potential to challenge the long-standing FLP impossibility result, as demonstrated above. It is important to note that, similarly to some works mentioned in Section 5, our solution requires the use of a quantum server, meaning it requires some trust. In the future, we plan to explore the possibility of reformulating our approach in a way where the generation of entanglement can be performed by the participating nodes themselves (e.g. in each round a randomly chosen node in the network acts as the quantum server). This would lead to a truly decentralized consensus protocol. Moreover, considering the recent advancements in developing quantum networks [28] and simulators [29, 30], we aim to adapt our protocol in a way that would make it suitable for testing within such frameworks. This would allow us to attain practical validation of its efficacy. ### Author contributions MMS conceived the general idea for the manuscript and formulated the main problem. 
MS contributed to the development of the concept within the classical part and acquired funding for the research. MMS authored Sections 1, 2 and 3, formulated the protocol in Section 3 and created Figures 2 and 3. MK authored Sections 4 and 5, and created Figure 1. All authors reviewed and validated the entire manuscript, and approved the submission of the manuscript. ### Acknowledgements Research reported in this paper was partially funded by the Polish Agency for Enterprise Development, grant PARP-POPW.01.01.02-06-0031/21, and by the National Centre for Research and Development (Action 1.3/1.3.1) as part of the Bridge Alfa investment project managed by LT Capital VC fund. We wish to thank our grantors and investors for their support.
2303.03290
AmQA: Amharic Question Answering Dataset
Question Answering (QA) returns concise answers or answer lists from natural language text given a context document. Many resources go into curating QA datasets to advance robust models' development. There is a surge of QA datasets for languages like English, however, this is not true for Amharic. Amharic, the official language of Ethiopia, is the second most spoken Semitic language in the world. There is no published or publicly available Amharic QA dataset. Hence, to foster the research in Amharic QA, we present the first Amharic QA (AmQA) dataset. We crowdsourced 2628 question-answer pairs over 378 Wikipedia articles. Additionally, we run an XLMR Large-based baseline model to spark open-domain QA research interest. The best-performing baseline achieves an F-score of 69.58 and 71.74 in reader-retriever QA and reading comprehension settings respectively.
Tilahun Abedissa, Ricardo Usbeck, Yaregal Assabie
2023-03-06T17:06:50Z
http://arxiv.org/abs/2303.03290v2
# AmQA: Amharic Question Answering Dataset ###### Abstract Question Answering (QA) returns concise answers or answer lists from natural language text given a context document. To advance the development of robust models, large amounts of resources go into curating QA datasets. There is a surge of QA datasets for languages like English; however, this is not the case for Amharic. Amharic, the official language of Ethiopia, is the second most spoken Semitic language in the world. There is no published or publicly available Amharic QA dataset. Hence, to foster research in Amharic QA, we present the first Amharic QA (AmQA) dataset. We crowdsourced 2628 question-answer pairs over 378 Wikipedia articles. Additionally, we run an XLM-R\({}_{\text{Large}}\)-based baseline model to spark open-domain QA research interest. The best-performing baseline achieves an F-score of 69.58 and 71.74 in reader-retriever QA and reading comprehension settings respectively. Question Answering, Amharic Question Answering, Dataset, QA Dataset 1 Footnote 1: Amharic is written using the Ge’ez script, known as Fidel ## 1 Introduction The task of Question Answering (QA) is to find an accurate answer to a natural language question from a certain underlying data source (Usbeck et al., 2016). To get as concise an answer as possible for a natural language question, a plethora of QA approaches has been proposed (Chen and Yih, 2020). Curating standard QA datasets is the established way to evaluate models' question understanding ability and answer accuracy, and to stimulate research in the field (Cambazoglu et al., 2020; Kwiatkowski et al., 2019; Rogers et al., 2021). The existing QA datasets in different languages are commonly curated using either crowdsourcing or automatic generation approaches. In the first approach, crowd workers formulate question-answer pairs over a given context. This allows for creating high-quality question-answer pairs, but it is very expensive. In the latter approach, question-answer pairs are formulated using language generation models, machine translation, or manual/learned templates. The main challenge in automatic generation is gold answer extraction, which is mostly accomplished using existing QA models. However, obtaining a dependable model, as accurate as a human, that can produce correct answers is challenging. So, to minimize the generation of trivial and ungrammatical question-answer pairs, aside from improving the performance of the generation models, experts paraphrase the generated question-answer pairs (Cambazoglu et al., 2020). The distinction between the existing datasets lies in the question types (factoid vs non-factoid) and the answer formulation sub-task (extractive vs abstractive). Factoid extractive QA datasets like SQuAD (Rajpurkar et al., 2016) pose the challenge of measuring a QA model's competency in identifying the span of an answer from a context for factoid questions. Factoid questions like 'What is the capital city of Ethiopia?' (Answer: Addis Ababa) seek a factual answer that appears as a named entity such as a date, location, proper noun, other short noun phrase, or short sentence. In contrast, abstractive QA datasets contain questions whose answer is formulated from the context rather than copied directly from it (Fan et al., 2019). Recently, the QA field of study has been gaining many datasets in mono-, cross-, and multi-lingual settings (Asai et al., 2021; Clark et al., 2020; Gupta et al., 2018; Lewis et al., 2020; J. Liu et al., 2019). However, Amharic1 is not yet included in the map of QA datasets. 
Specific to Amharic, there are attempts to develop datasets for other Natural Language Processing (NLP) tasks, such as sentiment analysis (Yimam et al., 2020), a morphologically annotated corpus (Yreshambel et al., 2020), a contemporary Amharic corpus (Gezmu et al., 2018), and parallel corpora for machine translation (Abate et al., 2018). But still, there is no publicly available dataset that can be used for training and/or testing Amharic QA models. In Amharic, interrogative sentences can be formulated using information-seeking pronouns meaning 'what', 'when', 'who', 'where', 'which', etc., and prepositional interrogative phrases meaning 'why', 'by what', etc. Besides, a verb phrase can be used to pose questions (Getahun, 2013; Baye, 2009). As shown in Figure 1, the AmQA dataset contains context, question, and answer triplets (also see Figure 3 in Appendix A). The contexts are articles collected from Amharic Wikipedia1. The question-answer pairs are created by crowd workers using the Haystack2 QA annotation tool. 2628 question and answer pairs are created from 378 documents. For example, for the question given in Figure 1, the answer is the Amharic span shown in the figure. The lack of standard public Amharic QA datasets, along with the scarcity of add-in Amharic Natural Language Processing (NLP) tools like part-of-speech taggers, stemmers, anaphora resolvers, etc., has hindered the development of Amharic QA approaches. Hence, in this work, we provide an AmQA dataset that can be used as a testbed for Amharic QA models as well as for cross-lingual and/or multi-lingual QA models. ## 3 The AmQA Dataset The AmQA dataset is created following three phases: article gathering, crowdsourcing question-answer pairs, and question-answer pair validation. ### Article collection and cleaning The Amharic articles used as contexts are collected from the Amharic Wikipedia dump4 file, and those articles whose sizes are greater than 2 KB are kept. Articles under the 'proverb' and 'food preparation' categories are removed. Proverb articles are favorable for creating reasoning questions. Besides, 'food preparation' articles mostly contain steps of the preparation of food, which are suitable for creating 'how is the step...' and 'list the steps | ingredients added to...' questions. In both cases, the answer may not even be a span of text in the article. The remaining articles after filtration are further pre-processed by the wiki_dump_reader5 tool to get clean texts. At last, since long articles do not motivate exhaustive question creation, each article is chunked using the sub-topics in it (a rough sketch of this filtering and chunking is given below). Then, we randomly select 378 cleaned articles. Footnote 4: [https://dumps.wikimedia.org/amwiki/20210801/](https://dumps.wikimedia.org/amwiki/20210801/) last accessed Aug. 18, 2021 Footnote 5: [https://pypi.org/project/wiki-dump-reader/](https://pypi.org/project/wiki-dump-reader/) ### Question-Answer Pair Crowdsourcing In the question-answer pair formulation, the cleaned contexts along with sample examples are distributed to native Amharic-speaking crowd workers who have at least a Bachelor's degree. Training6 is given on how to create questions that can be answered in each context. Since the articles are randomly selected from Wikipedia, the crowd workers are advised to report when they find an article with offensive content. The crowd workers are free to formulate as many questions as possible from a given context. Footnote 6: We follow the guideline given in the annotation tool handbook. 
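The filtering and chunking mentioned in the article-collection step can be sketched in a few lines of Python. The snippet below is our own illustration written for this report, not the authors' code: it assumes articles are already available as (title, text) pairs, uses the title as a stand-in for the category check, and mirrors the 2 KB size filter and sub-topic chunking described in the text.

```python
import re

EXCLUDED_CATEGORIES = ("proverb", "food preparation")   # categories dropped in the paper

def keep_article(title: str, text: str, min_bytes: int = 2048) -> bool:
    """Keep articles larger than ~2 KB that do not belong to excluded categories."""
    if len(text.encode("utf-8")) <= min_bytes:
        return False
    return not any(cat in title.lower() for cat in EXCLUDED_CATEGORIES)

def chunk_by_subtopic(text: str):
    """Split a cleaned article into context chunks at sub-topic headings (== Heading ==)."""
    parts = re.split(r"\n?==+[^=\n]+==+\n", text)
    return [p.strip() for p in parts if p.strip()]

# Usage with already-extracted and cleaned (title, text) pairs:
articles = [("Sample article", "== Intro ==\n" + "text " * 600)]   # dummy data
contexts = [chunk
            for title, text in articles
            if keep_article(title, text)
            for chunk in chunk_by_subtopic(text)]
print(len(contexts))
```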
### Question-Answer Pair Validation and Annotation The validation of the formulated question-answer pairs concerns their correctness and completeness. By correctness we mean that the posed questions should be answerable from the given context and their answers should be precise. For example, a question like 'How many parks are there in our country?' is ambiguous due to the possessive adjective 'our'; such questions are paraphrased according to the context. Questions that do not explicitly state the subject/object are paraphrased. Ambiguous questions, overly long questions, and questions with non-consecutive string answers are excluded from the annotation. Then, the validated question-answer pairs are annotated using the Haystack7 annotation tool. The annotation tool provides the annotated question-answer pairs as JSON files in SQuAD format. Since the annotation tool introduces the 'in' character in the exported file, it is removed. Footnote 7: Haystack Annotation Tool (deepset.ai) ## 4 Dataset Analysis This section presents the analysis of the dataset. It also provides different statistics that show the features of the dataset. ### Data statistics Table 1 shows the number of articles, questions, and answers along with the average word length of documents, questions, and answers. The contexts in the AmQA dataset contain 172 words on average. The questions' average word length is 9.22, whereas the answers are short, with an average word length of 2.66. \begin{table} \begin{tabular}{l l l l} \hline \hline & **Article** & **Question** & **Answer** \\ \hline size & 378 & 2628 & 2628 \\ word len (avg) & 172.07 & 9.22 & 2.66 \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics of the AmQA dataset: number of items and average word lengths. ### Questions Expected Answer Type To compute the percentage of the expected answer types, 300 questions are selected randomly. Then, the questions are categorized into person, location, time, organization, number, description, and other classes based on the interrogative terms and the answer phrase. As shown in Table 2, we found that most of the questions are about Location, Number, and Time, where each type has above 18% coverage. Description questions take 13% of the share, and questions that look for a person's name as an answer make up 14.38%. For 10.7% of the questions, the expected answer type is an entity that cannot be assigned to the existing categories; these fall into the 'OTHER' group. Among the question types, list (3.01%) and organization (2.67%) are the smallest. In addition, Figure 2 (see Appendix A) shows the distribution of the interrogative terms over the randomly selected questions. ## 5 Experiment ### Baseline Model Since the AmQA dataset contains a set of contexts along with question-answer pairs, it can be considered a reading comprehension (RC) task (Dzendzik et al., 2021; Lewis et al., 2020). That is, given a question Q and a context consisting of words, the goal of the model is to identify a word or group of consecutive words that answers question Q. Hence, based on this assumption we have set a baseline value for AmQA using an XLM-R (Conneau et al., 2020) based QA model that was fine-tuned on the SQuAD 2.0 dataset (Rajpurkar et al., 2018). The Cross-Lingual Language Model-RoBERTa (XLM-R) is a multilingual pre-trained transformer model based on the RoBERTa architecture and trained using 2.5 TB of data across 100 languages including Amharic (Conneau et al., 2020; Y. Liu et al., 2019). 
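As a concrete illustration of querying such an extractive reader, the snippet below is a minimal sketch using the Hugging Face transformers library with the publicly available checkpoint referenced in footnote 9 (the example question and context are our own and are given in English for readability; in AmQA both would be Amharic text).

```python
from transformers import pipeline

# Extractive QA reader fine-tuned on SQuAD 2.0 (model referenced in footnote 9).
reader = pipeline("question-answering", model="deepset/xlm-roberta-large-squad2")

# Illustrative English example; in AmQA both question and context are Amharic.
result = reader(
    question="What is the capital city of Ethiopia?",
    context="Addis Ababa is the capital and largest city of Ethiopia.",
)
print(result["answer"], result["score"])   # predicted span and its confidence
```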
On the other hand, since retriever-reader-based QA models first retrieve relevant passages and then extract the start and end positions of the answer from them, we have implemented a retriever-reader (RR) QA model using the Farm Haystack8 open-source framework. For the retriever part we have used BM25, and XLM-R\({}_{\text{Large}}\)9 is used as the reader. Footnote 8: [https://haystack.deepset.ai/](https://haystack.deepset.ai/) Footnote 9: [https://huggingface.co/deepset/xlm-roberta-large-squad2](https://huggingface.co/deepset/xlm-roberta-large-squad2) ### Evaluation The evaluation stage measures the performance of a QA model (F-score), the accuracy of the returned answers (EM), as well as the difficulty level of a QA dataset (Clark et al., 2020; Kwiatkowski et al., 2019; Usbeck et al., 2016). A baseline value over the AmQA dataset is computed using F-score and exact match (EM) metrics. As shown in Table 3, in the reading comprehension setting the XLM-R\({}_{\text{Large}}\) F1 score is 71.74, whereas the XLM-R\({}_{\text{Base}}\) F1 score is 64.69. This shows that XLM-R\({}_{\text{Large}}\) performs better than the XLM-R\({}_{\text{Base}}\) model. In addition, since the F1 score of XLM-R\({}_{\text{Large}}\) on the AmQA dataset is comparable to the average F1 score of XLM-R on the MLQA dataset for seven different languages (70.7), we have decided to use it as the reader component in the RR QA model. From our observation, we have noticed that some answers returned by the models contain the gold answer but have affixes, additional strings, unnecessary blank spaces, and/or punctuation. So, we have created a pre-processor that normalizes characters and removes punctuation, quotation marks, and spaces (an illustrative sketch of the metrics and this normalization step is given at the end of the paper). As a result, the RR QA model shows some improvement with the pre-processor. ## 6 Summary & Outlook In this paper, we presented an Amharic Question Answering dataset that contains triplets of documents, questions, and answers curated using Amharic Wikipedia. In addition, we have set baseline values in reading comprehension and retriever-reader settings. We hope the introduction of the AmQA dataset will stimulate researchers to test monolingual and/or multilingual QA models. Besides, if an equivalent translation of the curated data is obtained, this data can be used for cross-lingual QA models. ### Limitations AmQA is only a small dataset due to the expensive labor involved in creating it. Thus, data-intensive methods are disadvantaged. Also, the annotations were done by a limited number of human annotators and thus may have inherent biases or systematic annotation errors. We will investigate this in future funded work on low-resource languages. Also, the choice of baselines was limited by available computing resources. There might be out-of-the-box baselines, such as Hugging Face's BLOOM, that perform better.
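To make the evaluation and the answer pre-processing step concrete, the following Python sketch (our own illustration, not the authors' released code) implements a SQuAD-style exact match and token-level F1 together with a simple normalizer that strips punctuation, quotation marks, and extra spaces, in the spirit of the pre-processor described in the Evaluation section.

```python
import re
from collections import Counter

def normalize(text: str) -> str:
    """Lower-case, drop punctuation/quotation marks (incl. Ethiopic), collapse whitespace."""
    text = re.sub(r"[\"'“”‘’«»!?.,:;።፣፤]", " ", text.lower())
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> float:
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction: str, gold: str) -> float:
    pred_tokens, gold_tokens = normalize(prediction).split(), normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Addis Ababa.", "Addis Ababa"), f1_score("in Addis Ababa", "Addis Ababa"))
```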
2305.07378
Surfacing Biases in Large Language Models using Contrastive Input Decoding
Ensuring that large language models (LMs) are fair, robust and useful requires an understanding of how different modifications to their inputs impact the model's behaviour. In the context of open-text generation tasks, however, such an evaluation is not trivial. For example, when introducing a model with an input text and a perturbed, "contrastive" version of it, meaningful differences in the next-token predictions may not be revealed with standard decoding strategies. With this motivation in mind, we propose Contrastive Input Decoding (CID): a decoding algorithm to generate text given two inputs, where the generated text is likely given one input but unlikely given the other. In this way, the contrastive generations can highlight potentially subtle differences in how the LM output differs for the two inputs in a simple and interpretable manner. We use CID to highlight context-specific biases that are hard to detect with standard decoding strategies and quantify the effect of different input perturbations.
Gal Yona, Or Honovich, Itay Laish, Roee Aharoni
2023-05-12T11:09:49Z
http://arxiv.org/abs/2305.07378v1
# Surfacing Biases in Large Language Models ###### Abstract Ensuring that large language models (LMs) are fair, robust and useful requires an understanding of how different modifications to their inputs impact the model's behaviour. In the context of open-text generation tasks, however, such an evaluation is not trivial. For example, when introducing a model with an input text and a perturbed, "contrastive" version of it, meaningful differences in the next-token predictions may not be revealed with standard decoding strategies. With this motivation in mind, we propose Contrastive Input Decoding (CID): a decoding algorithm to generate text given two inputs, where the generated text is likely given one input but unlikely given the other. In this way, the contrastive generations can highlight potentially subtle differences in how the LM output differs for the two inputs in a simple and interpretable manner. We use CID to highlight context-specific biases that are hard to detect with standard decoding strategies and quantify the effect of different input perturbations. ## 1 Introduction Large pre-trained language models (LMs) have revolutionized natural language processing in recent years (Radford et al., 2019; Raffel et al., 2020). However, their practical applicability remains hindered by their extreme sensitivity to minor input perturbations (natural and adversarial), including ones that humans deem insignificant (Belinkov and Bisk, 2017; Sun et al., 2018). Consider using an LM to answer medical questions, such as _"What happens if listeria is left untreated?"_, as in the HealthSearchQA dataset (Singhal et al., 2022). What is the effect of specifying demographic information (e.g. _"left untreated in men?"_ vs _"left untreated in women?"_)? In classification tasks (e.g. select one option from a list), we could directly evaluate whether the model's prediction is changed. But in open-text generation tasks, it is not directly clear how to test the impact of the perturbation, as the relevant outcome space is now huge.1 We could deterministically generate several likely responses given both inputs (e.g. using greedy decoding or beam search) and compare them, but this may only scratch the surface: meaningful differences in model behaviour may not be revealed with this comparison, which only looks at a small set of highly probable sequences. Such differences, while subtle, are important to understand and quantify (for example, a malicious user may attempt to amplify them to trigger a problematic behaviour even with greedy decoding methods). Alternatively, we could stochastically generate likely responses given each input (e.g. using temperature sampling), but then it is less clear how to compare the outputs we obtained with each input. Footnote 1: e.g. Med-PaLM generates a one-paragraph answer to this question; see Singhal et al. (2022), Table 10. Beyond the issues of fairness and robustness, it was shown that success on many well-defined tasks is highly sensitive to small changes in phrasing (Srivastava et al., 2022; Efrat et al., 2022), especially now that "prompt-engineering" became a standard practice. Given that, understanding the impact of input/prompt modifications is highly important. In this work, we take a step towards addressing these challenges by introducing a new decoding strategy: Contrastive Input Decoding (CID). 
Our decoding algorithm accepts two inputs: a regular input \(x\) and a "contrastive" input \(x^{\prime}\), with the objective of generating sequences that are likely given \(x\) but unlikely given \(x^{\prime}\). These contrastive generations highlight the differences in how the model treats these two inputs in an interpretable manner. CID is parameterized by a hyper-parameter \(\lambda\in\mathbb{R}\) that controls the degree of contrasting (\(\lambda=0\) recovers standard, non-contrastive, decoding). In this way, increasing \(\lambda\) can be used to surface differences that may otherwise be difficult to detect (Figure 1). We demonstrate two applications for CID. **(1) Surfacing context specific biases in auto-regressive LMs:** In Section 4 we show how CID can be used to audit LMs for fairness properties such as counterfactual fairness Kusner et al. (2017), sometimes revealing biases that are otherwise difficult to detect; **(2) Quantifying the effect of different input perturbations:** Even if sensitivity to minor input modifications is eventually unavoidable at the language modeling level, an important part of establishing trust is ensuring the magnitude of the sensitivity aligns with expectations of users. In Section 5 we show how CID can be used to quantify the relative effect of different perturbations types (e.g. syntactic vs. semantic). ## 2 Related work **Robustness to input perturbations.** Testing the sensitivity of neural language models to different input perturbations has been studied both from the perspective of model fairness (when the input perturbations correspond to individuals) and model robustness (when the perturbations correspond to conditions which the system may likely experience at test time, such as spelling mistakes or even adversarial modifications). For example, Prabhakaran et al. (2019) evaluate the sensitivity of text classification models to perturbations that replace one real-world entity with another entity of the same type and Moradi and Samwald (2021) evaluate the robustness to various types of character-level and word-level perturbations. Common to all of these works is that the robustness is evaluated w.r.t downstream classification tasks and not directly for text generation, as is our focus here. **Decoding with a contrastive flavour** was previously suggested as a means to improve the quality of text generation. Schick et al. (2021) show that by contrasting the input from a prompt that is crafted to induce toxic text generation (e.g., "This text is racist"), LLMs generate less toxic text. Similarly, Li et al. (2022) show that contrasting the predictions of two different models ("amateur" and "expert" models) on the same input produces higher-quality generations. Our approach is inspired by this line of work but conceptually different: we contrast the input from a perturbed version of it, with the goal of understanding the impact of the perturbation (rather than improving generation quality). **Contrastive explanations** are used in Jacovi et al. (2021) to interpret text classification models and in Yin and Neubig (2022) for interpretable language modeling. These works differ from ours since their objective is to explain, given a _single_ input, why the model preferred \(y\) to \(y^{\prime}\); i.e., contrasting is w.r.t outcomes, not inputs. 
## 3 Method Given a pre-trained autoregressive language model \(M\) and a sequence of tokens \(w_{1},\dots,w_{k}\) in the vocabulary \(V\), let \(p_{M}(w|w_{1},\dots,w_{k})\) denote the probability that the language model assigns to \(w\in V\) being the next token. Decoding is the process of iteratively generating one token at a time by conditioning on the preceding context (the input text, and any text generated by the process so far). For example, greedy decoding simply selects the next token as the argmax of \(p_{M}\). We propose a _contrastive_ decoding procedure, that uses an additional contrastive input to inform the generation. Let \(x=x_{1}\cdots x_{k}\) be an input text for which we want to produce a continuation, and let \(x^{\prime}=x^{\prime}_{1}\cdots x^{\prime}_{k^{\prime}}\) denote the contrastive input. Intuitively, our objective is to generate text that is likely under \(x\) but less likely under \(x^{\prime}\). We propose to do this by using the contrastive input to modify the next-token distribution, as follows. Let \(x_{k+1},\dots,x_{k+i}\) denote the tokens generated so far (in the beginning of the decoding process \(i=0\)). At this point, we have two probability distributions over the vocabulary \(V\); we use \(\Delta(w;x,x^{\prime})\) to denote their difference: \(\Delta(w;x,x^{\prime})=p_{M}(w|x_{1}\cdots x_{k},x_{k+1},\dots,x_{k+i})-p_{M} (w|x^{\prime}_{1}\cdots x^{\prime}_{k^{\prime}},x_{k+1},\dots,x_{k+i})\).2 When \(x,x^{\prime}\) are clear from context we use \(\Delta(w)\) as shorthand notation. Denoting \(x_{\text{pre}}=x_{1}\cdots x_{k+i}\), we propose generating continuations by modifying \(p_{M}\) into \(\tilde{p}_{M}\) via the following multiplicative modification: Footnote 2: Note that this means that the first \(i\) generated tokens are appended as context to both the original and the contrastive input upon generating the \(i+1\)-th token. This ensures that the original context and contrastive context that we condition on do not continuously diverge, but always differ only in the ways the original and contrastive inputs differ \[\tilde{p}_{M}(w|x_{\text{pre}})\propto\alpha(\Delta(w))\cdot p_{M}(w|x_{\text{ pre}}) \tag{1}\] Here, \(\alpha:[-1,1]\rightarrow(0,\infty)\) acts as a scaling function, that multiplicatively transforms the original probability \(p_{M}(w|x_{\text{pre}})\) based on the difference \(\Delta(w)\). We use \(\alpha(v)=\exp(\lambda\cdot v)\). This ensures that the probability \(\tilde{p}_{M}(w)\) (i) remains unchanged for tokens that are equally likely under both the original and contrastive input (\(\Delta(w)\approx 0\)); (ii) decreases for tokens that are more likely under the contrastive input (\(\Delta(w)\ll 0\)); (iii) increases for tokens that are more likely under the original input (\(\Delta(w)\gg 0\)). Here, \(\lambda\in[0,\infty)\) acts as a hyper-parameter that can be used to control the magnitude of the modifications, with \(\lambda=0\) corresponding exactly to the standard (non-contrastive) decoding procedure since \(\tilde{p}_{M}\equiv p_{M}\). See Figure 5 in Appendix B for a visualization. We define Contrastive Input Decoding \(\texttt{CID}(x;x^{\prime},\lambda)\) as decoding3 w.r.t \(\tilde{p}_{M}\), as per Equation (1) and the above choice of \(\alpha\). Footnote 3: The specific decoding strategy (how to select a token based on the next-token distribution) can be chosen depending on the target application; in the rest of the manuscript we simply use greedy decoding (selecting the argmax token). 
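A minimal sketch of the procedure in Eq. (1) with a Hugging Face causal LM is given below. This is our own illustration rather than the authors' implementation: the helper names are ours, we use the small `gpt2` checkpoint for brevity (the paper reports GPT2-large and flan-T5-large), and token selection is greedy as in footnote 3. Setting \(\lambda=0\) recovers standard greedy decoding.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def next_token_probs(text: str) -> torch.Tensor:
    ids = tok(text, return_tensors="pt").input_ids
    logits = model(ids).logits[0, -1]             # next-token logits after `text`
    return torch.softmax(logits, dim=-1)

@torch.no_grad()
def cid_generate(x: str, x_prime: str, lam: float = 5.0, max_new_tokens: int = 20) -> str:
    generated = ""
    for _ in range(max_new_tokens):
        p_x  = next_token_probs(x + generated)         # p_M(w | x, generated so far)
        p_xp = next_token_probs(x_prime + generated)   # p_M(w | x', generated so far)
        delta = p_x - p_xp
        p_tilde = torch.exp(lam * delta) * p_x         # Eq. (1) with alpha(v) = exp(lambda * v)
        w = int(torch.argmax(p_tilde))                 # greedy selection (footnote 3)
        generated += tok.decode([w])
        if w == tok.eos_token_id:
            break
    return generated

x  = "John, a software developer, failed his interview at a major tech company because he"
xp = "Ahmed, a software developer, failed his interview at a major tech company because he"
print(cid_generate(x, xp, lam=0.0))    # lambda = 0: standard greedy continuation
print(cid_generate(x, xp, lam=10.0))   # larger lambda surfaces contrastive continuations
```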
## 4 Understanding context-specific biases **Motivation**. Existing approaches for auditing neural language models for biases have focused on auditing the internal representations of models (Bolukbasi et al., 2016; Caliskan et al., 2017; Guo and Caliskan, 2021) or highlighting differences across socially-salient subgroups in various downstream classification tasks (Zhao et al., 2018; DeArteaga et al., 2019; Cao and Daume III, 2021). These are not directly applicable to settings in which the objective is to understand biases involved with using LMs in a free-text, generative mode. For example, consider using the LM to answer commonly searched consumer medical questions (Singhal et al., 2022). To evaluate notions like counterfactual fairness (Kusner et al., 2017), we may wish to understand how modifications of certain demographic attributes impact the model's behaviour. As discussed, this is challenging; it is not clear that we necessarily anticipate the model's response should be invariant under the intervention; even if we restrict our attention to inputs for which we do have such knowledge, there could be subtle differences in the model behaviour that are not manifested by comparing the most likely responses. **Experimental setup.** We demonstrate how CID can be used to surface context-specific biases in an interpretable way. We root the investigation in a specific context (e.g. biases in tech) by considering specific input templates, e.g. "_<name>_, _a software developer, failed his (her) interview at a major tech company because he (she)_". Following Maudslay et al. (2019), we intervene on _<name>_ as a way of estimating gender and racial biases for this specific input. For a single pair of names - e.g. John and Ahmed - we obtain model continuations using both greedy decoding and CID. We examine fairness at the level of demographic groups by forming six name groups using the 10 most common male and female names in three countries (US, Mexico and Egypt, Wikipedia (2023)) and examining the most common continuations, out of all 100 combinations of name pairs, for different values of \(\lambda\). Following existing anti-discrimination laws in the context of employment, model continuations are considered biased if the justification is based on a person's origin, race, color, religion, disability status, sex, familial status, birthplace, culture, language or appearance.

Figure 1: **Effect of \(\lambda\): Comparing continuations produced using standard greedy decoding and CID for varying \(\lambda\).**

**Results.** We report results for flan-T5-large (780M parameters; Chung et al., 2022) and GPT2-large (774M parameters; Radford et al., 2019). For each model and pair of groups (e.g. US Male and Egypt Male names4) we report the fraction of continuations that were agreed by raters to be biased according to the criteria mentioned above (Figure 2); see Figure 3 for qualitative examples of common continuations. Together, our results reveal that for GPT, meaningful differences are evident already with greedy decoding, and its continuations already tend to be biased. T5, on the other hand, is more fair: greedy decoding does not produce biased continuations, and the continuations are similar across groups. However, for the minority group, CID surfaces differences mapping to known stereotypes. Footnote 4: The results are consistent across different group combinations; here we focus on a single pair, and additional combinations can be found in Appendix C. 
## 5 Quantifying perturbation effect **Motivation.** While the sensitivity of LMs to even minor input modifications may be unavoidable, users may reasonably expect that some perturbations (e.g. spelling mistakes or adding irrelevant information) have less impact than others. Testing this in an open-ended generation mode requires quantifying the impact of different perturbations. As we've seen in Section 4, directly comparing the generated continuations (e.g. using a form of semantic similarity) is potentially too coarse. We propose to use CID for this purpose, as follows. Consider a pair (\(x\), \(x^{\prime}\)) of the original and perturbed input. Intuitively, \(\lambda\) serves as a "knob" for driving the contrastive continuations \(\texttt{CID}(x;x^{\prime},\lambda)\) and \(\texttt{CID}(x^{\prime};x,\lambda)\) further apart. Thus, we expect that the semantic similarity between the two continuations will _decrease_ as \(\lambda\)_increases_. We can then quantify the effect of the input perturbation as \(\lambda^{\star}=\arg\min_{\lambda}[\textbf{sim}(x+\texttt{CID}(x;x^{\prime}, \lambda),x+\texttt{CID}(x^{\prime};x,\lambda))<\tau]\), where **sim** is a measure of semantic similarity and \(\tau\) is a threshold of choice. Intuitively, \(\lambda^{\star}\in[0,\infty)\) is the smallest amount of contrasting required to "push" the continuations sufficiently far apart: low values represent input perturbations with a strong effect (with \(\lambda^{\star}=0\) implying the effect is noticeable already with standard decoding); the larger \(\lambda^{\star}\) is, the weaker the effect. **Experimental setup and results**. We use Sentence-BERT (Reimers and Gurevych, 2019) to implement the similarity measure.5 We consider a specific context by fixing a collection of input sentences and define a family of different input perturbations replacing words with their synonyms, adding mostly irrelevant information, and modifications that are more semantic in nature (see the full list in Figure 8 in Appendix D). Footnote 5: As a sanity check, we verify that the similarity is indeed monotonically decreasing in \(\lambda\) (when averaged over multiple different input perturbations); see Figure 9 in the Appendix. **Results.** For each perturbation we compute its \(\lambda^{\star}\), and aggregate the results over the different types of perturbations; see Figure 4. The results reveal, for example, that T5 is quite sensitive to syntactic perturbations. Figure 4: Distribution of \(\lambda^{\star}\) values w.r.t \(\tau=0.85\) per perturbation type (flan-T5-large). Perturbation types are sorted by median value, with boxes corresponding to the quantile range \([0.25,0.75]\). Figure 3: Common continuations using regular decoding (grey) and CID (red). For GPT, meaningful differences are evident with greedy decoding; T5 is more fair, yet CID surfaces biases for the minority group. Figure 2: Fraction of biased contrastive continuations for T5 and GPT. ## 6 Conclusions We proposed Contrastive Input Decoding (CID), a decoding procedure that can be used with any pretrained LM to produce continuations likely for the input text but unlikely for a given _contrastive_ input text. Our focus was on using CID to audit fairness and robustness of pretrained LMs. A promising application we did not explore is using CID to streamline how LMs are used in practice. 
For example, contrastive techniques such as CID could aid prompt engineering by equipping developers with an interpretable way of understanding the impact of modifications to the task description.
2304.08362
NvDEx-100 Conceptual Design Report
Observing nuclear neutrinoless double beta (0vbb) decay would be a revolutionary result in particle physics. Observing such a decay would prove that the neutrinos are their own antiparticles, help to study the absolute mass of neutrinos, explore the origin of their mass, and may explain the matter-antimatter asymmetry in our universe by lepton number violation. We propose developing a time projection chamber (TPC) using high-pressure 82SeF6 gas and top-metal silicon sensors for read-out in the China Jinping Underground Laboratory (CJPL) to search for neutrinoless double beta decay of 82Se, called the NvDEx experiment. Besides being located at CJPL with the world's thickest rock shielding, NvDEx combines the advantages of the high Qbb (2.996 MeV) of 82Se and the TPC's ability to distinguish signal and background events using their different topological characteristics. This makes NvDEx unique, with great potential for low-background and high-sensitivity 0vbb searches. NvDEx-100, a NvDEx experiment phase with 100 kg of SeF6 gas, is being built, with plans to complete installation at CJPL by 2025. This report introduces 0vbb physics, the NvDEx concept and its advantages, and the schematic design of NvDEx-100, its subsystems, and background and sensitivity estimation.
X. Cao, Y. Chang, K. Chen, E. Ciuffoli, L. Duan, D. Fang, C. Gao, S. K. Ghorui, P. Hu, Q. Hu, S. Huang, Z. Huang, L. Lang, Y. Li, Z. Li, T. Liang, J. Liu, C. Lu, F. Mai, Y. Mei, H. Qiu, X. Sun, X. Tang, H. Wang, Q. Wang, L. Xiao, M. Xiao, J. Xin, N. Xu, P. Yang, Y. Yang, Z. Yang, Z. Yu, D. Zhang, J. Zhang, C. Zhao, D. Zhu
2023-04-17T15:22:58Z
http://arxiv.org/abs/2304.08362v2
# N\(\nu\)DEx-100 Conceptual Design Report ###### Abstract The measurement of nuclear neutrinoless double-beta (0\(\nu\beta\beta\)) decay would be a revolutionary result in particle physics. The observation of such a decay would prove that neutrinos are their own antiparticles, help to study the absolute mass of neutrinos, explore the origin of their mass, and may explain the matter-antimatter asymmetry in our universe by the violation of lepton number. We propose to develop a time projection chamber (TPC) using high-pressure \({}^{82}\)SeF\({}_{6}\) gas and the top-metal silicon sensors for read-out in the China Jinping Underground Laboratory (CJPL), to search for neutrinoless double-beta decay of \({}^{82}\)Se, called the N\(\nu\)DEx experiment. Besides being located at CJPL with the world's deepest rock shielding, N\(\nu\)DEx combines the advantages of the high Q value (2.996 MeV) of \({}^{82}\)Se and the TPC's ability to distinguish signal and background events using their different topological characteristics. These give N\(\nu\)DEx unique and great potential for low background and high sensitivity. N\(\nu\)DEx-100, the N\(\nu\)DEx experiment phase with 100 kg of SeF\({}_{6}\) gas, is being built and is planned to be completed and installed at CJPL around the year 2025. This report will introduce the 0\(\nu\beta\beta\) physics, the N\(\nu\)DEx concept and its advantages, the schematic design of N\(\nu\)DEx-100 and its sub-systems, as well as the background and sensitivity estimation for it. neutrinoless double-beta decay, time projection chamber, \({}^{82}\)SeF\({}_{6}\), China Jinping Underground Laboratory ## 1 The Physics The Standard Model (SM) of particle physics is an important cornerstone of physics and even of the entire natural sciences, and has been successfully tested by experiments for more than half a century. The discovery of its last component, the Higgs particle, marked the perfect end of an era. In the SM, neutrinos have no mass. However, their oscillations, which have by now been observed by many independent experiments and are supported by irrefutable evidence, require the presence of a mass term, non-diagonal in the flavor basis. This is the first experimental proof of physics beyond the SM that has been found in particle physics. To this day, some properties of neutrinos are still not known, such as whether they are Dirac or Majorana fermions, their absolute masses, and their mass hierarchy. The charged fermions in the SM are all Dirac particles, which gain mass through Yukawa coupling with the Higgs boson. Since neutrinos are electrically neutral, they are the only candidates in the SM to be Majorana fermions, _i.e._ they could be their own antiparticles. If this is the case, we can also explain why their masses are so much lower than those of the other charged leptons in the SM by introducing a seesaw mechanism [1]. Neutrinoless double beta decay experiments are the ideal way to find out if this is the case: if such a process is observed, it would be irrefutable proof that neutrinos are Majorana particles, which would open the door to new physics. The measured decay rate can quantitatively constrain the absolute mass and mass ordering of neutrinos. In addition, neutrinoless double beta decay violates lepton number and CP conservation, which can lead to the generation of a net lepton number in the early universe evolution, and thereby may explain the matter-antimatter asymmetry in the universe. 
## 2 N\(\nu\)DEx Concept and its Advantages The rate at which neutrinoless double beta (\(0\nu\beta\beta\)) decay occurs (if it occurs) is extremely low, making experimental observations difficult. This kind of experiment has been developed for decades, and there is intense competition among various experimental approaches. Existing large-scale experiments include GERDA [2], MAJORANA [3], CUORE [4], CUPID [5], KamLAND-Zen [6], EXO [7], etc. In China, experiments including CDEX [8], PandaX [9], CUPID-China [10], and JUNO [11] have searched for or are being developed to search for \(0\nu\beta\beta\) decay. Currently, the highest experimental half-life sensitivity reaches \(10^{25}-10^{26}\) years, yet the existence of such decay has not been observed. Next-generation \(0\nu\beta\beta\) decay experiments are approaching the sensitivity needed for the case of the inverted hierarchy of neutrino masses, on the order of \(10^{27}\) years for most \(0\nu\beta\beta\) decay isotopes. For the case of the normal hierarchy of neutrino masses, which is slightly favored by oscillation experiment results so far, the required experimental half-life sensitivity is 2 orders of magnitude higher, on the order of about \(10^{29}\) years. The key to improving the sensitivity of neutrinoless double beta decay experiments is to reduce the experimental background. With zero background, the sensitivity of the experiment is proportional to the exposure (mass of decay isotope \(\times\) experiment time). However, in the presence of high background, the experimental sensitivity increases only like the square root of the exposure [12]. Thus, to increase the experimental sensitivity by another 1-3 orders of magnitude, innovative techniques must be applied to significantly reduce the experimental background. The concept of the "No neutrino Double-beta-decay Experiment (N\(\nu\)DEx)", searching for the neutrinoless double-beta decay of \({}^{82}\)Se using a high-pressure gas time projection chamber (TPC) with \({}^{82}\)SeF\({}_{6}\) as the working medium and read out by Topmetal sensor chips, was proposed by D.R. Nygren, B.J.P. Jones, N. Lopez-March, Y. Mei, F. Psihas and J. Renner in 2018 [13]. This scheme combines the high Q value of \({}^{82}\)Se with the ability of a TPC to distinguish signal and background using event topology, which can greatly reduce the experimental background. The Q value of \({}^{82}\)Se decay is as high as 2.996 MeV, which is higher than most of the natural radioactive backgrounds, and also higher than that of the decay isotopes currently used in many mainstream experiments. For example, the natural radioactive \(\gamma\) background near the Q value of \({}^{82}\)Se is more than 2 orders of magnitude lower than that around the Q value of \({}^{136}\)Xe (2.458 MeV). Meanwhile, in a gaseous TPC, the double beta decay can be reconstructed as two electron tracks, each with a distinct Bragg peak at the end. This feature can be used to distinguish signal from background. However, this experimental concept faces a major technical challenge: SeF\({}_{6}\) is an electronegative gas, in which the electrons generated by ionization quickly combine with gas molecules to form negative ions, so electron avalanche amplification cannot happen. Thus, with traditional technologies, the weak signals cannot be read out. 
To solve this problem, we designed the Topmetal-S sensor [14, 15], a kind of silicon sensor chip with a layer of metal on top, which is dedicated to \(0\nu\beta\beta\) decay experiments, making TPC without physical amplification possible. It adopts industrial semiconductor CMOS process, and the top layer has a metal sheet for charge collection. In principle, its noise level can be as low as about 30 e-, thus the primary ionized charge can be directly read out without physical amplification. This gives us a unique opportunity to search for \(0\nu\beta\beta\) decay using \({}^{82}\)SeF\({}_{6}\) gas TPC. The construction of China Jinping Underground Laboratory (CJPL) provides a unique opportunity for the development of \(0\nu\beta\beta\) decay experiments. CJPL has the deepest natural rock shield in the world, and the second phase of CJPL is being constructed with world-class experimental space and low background environment. N\(\nu\)DEx will be developed at CJPL, taking full advantage of its low background level and large space. ## 3 N\(\nu\)DEx-100 Schematic Design ### N\(\nu\)DEx-100 Overall Design Currently, we are developing a N\(\nu\)DEx-100 experiment with 100kg of natural SeF\({}_{6}\) gas, the preliminary design of which is shown in Fig. 1. The main body of the experiment is in a pressure chamber, with feed-through flanges for gas, low-voltage, optical fibers and high-voltage. Inside the pressure chamber is an inner copper shielding to shield most of the external radiation. The core detector of the experiment - TPC - is installed in the barrel part of the chamber. It is composed of an insulating layer, a high-voltage plane, a field cage, and a readout plane. The readout plane consists of the focusing layer and the readout electronics layer on which the Topmetal-S sensor chips are mounted. In addition to the main body of the experiment, there are lead and high-density polyethylene (HDPE) external shieldings surrounding the pressure chamber, as well as auxiliary facilities such as the gas system, which are not shown in Fig. 1. When the experiment runs, \({}^{82}\)Se (with a natural abundance of 8.7%) in SeF\({}_{6}\) gas undergoes a double beta decay, releasing two electrons. In the case of a neutrinoless double beta decay, the total energy of the two electrons is 2.996 MeV. These two electrons lose energy in the gas, ionize the gas, and form curved tracks due to scattering. At the ends of the two tracks, two Bragg peaks with the largest energy loss are formed. Due to the electronegativity of SeF\({}_{6}\), the electrons generated by ionization quickly form negative ions with surrounding SeF\({}_{6}\) molecules. Finally a variety of SeF\({}_{N}^{\pm}\) ions will be formed with certain fractions, including SeF\({}_{0-5}^{+}\) and SeF\({}_{5,6}^{-}\), which will drift to the two ends of the TPC in the electric field. After the SeF\({}_{5,6}^{-}\) ions reach the readout plane, their signal is read out. The drift velocities of SeF\({}_{5}^{-}\) and SeF\({}_{6}^{-}\) ions are different, hence the arrival times will be different as well. The time difference can be used to obtain the drift distance. The readout plane consists of the focusing layer and the readout electronics layer. The focusing layer is used to generate certain electric field structure that allows the drift charges to pass through small holes in it, and be collected with 100% efficiency at the \(\sim 1mm^{2}\)-sized readout electrodes on the surface of the Topmetal-S chips. 
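As a rough illustration of how the different drift velocities of the two ion species encode the drift distance, consider the short sketch below; the velocities used are assumed placeholder values of the right order of magnitude (a few tens of cm/s), not measured N\(\nu\)DEx numbers.
```python
# Both SeF5- and SeF6- ions are created at the same point and time, so if their drift
# velocities differ, the drift distance z follows from the measured arrival-time
# difference dt = z/v_slow - z/v_fast.
def drift_distance(dt_s, v1_cm_s, v2_cm_s):
    """Drift distance [cm] reconstructed from the arrival-time difference of two ion species."""
    v_slow, v_fast = sorted((v1_cm_s, v2_cm_s))
    return dt_s * v_slow * v_fast / (v_fast - v_slow)

# Assumed illustrative velocities (not measured NvDEx values).
v_sef5, v_sef6 = 24.0, 21.0          # cm/s
z_true = 80.0                        # cm
dt = z_true / min(v_sef5, v_sef6) - z_true / max(v_sef5, v_sef6)
print(f"arrival-time difference {dt:.3f} s  ->  reconstructed z = {drift_distance(dt, v_sef5, v_sef6):.1f} cm")
```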
The Topmetal-S chips are located on the surface of the readout electronics layer. They will measure the charge and time of the signal and generate digital data, which are collected by the electronic readout boards, and transmitted to the data acquisition computer through the optical fibers. ### Pressure Chamber and Inner Copper Shielding N\(\nu\)DEx-100 will use SeF\({}_{6}\) gas at a pressure of 1.0 MPa. The design of the pressure chamber is shown in Fig. 2. The chamber consists of a barrel and two end caps, connected with two DN1200 Tongue-Groove (T-G) flanges. There are six smaller T-G flanges on each end cap: one DN50 flange for gas, one DN80 flange for high voltage, four DN125 flanges for low voltage and optic fibers, and one DN150 flange for vacuum. The inner diameter and length of the barrel are 1200 mm and 1760 mm, respectively. The chamber is made of 10mm-thick low background stainless steel. Figure 3 shows the cross-sectional view of the chamber. The weight of the chamber is around 2211 kg without taking the bolts into account. The barrel part of the pressure chamber sits on two saddles, while the two end caps are supported with carts, which can move away along the rails when opening the chamber. In order to suppress the background radiation from outside the pressure chamber, a 12-cm-thick oxygen-free copper shielding with low radioactive isotope contamination will be placed inside the pressure chamber as shown in Fig. 1.
Figure 1: Schematic design of the main part of the N\(\nu\)DEx-100 experiment.
Figure 4 is the cross-sectional view of the design of the inner copper shielding. It consists of a barrel part and two disks which will be mounted in the end caps of the pressure chamber. The outer and inner diameters of the barrel part are 1190 mm and 950 mm, respectively. The barrel part weighs about 6108 kg and each disk about 1476 kg. There are some holes in the disks so that gas, optic fibers, low voltage cables and the high voltage feedthrough can go through. The holes, except the ones for high voltage, are tilted in order to avoid outside radiation reaching the sensitive volume of the TPC by a straight path. The Topmetal sensors and electronics on the read-out plane will generate heat when taking data, which will induce convection in the SeF\({}_{6}\) gas in the sensitive volume, with a maximum velocity larger than 10 cm/s assuming a heat power of 700 W on the readout plane. This could be a problem for the N\(\nu\)DEx TPC, because the ions drift very slowly, with a velocity above 20 cm/s, which is much lower than the drift velocity of electrons (on the order of several cm/\(\mu\)s in most other TPCs), and the convection could cause serious distortion of the reconstructed event topology. It is thus necessary to cool the read-out plane and minimize the temperature difference inside the TPC. For this purpose, a copper heat conductor will be placed between the inner copper shielding disk and the end cap of the pressure vessel. As shown in Fig. 1, the heat conductor is composed of a base (in yellow) and a tube (in purple) fixed to the inner copper shielding disk and to the pressure chamber end cap, respectively. 
Figure 5 shows the cross-sectional view of the copper heat conductor with dimensions. The base and the tube can slide horizontally relative to each other. This design makes sure that there is good contact between all neighboring parts along the heat conduction path, even when the pressure chamber expands due to the gas pressure, so that the total heat resistance is at an acceptable level. The weight of the copper heat conductor is around 489 kg. A liquid cooling plate will be mounted on the outer surface of the end cap of the pressure chamber. The temperature difference in the TPC and convection in the gas will be minimized by adjusting the temperature of the cooling plate. The SeF\({}_{6}\) and \({}^{82}\)SeF\({}_{6}\) gases are very expensive. In order to reduce the amount of the gas to be used and the cost, two plastic fillers will be placed in the end caps of the pressure chamber, occupying the gap outside the inner copper shielding disks, as shown in Fig. 1.
Figure 4: Cross-sectional view of the designed inner copper shielding.
Figure 5: Cross-sectional view of the cooling base and tube.
Since SeF\({}_{6}\) is toxic, any material that absorbs the gas and gradually releases it when the pressure chamber is open during the maintenance of the experiment could be a danger to people and the environment. Considering this, the fillers as well as the insulator layer and the TPC field cage supporting cylinder (to be described in the next subsection) will be made of polyoxymethylene (POM), which absorbs the least gas among plastic materials with acceptable mechanical strength. The design of the fillers is shown in Fig. 6.
Figure 6: Cross-sectional view of the POM fillers.
There are also some holes in the fillers for gas, optic fibers, low voltage cables and the high voltage feedthrough to go through. Up to now, the pressure chamber and the inner copper shielding have been manufactured for an on-ground prototype experiment. The copper heat conductor as well as the fillers are being manufactured. The on-ground prototype will be assembled in the near future. Then tests will be carried out on the gas tightness of the pressure chamber, heat conduction and temperature control of the read-out plane, etc. ### Field Cage The electronegativity of the SeF\({}_{6}\) gas used in N\(\nu\)DEx-100 is very high. This means the negatively charged particles drifting towards the readout plane will not be electrons, since they will be quickly captured, but negative ions instead. The readout plane will employ innovative Topmetal-S sensors to read out the drifted charge without physical amplification such as electron avalanche. Details about the Topmetal-S sensors and the readout plane will be introduced in Sub-section 3.4. Most of the drifting negative ions will be SeF\({}_{6}^{-}\) and SeF\({}_{5}^{-}\), however a number of more complex molecules may be formed. The drifting negative ions may form clusters like SeF\({}_{6}^{-}\)(SeF\({}_{6}\))\({}_{n}\) and SeF\({}_{5}^{-}\)(SeF\({}_{6}\))\({}_{n}\) (n=1,2,3,...) at low drift fields. These clusters will smear the drift velocity of the negative ions, resulting in an increase of the noise. Similar to SF\({}_{6}\), the cluster formation in SeF\({}_{6}\) can be suppressed with high drift fields. For this reason, the drift field of N\(\nu\)DEx-100 will be as high as 400 V/cm, corresponding to a drift velocity of negative ions above 20 cm/s. A cross-sectional view of the design of a prototype field cage (FC) is shown in Fig. 7.
Figure 7: Cross-sectional view of the design of a prototype field cage.
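As a rough consistency check of the quoted field and drift velocity, the ion drift velocity can be estimated as \(v=\mu E\), with the mobility scaling inversely with gas pressure; the reduced mobility used below is an assumed value typical of heavy molecular ions (similar to SF\({}_{6}\) ions), not a measured number for SeF\({}_{5}^{-}\)/SeF\({}_{6}^{-}\) in SeF\({}_{6}\).
```python
# Order-of-magnitude estimate of the negative-ion drift velocity in the drift field.
# mu0 is an assumed reduced mobility typical of heavy molecular ions at 1 atm,
# NOT a measured value for SeF5-/SeF6- in SeF6.
mu0 = 0.5          # cm^2/(V s) at ~0.1 MPa
p0 = 0.101325      # reference pressure [MPa]
p = 1.0            # operating pressure [MPa]
E = 400.0          # drift field [V/cm]

v_drift = mu0 * (p0 / p) * E    # mobility scales roughly as 1/p at fixed temperature
print(f"estimated ion drift velocity ~ {v_drift:.0f} cm/s")
```
With these assumed inputs the estimate comes out at roughly 20 cm/s, of the same order as the value quoted above.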
The FC is isolated from the inner copper shielding by a 20mm-thick POM cylinder. The FC will be made of flexible printed circuit (FPC) sheets of size 315mm\(\times\)423mm; each FPC sheet has 5mm-wide copper strips with a pitch of 6mm on both sides. Three snap-off holes at both ends and the center of each copper strip will be used to align and fix the FPC sheets onto a 10mm-thick POM supporting cylinder with screws. Two copper rings will be mounted at the two ends of the POM supporting cylinder. The copper strips and copper rings will be connected with low radioactive background resistors. The cathode of the TPC will be a low radioactive background copper plane mounted on the inner copper shielding disk, isolated with a POM layer of thickness 25mm. Eight pogo pins will be used to ensure good connection between the cathode plane and the copper ring on the end of the cylinder part of the FC once the pressure chamber is closed. A high voltage feedthrough will be connected to the cathode by spring pins. It is constructed using a compression seal approach, as shown in Fig. 8.
Figure 8: Image of the high voltage feedthrough.
A metal rod is pressed into a polytetrafluoroethylene (PTFE) seal ring by clamping nuts on a DN80 flange. The feedthrough has been tested with a high voltage of 100 kV and for leak-tightness in nitrogen at 1.0 MPa. ### Topmetal-S Sensor and Readout Plane Around 10k CMOS sensors, named Topmetal-S, arranged in a hexagonal pattern as shown in Fig. 9, will be directly placed at the site of charge measurement to collect ionization charges without avalanche multiplication.
Figure 9: Topmetal-S sensors tiled in a hexagonal pattern to form a charge readout plane without gas gain.
A perforated focusing electrode is placed above the readout plane with round holes aligned concentrically with the charge collection electrodes on the Topmetal-S sensors. The focusing structure ensures all charges eventually land on the charge collection electrode for maximum charge collection efficiency. Each Topmetal-S sensor is integrated with a charge collection electrode, a front-end amplifier and data processing circuits. The charge collection electrode is an exposed hexagonal top-most metal with a diameter of about 1 mm. The charge signal collected on the Topmetal electrode is directly DC coupled to the Charge Sensitive pre-Amplifier (CSA). The structure of the CSA in the prototype Topmetal-S sensor is a folded cascode amplifier with a feedback capacitor and a feedback transistor. The decay time of the CSA can be adjusted by changing the gate voltage of the feedback transistor. Due to the stringent noise requirement, the analog signal of the CSA output must be digitized immediately. Thus an in-chip Analog-to-Digital Converter (ADC) is designed to minimize the transfer of the analog signal. The ADC should have a noise floor well below the CSA noise of about 1 mV and a large enough dynamic range to cover the possible range of input charge up to about 40 \(ke^{-}\). A sigma-delta (SD) ADC is used in the Topmetal-S sensor. It is composed of an SD modulator (analogue part) with coarse quantizers and a decimation filter (digital part), which together produce a data-stream output. By sampling the input signal at a frequency that is much higher than the signal bandwidth (oversampling), the majority of the noise is shifted beyond the band of interest. 
The out-of-band noise is further attenuated by a decimation filter to achieve an improved signal-to-noise ratio. A photo of the Topmetal-S sensor chip is shown in Fig. 10.
Figure 10: Photo of the Topmetal-S sensor chip.
Since the sensors are densely packed on the plane, the number of available paths for routing signals out of the plane is limited. Beyond a certain plane size or total number of sensors, routing every signal from all sensors out becomes impractical. Digitized data must be communicated through an inter-sensor network. Therefore, circuitry that handles data processing and communication must be integrated in the sensor. A distributed, self-organizing and fault-tolerant readout network is proposed with the Topmetal-S sensor. The proposed scheme forms a sensor network by establishing a local connection between adjacent sensors. Each sensor integrates a router as a node of the network, and hence each sensor not only generates and transmits its own data but also forwards the data from its adjacent nodes. Finally the data are received by a data acquisition system, which is directly connected with the edge of the network and used to transmit the data between the sensor network and the computer. ### Data Acquisition With the two-dimensional distributed network formed by the digital part of the Topmetal-S sensors, the digitized waveform of each CSA output will be transmitted to the edge of the plane in a streaming-readout fashion. The speed of the data chain could go up to 45 Mbps. As shown in Figure 11, the full plane is split into modules with different sizes, to cover the end-cap as much as possible.
Figure 11: Architecture of the DAQ system.
All the streaming data chains end in the modules on the right side, where the data are further encoded and aggregated into high-speed links with a speed of a few gigabits per second, by the commercial transceiver chips. Depending on the orientation of the sensors and how the sensors on the edge columns are connected to the transceiver, there will be 20\(\sim\)50 bidirectional high-speed links in total to connect the readout plane and the DAQ system in the back-end. In the other direction, the control data streams from the DAQ system are transmitted towards the left side of the readout plane. The flexible Printed Circuit Board (PCB) modules will be fabricated with radiopure material. CMOS chips such as the Topmetal-S sensors are known to be low in radioactive contamination, while other components including the capacitors, resistors, transceivers and some power chips will be selected carefully. The radioactivity measurements will be done in the CJPL. Besides the materials and components, any tools or materials used during the assembly procedures should also be clean enough. In the back-end, a PCIe based DAQ system will be built to communicate with the front-end electronics on the readout plane via high-speed fiber optic links. A similar PCIe form factor has been adopted by dozens of large-scale experiments such as the ATLAS experiment at the LHC and the sPHENIX experiment at the RHIC [16, 17]. The streaming data from all sensors are received and decoded by FPGAs on the PCIe cards. The data processing, event building and filtering can be flexibly placed in the chain from the FPGA firmware to the software. The raw data and various kinds of intermediate-stage data will be streamed from the DAQ server to a high-speed switch. Any client connected in the network can remotely subscribe to the data and implement further data analysis. 
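As a toy numerical illustration of the oversampling-plus-decimation principle used by the in-chip sigma-delta ADC of the Topmetal-S sensor (Sub-section 3.4), the sketch below shows how averaging blocks of heavily oversampled samples suppresses broadband noise roughly as the square root of the oversampling ratio while leaving a slow in-band signal essentially untouched; it is not a model of the actual modulator, and all rates and amplitudes are made up.
```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1.0e6        # assumed oversampling rate [Hz]
f_sig = 1.0e3     # assumed in-band signal frequency [Hz]
osr = 64          # oversampling / decimation ratio

t = np.arange(0, 0.02, 1.0 / fs)
signal = 1.0e-3 * np.sin(2 * np.pi * f_sig * t)     # ~1 mV-scale waveform
noise = rng.normal(0.0, 1.0e-3, t.size)             # broadband noise

def decimate(x, r):
    # Block averaging: white-noise power drops ~1/r, while a signal much
    # slower than fs/r is essentially unchanged.
    return x[: x.size // r * r].reshape(-1, r).mean(axis=1)

print("noise rms before decimation :", noise.std())
print("noise rms after decimation  :", decimate(noise, osr).std())
print("expected reduction factor   :", 1.0 / np.sqrt(osr))
print("signal amplitude retained   :", decimate(signal, osr).max() / signal.max())
```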
### External Shielding In order to protect the N\(\nu\)DEx detector from environmental radiation, an external lead shielding with a thickness of 20cm will be built outside the pressure chamber. The preliminary design of the external lead shielding is shown in Fig. 12.
Figure 12: Design of the external shielding.
It is composed of two mobile halves and a fixed base, on which the pressure chamber sits. The mobile halves, including the side walls and the top, are installed on a mobile base connected to a transmission system, so that they can move aside to enable opening and operations on the pressure chamber. The fixed base is installed on a vibration isolation system, in order to minimize the influence of vibration on the experimental measurements. The vacuum and gas pipes, the high voltage cable, the low voltage cables and the optical fibers will pass through the external lead shielding via several holes at the joints of the two mobile halves. Two shielding doors will be installed to prevent radiation from going through these holes directly. The lead bricks and the steel structure inside the lead layer will be tested and selected for radioactive contamination. The lead shielding alone is not effective at stopping neutrons. High-density polyethylene (HDPE) blocks will be placed in the gap between the pressure chamber and the external lead shielding, in order to slow down and absorb neutrons. The HDPE material will also be tested for radioactivity. ### Gas System The working medium of N\(\nu\)DEx is highly toxic SeF\({}_{6}\) gas. When there is moisture in it, SeF\({}_{6}\) can easily decompose and produce corrosive HF, which may damage detector components and / or cause leakage of the toxic gas. The gas system of N\(\nu\)DEx should be able to fill the pressure chamber with SeF\({}_{6}\) to the working pressure of 1 MPa, discharge the gas from the pressure chamber and safely store it during maintenance of the experiment. During the lifetime of the experiment, the pressure vessel will be pressurized, depressurized and vacuumized many times, and the pressure vessel and part of the gas system will be working at a pressure of 1 MPa for years during data taking. Thus gas tightness and reliability are critical for the N\(\nu\)DEx gas system. With these considerations, the schematic of the N\(\nu\)DEx gas system is designed as shown in Fig. 13. The pressure chamber is connected to a turbomolecular vacuum pump and a dry vacuum pump, which can vacuumize the chamber and the gas system before filling SeF\({}_{6}\) gas. This can minimize contamination of the SeF\({}_{6}\) gas by air, moisture, and Radon, in order to avoid corrosion and radiation background from the gas. Every time before filling the toxic SeF\({}_{6}\) gas into the chamber, SF\({}_{6}\), which has similar properties to SeF\({}_{6}\) but is non-toxic, is filled into the system at a pressure of 1 MPa to test the gas tightness of the pressure chamber and gas system. After the test, SF\({}_{6}\) will be compressed into an SF\({}_{6}\) storage tank to be used in the future. In emergencies like gas leakage, fire, an earthquake, a power outage, etc., SeF\({}_{6}\) gas in the system will be released into an emergency pressure relief tank within 10 seconds, so that the pressure in the system is below atmospheric pressure, in order to ensure the safety of personnel and the environment. Before maintenance of the experiment, SeF\({}_{6}\) gas will be discharged from the pressure chamber and condensed by the low temperature in a precooler and two condensers. 
After the SeF\({}_{6}\) saturated vapor pressure is reached in the system, a dry vacuum pump will further pump the gas from the pressure chamber to the condensers. Then the trace amount of SeF\({}_{6}\) left in the system will be flushed by nitrogen gas into a KI reactor to be absorbed. SeF\({}_{6}\) will then be safely stored in the condensers at low temperature, as solid and vapor at a pressure below atmospheric pressure, and can be refilled into the pressure chamber in the future. Up to now, all the components for the gas system have been purchased, waiting to be assembled once the manufacture of the experimental pressure chamber is finished. Before moving to CJPL, the system will be commissioned and tested in the on-ground lab with SF\({}_{6}\) gas. ### Negative-pressure Clean Room A negative-pressure clean room will be built to install the whole experiment set-up inside. A cleanliness class of 100,000 is required to avoid contamination of surfaces of the detector with dust, in order to control the surface radioactive background. The negative pressure, together with a KI reactor at the exhaust vent of the clean room, serves as a second line of safety for the environment. In case any SeF\({}_{6}\) gas leakage happens, SeF\({}_{6}\) will flow with the air to the KI reactor and be absorbed. An environmental temperature of 22\(\pm\)2\({}^{\circ}\)C is required to minimize gas convection in the TPC. An environmental humidity of 30\(\pm\)10% is required to avoid condensation of moisture at cold spots on the experiment set-up, such as the cooling plate on the pressure chamber. ## 4 Background and Sensitivity Estimation ### Natural Radioactive \(\gamma\) Background Radioactive isotopes are always present in the walls of the hall and in the rock surrounding them, as well as in the materials of the experimental set-up itself. The decays of these isotopes will produce a large amount of \(\alpha\), \(\beta\) and \(\gamma\) particles. The first two, however, will mostly be stopped without causing detectable background unless they are generated in or near the sensitive volume. So our main concern is the latter. \(\gamma\)'s that penetrate into the sensitive volume may interact with the gas and generate free electrons, which are visible to the detector. This is a major background source for N\(\nu\)DEx. A 20cm-thick external lead shielding and a 12cm-thick inner copper shielding, outside and inside the pressure chamber respectively, are used to shield external \(\gamma\)'s. The thicknesses of the shielding layers are optimized using Geant4 simulations [18; 19; 20]. Due to the presence of radioactive contamination in the shielding materials themselves, the \(\gamma\) flux in the sensitive volume will not always decrease as the shielding thickness increases. Most of the Region-Of-Interest (ROI) \(\gamma\) background comes from the decay of \({}^{214}\)Bi from the \({}^{238}\)U decay chain. The probability of having a \(\gamma\) with energy larger than 2.9 MeV from a \({}^{214}\)Bi decay is \(6.8\times 10^{-4}\)[21], which is quite rare. This is the advantage of \({}^{82}\)Se's high Q value for N\(\nu\)DEx. The decay of \({}^{208}\)Tl from the \({}^{232}\)Th decay chain contributes about one order of magnitude less ROI background, with only around \(8.5\times 10^{-5}\)[21] of the produced \(\gamma\)'s having energies around and above the ROI. 
Thus we will only focus on the \({}^{238}\)U decay chain in our \(\gamma\) background simulation and estimation. The \({}^{238}\)U activities of the materials considered in our simulations are listed in Table 1. The \({}^{238}\)U activity of the concrete in the experimental hall walls is from a measurement at CJPL [22]. The contribution of the rocks was negligible, since the contamination rate of the rocks is significantly lower than that of the concrete. For other materials in the experiment set-up, we used the measurements from the NEXT experiment [23]. These numbers will be updated using measurements of materials for N\(\nu\)DEx in the future. The activity of the POM material used for the TPC field cage is assumed to be the same as HDPE in NEXT. Many other parts or materials, e.g., the SeF\({}_{6}\) gas, the Topmetal-S sensors, the flexible PCB used in the field cage and readout plane, the bolts on the pressure chamber, and various steel supporting structures outside the pressure chamber, are not considered in the current simulation, because of their relatively smaller sizes, lower background contributions, the complication of building the geometry model, and / or lack of knowledge of their radioactivities. Due to the above-mentioned reasons, the current background simulation only serves as a very rough estimation and gives some guidance to hardware developments. The results of the natural radioactive \(\gamma\) background simulation are reported in Table 2. The largest contribution comes from the POM of the field cage because it is close to the sensitive gas volume. The total ROI \(\gamma\) background level is about 0.4 evts/yr. Note that like most other types of backgrounds to be described in the following sub-sections, this background can be further suppressed by one order of magnitude using a neural network considering event topology information [13], to the order of only 0.04 evts/yr in ROI.
\begin{table} \begin{tabular}{l l l} \hline Material & Subsystem & \({}^{238}\)U Activity (mBq/kg) \\ \hline Concrete & Experimental hall & \(6.8\times 10^{3}\)[22] \\ Lead & External shielding & 0.37[23] \\ HDPE & External shielding & 0.23[23] \\ Steel & Pressure vessel & 1.9[23] \\ Copper & Inner copper shielding & 0.012[23] \\ POM & Field cage & 0.23[23] \\ \hline \end{tabular} \end{table} Table 1: Activity assumed for each material
\begin{table} \begin{tabular}{l l l l} \hline & Source & \multicolumn{2}{c}{Background in ROI} \\ Material & Subsystem & evts/yr & \(10^{-5}\)evts/(keV kg yr) \\ \hline Concrete & Experimental hall & 0.004 & 0.12 \\ Lead & External shielding & 0.003 & 0.09 \\ HDPE & External shielding & 0.005 & 0.16 \\ Steel & Pressure vessel & 0.026 & 0.86 \\ Copper & Inner copper shielding & 0.050 & 1.67 \\ POM & Field cage & 0.330 & 10.99 \\ \hline Total & & 0.42 & 13.9 \\ \hline \end{tabular} \end{table} Table 2: \(\gamma\) background from different sources without suppression using event topology
### Neutron Background Radioactive decays can emit neutrons as well. These events are quite rare; however, since it is significantly more difficult to stop neutrons than \(\gamma\)'s, the former can arrive more easily at the sensitive volume. Neutrons do not create ionization signals directly, but they can activate nuclei inside the detector, creating unstable isotopes via the reaction \[n+{}^{A}N\rightarrow{}^{A+1}N. \tag{10}\] Our main concern is whether these isotopes will be created in the sensitive volume. 
The unstable isotopes we will consider are \({}^{83}\)Se and \({}^{20}\)F, generated from \({}^{82}\)Se and \({}^{19}\)F, respectively. In principle, other Se isotopes can also be created, but their Q-value is considerably lower than our ROI, so they will not provide additional background for the \(0\nu\beta\beta\) decay search. The main source of neutron induced background is \({}^{20}\)F with a \(\beta\) decay Q-value of 7.0 MeV. The fraction of the \(\beta\) spectrum that falls within the ROI is \(9.4\times 10^{-3}\), according to our Geant4 simulations. \({}^{83}\)Se decays via the chain reaction \[{}^{83}Se\rightarrow\bar{\nu}_{e}+e+{}^{83}Br\rightarrow\bar{\nu}_{e}+e+{}^{8 3}Kr. \tag{11}\] The Q-value of \({}^{83}\)Se decay is 3.7 MeV. The fraction of \(\beta\)'s from \({}^{83}\)Se decay within our ROI is \(2.7\times 10^{-5}\), considerably less than the \(\beta\)'s from \({}^{20}\)F decay. This is because the ROI is in the tail of the \(\beta\) energy distribution. Our simulations show that the activation rate of \({}^{83}\)Se and \({}^{20}\)F are roughly the same, so we can safely ignore the former with subdominant ROI background contribution. The Q-value of \({}^{83}\)Br decay is 0.97 MeV, much lower than the ROI. Moreover, since its half-life is \(\sim\)2.4 hours, it will not be a source of associated pile-up event background, either, so we can neglect it as well. \(\gamma\)'s will be emitted from deexcitation of the products of the above \(\beta\) decays. However, the chance to have a high energy \(\gamma\) close to the \({}^{82}\)Se Q value is very low. Moreover, most of the emitted \(\gamma\)'s will pass through the gas volume without any ionization signals in the detector. If a \(\gamma\) interacts with the gas, in most cases, the TPC can separate its signal from the \(\beta\) decay signal in space, so that their energy will not be summed up to get a higher total energy. Using Geant4 and FLUKA [24; 25; 26; 27] packages, we simulated the neutron induced background rate. The fast neutron spectrum is assumed to be the one measured at CJPL and reported in [28]. Without the HDPE shielding, around 50 neutron activated background events / year is obtained within the ROI, which is even higher than the natural radioactive \(\gamma\) background. The HDPE shielding is thus added to stop neutrons. Unlike high-Z materials like copper and lead, which can efficiently stop \(\gamma\)'s, HDPE contains a lot of hydrogen nuclei, which are able to slow down and absorb neutrons very effectively. HDPE blocks will be placed in the gap between the external lead shielding and the pressure vessel. The neutron induced background is then estimated as 0.1 events/year in ROI. ### Cosmogenic Background When materials used in the experiment are exposed to cosmic rays during production and transportation on the ground, they can be activated, creating relatively long-lived isotopes. The half-lives of these isotopes, although much shorter than those of \({}^{238}\)U and \({}^{232}\)Th described in Sub-section 4.1, can still be long enough, e.g. months to years, to make them an important background source even after the materials are placed underground. The spallation process by high-energy cosmic nucleons is one of the dominant processes for the cosmogenic production of radionuclides. But other reactions like capture can also be important in some cases. Spallation reactions can produce a large number of radionuclides depending on the atomic number of the target material. 
On the Earth's surface, isotope production is dominated by neutrons because protons are absorbed by the atmosphere. Cosmogenic activation can be minimized by reducing surface exposure, e.g. using shielding against the cosmic rays, avoiding flights and minimizing storage above ground, or even producing materials underground. Purification techniques can also eliminate many of the induced isotopes. However, these preventive measures make the experiment preparation more complex. Consequently, it would be advisable to assess the relevance of the material exposure to cosmic rays for the experiments and its effect on the sensitivity. To quantify the induced activity, A, of an isotope with decay constant \(\lambda\), both the production rate R of the isotope in the considered target, as well as the exposure history, must be well-known. In particular, A can be computed as: \[A=R(1-e^{-\lambda t_{exp}})e^{-\lambda t_{cool}} \tag{10}\] where \(t_{exp}\) is the time of exposure to cosmic rays and \(t_{cool}\) is the cooling time (time spent underground once shielded from cosmic rays). Some direct measurements of production rates have been carried out for a few materials from the saturation activity, obtained by sensitive screening of materials exposed in well-controlled conditions. However, in many cases, production rates must be evaluated from the flux (per unit energy) of cosmic rays, \(\phi\), and the isotope production cross-section, \(\sigma\), both depending on the particle energy E: \[R=N_{t}\int\sigma(E)\phi(E)dE, \tag{11}\] with \(N_{t}\) the number of target nuclei. We have used the ACTIVIA code [29] to calculate the cosmogenic activation for various materials used in N\(\nu\)DEx. The cosmogenic activation rates of various radio-isotopes in SeF\({}_{6}\) gas, copper, lead and steel, as well as the activities after exposure and cooling for certain time lengths, are shown in Tables 3, 4, 5, and 6. Only isotopes with relatively long half-lives and high Q-values are listed, since other isotopes will not create a background in the 0\(\nu\beta\beta\) decay ROI after being placed underground for some cooling time. As shown in Tab. 3, the most important cosmogenic background isotope from \({}^{82}\)Se is \({}^{56}\)Co, with a Q-value of 4566 keV, which is above the \({}^{82}\)Se Q-value, and a half-life of 77.3 days, which is long enough to retain a considerable activity of 0.02 \(\mu\)Bq/kg after 2 years of exposure at sea level and 1 year of cooling time. Cobalt fluorides are solid rather than gas at room temperature, and so far we don't know whether or how much of the generated single-molecular \({}^{56}\)Co fluorides will remain in the gas after SeF\({}_{6}\) production, storage and transportation. Assuming conservatively that all the \({}^{56}\)Co stays in the gas and reaches the sensitive volume of the experiment, for 100 kg of \({}^{82}\)SeF\({}_{6}\) with a \({}^{56}\)Co activity of 0.02 \(\mu\)Bq/kg, about 26 decays per year will happen. Thus the energy deposition in the ROI will be minimal. Cosmogenic isotopes from \({}^{19}\)F, having mass numbers no larger than 19, do not have both a relatively large Q-value and a long lifetime, and thus will not constitute important cosmogenic background contributions. The most important cosmogenic background isotope in copper is also \({}^{56}\)Co, as shown in Tab. 4. 
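The short sketch below evaluates the activity formula above for one case, taking the production rate R as an input (the calculated \({}^{56}\)Co rate in copper from Table 4) rather than evaluating the cross-section integral; it is only a numerical check of the expression, not part of the ACTIVIA calculation.
```python
import numpy as np

def induced_activity(rate_per_kg_day, half_life_days, t_exp_days, t_cool_days):
    """Induced specific activity [Bq/kg]: A = R (1 - exp(-lambda t_exp)) exp(-lambda t_cool)."""
    lam = np.log(2) / half_life_days            # decay constant [1/day]
    rate_per_kg_s = rate_per_kg_day / 86400.0   # production rate [atoms/kg/s]
    return rate_per_kg_s * (1.0 - np.exp(-lam * t_exp_days)) * np.exp(-lam * t_cool_days)

# 56Co in copper: calculated production rate ~8.7 atoms/kg/day, T_1/2 = 77.3 d,
# 2 years of exposure at sea level followed by 1 year of cooling underground.
a = induced_activity(8.7, 77.3, 2 * 365.0, 365.0)
print(f"induced 56Co activity ~ {a * 1e6:.1f} uBq/kg")
```
With these inputs the sketch returns roughly 3.8 \(\mu\)Bq/kg, consistent with the 1-year-cooling value for \({}^{56}\)Co in copper listed in Table 4.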
Considering that the inner copper shielding will be assembled and tested with the pressure chamber at the Institute of Modern Physics at Lanzhou, at an altitude of about 1500 m, the exposed cosmic neutron flux is about 3.2 times higher than at sea level. The \({}^{56}\)Co activity will be about 323 \(\mu\)Bq/kg after 2 years of exposure. Using the simulation framework described in Sub-section 4.1, we find the ROI background in the sensitive volume from \({}^{56}\)Co emitted \(\gamma\)'s is as high as about 3700 evts / yr, which is much higher than the natural radiation \(\gamma\) background. After 3 years of cooling, the ROI background will drop to a subdominant level of 0.19 evts / yr. So it is important to place the inner copper shielding underground for cooling as early as possible. Other isotopes in Table 4 have Q-values lower than that of \({}^{82}\)Se, so they will not contribute to the ROI background alone. However, since the drift velocity of ions in the N\(\nu\)DEx TPC is slow, there is a chance that ionization due to \(\gamma\)'s from these isotopes (or the ones coming from environmental radioactive decays) adds up with other background sources, forming the so-called pile-up event backgrounds and reaching the \({}^{82}\)Se Q-value. This will be described in Subsection 4.5, taking \({}^{60}\)Co, which has a relatively long half-life and high Q-value, as an example of cosmogenic background isotopes.
\begin{table} \begin{tabular}{l l l l l l l} \hline \hline Isotope & Q-value & Half-life & Production rate & Activity after & Activity after \\ & (keV) & (d) & \multicolumn{3}{c}{(atoms/kg/d)} & 2yr exposure & 1yr cooling \\ & & & Calc. & Expt. & (\(\mu\)Bq/kg) & (\(\mu\)Bq/kg) \\ \hline \({}^{54}\)Mn & 1377 & 312 & 0.37 & - & 3.4 & 1.5 \\ \({}^{56}\)Co & 4566 & 77.3 & 0.04 & - & 0.46 & 0.02 \\ \({}^{57}\)Co & 836 & 272 & 0.14 & - & 1.4 & 0.54 \\ \({}^{58}\)Co & 2307 & 70.9 & 0.83 & - & 9.6 & 0.27 \\ \({}^{60}\)Co & 2824 & 1.92\(\times\)10\({}^{3}\) & 0.11 & - & 0.29 & 0.26 \\ \({}^{75}\)Se & 864 & 120 & 14.9 & - & 170 & 20.6 \\ \hline \hline \end{tabular} \end{table} Table 3: Cosmogenic activation rate of various radio-isotopes in enriched \({}^{82}\)Se, as well as activities after exposure at sea level and cooling for certain time lengths.
\begin{table} \begin{tabular}{l l l l l l l} \hline \hline Isotope & Q-value & Half-life & Production rate & Activity after & Activity after \\ & (keV) & (d) & \multicolumn{3}{c}{(atoms/kg/d)} & 2yr exposure & 1yr cooling \\ & & & Calc. & Expt. & (\(\mu\)Bq/kg) & (\(\mu\)Bq/kg) \\ \hline \({}^{46}\)Sc & 2367 & 83.8 & 3.1 & 2.18\(\pm\)0.74 & 36 & 1.7 \\ \({}^{54}\)Mn & 1377 & 312 & 14.3 & 8.85\(\pm\)0.86 & 133 & 59 \\ \({}^{59}\)Fe & 1565 & 44.5 & 4.2 & 18.7\(\pm\)4.9 & 49 & 0.2 \\ \({}^{56}\)Co & 4566 & 77.3 & 8.7 & 9.5\(\pm\)1.2 & 101 & 3.8 \\ \({}^{57}\)Co & 836 & 272 & 32.5 & 74\(\pm\)17 & 318 & 125 \\ \({}^{58}\)Co & 2307 & 70.9 & 56.6 & 67.9\(\pm\)3.7 & 655 & 18 \\ \({}^{60}\)Co & 2824 & 1.92\(\times\)10\({}^{3}\) & 26.3 & 86.4\(\pm\)7.8 & 71 & 62 \\ \hline \hline \end{tabular} \end{table} Table 4: Cosmogenic activation rate of various radio-isotopes in copper, as well as activities after exposure at sea level and cooling for certain time lengths.
As shown in Tables 5 and 6, the production rates of cosmogenic backgrounds in lead and steel are either lower than or comparable to those in copper. 
Considering that they are shielded from the sensitive volume by the inner copper shielding, their background contribution should be less important than the cosmogenic backgrounds in copper.
\begin{table} \begin{tabular}{l l l l l l l} \hline \hline Isotope & Q-value & Half-life & Production rate & Activity after & Activity after \\ & (keV) & (d) & \multicolumn{2}{c}{(atoms/kg/d)} & \multicolumn{1}{c}{2yr exposure} & \multicolumn{1}{c}{1yr cooling} \\ & & & Calc. & Expt. & (\(\mu\)Bq/kg) & (\(\mu\)Bq/kg) \\ \hline \({}^{56}\)Co & 4566 & 77.3 & 0.026 & – & 0.30 & 0.01 \\ \({}^{57}\)Co & 836 & 272 & 0.047 & – & 0.46 & 0.18 \\ \({}^{58}\)Co & 2307 & 70.9 & 0.127 & – & 1.47 & 0.04 \\ \({}^{60}\)Co & 2824 & 1.92\(\times 10^{3}\) & 0.008 & – & 0.02 & 0.02 \\ \({}^{194}\)Au (\({}^{194}\)Hg) & 2501 & 1.90\(\times 10^{5}\) & 5.52 & – & 0.17 & 0.17 \\ \({}^{202}\)Tl(\({}^{202}\)Pb) & 2398 & 1.93\(\times 10^{7}\) & 120 & – & 0.04 & 0.04 \\ \({}^{207}\)Bi & 1363 & 1.15\(\times 10^{4}\) & 1.42 & – & 0.71 & 0.69 \\ \hline \hline \end{tabular} \end{table} Table 5: Cosmogenic activation rate of various radio-isotopes in lead, as well as activities after exposure at sea level and cooling for certain time lengths. For short-lived isotopes with very long-lived parents (given in parentheses), we have considered the half-life of parent isotopes.
\begin{table} \begin{tabular}{l l l l l l l} \hline \hline Isotope & Q-value & Half-life & Production rate & Activity after & Activity after \\ & (keV) & (d) & \multicolumn{2}{c}{(atoms/kg/d)} & \multicolumn{1}{c}{2yr exposure} & \multicolumn{1}{c}{1yr cooling} \\ & & & Calc. & Expt. & (\(\mu\)Bq/kg) & (\(\mu\)Bq/kg) \\ \hline \({}^{48}\)V & 4012 & 16.0 & 21.6 & – & 250 & 3.4\(\times 10^{-5}\) \\ \({}^{52}\)Mn & 4712 & 5.59 & 40.0 & – & 463 & 1.0\(\times 10^{-17}\) \\ \({}^{56}\)Co & 4566 & 77.3 & 46.1 & – & 533 & 20 \\ \({}^{58}\)Co & 2307 & 70.9 & 5.1 & – & 59 & 1.7 \\ \({}^{60}\)Co & 2824 & 1.92\(\times 10^{3}\) & 0.24 & – & 0.64 & 0.56 \\ \({}^{88}\)Y & 3623 & 107 & 0.042 & – & 0.48 & 0.045 \\ \hline \hline \end{tabular} \end{table} Table 6: Cosmogenic activation rate of various radio-isotopes in steel, as well as activities after exposure at sea level and cooling for certain time lengths.
### Other Backgrounds We have also considered the following background categories for N\(\nu\)DEx, which are much lower than the natural radioactive \(\gamma\), neutron and cosmogenic backgrounds, and thus can be neglected: * **Natural radioactive \(\alpha\) and \(\beta\) background**: \(\alpha\) and \(\beta\) from natural radioactive isotopes have much shorter path lengths than \(\gamma\) and neutrons. So they can influence the experimental measurement only if the radioactive isotopes are in the gas or on the surfaces facing the sensitive volume. For the SeF\({}_{6}\) gas, since both the Se and F\({}_{2}\) gas materials are obtainable with high purity (99.999% for Se and 99.99% for F\({}_{2}\)), the radioactive isotope contamination should be low. This needs to be further studied and confirmed by, e.g., ICP-MS measurements of the Se material, or future analysis of the data from N\(\nu\)DEx itself. Fluorides of U and Th are solids at room temperature. So far we don't know whether or how much of the U and Th will stay in the gas and enter the sensitive volume after production, storage and filling of SeF\({}_{6}\). 
Radioactive isotope contamination on the surfaces facing the sensitive volume can be limited by careful cleaning of the components in the pressure chamber and gas system, especially the surfaces directly facing the sensitive volume, i.e., the inner surface of the field cage, the high voltage plate and the focusing plane. \(\alpha\) and \(\beta\) from radioactive contamination on the surfaces facing the sensitive volume can also be reduced by cutting away the volume within a certain distance of the surface when analyzing the data, since the path length is within a few cm in the gas at 1 MPa. * **Radon background**: Radon, as a radioactive gas which can be emitted from radioactive isotopes in the underground environment and in materials in the experiment set-up, may become a background source if it gets into the sensitive volume of the experiment. In order to limit the level of Radon contamination for the experiment, it is planned to have fresh air flushed through the clean room where the N\(\nu\)DEx experiment is located, during the assembling, running and maintaining of the experiment. Additionally, during the running of the experiment, the pressure chamber is air-tight with a gas pressure of 1 MPa inside, thus the amount of Radon penetrating into the chamber should be minimal. Radon can also be emitted from the materials inside the pressure chamber. Activated carbon can be placed in the end caps of the pressure chamber outside the inner copper shielding, in order to absorb Radon. Both \({}^{222}\)Rn (half-life of 3.8 d) from the \({}^{238}\)U chain and \({}^{220}\)Rn (half-life of 55 s) from the \({}^{232}\)Th chain undergo an alpha decay into polonium. A portion of the products in the following decay chain may be in the form of negative or positive ions, which will drift towards the anode or cathode in the electric field of the TPC. The \(\alpha\) and \(\beta\) from the decay of Radon and its decay products (if they happen not to be ions that drift to the anode or cathode) will be further suppressed with event topology characteristics in the TPC, since \(\alpha\) forms thicker and straighter tracks than \(\beta\) and a single \(\beta\) event has 1 instead of 2 Bragg peaks. \({}^{214}\)Bi and \({}^{210}\)Tl in the \({}^{222}\)Rn decay chain have \(\beta\) decay endpoint energies above the ROI, and thus could contribute to the ROI background. Other \(\alpha\) and \(\beta\) energies in the \({}^{222}\)Rn and \({}^{220}\)Rn decay chains are mostly away from the \({}^{82}\)Se Q value, further limiting the chance to create any background in the ROI. The \(\gamma\)'s from the decays of Radon and its decay products will have a small chance to interact with the gas, making a much smaller background contribution in ROI than the natural radioactive \(\gamma\)'s directly from the materials and environment mentioned in Sub-section 4.1. * **Cosmic muon background**: As the deepest underground lab in the world, CJPL has a cosmic muon flux as low as \(3.53\pm 0.22\) (stat.) \(\pm 0.07\) (sys.) \(\times 10^{-10}cm^{-2}s^{-1}\)[30]. N\(\nu\)DEx-100, as a meter-scale experiment at CJPL, will observe only about 0.3 cosmic muons per day. These muon background events will show straight tracks through the TPC and can be easily distinguished from \(0\nu\beta\beta\) signal events, thus they can be neglected. A very small fraction of muons will interact with the gas, creating some radioactive isotopes. The chance of this kind of background falling into the energy ROI is also negligible considering the low muon flux at CJPL. 
* **Neutrino scattering background**: Neutrinos from various sources can easily penetrate into the sensitive volume of the detector, but the chance that a neutrino interacts with the gas is very low. The most important contribution comes from solar neutrinos scattering with electrons, yielding backgrounds at a rate lower than the order of 0.1 evt/ROI/ton/yr [11]. These single electron background events will be further suppressed with event topology analysis as mentioned in Sub-section 4.1. * **\(2\nu\beta\beta\) decay background**: \(2\nu\beta\beta\) decay events have exactly the same characteristics as \(0\nu\beta\beta\) events except for lower energy. With the expected 1% FWHM energy resolution of N\(\nu\)DEx, \(2\nu\beta\beta\) decay events will not be an important background source, as shown in Fig. 14. ### Pile-up Event Background N\(\nu\)DEx-100 uses a TPC as the main detector. The drift time for the maximum drift length of about 160 cm is about 7 s. If one of the above-mentioned background events happens and, while its ionization charges drift, another background event happens near the cloud of drifting charges from the first event, these two events could "pile-up" on each other, arriving at the read-out plane at the same location and the same time, looking like one event with energy equal to the total energy of the two events. Thus pile-up events tend to have higher energy than single background events. Since all background rates drop dramatically with increasing energy, pile-up events have a higher chance to fall into the ROI than single background events. Here we do a very rough estimation of the pile-up event background rate. We assume two events can be separated if they are more than 10cm\(\times\)10cm\(\times\)10cm apart when the second event happens; 10 cm is roughly the size of a \(0\nu\beta\beta\) event in SeF\({}_{6}\) gas at the pressure of 1 MPa. Since events with lower energy, which dominate the background energy spectrum, are also smaller in size, this will be a conservative estimation. With this assumption, we take the single event energy spectra for various backgrounds, do a convolution and a proper normalization, and obtain the pile-up event background energy spectra in Figs. 14 and 15 for N\(\nu\)DEx-100 using natural SeF\({}_{6}\) gas and enriched \({}^{82}\)SeF\({}_{6}\) gas, respectively.
Figure 14: Energy spectra for various single-event and pile-up backgrounds of the N\(\nu\)DEx-100 experiment with natural SeF\({}_{6}\) gas without further suppression using event topology information.
Figure 15: Energy spectra for various single-event and pile-up backgrounds of the N\(\nu\)DEx-100 experiment with enriched \({}^{82}\)SeF\({}_{6}\) gas without further suppression using event topology information.
As can be seen from the plots, for the N\(\nu\)DEx-100 experiment with natural SeF\({}_{6}\) gas, the highest pile-up background component is the \(2\nu\beta\beta+2\nu\beta\beta\) background, which contributes at the level of 0.06 events per year in ROI. This is still lower than the single-event natural radiation \(\gamma\) background. However, for the N\(\nu\)DEx-100 experiment with enriched \({}^{82}\)SeF\({}_{6}\) gas, the \(2\nu\beta\beta+2\nu\beta\beta\) pile-up background is at the level of 8 events per year in ROI, which is even higher than the single-event natural radiation \(\gamma\) background. Thus, suppression of pile-up backgrounds needs to be considered in the future for the N\(\nu\)DEx-100 experiment with enriched \({}^{82}\)SeF\({}_{6}\) gas. With a full simulation of charge drift, diffusion and read-out responses, a more careful study of pile-up backgrounds can be conducted in the future, which will be more precise than the current conservative estimation. 
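A minimal sketch of the pile-up estimate described above is given below: the summed-energy spectrum of two independent events is approximated by the convolution of their single-event spectra, scaled by the probability that the second event falls inside the assumed coincidence volume during the drift time of the first. The spectra, rates and volume fraction used here are made-up placeholders, not the simulated N\(\nu\)DEx spectra.
```python
import numpy as np

# Energy grid and two made-up single-event spectra (events / yr / bin).
de = 10.0                                   # bin width [keV]
e = np.arange(0.0, 4000.0, de)
spec_a = 1.0e3 * np.exp(-e / 400.0)         # placeholder steeply falling background
spec_b = 1.0e2 * np.exp(-e / 600.0)         # placeholder second background component
rate_a, rate_b = spec_a.sum(), spec_b.sum() # total rates [events / yr]

# Probability that a second event lands in the coincidence volume while the
# first event is still drifting (all assumed illustrative numbers).
drift_time_s = 7.0
seconds_per_year = 3.15e7
volume_fraction = (0.1) ** 3 / 1.0          # (10 cm)^3 out of an assumed ~1 m^3 active volume
p_coinc = rate_b * (drift_time_s / seconds_per_year) * volume_fraction

# Pile-up summed-energy spectrum: convolution of the normalised shapes,
# scaled so that it integrates to rate_a * p_coinc.
pileup = np.convolve(spec_a / rate_a, spec_b / rate_b)[: e.size] * rate_a * p_coinc
roi = (e > 2966.0) & (e < 3026.0)           # ~1% FWHM window around 2.996 MeV (illustrative)
print(f"toy pile-up rate in ROI: {pileup[roi].sum():.2e} events/yr")
```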
With a neural network or similar tools, event topology information can also be used to distinguish pile-up events from double-beta decay events, reducing the pile-up event contribution. If the pile-up backgrounds cannot be suppressed below the single-event \(\gamma\) background with software alone, adding scintillation light detection at the HV plane side using silicon photomultipliers and light guides is also an option to explore in the future. To do this, the SeF\({}_{6}\) gas scintillation characteristics need to be studied. If needed, another scintillator gas may be mixed into the \({}^{82}\)SeF\({}_{6}\) gas to increase the scintillation light yield. The HV plane may also need to be changed to a mesh to allow scintillation light to go through. With scintillation light read-out, which can easily separate signals from 2 events happening several ns apart, pile-up background events can be almost completely rejected. ### Sensitivity Estimation If N\(\nu\)DEx-100 is successfully developed and reaches the background level listed above, the dominant background contributions are the natural radiation \(\gamma\) background and the neutron background, at the level of 0.4 and 0.1 cts / yr in ROI, respectively, before suppression using event topology information. According to studies in [13], using event topology information, a neural network can further suppress the background by a factor of about 10 while keeping 90% signal efficiency. So the final background level is 0.05 cts / yr in ROI. In 5 years of running time, the background is about 0.25 counts in ROI. Thus N\(\nu\)DEx-100 is almost a zero-background experiment. For simplicity, the N\(\nu\)DEx-100 experiment sensitivity is calculated assuming 0 background and shown in Fig. 16. We can see that, using 100 kg of natural SeF\({}_{6}\) gas, the \(T_{1/2}\) sensitivity can reach \(4\times 10^{25}\) yrs at 90% confidence level after 5 years of running. If using enriched \({}^{82}\)SeF\({}_{6}\) gas, the \(T_{1/2}\) sensitivity can reach \(4\times 10^{26}\) yrs at 90% confidence level after 5 years of running, which is better than the current best \(T_{1/2}\) sensitivity of \(2.3\times 10^{26}\) yrs from the KamLAND-Zen experiment [6]. ## 5 Summary In summary, N\(\nu\)DEx-100 is a 100-kg scale neutrinoless double-beta decay experiment using a high-pressure SeF\({}_{6}\) gas TPC, which will be placed in the China Jinping Underground Laboratory. The Topmetal-S sensors have been developed in order to read out drift ion signals in the N\(\nu\)DEx TPC with the electronegative SeF\({}_{6}\) gas. All sub-systems of N\(\nu\)DEx-100, including the pressure chamber and inner copper shielding, TPC field cage, readout plane and data acquisition system, external shielding, gas system, as well as the negative-pressure clean room, have completed conceptual design and are described in this report. N\(\nu\)DEx-100 is being built and is planned to be installed at CJPL around 2025. 
Combining the advantages of the high Q value (2.996 MeV) of \({}^{82}\)Se and the TPC's ability to distinguish signal and background events using their different topological characteristics, N\(\nu\)DEx-100 can achieve a very low background level of 0.05 cts / yr in ROI and a high \(T_{1/2}\) sensitivity of \(4\times 10^{25}\) (\(4\times 10^{26}\)) yrs at 90% confidence level after 5 years of running, using 100 kg of natural SeF\({}_{6}\) (enriched \({}^{82}\)SeF\({}_{6}\)) gas. ###### Acknowledgements. This project is supported by the National Key Research and Development Program of China 2021YFA1601300 and 2022YFA1604703, the From-0-to-1 Original Innovation Program of the Chinese Academy of Sciences ZDBS-LY-SLH014, and the International Partner Program of the Chinese Academy of Sciences GJHZ2067.
2306.06536
Understanding Active Region Emergence and Origins on the Sun and Other Cool Stars
The emergence of active regions on the Sun is an integral feature of the solar dynamo mechanism. However, details about the generation of active-region-scale magnetism and the journey of this magnetic flux to the photosphere are still in question. Shifting paradigms are now developing for the source depth of the Sun's large-scale magnetism, the organization of this magnetism into fibril flux tubes, and the role of convection in shaping active-region observables. Here we review the landscape of flux emergence theories and simulations, highlight the role flux emergence plays in the global dynamo process, and make connections between flux emergence on the Sun and other cool stars. As longer-term and higher fidelity observations of both solar active regions and their associated flows are amassed, it is now possible to place new constraints on models of emerging flux. We discuss the outcomes of statistical studies which provide observational evidence that flux emergence may be a more passive process (at least in the upper convection zone); dominated to a greater extent by the influence of convection and to a lesser extent by buoyancy and the Coriolis force acting on rising magnetic flux tubes than previously thought. We also discuss how the relationship between stellar rotation, fractional convection zone depth, and magnetic activity on other stars can help us better understand flux emergence processes. Looking forward, we identify open questions regarding magnetic flux emergence that we anticipate can be addressed in the next decade with further observations and simulations.
Maria A. Weber, Hannah Schunker, Laurène Jouve, Emre Işık
2023-06-10T22:48:37Z
http://arxiv.org/abs/2306.06536v1
# Understanding Active Region Emergence and Origins on the Sun and Other Cool Stars ###### Abstract The emergence of active regions on the Sun is an integral feature of the solar dynamo mechanism. However, details about the generation of active-region-scale magnetism and the journey of this magnetic flux to the photosphere are still in question. Shifting paradigms are now developing for the source depth of the Sun's large-scale magnetism, the organization of this magnetism into fibril flux tubes, and the role of convection in shaping active-region observables. Here we review the landscape of flux emergence theories and simulations, highlight the role flux emergence plays in the global dynamo process, and make connections between flux emergence on the Sun and other cool stars. As longer-term and higher fidelity observations of both solar active regions and their associated flows are amassed, it is now possible to place new constraints on models of emerging flux. We discuss the outcomes of statistical studies which provide observational evidence that flux emergence may be a more passive process (at least in the upper convection zone); dominated to a greater extent by the influence of convection and to a lesser extent by buoyancy and the Coriolis force acting on rising magnetic flux tubes than previously thought. We also discuss how the relationship between stellar rotation, fractional convection zone depth, and magnetic activity on other stars can help us better understand flux emergence processes. Looking forward, we identify open questions regarding magnetic flux emergence that we anticipate can be addressed in the next decade with further observations and simulations. Sun, solar, sunspot, magnetic field, flux emergence ## 1 Introduction The Sun is a magnetically active star showing activity on a wide range of spatial scales and field strengths. An active region is defined by the appearance of a dark feature at the surface of the Sun in continuum white light observations. These features are associated with concentrations of strong magnetic fields, and often develop into fully formed, stable sunspots. Typical active regions consist of opposite polarity pairs that are predominantly east-west aligned and have sizes ranging on the order of 10s to 100s of microhemispheres with lifetimes ranging from two days up to many weeks. The Sun's coherent surface flux elements such as sunspots and active regions emerge from the solar interior. However, how they arrive at the surface and their specific depth of origin is not clear. Helioseismology has placed upper bounds on the amplitude and speed of the flows at and below the surface prior to emergence (e.g. Birch et al, 2013), however any unambiguous detection of flows above the background noise remains a challenge. Therefore, numerical simulations of flux emergence through the surface of the Sun are critical to reconciling the observations with the physics of the formation of active regions. Originally, the paradigm of an idealized, magnetically isolated flux tube was invoked to model magnetism giving rise to active regions. Here, it is assumed that the dynamo has already managed to create magnetism at the base of the convection zone a priori. Studies employing this paradigm were conducted first in a 1D Lagrangian frame using the thin flux tube approximation (e.g. Spruit, 1981; Caligari et al, 1995; Fan et al, 1993), followed by 2D (e.g. Moreno-Insertis and Emonet, 1996; Fan et al, 1998) and 3D magnetohydrodynamic (MHD) (e.g. 
Abbett et al, 2001; Fan, 2008) approaches to resolve the flux tube cross-section and twist of magnetic field lines. As a body of work, these simulations suggest that magnetic buoyancy, the Coriolis force, and the twist of magnetic field lines in a tube play roles in the flux emergence process and are responsible for many active region observables. Addition of a convection velocity field further demonstrated that turbulent interior flows modulate flux emergence, provided the magnetic field strength of the flux tube is not substantially super-equipartition (e.g. Fan et al, 2003; Jouve and Brun, 2009; Weber et al, 2011).

However, this paradigm of idealized flux tubes built by a deep-seated dynamo mechanism has been challenged by results from 3D global convective dynamo simulations. Some demonstrate that toroidal wreaths of magnetism can be formed within the bulk of a stellar convection zone (e.g. Brown et al, 2011; Nelson et al, 2011; Augustson et al, 2015; Matilsky and Toomre, 2020). Either from these wreaths (Nelson et al, 2011, 2013) or within the magneto-convection itself (Fan and Fang, 2014), buoyantly rising magnetic structures - possibly starspot progenitors - are spawned. Results from both idealized flux tube simulations and the buoyant magnetic structures built self-consistently by dynamo action show similarities to active region observables, but there are also many discrepancies. Further modeling work with direct comparison to active region observations is critical to elucidate the true origin of active-region-scale magnetism. The paradigm of the idealized, isolated flux tube mechanism for producing active regions is thus now changing towards a more complete picture.

A large part of the recent paradigm shift was brought about from a statistical analysis of the flows associated with emerging active regions (e.g. Schunker et al, 2016; Birch et al, 2019), emphasising the importance of solar monitoring missions. Prior to instruments such as the Helioseismic and Magnetic Imager (HMI) onboard NASA's Solar Dynamics Observatory (SDO) (Scherrer et al, 2012), with high duty-cycle observations of the magnetic field and Doppler velocity at a cadence sufficiently shorter than the time it takes an active region to emerge, it was not possible to gain any statistical understanding of the emergence process to such detail. Similarly, monitoring campaigns for stars (e.g., Mt Wilson, _Kepler_, BCool, LAMOST, and TESS) have increased both the sample size and the time range of data, such that magnetic variability has been measured over multiple cycle periods on other stars. Although the level of precision and sampling rate in such measurements are insufficient to amass emergence statistics for other stars like we have for the Sun, they help to shape our view of general trends of active-region formation in longitude and latitude, as well as the lifetimes of surface magnetic structures.

The longest record of the eleven-year activity cycle of the Sun is defined by the number of sunspots, or cool, dark regions, visible on the Sun. Active regions are defined from this visible darkening when they are assigned an active region number by the National Oceanic and Atmospheric Administration. At the beginning of the solar cycle, sunspots appear at latitudes around \(30^{\circ}\), and closer to the equator towards the end of the cycle, creating the observed butterfly diagram.
Besides simply defining the solar cycle, active regions are found to have characteristics which correlate with the next solar cycle, suggesting that they are an integral part of the dynamo process. For example, the average tilt angle of sunspot groups over a solar cycle is anti-correlated with the amplitude of the next solar cycle (Dasi-Espuig et al, 2010; Jiao et al, 2021), and large active regions that emerge across the equator (e.g. Nagy et al, 2017) have a significant effect on the amplitude and duration of the subsequent solar cycle. Thus, to fully understand the dynamo process, it is critical to understand how active regions form.

Presumably the distribution of active regions at the surface of the Sun reflects the distribution of the global toroidal field in the interior (Işık et al, 2011), and can provide a strong constraint for their origin and the solar dynamo (e.g. Cameron et al, 2018). However, it cannot be excluded that the dynamo also produces strong field at latitudes which do not become unstable and rise to the surface. For other cool stars, the combined effects of rotation rate and fractional depth of the convection zone can lead to a possible mismatch between active regions on the surface and distributions of magnetic flux in the deeper interior, due to latitudinal deflection as bundles of magnetism rise. As a result, any one-to-one association of observed surface field and the underlying dynamo in active cool stars is not necessarily straightforward (Işık et al, 2011). While it is not currently possible to directly observe the emergence of a starspot, it is possible to make proxy observations (e.g. from chromospheric indices, spectropolarimetry, Zeeman-Doppler imaging and asteroseismology; Berdyugina, 2005; Garcia et al, 2010; See et al, 2016) to infer the distribution, size, lifetime and magnetic field strength of starspots.

In this paper, we attempt to paint a comprehensive picture of the flux emergence process, from generation of the active-region-scale magnetism in the deep interior to its appearance on the photosphere. We begin in Section 2, where we describe observations of the formation of active regions on the solar surface. These observations serve as inspiration and constraints for models of the generation and rise of emerging flux, which we review in Section 3. New observations are highlighted in Section 4, which support a more passive process for active region emergence than was previously understood based on flux emergence models. We then briefly review the role flux emergence plays in the solar dynamo process in Section 5, and discuss flux emergence leading to starspots on other cool stars in Section 6. In Section 7, we conclude with some recommendations as we move toward solving the active-region-scale flux emergence puzzle.

## 2 Formation of active regions at the surface of the Sun

Active regions are defined by the appearance of dark spots on the visible disk of the Sun in white light, caused by strong, concentrated magnetic fields. The presence of this magnetism renders the spots cooler, and therefore darker, than the surrounding photosphere. Active region magnetic fields consist of roughly east-west aligned opposite polarity pairs, ranging from 10 up to 3000 micro-hemispheres in size and carrying \(10^{20}\) to \(10^{22}\) Mx of magnetic flux. Bipolar active regions typically emerge with the same leading polarity within a given hemisphere, and with the polarity orientation flipped in the opposite hemisphere; this pattern is known as Hale's Law (Hale et al, 1919).
At the end of each 11-year sunspot cycle, the polarity orientation reverses for each hemisphere. In either hemisphere, active regions are roughly confined in toroidal bands which appear at higher latitudes of \(\sim 35^{\circ}\) at the beginning of each cycle and progressively move toward the equator over the roughly eleven-year cycle. The leading polarity of an active region (in the prograde direction) also tends to be closer to the equator than the following polarity (in the retrograde direction). This statistical feature is known as Joy's Law (Hale et al, 1919). Joy's Law is often quantified by the 'tilt angle' of the line drawn between the centers of leading and following polarity regions with respect to the east-west direction. Figure 1 shows the bipolar nature of a typical active region and illustrates Joy's Law, as the leading polarity of this southern-hemisphere active region is tilted closer to the equator.

An active region develops from a small magnetic bipole and grows in size as more and more magnetic flux emerges (e.g. Fig. 1). The flux-weighted centres of the polarities move further apart, predominantly in the east-west direction during the emergence process, as more flux emerges. The line-of-sight magnetic field observations show that magnetic field typically emerges as small-scale features near the flux-weighted centre of the active region, which then stream towards the main polarities. Active regions have lifetimes on the order of days to weeks, where large, high-flux active regions live longer than small, low-flux regions (e.g. Schrijver and Zwaan, 2000). Within the active regions, sunspots can form with peak magnetic field strengths from 2000 to 4000 G. Generally, the leading-polarity spot of the bipolar pair is larger and more coherent than the trailing-polarity region (see also Fig. 1). Active regions also have a preferred hemispheric sense of magnetic helicity, as obtained from vector magnetograms. The observations favor a left-handed (negative) twist of the field lines in the northern hemisphere, and a right-handed (positive) twist in the southern hemisphere (e.g. Pevtsov et al, 2001, 2003, 2014; Prabhu et al, 2020).

Although this is how active regions _typically_ form, there is wide variation in their characteristics. When two or more polarity pairs emerge in the vicinity of one another, the polarities can morph into the traditional bipolar structure during the emergence process, usually leading to a more complex, multi-spot active region (e.g. AR 11158 in Schunker et al, 2019, Fig. 1). It is also common to find active regions emerging into sites of existing magnetic field from previous active regions, so-called 'nests' of activity (Castenmiller et al, 1986), where the emerging magnetic field interacts via cancellation and superposition with the existing magnetic field.

Figure 2 shows that the duration of the emergence process, until the magnetic flux has stopped increasing, is on average linearly proportional to the maximum mean magnetic field \(\langle B\rangle_{\rm max}\) (see Appendix A for details on how the emergence time was calculated). Given that the maximum flux of an active region is known to directly correlate with the lifetime (e.g. Schrijver and Zwaan, 2000), our results are consistent with Harvey (1993) (Chapter 3, Table 3). Those results show that the "rise time" of active regions with a smaller maximum area is 1-2 days and increases to 3-4 days for active regions with larger maximum area (Harvey, 1993).
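The analysis behind Figure 2 amounts to an ordinary least-squares line through the per-region (emergence duration, \(\langle B\rangle_{\rm max}\)) pairs, together with means over bins containing equal numbers of points. A minimal sketch of that kind of procedure is given below; the array values are made-up placeholders rather than the SDO/HEARS measurements, and the variable names are ours, not those of the original analysis.

```python
import numpy as np

# Hypothetical inputs, one entry per emerging active region (placeholder values,
# not the SDO/HEARS measurements): emergence duration in days and the maximum
# mean field <B>_max in gauss.
duration_days = np.array([1.1, 1.5, 2.3, 0.9, 2.8, 1.7, 3.2, 1.4])
b_max_gauss = np.array([28.0, 35.0, 55.0, 22.0, 71.0, 47.0, 80.0, 33.0])

# Straight-line fit <B>_max = slope * duration + intercept, so the slope carries
# units of gauss per day, the unit quoted in the text.
(slope, intercept), cov = np.polyfit(duration_days, b_max_gauss, 1, cov=True)
slope_err = np.sqrt(cov[0, 0])
print(f"slope = {slope:.1f} +/- {slope_err:.1f} G/day")

# Binned means over bins with equal numbers of points (the dark-blue points in
# Figure 2), ordered by <B>_max.
order = np.argsort(b_max_gauss)
for chunk in np.array_split(order, 4):
    b_mean = b_max_gauss[chunk].mean()
    d_mean = duration_days[chunk].mean()
    d_err = duration_days[chunk].std(ddof=1) / np.sqrt(len(chunk))
    print(f"<B>_max ~ {b_mean:5.1f} G : duration = {d_mean:.2f} +/- {d_err:.2f} d")
```

Fitting \(\langle B\rangle_{\rm max}\) against duration is what yields a slope in G per day, matching the value quoted below.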
Here, we specifically avoid the term "rise time" since it implies a physical rising. What we are actually measuring is the time it takes magnetic flux to stop increasing in an active region at the surface. Figure 2 shows that the relationship is, though with considerable scatter, remarkably linear, with a slope of \(19.5\pm 2.2\) G per day.

Figure 2: Duration of the emergence process as a function of \(\langle\mathsf{B}\rangle_{\mathrm{max}}\) for each emerging active region (light blue points); the blue diagonal line is a linear fit with slope \(19.5\pm 2.2\) G/day. The blue horizontal dashed line is the mean of \(\langle\mathsf{B}\rangle_{\mathrm{max}}\), \(48.6\pm 2.9\) G, and the vertical dashed line is the mean duration of the emergence time, \(1.7\pm 0.1\) days. The mean and uncertainty values of \(\langle\mathsf{B}\rangle_{\mathrm{max}}\) bins with equal numbers of points are shown in dark blue. See Appendix A for details on how the emergence time was calculated.

## 3 Models of emerging flux

If the active-region-scale magnetism described in Section 2 is generated by the underlying dynamo, then it must somehow make its way from the subsurface large-scale magnetic field to the surface. The appearance of active regions evokes the idea of rising ropes of magnetism. We see arches of magnetic bundles extended above the Sun's surface, and at the footpoints of these arches are sunspots. Within the Sun's interior is where we think these bundles of magnetic flux are born, which then rise and intersect with the photosphere to form sunspots. In this section, we briefly review models and their outcomes which describe the formation of active-region-scale magnetic structures and their rise to the photosphere (also see the reviews by Fan (2021) and by Cheung and Isobe (2014)).

### 3.1 Formation and Destabilization of Active-Region-Scale Magnetic Structures

The magnetism responsible for active regions is formed in the solar interior; however, the exact physical location of magnetic field generation is not known with certainty.

Figure 1: Example of a typical active region, NOAA AR11072, emerging onto the surface of the Sun as observed by SDO/HMI. The top row shows Postel-projected maps of the continuum intensity, and the bottom row shows maps of the line-of-sight magnetic field \(\pm 500\) G. In this instance, 0 days corresponds to the emergence time 2010.05.20_17:12:00_TAI, and the maps are centred at Carrington longitude \(316.43^{\circ}\) and latitude \(-15.13^{\circ}\). The east-west direction is \(x\) and the north-south is \(y\). Hale's Law, the formation of a sunspot in the leading polarity, and Joy's Law are evident.

The paradigm that solar physics has clung to is that the magnetism giving rise to active regions is generated and stored at the base of the convection zone in the weakly subadiabatic overshoot region (e.g. Parker, 1975; van Ballegooijen, 1982; Moreno-Insertis et al, 1992; Rempel, 2003). Here it is thought that shear from differential rotation at the tachocline transforms poloidal field into toroidal field, which is amplified until it is strong enough to become buoyantly unstable. The magnetism then subsequently rises through the convection zone to the photosphere. Beyond this shearing and storage mechanism, many studies of flux emergence, assuming the magnetism is formed as 'flux tubes' in the overshoot layer or at the very bottom of the convection zone, reproduce many properties of solar active regions (see Sec. 3.2).

Studies have been carried out which consider magnetic buoyancy instabilities as a means to initiate the rise of magnetic flux bundles from the overshoot region (e.g. Spruit and van Ballegooijen, 1982; Ferriz-Mas and Schussler, 1995; Caligari et al, 1995). Magnetic buoyancy is the result of a buoyant force due to the presence of a concentration of magnetism. Imagining this magnetism as a bundle or 'tube' of magnetic flux, there is a pressure balance between the gas pressure outside the tube (\(P_{e}\)) and the sum of the gas pressure (\(P_{i}\)) and magnetic pressure (\(P_{b}\)) inside. The gas density of the tube can be reduced if there is a condition of temperature equilibrium, allowing the tube to buoyantly rise. Even if the tube is in neutral density with its surroundings, a perturbation could result in an undular instability that lifts part of the tube upward, creating an \(\Omega\)-shaped loop, allowing mass to locally drain down the legs of the rising loop apex and initiating a buoyant rise.
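Written out, this lateral balance and the resulting buoyancy are standard textbook relations; we restate them here for orientation, since they are implied by, rather than written in, the passage above. The pressure balance is

\[
P_{e} \;=\; P_{i} + \frac{B^{2}}{8\pi},
\]

and, if the tube is in temperature equilibrium with its surroundings (\(T_{i}=T_{e}\)), the ideal gas law gives a fractional density deficit

\[
\frac{\rho_{e}-\rho_{i}}{\rho_{e}} \;=\; \frac{B^{2}}{8\pi P_{e}},
\]

so the tube feels an upward buoyant acceleration of order \(g\,B^{2}/(8\pi P_{e})\). The 'equipartition' field strength referred to below is the value for which the magnetic energy density matches the kinetic energy density of the convection, \(B_{\rm eq}^{2}/8\pi = \rho v_{c}^{2}/2\), i.e. \(B_{\rm eq}=\sqrt{4\pi\rho}\,v_{c}\).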
When considering thin flux tubes in mechanical equilibrium, their stability is primarily determined by their magnetic field strength and the subadiabaticity of the overshoot region (e.g. Caligari et al, 1995). It is found that the field strength of the flux tube must exceed the equipartition value of \(\sim 10^{4}\) G by about an order of magnitude in order to develop unstable modes at sunspot latitudes in less than \(\sim\)1 year.

Instead of considering isolated magnetic flux tubes built in the tachocline region, many studies using multi-dimensional MHD simulations have focused on the formation of buoyant instabilities within layers of uniform, horizontal magnetic field (e.g. Cattaneo and Hughes, 1988; Matthews et al, 1995; Fan, 2001; Vasil and Brummell, 2008, 2009). Indeed, it has been shown that regions of velocity shear can generate tube-like magnetic structures or magnetic layers (e.g. Cline et al, 2003; Vasil and Brummell, 2008). Vasil and Brummell (2008) find that a velocity shear representing a tachocline-like shear can generate a strong layer of horizontal magnetic field. From this self-consistently generated magnetic layer, buoyant structures resembling undulating 'tubes' arise due to magnetic buoyancy instabilities within the magnetic layer. However, the shear required to develop the magnetic buoyancy instabilities of the magnetic layer is much stronger, and the magnetic Prandtl number much larger, than what is expected in the solar tachocline (Vasil and Brummell, 2009).

In order to generate a twist of the magnetic field within such rising magnetic 'flux tubes', as is found in active region observations, Favier et al (2012) showed that it was sufficient to add an inclined uniform weak field on top of the unstable horizontal magnetic layer. Indeed, in this case, the unstable undulating tubes interact with the overarching inclined field as they buoyantly rise and the field lines start to wind around the tube axis, creating an effective twist in the magnetic structure.

There is now a shifting paradigm regarding the location of active-region-scale magnetic field generation.
Recent global 3D magnetohydrodynamic (MHD) dynamo simulations have compelling outcomes which suggest that active-region-scale magnetism need not be formed at the base of the convection zone. In some, cyclic wreaths of magnetism are built amid the magneto-convection without the need for a tachocline (e.g. Brown et al, 2011; Augustson et al, 2015). Taking similar simulations but reducing sub-grid-scale turbulent diffusion, Nelson et al (2011, 2013, 2014) capture the generation of buoyant magnetic structures arising from magnetic wreaths - possible starspot progenitors. While typical azimuthal field strengths are a few kilogauss, buoyant loops are only spawned in regions with super-equipartition localized fields. The global dynamo simulations of Fan and Fang (2014) also exhibit super-equipartition flux bundles that rise toward the upper simulation domain. A common trait of these dynamo-generated buoyant loops is that they are continually amplified by shear and differential rotation as they rise. Unlike flux tube simulations (see Sec. 3.2), these are not isolated magnetic structures. Yet recently, Bice and Toomre (2022) found self-consistently generated flux ropes in a global 3D MHD dynamo simulation representative of an early M-dwarf with a tachocline. The majority of the ropes remain embedded in the tachocline, while buoyant portions are lifted upward by nests of convection.

Taken together, these models and simulations of the formation and instability of buoyant magnetic structures, possible starspot progenitors, prompt us to reconsider the paradigm of isolated magnetic flux tubes arising from the deep convection zone. However, as is the case with all simulations, it is important to note that all the simulations discussed here are far removed from the regime of real stars. Yet, they reproduce the observed properties of active regions remarkably well and give us a glimpse into the complex interplay of forces and mechanisms at work in stellar interiors that conspire to generate magnetic structures and facilitate their journey toward the surface.

### 3.2 The Flux Tube Paradigm

Isolated magnetic flux tubes in the convection zone have a long history of study because they are convenient both analytically and computationally, and had until recently been able to sufficiently explain the observed properties of active regions. In most studies, they are given an 'a priori' magnetic field strength and flux - it is taken for granted that the dynamo, via global or local processes, has somehow managed to create them - and are usually assumed to have formed at the bottom of the convection zone. There are two primary types of flux tube simulations: the thin flux tube approximation (e.g. Spruit, 1981; Caligari et al, 1995; Fan et al, 1993; Weber et al, 2011) and anelastic 2D/3D MHD simulations (e.g. Emonet and Moreno-Insertis, 1998; Fan et al, 2003; Fan, 2008; Jouve and Brun, 2009). The thin flux tube approximation takes the flux tube as so thin that there is an instantaneous balance between the pressure outside the flux tube and the gas pressure plus magnetic pressure inside the flux tube. All physical quantities are taken as averages over the tube cross-section, and the tube is essentially a 1D string of mass elements, free to be accelerated in three dimensions by bulk forces in ideal MHD, including buoyancy, magnetic tension, the Coriolis force, and aerodynamic drag.
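In schematic form, the corresponding equation of motion per unit volume of tube collects exactly these forces. This is a generic sketch of the standard thin-flux-tube formulation rather than a reproduction of any particular paper's equation, and signs and additional terms vary between implementations:

\[
\rho_{i}\,\frac{d\mathbf{v}}{dt} \;=\; \left(\rho_{i}-\rho_{e}\right)\mathbf{g}_{\rm eff} \;+\; 2\rho_{i}\,\mathbf{v}\times\boldsymbol{\Omega} \;+\; \frac{B^{2}}{4\pi}\,\frac{\partial\hat{\mathbf{l}}}{\partial s} \;-\; C_{\rm D}\,\frac{\rho_{e}\,|\mathbf{v}_{\perp}|\,\mathbf{v}_{\perp}}{\pi a},
\]

where \(\hat{\mathbf{l}}\) is the unit tangent along the tube, \(s\) the arc length, \(a\) the tube radius, \(\mathbf{v}_{\perp}\) the velocity relative to the surroundings perpendicular to the tube, and \(C_{\rm D}\) a drag coefficient of order unity. The four terms on the right are the buoyancy (in the effective gravity, including the centrifugal contribution), the Coriolis force, magnetic tension, and aerodynamic drag named above, closed by the lateral pressure balance \(P_{e}=P_{i}+B^{2}/8\pi\).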
In order to resolve the flux tube cross-section, 2D or 3D MHD simulations are used. These simultaneously solve the full set of MHD equations and, in some cases, convection. But in order to meet the grid resolution typical of these models, they often have a flux too large for most active regions (see Fan, 2021).

Flux tube simulations have sought to explain the appearance of solar active regions, such as their latitude of emergence, their tilting action in accordance with Joy's Law, and the general trend of a more coherent, less fragmented morphology for the leading polarity of an active region, as depicted in Figure 1. For all of these examples, flux tube simulations have pointed toward the Coriolis force as the driver of the phenomenon. Consider three primary forces acting on a flux tube cross-section: a magnetic tension force directed toward the Sun's rotation axis, a buoyancy force directed radially outward, and the Coriolis force resulting from toroidal flow within the tube. As the tube traverses the convection zone, conservation of angular momentum drives a retrograde flow within the flux tube, resulting in a Coriolis force (as mentioned above) directed inward toward the rotation axis. When the magnetic field strength of the flux tube is strong (i.e. super-equipartition), the buoyancy force dominates and the flux tubes rise radially from their original latitude at the base of the convection zone. As the field strength of the flux tube decreases, the outward component of buoyancy diminishes compared to the inward component of the Coriolis force, and the resulting trajectory turns more poleward, such that flux avoids emerging at lower latitudes (e.g. Choudhuri and Gilman, 1987; Caligari et al, 1995). A fourth force acting on the flux tube, the drag force, is stronger for flux tubes of lower magnetic flux. As a result, flux tubes with lower initial values of magnetic flux, around \(10^{20}-10^{21}\) Mx, are able to rise more radially than those of \(10^{22}\) Mx (e.g. Choudhuri and Gilman, 1987; D'Silva and Choudhuri, 1993; Fan et al, 1993).

If portions of the flux tube remain anchored deeper down in the convection zone, it is found within thin flux tube simulations that the material near the apex of a rising loop will both expand and diverge (although still with net retrograde motion), leading to a Coriolis-force-induced tilting of the loop toward the equator (D'Silva and Choudhuri, 1993). Following the Joy's Law trend, these simulations also show an increasing tilt of the flux tube legs with increasing latitude of emergence (e.g. D'Silva and Choudhuri, 1993; Caligari et al, 1995). This is expected if the Coriolis force is responsible for the tilting action, as the Coriolis force is proportional to the sine of the latitude. Additionally, the tilt angle is found to increase with increasing magnetic flux (Fan et al, 1994).

Within thin flux tube simulations, the retrograde plasma motion near the flux tube apex contributes to a stronger magnetic field strength in the leading leg (in the direction of solar rotation) compared to the following leg (e.g. Fan et al, 1993, 1994). It is noted that plasma is evacuated out of the leading flux tube leg into the following leg. Owing to the condition of pressure balance between the flux tube and its surroundings (\(P_{i}+P_{b}=P_{e}\)), this results in a stronger magnetic field strength for the leading side of the loop compared to the following (Fan et al, 1993).
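The connection between evacuation and field strength follows directly from that lateral balance; this is a standard argument, restated here rather than quoted from the works cited. Since \(P_{b}=B^{2}/8\pi\),

\[
B \;=\; \sqrt{8\pi\left(P_{e}-P_{i}\right)},
\]

so at a given depth the leg with the lower internal gas pressure - the leading leg, from which plasma has drained - must carry the stronger magnetic field.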
Here it is important to highlight that idealized flux tube simulations of all varieties are very efficient at conserving angular momentum (e.g. Fan, 2008; Jouve and Brun, 2009; Weber et al, 2011), yet studies utilizing local helioseismology rule out the presence of retrograde flows on the order of 100 m/s in favor of flows not exceeding \(\sim 15\) m/s (Birch et al, 2013). In comparison, the buoyantly rising magnetic structures within the 3D convective dynamo simulations of Nelson et al (2014) are weakly retrograde, and are actually prograde within the simulations of Fan and Fang (2014). Within these 3D convective dynamo simulations, and perhaps within the Sun itself, flux emergence processes may deviate more from the 'idealized' flux tube paradigm than originally thought.

Work has been done to study the twist of flux tube magnetic field lines in 2D and 3D MHD simulations. This body of work shows that if the magnetic field is not twisted enough along the flux tube axis, the flux tube tends to break apart and lose coherence as it rises (see the review by Fan, 2021), although a curvature in the flux tube can partially mitigate this (e.g. Martinez-Sykora et al, 2015). Essentially, a minimum magnetic field twist rate (i.e. angular rotation of the magnetic field lines along the flux tube axis) is needed to counteract vorticity generation in the surrounding plasma caused by the buoyancy gradient across the flux tube's cross-section. It is observed that active regions have a preferred helical twist of the magnetic field that is left-handed in the Northern hemisphere and right-handed in the Southern hemisphere (Pevtsov et al, 2003). But Fan (2008) finds that the tilt of the rising flux tube ends up in the wrong direction if the twist is of the preferred hemispheric sign and strong enough to maintain coherence of the flux tube. If the twist of the field lines is reversed in handedness, the tilt angle is of the correct sign. Reducing the magnetic field twist per unit length also solves the hemisphere tilt problem, but then the tube becomes less coherent and loses more flux as it rises.

The flux tube simulations mentioned previously in this section (Sec. 3.2) do not consider the impact of convection on the evolution of rising magnetism. However, it is absolutely clear that convection modulates flux emergence when it is included, provided that the magnetic field is not substantially super-equipartition (e.g. Fan et al, 2003; Jouve and Brun, 2009; Jouve et al, 2013; Weber et al, 2011, 2013b). Convective motions and magnetic buoyancy work in concert to promote flux emergence. Convection destabilizes the tube at the base of the convection zone, forcing parts to rise. As the tube bends, mass drains down the tube legs, making the apex less dense than portions deeper down, so that part of the tube also rises buoyantly. This, in combination with convective upflows, helps the flux tube to rise toward the surface, while convective downflows can pin parts of the flux tube in deeper layers.

By embedding thin flux tubes in a rotating spherical shell of solar-like convection, Weber et al (2013b) perform a statistical study to investigate how convection impacts flux tube properties that can be compared to solar active regions. Taking all their results into consideration, they attempt to constrain the as-yet unknown dynamo-generated magnetic field strength of active-region-scale flux tubes. They find that tubes with initial field strengths \(\geq 40\) kG are good candidates for the progenitors of large (\(10^{21}-10^{22}\) Mx) solar active regions.
In particular, Weber et al (2013b) find that convection tends to increase the Joy's Law trend, especially for mid-field-strength flux tubes of 40-50 kG. These flux tubes also take the longest time to rise, due to the competing interplay of buoyancy and drag from the surrounding turbulent flows. By 'increasing the Joy's Law trend', the authors refer to a systematic effect whereby the addition of solar-like giant-cell convection tends to boost the tilt angle at the same emergence latitude compared to simulations not subject to a convective velocity field. This is attributed, in part, to the associated kinetic helicity within the upflows. Taking all of the simulations together, for tubes initiated \(\pm 40^{\circ}\) around the equator with a magnetic flux of \(10^{20}-10^{22}\) Mx and initial field strengths of 15-100 kG, the distribution of tilt angles peaks around \(10^{\circ}\). This is in good agreement with the active region observations of Howard (1996) and Stenflo and Kosovichev (2012a). Furthermore, similarly peaked tilt angle distributions are found for the buoyantly rising, dynamo-generated loops from the 3D convective dynamo simulations of Nelson et al (2014) and Fan and Fang (2014). Perhaps this is indicative of similar processes at work in both these convective dynamo simulations and the thin flux tube simulations of Weber et al (2013b) - it is the turbulent, helical motion of convective upflows and the dynamics of the rising flux bundles themselves that contribute to the tilt angles extracted here.

### 3.3 Beyond Idealized Flux Tubes

In Section 3.2, we introduced the idealized flux tube paradigm to describe the transport of magnetism from the deep interior toward the surface. In Section 3.1, we noted that buoyantly rising magnetic structures have been found to arise from simulations of extended magnetic layers and to form within wreaths of magnetism generated by dynamo action. In these latter two examples of MHD simulations, the buoyantly rising magnetism is _not_ in the form of idealized, magnetically isolated flux tubes. While simulations of idealized flux tubes are able to reproduce some properties of solar active region observables (see Section 3.2), it may be unlikely that flux bundles rise within the convection zone entirely isolated from other nearby magnetic flux structures or a background field. Here we review studies of flux emergence that go beyond idealized flux tubes in an anelastic interior.

It is recognized that the presence of a background magnetic field, and reconnection occurring between various magnetic flux structures, have implications for the flux tube's evolution and the complexity of active regions. For example, Pinto and Brun (2013) introduce a twisted flux tube in a 3D spherical convection zone with an evolving background dynamo. In comparison to the purely hydrodynamic case of Jouve and Brun (2009), the presence of the background magnetic field introduces a 'drag' on the tube as it rises which is dependent on the orientation of the flux tube's magnetic field with respect to the background field. In particular, flux tubes with one sign of twist seem to rise faster than the ones possessing the opposite sign.
The favored handedness then depends on the preferred magnetic helicity sign of the dynamo field. By embedding a twisted toroidal flux tube in an effectively poloidal background magnetic field, Manek et al (2018); Manek and Brummell (2021); Manek et al (2022) show that a particular sign of twist increases the likelihood of a flux tube's rise and aligns with the solar hemispheric helicity rules of active regions. Indeed, as mentioned in Section 2, observations show a tendency for active regions to possess a negative helicity in the Northern hemisphere and a positive one in the Southern hemisphere, although this is not a strict rule and is obeyed by only about 60% of active regions (Pevtsov et al, 2014).

Beyond the interactions between buoyant concentrations of magnetic field and the dynamo-generated smaller-scale fields, it has also been argued that reconnections between multiple buoyantly rising structures could have strong consequences for emerging regions. In particular, these reconnections can be at the origin of complex active regions, with strongly sheared polarity inversion lines and patches of positive and negative magnetic helicity, indicating a high potential for flaring activity. Simulations of such processes were conducted initially by Linton et al (2001) in a Cartesian geometry and then by Jouve et al (2018) in a spherical shell including convection. In the latter, it was found that flux tubes with the same sign of axial field and the same twist could merge to produce a single active region with a complicated structure and non-neutralized radial currents, which could make these regions more likely to produce flares. Fully compressible calculations by Toriumi et al (2014) were also performed to explore the possibility that the intense flare-productive active region NOAA 11158 could be the product of interacting buoyant magnetic structures. Considering flux tubes isolated from the ambient dynamo field and independently rising to the solar photosphere is thus probably too simplistic.

The global models which simulate the interactions between convective motions, large-scale flows and more-or-less idealized, isolated magnetic flux tubes do not treat the uppermost layers of the convection zone and thus do not model the photospheric emergence. Firstly, the thin flux-tube approximation loses its validity above \(\sim\)0.98\(R_{\odot}\), owing to the expansion of the tube apex to maintain pressure balance, to the extent that the tube radius becomes comparable with the local pressure scale height. Secondly, the anelastic approximation also breaks down close to the photosphere, where the Mach number becomes of order unity. At this point, as a caution to the reader, we have to remember that the outcomes from these computational simulations serve only as touchstones for comparison to active regions. Direct comparisons between the properties of observed active regions and the results of thin flux tube simulations, or the magnetic bipolar structures produced at the top of the computational domain of 3D anelastic simulations, may be misleading.

Compressible simulations including radiative transfer more closely approach the physics occurring at the top of the convection zone. These simulations aim at understanding how buoyant magnetic structures would make their way through the huge gradients of density and temperature in this region.
This work first started with Cheung et al (2010), who used the MURaM code to simulate the photospheric emergence of a highly twisted torus placed at the base of the computational domain (around 7 Mm below the surface), which was then gently brought towards the surface by an imposed radial velocity field of 1 km/s. This work was then extended to investigate the effects of less structured magnetic fields introduced at the bottom of the domain. In particular, Chen et al (2017) used the flux concentrations produced by the convective dynamo simulations of Fan and Fang (2014) as an input, with a significant rescaling of the magnetic flux contained in these concentrations to have values at the photosphere compatible with typical active regions. Subsequently, an active-region-like structure was formed. Another example employing the MURaM code is the set of near-surface simulations of Birch et al (2016), where a torus of magnetic field, without twist, is introduced through the bottom boundary with varying speeds, up to the \(\sim 500\) m/s rise speed predicted for a thin flux tube (Fan, 2008). They found that the strong diverging flows at the surface when the torus emerges are incompatible with observations, which do not show a significant diverging flow.

Using the STAGGER code to model compressible, radiative MHD of the near-surface layers, Stein and Nordlund (2012) introduced an even less structured magnetic concentration by only imposing at the bottom boundary (at 20 Mm) a relatively weak, untwisted, uniform horizontal field of 1 kG. This field then rises towards the photosphere at the convective upflow speed and self-organizes into a bipolar active region with one coherent polarity and a more dispersed one. Several observational aspects, like the rise speed, the absence of a strong retrograde flow, and the asymmetry between the polarities, are interestingly reproduced in these simulations.

Other types of simulations of highly stratified turbulence also spontaneously produced magnetic flux concentrations resembling active regions, albeit not at the spatial or flux scales of real active regions, without the need to advect a well-defined magnetic structure at the bottom of the domain. This is the case, for example, for the simulations by Brandenburg et al (2013) and then Kapyla et al (2016), where the Negative Effective Magnetic Pressure Instability (NEMPI) mechanism is invoked to explain the spontaneous clumping of magnetic fields into a coherent structure. The most important ingredients in such simulations seem to be the strong density stratification and the large degree of turbulence. The formation of active regions following such a mechanism would then imply that they are produced in the subsurface layers of the Sun, where both strong stratification and turbulence exist. If it turns out that such mechanisms are indeed at work in the Sun, this would completely revise our understanding of the flux emergence process and its origin. However, it is as yet unclear how observed active region properties such as Joy's Law might be reproduced via the NEMPI mechanism.

The flux emergence process spans many density scale heights, and hence many orders of magnitude in density. Owing in part to this, it is difficult for one single simulation to track flux emergence from its generation by the dynamo to its interaction with the photosphere. As described above, some work has been done to 'couple' flux emergence simulations of the deeper interior with those of a photosphere-like region.
Hotta and Iijima (2020) performed the first radiative MHD simulation of a rising flux tube in a full convection zone, although without rotation, up to the photosphere. A 10 kG flux tube is introduced 35 Mm below the top of the domain. Convection then modulates the flux tube, resulting in magnetic 'roots' anchored in two downflows as deep as 80 Mm, with a bipolar sunspot-like region forming at the apex of the now \(\Omega\)-shaped flux bundle. More realistic simulations like these, incorporating rotational effects, will make it increasingly straightforward to directly compare to, and interpret, solar observations.

## 4 Statistical constraints supporting a passive active region emergence

The formation of each active region is unique. Simulations of active region emergence, especially those in 3D with appropriate active-region-scale magnetic flux, are currently too computationally expensive to build a statistically significant sample of flux emergence scenarios. Weber et al (2011, 2013b) have circumvented this somewhat by performing simulations of thin flux tubes embedded within a time-varying 3D convective velocity field (see Sec. 3.2). This limiting factor makes it especially important to have a comprehensive sample of observed emerging active regions for comparison. Understanding the common properties of the emergence process is the only avenue to constrain the common physics behind flux emergence. There have been a number of statistical observational studies (e.g. Komm et al, 2008; Kosovichev and Stenflo, 2008; Leka et al, 2013; Birch et al, 2013; Barnes et al, 2014; McClintock and Norton, 2016) on the formation of active regions, but in this paper we will focus on the observed characteristics that can place direct constraints on the models. We refer to an 'active' emergence as one guided by the magnetic field, and a 'passive' emergence as one guided by the convection.

### 4.1 Geometry of the flux tube

There is an apparent asymmetry in the east-west proper motions of the two main active region polarities as they emerge, with the leading polarity moving prograde faster than the following polarity moves retrograde (e.g. Gilman and Howard, 1985; Chou and Wang, 1987). Simulations have explained this as consistent with a geometrical asymmetry in the legs of an emerging flux tube, where the leg in the prograde direction is more tangentially oriented than the following leg, which is more radial (see for example Fig. 5 of Jouve et al, 2013). As this flux tube rises through the surface, the leading polarity moves more rapidly in the prograde direction than the following polarity does in the retrograde direction. Modeled within the thin flux tube approach in particular, this asymmetry is due to the Coriolis force driving a counter-rotating motion of the tube plasma, so that the summit of the loop moves retrograde relative to the legs (e.g. Moreno-Insertis et al, 1994; Caligari et al, 1995).

However, Schunker et al (2016) showed that while there is an apparent asymmetry of the leading and following polarity motion of the active region with respect to the Carrington rotation rate, this east-west motion is actually _symmetric_ with respect to the local plasma rotation speed. Here, in Fig. 3, we show the separation velocities for 117 active regions, an increased sample from what is shown in Schunker et al (2016), further supporting the initial results. The average motion of the leading and following polarities in the first day after emergence is asymmetric with respect to the Carrington rotation rate
(Fig. 3, left), consistent with e.g. Chou and Wang (1987): the mean east-west velocity of the leading polarity in the first day after emergence is \(127\pm 14\) m s\({}^{-1}\) and that of the trailing polarity is \(-61\pm 10\) m s\({}^{-1}\). However, we emphasise that _the east-west motion of the polarities about the local plasma rotation speed is symmetric_ (Fig. 3, right).

By embedding thin flux tubes within solar-like convection, Weber et al (2013) show that the average rotation speed of the center between the leading and following rising flux tube legs can approach the solar surface rotation speed, but only for strong flux tubes with initial field strengths of \(\geq\) 60 kG. However, due to strong conservation of angular momentum within the rising loop, the plasma flow at the apex of the loop is substantially retrograde (see also Sec. 3.2), beyond what is detected by observations (e.g. Birch et al, 2013). Based upon these outcomes from observations and simulations, we suggest that any constraints placed on models of emerging flux tubes with geometrically asymmetric legs should be carefully reconsidered. Care should also be taken when choosing the particular reference frame used to study their apparent motion asymmetries.

### 4.2 Rise speed of the flux tube

In the absence of convection, idealized thin flux tube simulations show an upward rise speed of about 500 m/s at about 20-30 Mm below the surface (e.g. Caligari et al, 1995). To understand how this would manifest at the surface of the Sun, Birch et al (2016) inserted a torus of magnetic flux through the bottom boundary of a three-dimensional, fully convective, near-surface simulation with a rise speed of 500 m/s. This simulation produced a strong outflow at the surface (about 400 m/s) as the torus emerged. Observations of the surface flows during an emergence on the Sun do not show such a strong outflow signature, but rather flow velocities that are consistent with a rise speed of less than \(\approx\) 100 m/s, typical of convective upflows in the near-surface layers. In agreement with the observations, a flux tube that emerges naturally from a depth of 50 Mm within the radiative MHD simulations of Hotta and Iijima (2020) does not produce any significant outflow at the surface. However, the rise speed of that flux tube is 250 m/s. This calls into question the traditional, idealized flux tube picture and suggests that the convection has an influence on the near-surface emergence process, but it does not exclude thin flux tubes which may rise from the base of the convection zone with a slower speed.

Hotta and Iijima (2020) suggest that the reason their flux tube forms such a convincing active region structure is that it is initially placed across two coherent downflow regions. The downflows effectively pin the ends of the flux tube down, so that the centre emerges as a loop, implying that the influence of convection extends down to where flux tubes lie, and can even instigate the emergence process, supporting some of the global models. Such correlations between rising (sinking) parts of flux tubes and upflows (downflows) were already observed in models of emergence in the bulk of the convection zone (e.g. Fan et al, 2003; Weber et al, 2011, 2013b) and in rising flux bundles generated within 3D convective dynamo simulations (Nelson et al, 2011, 2013; Fan and Fang, 2014).
However, the work of Hotta and Iijima (2020) shows that this interplay could also happen near the photosphere, and highlights the potential importance of convective motions in bringing the observed magnetic structures to the surface.

Figure 3: Left: The mean east-west velocity relative to the Carrington rotation rate of the leading (red crosses) and trailing (black crosses) polarities over the first day after emergence for 117 active regions selected from the Solar Dynamics Observatory Helioseismic Emerging Active Regions Survey (SDO/HEARS; see Table 1 in each of Schunker et al (2016, 2019) for a full list of active regions in SDO/HEARS, and Appendix A for a list of active regions that were excluded). The size of the symbols represents the size of the active region (AR 11158 is the largest). The scatter is large; this emphasises the uniqueness of each active region emergence. The mean velocity of the leading polarity in the first day after emergence is \(127\pm 14\) m s\({}^{-1}\) and that of the trailing polarity is \(-61\pm 10\) m s\({}^{-1}\). Right: The average velocities in bins of polewards and equatorwards latitudes divided by the median latitude (dashed vertical lines) of the EARs. The black curve shows the differential velocity of the surface plasma relative to the Carrington rotation rate. The uncertainties are given by the rms of the velocities in each bin, divided by the square root of the number of EARs in the bin. This figure is an updated version of Fig. 11 in Schunker et al (2016), where the average speed of the leading polarities was \(121\pm 22\) m s\({}^{-1}\) and that of the trailing polarities was \(-70\pm 13\) m s\({}^{-1}\). Full details of the method to measure the east-west polarity speeds are described in Section 7 of Schunker et al (2016).

### 4.3 Onset of Joy's Law

Joy's Law is the observed tendency of the leading polarity in predominantly east-west aligned active regions to be slightly closer to the equator than the following polarity. The angle these polarities make relative to the east-west direction is called the tilt angle, and it increases with the latitude of the active region, strongly suggesting that Joy's Law has its origins in the Coriolis force. In some mean-field dynamo models, Joy's Law is an important characteristic where the tilt angle acts as a non-linear feedback mechanism (e.g. Cameron et al, 2010).

Within the idealized flux tube paradigm, plasma near the rising flux tube apex will expand and diverge. This results in a Coriolis-force-induced tilt of the tube axis in the sense of Joy's Law that increases with latitude (see Sec. 3.2). In this picture, the tilt angle should be present at the time of emergence. The tilt angle also depends on the flux and field strength of the magnetism (e.g. Fan et al, 1994). A larger magnetic flux \(\Phi\) increases the buoyancy of the tube, and therefore the rise speed and the effect of the Coriolis force, such that the tilt angle \(\alpha\) increases (\(\uparrow\Phi\Rightarrow\uparrow\alpha\)). But a larger magnetic field \(B\) (for the same flux) increases the magnetic tension of the flux tube, which decreases the tilt angle due to the domination of tension over the Coriolis force (\(\uparrow B\Rightarrow\downarrow\alpha\); see also Işık, 2015). Weber et al (2013) show that incorporating the effects of time-varying giant-cell convection systematically increases the tilt angles of rising flux tubes compared to the case without convection, but does not necessarily reproduce the tilt angle trends found in Fan et al (1994) (also see Sec. 3.2).
However, they do note that there is a larger spread in tilt angles at lower magnetic flux, as reported in some observations (e.g. Wang and Sheeley, 1989; Stenflo and Kosovichev, 2012). Taken together, these simulation results show that the interplay of time-varying convection, the Coriolis force, magnetic tension, and buoyancy complicates trends in tilt angle.

Schunker et al (2020) measured the tilt angle of over 100 active region polarities throughout the emergence process, and found that on average the polarities tend to emerge east-west aligned (i.e., with zero tilt), albeit with a large scatter, and that the tilt angle develops during the emergence process. Moreover, Schunker et al (2020) found that the latitudinal dependence of the tilt angle arises only from the north-south motion of the polarities, and that the east-west motion is dependent only on the amount of flux that has emerged. They also found that there is no dependence of the tilt angle on the maximum magnetic field strength of the active region. Schunker et al (2020) conclude that the observed Joy's Law trend is inconsistent with a rising flux tube that has an established, latitudinally dependent tilt angle as it rises to intersect with the photosphere. We note that idealized thin flux tube models do not extend all the way to the surface (typically \(\approx 0.98R_{\odot}\)), where convection becomes important; however, in simulations of coherent magnetic structures rising through the near-surface convection, the surface signature still reflects the orientation of the subsurface footpoints (e.g. Chen et al, 2017).

Another possibility to explain Joy's Law is the conservation of magnetic helicity in a flux tube as it rises through the surface (e.g. Berger and Field, 1984; Longcope and Klapper, 1997). The magnetic helicity is composed of the writhe, which measures the deformation of the flux tube axis, and the twist of the magnetic field lines about the axis. In ideal MHD, the magnetic helicity is a conserved quantity, and changing one component necessarily requires a change in the other. In some simulations, the twist of the magnetic field about the axis of the flux tube is vital to it remaining coherent as it rises (e.g. Fan et al, 1998; Fan, 2008). Within the thin flux tube context, it is shown that the writhe developed by the evolving flux tube can generate magnetic field twist (Longcope and Klapper, 1997; Fan and Gong, 2000), but this alone is not enough to account for the observed twist of active regions (see Fan, 2021). Indeed, this relationship between the twist of the magnetic field and the writhe of the flux tube (related to the tilt of the active region) has been posited as a means to explore the link between 'kink-unstable' flux tubes and complex sunspot groups that have polarity orientations opposite to Hale's Law (e.g. Lopez Fuentes et al, 2003; Fan, 2021). While there have been multiple studies of the helicity and twist of the surface magnetic field in active regions (e.g. Pevtsov et al, 2014, and references therein), the relationship between the twist and writhe is still ambiguous. This is probably because observations do not have access to the full three-dimensional structure of the magnetic field above the surface, and only proxies for the twist and estimates of the helicity can be measured (e.g. Baumgartner et al, 2022).
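The bookkeeping behind this twist-writhe trade-off can be stated compactly; the following is a standard decomposition of the Berger and Field type, restated here for orientation rather than quoted from this review. For a thin flux tube carrying flux \(\Phi\),

\[
H_{m} \;=\; \left(\,Tw + Wr\,\right)\Phi^{2},
\]

where \(Tw\) is the twist of the field lines about the tube axis and \(Wr\) the writhe of the axis itself. If \(H_{m}\) is conserved during the rise, any writhing of the axis must be compensated by an equal and opposite change in twist, \(\Delta Tw = -\Delta Wr\).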
An interesting proxy for the global twist and writhe in active regions is the presence of so-called magnetic tongues (see Fig. 4, left). These structures are due to the fact that the polarities of active regions appear elongated in line-of-sight magnetograms during their emergence (Lopez Fuentes et al, 2000). The elongation is thought to be produced by the line-of-sight projection of the azimuthal magnetic field at the peak of a twisted emerging flux tube as it emerges through the surface. Thus, it is a proxy for the net twist of the active region flux tube, and, coupled with the orientation of the polarities (as a proxy for writhe), gives a constraint on the magnetic helicity brought to the photosphere by the emergence process (Luoni et al, 2011). As the emergence proceeds, the tongues will vanish as the peak of the flux tube passes the surface and the legs of the flux tube remain. A less-biased measurement of the tilt angle will then be accessible. Magnetic tongues have also been reproduced in 3D MHD simulations where a twisted flux tube emerges through the deeper solar interior (e.g. Jouve et al, 2013, and Fig. 4, right) and closer to the photosphere (e.g. Archontis and Hood, 2010; Cheung et al, 2010). Again, a clear relationship can be established between the direction of elongation of the tongues and the sign of the global active region twist, similarly to what is found in observations (Poisson et al, 2022).

The cycle-averaged tilt angles of sunspot groups show an anti-correlation with the amplitude of the cycle (Dasi-Espuig et al, 2010; Jiao et al, 2021). A surface mechanism that explains this phenomenon is driven by inflows in the north-south direction towards the active belts around the equator, effectively reducing the latitudinal separation of the polarities of emerged active regions (Jiang et al, 2010; Cameron and Schussler, 2012). In the idealized thin flux tube picture, the same effect could be explained with enhanced cooling near the base of the convection zone, where the strong toroidal flux is thought to be stored, which then pushes the onset of magnetic buoyancy to higher magnetic field strengths and thus higher magnetic tension, resulting in lower tilt angles at the surface (Işık, 2015). The main assumption behind this mechanism is the reported systematic sound-speed reduction from solar minimum to maximum near the base of the convection zone (Baldner and Basu, 2008). This implies a stabilisation of flux tubes in the overshoot region, shifting the onset of the magnetic buoyancy instability to higher field strengths to an extent that is consistent with the helioseismic inference (see Işık, 2015, for more details).

An important task for future numerical simulations is then to decide to what extent Joy's Law arises from (1) the latitude-dependent Coriolis force induced by diverging flows near the tube apex below the surface (i.e., angular momentum conservation of horizontally diverging flows in a rotating fluid), (2) the interplay among twist, writhe and magnetic tension, and (3) the convective flows, which passively impose the tilt on flux bundles as they rise through the subsurface layers.

## 5 Flux emergence and the solar dynamo

### 5.1 Crucial role of active regions in global field reversals

The magnetic flux that emerges through the photosphere in the form of bipolar magnetic regions is likely to play a key role in recycling the global magnetic field of the Sun. The idea that flux emergence as tilted bipolar active regions could play an active part in the dynamo process dates back to the 1960s, with the seminal works of Babcock (1961) and Leighton (1969).
Indeed, in the so-called Babcock-Leighton (BL) model, the large-scale poloidal field owes its origin to the decay of sunspots at the photosphere. The leading polarity, closer to the equator, partially cancels with the opposite polarity in the other hemisphere, leaving a net flux to diffuse towards the pole and reverse the polar field of opposite sign. An important ingredient has then been added to this model - the large-scale meridional flow observed in the uppermost part of the convection zone (Gizon et al, 2020). Models including this flow are known as flux-transport dynamo models and are reviewed in another chapter of this book (Hazra et al, 2023).

Figure 4: Left: Example of magnetic tongues observed by SOHO/MDI in active region 9574. The blue and red shaded areas correspond to negative and positive values of the line-of-sight magnetic field (in units of gauss) and the black circles indicate the positions of the core flux of each polarity. Right: Example of magnetic tongues simulated in a global 3D MHD model of a twisted \(\Omega\)-shaped loop magnetic structure emerging close to the top of the spherical shell (here \(r=0.9R_{\odot}\)). Red and blue colours correspond to positive and negative radial magnetic fields and the arrows indicate the tongues of each polarity. ©AAS. Reproduced by permission from Poisson et al (2020) (left panel) and Jouve et al (2013) (right panel).

Recently, Cameron and Schussler (2015) applied Stokes' theorem on the meridional plane of the Sun encompassing the convection zone to show that the net toroidal flux generated by differential rotation must come solely from the magnetic flux emerging at the surface. That surface flux mainly comes from the dipole moment contribution to the poloidal field of the Sun, which the tilted active regions eventually produce in the course of an activity cycle (Cameron et al, 2018). This theoretical finding highlighted the importance of flux emergence in solar and stellar dynamo processes. Indeed, a similar analysis has been conducted by Jeffers et al (2022) on two active K-dwarf stars followed by spectropolarimetry (\(\epsilon\)-Eridani and 61 Cygni A), where, similarly to the Sun, a balance is found between the generation of toroidal flux associated with the poloidal field threading through the stellar surfaces and the loss of magnetic flux associated with flux emergence.

The latitudinal distribution and the tilt angle of emerging active regions thus seem of utmost importance in determining the global axial dipole of the Sun. As discussed in Sec. 4.3, the cycle-averaged tilt angle of sunspot groups is reported to show an anti-correlation with the cycle strength (Dasi-Espuig et al, 2010; Jiao et al, 2021). This tendency has been interpreted as a manifestation of nonlinear saturation of the solar cycle. Accordingly, the effect works so as to limit further growth of the toroidal flux of the subsequent cycle. It does so by quenching the surface source for the global axial dipole moment through a lower average tilt angle of active regions. To account for this systematic effect, two physical mechanisms have been suggested: convergent flows towards emerged active regions, with a velocity depending on cycle strength (Jiang et al, 2010), or a deep-seated stabilisation of flux tubes by cooling, the extent of which depends on the toroidal magnetic flux (Işık, 2015; see Sec. 4.3).
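Why the average tilt angle matters so much for the global field can be seen from a standard scaling, quoted here for orientation rather than derived in this review: a bipolar region of flux \(\Phi\), polarity separation \(d\), and tilt angle \(\alpha\), emerging at latitude \(\lambda\), initially contributes to the axial dipole an amount

\[
\delta D_{\rm axial} \;\propto\; \Phi\, d\, \sin\alpha\, \cos\lambda ,
\]

so a systematically lower \(\sin\alpha\) in strong cycles directly weakens the surface source of the next cycle's poloidal field.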
It has to be noted here that, despite observational evidence of the possible major role of surface fields and flux emergence in the dynamo process, none of the global MHD dynamo models producing large-scale magnetic cycles today (e.g. Ghizaru et al, 2010; Nelson et al, 2013; Kapyla et al, 2012; Augustson et al, 2015; Hotta et al, 2016; Strugarek et al, 2017) includes the solar or stellar surface or produces starspots. This could indicate that the Sun is simply not operating in the same regime as 3D simulations. A more optimistic view would be to consider that a dynamo process relying on differential rotation and convection could still be active in the deep solar/stellar interior and that flux emergence would be an additional source of large-scale field not yet modelled in full 3D calculations.

### Incorporating flux emergence in Babcock-Leighton dynamo models

Following the idea that flux emergence could play a key role in the dynamo process and that full 3D MHD global models do not yet capture all the characteristics of flux emergence, some works have been devoted to taking prescriptions from 3D models of flux emergence and incorporating them into 2D mean-field Babcock-Leighton models. This was done, for example, in Jouve et al (2010). Here, the idea was to take into account the fact that flux tubes do not rise instantaneously to the surface (contrary to what is assumed in the standard BL model) and that the rise speed is a non-linear function of the magnetic field strength. They found that this small (but non-linear) delay in the rise time of flux tubes could produce long-term modulation of the cycle amplitude and phase. Recently, the idea of combining the outcomes of 3D flux emergence simulations and 2D BL models has been used to produce new 3D flux-transport BL dynamo models, where active regions are formed according to the toroidal field self-consistently created by the shearing of the poloidal field at the base of the convection zone (Yeates and Munoz-Jaramillo, 2013; Miesch and Dikpati, 2014; Kumar et al, 2019; Pipin, 2022; Bekki and Cameron, 2023). These new models are particularly promising for studying the role of active regions in the reversal of the polar magnetic field in the Sun and possibly other cool stars. Indeed, one of their advantages is that they are less prone to the caveat of 2D models of producing too much polar flux compared to observations. Moreover, in the last two references cited above, the non-linear feedback of the Lorentz force on the large-scale flows is taken into account, and the impact of flux emergence on differential rotation and meridional flows can then be assessed. As further evidence of the importance of active region tilt angles for the reversal of the Sun's poloidal field, Karak and Miesch (2017) find that introducing a tilt-angle scatter around the Joy's law trend in a 3D BL dynamo induces variability in the magnetic cycle, promoting grand maxima and minima. Many improvements still need to be implemented in these models, for example by incorporating statistics of flux emergence and characteristics of mean flows even closer to observations, and possibly by implementing data assimilation techniques to construct predictive models for future solar activity (see the recent review by Nandy, 2021, on this subject). Another improvement could be to adapt these models to other stars with various emergence characteristics.
Nonetheless, simplified 3D BL models are already very valuable tools to be used before full 3D MHD models of spot-producing dynamos can be constructed.

## 6 Flux emergence on other cool stars

### Some clues from observations

Most stars with outer convection zones are capable of generating strong magnetism leading to starspots (e.g. Berdyugina, 2005; Strassmeier, 2009). The emergence of magnetic regions on other stars is not directly observable; however, the strength and distribution of magnetic flux on the surface of stars can be inferred from observations such as light-curve variability, (Zeeman-)Doppler imaging, and interferometry (see also van Saders et al. in this volume). This is only possible for stars significantly more active than the Sun, and it would not be possible to measure these properties treating the Sun as a star. The general trend for Sun-like stars is that, for a given effective temperature, the unsigned surface magnetic flux increases with rotation rate until reaching a saturation point for faster rotators (e.g. Reiners et al, 2022). There is also a preference for faster rotators to exhibit higher-latitude spots (e.g. Berdyugina, 2005), but some rapidly rotating stars and fully convective M dwarfs can exhibit spots simultaneously at high and low latitudes (e.g. Barnes et al, 1998; Jeffers et al, 2002; Barnes et al, 2015; Davenport et al, 2015). It is then natural to wonder whether the observed trends of magnetic flux result from a link between the generation of the large-scale toroidal magnetic field and the bulk rotation rate. Stellar rotation and effective temperature also affect the amplitude, vorticity, and turn-over time of the convection, in turn impacting the star's differential rotation (i.e. shear) profile (see e.g. Brun and Browning, 2017, and references therein). Some mean-field dynamos of the Sun incorporating a solar differential rotation profile find toroidal magnetic field generation with equatorward propagation near the tachocline (e.g. Charbonneau and MacGregor, 1997; Dikpati and Charbonneau, 1999; Dikpati and Gilman, 2001). The tachocline is the name given to the region of radial and latitudinal shear at the interface between the solidly rotating radiative interior and the differentially rotating convection zone. These simulations emphasized that the tachocline is a key physical component in the solar dynamo mechanism. Yet, it is observed that even fully convective M dwarfs without tachoclines exhibit starspots and the so-called magnetic 'activity-rotation correlation' (e.g. Reiners et al, 2014; Wright and Drake, 2016; Reiners et al, 2022). Further, some 3D convective dynamo models demonstrate that buoyantly rising magnetic flux structures can be generated within the bulk of the convection zone (see Sec. 3.1). With the recent emphasis on the role that convection plays (both local and mean flows) in active region emergence on the Sun (see also Sec. 4), stars without tachoclines can offer some additional insights into how active-region-scale magnetism is manifested. The variation in the 3D geometry of stellar photospheric magnetic fields poses another problem for numerical simulations of flux emergence. Zeeman-Doppler imaging of cool stars indicates that the magnetic energy in the toroidal component increases with the poloidal field for more active stars (See et al, 2015). Though with large scatter, the observational relation is steeper than one-to-one scaling for stars with masses above \(0.5M_{\odot}\), with a power index of \(1.25\pm 0.06\).
The existence of a large amount of toroidal flux at the photosphere provides valuable constraints for the theory of magnetic flux emergence. Further analysis and interpretation by numerical simulations are needed to understand how such magnetic landscapes occur.

### Modelling the distribution of activity on stars: Hints from simulations

#### 6.2.1 Active nests and longitudes

We noted in Section 6.1 that the unsigned surface magnetic flux increases with the rotation rate, for a given effective temperature. Whether this is due to an increasing emergence frequency of active regions or larger sizes of individual active regions is unclear. These two scenarios do not exclude each other. An increased tendency for active regions to emerge near existing sites of emergence, known as active nests, is another possibility (Işık et al, 2020, see also van Saders et al. in this volume). Observations indicate that the emergence of solar active features, including sunspots, coronal flares, and coronal streamers, is distributed inhomogeneously in longitude (e.g. Jetsu et al, 1997; Berdyugina and Usoskin, 2003; Li, 2011). Some other cool stars and young rapid rotators also exhibit these so-called 'active longitudes' (e.g. Jarvinen et al, 2005; Garcia-Alvarez et al, 2011; Luo et al, 2022). The cause of active longitudes is still unknown, but a few theories have been put forward. One simple suggestion is that a long-lived localization of toroidal, amplified magnetic field at the base of the convection zone could spawn the onset of a magnetic buoyancy instability, promoting a series of rising flux loops (e.g. Ruzmaikin, 1998). Similarly, the convective dynamo simulations of Nelson et al (2011, 2013) generate wreaths of magnetism within the convection zone that spawn buoyant bundles of flux when localized regions exceed a threshold field strength. This effect, however, is perhaps more closely related to the 'active nest' phenomenon described above. Instead of relying on the localized enhancement of magnetic fields at particular longitudes, Dikpati and Gilman (2005) show that MHD instabilities within a shallow-water model of the tachocline can produce simultaneous variations in the tachocline thickness and tipping instabilities of the toroidal magnetic field there. A correlation between a 'bulge' in the tachocline and a tipped toroidal band can force the magnetic field into the convection zone, where it will rise buoyantly. Weber et al (2013) present yet another alternative theory utilizing their thin flux tube simulations embedded in solar-like convection (Weber et al, 2011, 2013), which shows that active longitudes might also arise from the presence of rotationally aligned giant-cell convection. The simulations exhibit a pattern of flux emergence with longitudinal modes of low order and low-latitude alignment across the equator. Essentially, the extent of giant-cell upflows and the strong downflow boundaries form windows within which rising flux tubes can emerge. Note that Weber et al (2013) use 'active longitudes' to refer to a longitudinal alignment of flux emergence rather than repeated flux emergence at specific longitudes for multiple rotations. In reality, it is likely that both the amplification of localized magnetic fields and the effects of convective flows (which can also amplify localized fields) play a role in the active longitude and active nest phenomena. Active longitudes have also been observed on stars in close binary systems (e.g. Berdyugina and Tuominen, 1998; Berdyugina, 2005).
In this case, tidal forcing was shown to affect the flux emergence patterns, leading to active longitudes on opposite sides of the star (Holzwarth and Schussler, 2003). An exploration of the surface distribution of flux emergence for increasing stellar activity level has now become a necessity for physics-based numerical simulations, to better understand how stellar activity patterns scale with the activity level and the rotation rate.

#### 6.2.2 Emergence latitudes and tilts

Although highly simplified, simulations employing the thin flux tube approximation have been used as tools to explore the distribution of magnetic activity on stars with varying rotation rates (Schussler et al, 1996) and spectral types (Granzer et al, 2000). These models once again point toward the importance of the rotationally-driven Coriolis force for flux tube dynamics (see also Sec. 3.2). The existence of high-latitude and polar spots on stars with more rapid rotation and/or deeper convective envelopes can be explained by angular momentum conservation of a rising flux loop, leading to an internal retrograde flow. In the co-rotating frame, this effect would be experienced as an inward-directed Coriolis force component towards the rotation axis, with a magnitude that increasingly dominates the radially outward buoyancy with more rapid rotation (Schussler and Solanki, 1992). The general trend is that beyond four times the solar rotation rate, a zone of avoidance forms around the equator, where no flux emergence occurs (Işık et al, 2011, 2018). In their simulations, the initial field strengths of toroidal flux tubes are assumed to be close to the analytical prediction of the onset of magnetic buoyancy instability. This limits the initial field strengths to the range 80-110 kG for the solar model with the initial tube location at the middle of the overshoot region below the convection zone. In a solar-type star rotating eight times faster, the range is 150-350 kG, so that rotation stabilizes the tubes at a given field strength, owing mainly to angular momentum conservation (Işık et al, 2018, see Fig. 2). For rapidly rotating early K dwarfs and subgiants, the equatorial band of avoidance is somewhat widened in latitude, owing simply to the geometry of the convection zone boundaries: when the fractional depth of the convection zone increases (i.e., towards cooler stars), the poleward-deflected tube apex can emerge at even higher latitudes (Işık et al, 2011). These simulations were based on the assumption that active-region-producing flux tubes were formed near the base of the convection zone in the overshoot region, in the same way as for the idealised flux tubes in the Sun. It should be noted that in these studies (Işık et al, 2011, 2018), thin flux tubes rise in the presence of a differential rotation profile \(\Delta\Omega\), which is kept constant with increasing stellar rotation rate. Taking cues from the simulations of Weber et al (2011, 2013b), it is likely that incorporating turbulent, time-varying convective flows could modify these trends. Thin flux tube simulations have also shown that the tilt angles near emergence generally increase with the rotation rate (Işık et al, 2018). This is consistent with the Coriolis acceleration along the tube apex being proportional to the local rotation rate.
An increase of the tilt angle limits flux cancellation within the emerged bipolar regions and allows stronger fields to accumulate at the rotational poles (see also Işık et al, 2007).1 The tilt angles are not only larger on average than solar ones, but their variance is also larger, showing jumps at some emergence latitudes. Such a jump is demonstrated in Figure 5, which shows the detailed geometry of the flux tube apex starting close to \(45^{\circ}\) latitude and emerging around \(52^{\circ}\). When the initial latitude \(\lambda_{0}\) is above \(46^{\circ}\), the prograde part of the apex is intruded by a more east-west oriented and broader peak, leading to a tilt angle of about \(3^{\circ}\). For \(\lambda_{0}<46^{\circ}\), the large-tilt loop (\(38^{\circ}\)) emerges before the small-tilt loop. Possibly, such multiple-peaked adjacent loops emerge on active stars at certain latitudes, leading to complex active-region topologies with enhanced free energy deposits for the upper atmosphere.

Footnote 1: With the latitudinal distribution of emerging flux being confined to high latitudes, the stellar dynamo might not be dominated by the dipolar mode.

M dwarfs with masses \(\leq 0.35M_{\odot}\) are fully convective, and so lack a tachocline. Yet, in at least some ways, their magnetism is similar to that observed in Sun-like stars (see Sec. 6.1). Weber and Browning (2016) embed the thin flux tube model within simulations of time-varying giant-cell convection to explore flux emergence trends in fully convective M dwarfs. Since there is no tachocline layer of shear, they introduce flux tubes at depths of \(0.5R_{\star}\) and \(0.75R_{\star}\) to sample the differing mean and local time-varying flows at each depth.

Figure 5: Geometry of three emerging flux loops with initial latitudes \(\lambda_{0}\) at the base of the convection zone and emergence tilt angles \(\alpha\). Upper panels: The parts of the tube that are beneath the outer sphere (\(0.97R_{\odot}\)) are shaded in grey, whereas the emerged parts are brighter. The colours denote the cross-sectional tube radius (the redder the thicker). Lower panels: latitudinal and radial projections of the tubes. The horizontal line on the radial profile corresponds to the location of the outer sphere (\(0.97R_{\odot}\)), where \(\alpha\) is measured from footpoint locations. The red arrows denote the apex of each tube. Işık et al., A&A, 620, A177 (2018), reproduced with permission © ESO.

A range of initial flux tube field strengths of 30-200 kG is chosen. On the lower end (30 kG), this encompasses magnetic fields that would not be too susceptible to suppression of their rise due to turbulent downflows. On the upper end (200 kG), this excludes field strengths above which the flux tubes would rise faster than they could plausibly be generated by large-scale convective eddies (Browning et al, 2016). Convection modulates the flux tubes as they rise, promoting localized rising loops while suppressing the global rise of flux tubes (akin to magnetic pumping) for those initiated in the deeper interior at lower latitudes (see also Weber et al, 2017). Within these simulations, a robust result is a tendency for flux tubes to rise parallel to the rotation axis (see Sec. 3.2 and the first paragraph in this section), leading to a preference for mid-to-high latitude flux emergence.
However, low-latitude flux emergence is found in special cases where the flux tubes are initiated closer to the surface with strong magnetic fields, or have weaker fields and rise through regions of prograde differential rotation near the equator.

## 7 Moving forward

Active regions define the solar cycle, and in some models are an integral part of the transformation of the toroidal field to the poloidal field (Sec. 5). Understanding their deep-seated origins, formation and distribution will place tight constraints on their role in the solar dynamo and provide insights into these same processes in other cool stars. Typically, active-region-scale magnetism has been modelled as buoyantly rising, fibril tubes originating in the deep interior (Sec. 3.2). New observations and simulations now suggest a paradigm shift away from these idealized, isolated flux tubes toward a more complex, yet realistic, interplay between rising bundles of magnetism and their surroundings. Observations of surface magnetism demonstrate that active region flux emergence is a more 'passive' process than was originally thought (Sec. 4). The upward rise of the magnetism as detected near the surface is typical of convective upflows, placing much less of an emphasis on buoyancy in this region. However, it is not yet possible to say whether this influence of convection over buoyancy is confined only to the near-surface regions, or if it extends to the very beginning of the magnetism's rise. In idealized flux tube simulations, the Coriolis force leads to a geometrical asymmetry in the rising loop legs and a tilting action of these legs toward the equator. The former has been used as an explanation for why the leading active region polarity moves prograde faster than the following polarity moves retrograde. However, it is shown that the east-west motion of active regions is actually symmetric with respect to the local plasma rotation speed (Sec. 4.1). Further, the observed Joy's law trend may not be consistent with the latitudinally-dependent tilt that the legs of a flux tube acquire as it rises through the convection zone (Sec. 4.3). The examples here and in the previous paragraph are observational evidence that flux emergence might be dominated less by buoyancy and the Coriolis force than was previously determined through flux tube models. No global convective dynamo models have yet been able to produce starspots, partly because they do not include a realistic surface layer. Yet, we know that the surface distribution of emerging flux and the timing of its appearance are key ingredients in Babcock-Leighton flux-transport dynamo models. Indeed, these incorporate ingredients that are self-consistently generated in global convective dynamo models, such as differential rotation and meridional circulation. Also, they often assume that the primary region of magnetic field generation is at the tachocline. However, some convective dynamo simulations show that rising bundles of magnetism can be built within the bulk of the convection zone. At present, the exact generation region of active-region-scale magnetism and its strength are unknown. Learning more about the distribution of starspots across stellar photospheres for both Sun-like and fully convective stars may help to better constrain the interior source region of coherent magnetic structures.
Knowing how the patterns of flux emergence vary as a function of stellar rotation and inferred surface differential rotation will also play a role in disentangling the imprints of rotation, mean flows, and shearing regions on the flux emergence pattern. To fully understand the extent to which flux emergence is a passive process, more constraints from observations are still needed. But to understand what is happening below the surface, more simulations are critical. In particular, we suggest that a strong emphasis be placed on developing simulations that faithfully connect near-surface simulations with deeper flux emergence and dynamo models. Further statistical analysis of solar active region emergence properties dependent on, for example, the extremes of magnetic flux and latitude, is also needed. We conclude with some open questions regarding magnetic flux emergence in the Sun and other cool stars, raised by the observational and simulation landscape reviewed here, that we anticipate can be addressed in the next decade:

* What properties of active region formation are driven primarily by the influence of convection?
* To what extent do the Coriolis force, convective flows, and tension, twist, and writhe of the magnetic field contribute to the observed Joy's law trend?
* What is the important physics that must be faithfully simulated to capture the observed statistical properties of emerging active regions?
* Where in the stellar interior is active-region-scale magnetism generated - the tachocline, the near-surface layers, the bulk of the convection zone, or some combination of these?
* Can signatures of the underlying dynamo be found in patterns of magnetic activity (as reviewed here) on the photospheres of the Sun and other cool stars?

Acknowledgements HS is the recipient of an Australian Research Council Future Fellowship Award (project number FT220100330) funded by the Australian Government and her contribution is partially funded by this grant. LJ acknowledges funding by the Institut Universitaire de France.

## Declarations

* **Conflict of interest:** The authors have no conflicts of interest to declare that are relevant to the content of this article.

## Appendix A Defining the duration of active region emergence

Here we describe how we computed the duration of the active region emergence process as shown in Fig. 2. Figure A1 shows the evolution of the mean unsigned magnetic field \(\langle B\rangle\) within the central region of 49 Mm radius of the map, for four example active regions from the Solar Dynamics Observatory Helioseismic Emerging Active Regions survey (SDO/HEARS; Schunker et al, 2016), which contains a total of 180 active regions. The maps are centred on the flux-weighted centre of the active region about the time of emergence (for a detailed definition see Schunker et al, 2016; Birch et al, 2016). The emergence time is defined as the time at which the flux reaches 10% of the total magnetic flux measured 36 hours after the active region was officially named (see Schunker et al, 2016, for more details). The emergence of an active region ends when all of the magnetic field has appeared at the surface. We identified the time when the mean line-of-sight magnetic field \(\langle B\rangle\), sampled with a 5.3 hour cadence, was at a maximum. If the maximum occurred within three time intervals (\(\approx 16\) hours) of the end of the time series, it is difficult to assess whether the region is still emerging (e.g. AR11103 in Fig. A1), and so we exclude these regions (35 in total).
Otherwise, we fit a quadratic to the \(\langle B\rangle\) values between 8 hours (two time intervals) before and 13 hours (three time intervals) after the time when \(\langle B\rangle_{\max}\) occurred, and we defined the time of the maximum of the quadratic function as the end time of the emergence process. There is no physical basis for fitting a quadratic, only that we found it fit the peak reasonably well (see Fig. A1). The duration of the emergence process is the difference between end of the emergence process and the emergence time. List of 120 NOAA active region numbers included in Fig. 3: 11066, 11070, 11072, 11075, 11076, 11079, 11081, 11086, 11088, 11103, 11105, 11114, 11122, 11132, 11136, 11137, 11138, 11141, 11142, 11145, 11148, 11154, 11158, 11159, 11167, 11198, 11199, 11200, 11206, 11209, 11211, 11214, 11223, 11239, 11273, 11288, 11290, 11297, 11300, 11304, 11310, 11322, 11327, 11331, 11381, 11397, 11400, 11404, 11406, 11414, 11416, 11431, 11437, 11446, 11450, 11456, 11472, 11497, 11500, 11510, 11511, 11523, 11531, 11547, 11549, 11551, 11554, 11565, 11570, 11574, 11597, 11603, 11607, 11624, 11626, 11627, 11631, 11640, 11645, 11686, 11696, 11699, 11703, 11707, 11712, 11718, 11736, 11750, 11780, 11781, 11784, 11786, 11789, 11807, 11813, 11821, 11824, 11833, 11843, 11855, 11867, 11874, 11878, 11886, 11894, 11915, 11924, 11946, 11962, 11969, 11978, 11992, 12003, 12011, 12039, 12064, 12078, 12099, 12118, 12119.
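For readers who wish to reproduce this bookkeeping, the following is a minimal sketch (our own illustration, not the survey's actual code; the function name, argument names, and the 2-samples-before/3-samples-after fitting window are assumptions based on the description in Appendix A) of the emergence-duration estimate:

```python
# Minimal sketch of the Appendix A procedure: locate the maximum of the mean
# unsigned field <B>, fit a quadratic around it, and take the vertex of the
# fitted parabola as the end of the emergence process.
import numpy as np

def emergence_duration(times_hr, mean_b, t_emergence_hr):
    """times_hr: sample times in hours (5.3-hour cadence); mean_b: <B> values;
    t_emergence_hr: the emergence time. Returns the duration in hours, or None
    if the maximum falls within ~16 hours (3 samples) of the end of the series."""
    i_max = int(np.argmax(mean_b))
    if i_max >= len(mean_b) - 3:            # region may still be emerging: exclude
        return None
    lo, hi = max(i_max - 2, 0), i_max + 3   # 2 samples before to 3 samples after
    coeffs = np.polyfit(times_hr[lo:hi + 1], mean_b[lo:hi + 1], deg=2)
    t_end = -coeffs[1] / (2.0 * coeffs[0])  # vertex of the fitted quadratic
    return t_end - t_emergence_hr
```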
2305.18258
Maximize to Explore: One Objective Function Fusing Estimation, Planning, and Exploration
In online reinforcement learning (online RL), balancing exploration and exploitation is crucial for finding an optimal policy in a sample-efficient way. To achieve this, existing sample-efficient online RL algorithms typically consist of three components: estimation, planning, and exploration. However, in order to cope with general function approximators, most of them involve impractical algorithmic components to incentivize exploration, such as optimization within data-dependent level-sets or complicated sampling procedures. To address this challenge, we propose an easy-to-implement RL framework called \textit{Maximize to Explore} (\texttt{MEX}), which only needs to optimize \emph{unconstrainedly} a single objective that integrates the estimation and planning components while balancing exploration and exploitation automatically. Theoretically, we prove that \texttt{MEX} achieves a sublinear regret with general function approximations for Markov decision processes (MDP) and is further extendable to two-player zero-sum Markov games (MG). Meanwhile, we adapt deep RL baselines to design practical versions of \texttt{MEX}, in both model-free and model-based manners, which can outperform baselines by a stable margin in various MuJoCo environments with sparse rewards. Compared with existing sample-efficient online RL algorithms with general function approximations, \texttt{MEX} achieves similar sample efficiency while enjoying a lower computational cost and is more compatible with modern deep RL methods.
Zhihan Liu, Miao Lu, Wei Xiong, Han Zhong, Hao Hu, Shenao Zhang, Sirui Zheng, Zhuoran Yang, Zhaoran Wang
2023-05-29T17:25:26Z
http://arxiv.org/abs/2305.18258v2
One Objective to Rule Them All: A Maximization Objective Fusing Estimation and Planning for Exploration ###### Abstract In online reinforcement learning (online RL), balancing exploration and exploitation is crucial for finding an optimal policy in a sample-efficient way. To achieve this, existing sample-efficient online RL algorithms typically consist of three components: estimation, planning, and exploration. However, in order to cope with general function approximators, most of them involve impractical algorithmic components to incentivize exploration, such as optimization within data-dependent level-sets or complicated sampling procedures. To address this challenge, we propose an easy-to-implement RL framework called _Maximize to Explore_ (MEX), which only needs to optimize _unconstrainedly_ a single objective that integrates the estimation and planning components while balancing exploration and exploitation automatically. Theoretically, we prove that MEX achieves a sublinear regret with general function approximations for Markov decision processes (MDP) and is further extendable to two-player zero-sum Markov games (MG). Meanwhile, we adapt deep RL baselines to design practical versions of MEX, in both model-free and model-based manners, which can outperform baselines by a stable margin in various MuJoCo environments with sparse rewards. Compared with existing sample-efficient online RL algorithms with general function approximations, MEX achieves similar sample efficiency while enjoying a lower computational cost and is more compatible with modern deep RL methods. ###### Contents * 1 Introduction * 1.1 Main Contributions * 1.2 Related Works * 1.3 Notations and Outlines * 2 Preliminaries * 2.1 Episodic Markov Decision Process and Online Reinforcement Learning * 2.2 Function Approximation: Model-Free and Model-Based Hypothesis * 3 Algorithm Framework: Maximize to Explore (MEX) * 4 Regret Analysis for MEX Framework * 5 Examples of MEX Framework * 5.1 Model-free Online RL in Markov Decision Processes * 5.2 Model-based Online RL in Markov Decision Processes * 6 Extensions to Two-player Zero-sum Markov Games * 6.1 Online Reinforcement Learning in Two-player Zero-sum Markov Games * 6.2 Function Approximation: Model-Free and Model-Based Hypothesis * 6.3 Algorithm Framework: Maximize to Explore (MEX-MG) * 6.3.1 Generic algorithm * 6.3.2 Model-free algorithm * 6.3.3 Model-based algorithm. 
* 6.4 Regret Analysis for MEX-MG Framework * 6.5 Examples of MEX-MG Framework * 6.5.1 Model-free Online RL in Two-player Zero-sum Markov Games * 6.5.2 Model-based Online RL in Two-player Zero-sum Markov Games * 7 Experiments * 7.1 Experiment Setups * 7.2 Implementation Details * 7.3 Experimental Results * 8 Conclusions * A Proof of Main Theoretical Results * A.1 Proof of Theorem 4.4 * A.2 Proof of Theorem 6.7 * B Examples of Model-based and Model-free Online RL in MDPs * B.1 Examples of Model-free Online RL in MDPs * B.2 Examples of Model-based Online RL in MDPs * B.3 Proof of Proposition 5.1 * B.4 Proof of Proposition 5.3 * C Proofs for Model-free and Model-based Online RL in Two-player Zero-sum MGs * C.1 Proof of Proposition 6.11 * C.2 Proof of Proposition 6.16 * C.3 Proof of Proposition 6.8 * D Technical Lemmas * E Experiment Settings * E.1 Implementation Details of MEX-MF * E.2 Implementation Details of MEX-MB

## 1 Introduction

The crux of online reinforcement learning (online RL) lies in maintaining a balance between i) exploiting the current knowledge of the agent about the environment and ii) exploring unfamiliar areas (Sutton and Barto, 2018). To fulfill this, agents in existing sample-efficient RL algorithms predominantly undertake three tasks: i) _estimate_ a hypothesis using historical data to encapsulate their understanding of the environment; ii) perform _planning_ based on the estimated hypothesis to exploit their current knowledge; iii) further _explore_ the unknown environment via carefully designed exploration strategies. There exists a long line of research on integrating the aforementioned three components harmoniously to find the optimal policy in a sample-efficient manner. From a theoretical perspective, existing theories aim to minimize the notion of _online external regret_, which measures the cumulative suboptimality gap of the policies learned during online learning. It is well studied that one can design both _statistically_ and _computationally_ efficient algorithms (e.g., upper confidence bound (UCB), Azar et al. (2017); Jin et al. (2020); Cai et al. (2020); Zhou et al. (2021)) with sublinear online regret for tabular and linear Markov decision processes (MDPs). But when it comes to MDPs with general function approximations, most of them involve impractical algorithmic components to incentivize exploration. Usually, to cope with general function approximations, agents need to solve constrained optimization problems within data-dependent level-sets (Jin et al., 2021; Du et al., 2021), or sample from complicated posterior distributions over the space of hypotheses (Dann et al., 2021; Agarwal and Zhang, 2022; Zhong et al., 2022), both of which pose considerable challenges for implementation. From a practical perspective, a prevalent approach in deep RL for balancing exploration and exploitation is to use an ensemble of neural networks (Wiering and Van Hasselt, 2008; Osband et al., 2016; Chen et al., 2017; Lu and Van Roy, 2017; Kurutach et al., 2018; Chua et al., 2018; Lee et al., 2021), which serves as an empirical approximation of the UCB method. However, such an ensemble method suffers from a high computational cost and lacks a theoretical guarantee when the underlying MDP is neither linear nor tabular.
As for other deep RL algorithms for exploration (Haarnoja et al., 2018; Aubret et al., 2019; Burda et al., 2018; Bellemare et al., 2016; Choi et al., 2018), such as the curiosity-driven method (Pathak et al., 2017), it also remains unknown in theory whether they are provably sample-efficient in the context of general function approximations. Hence, in this paper, we aim to tackle these issues and answer the following question: _Under general function approximation, can we design a sample-efficient and easy-to-implement RL framework to trade off between exploration and exploitation?_ Towards this goal, we propose an easy-to-implement RL framework, _Maximize to Explore_ (MEX), as an affirmative answer to the above question. In order to strike a balance between exploration and exploitation, MEX proposes to maximize a weighted sum of two objectives: (a) the optimal expected total return associated with a given hypothesis, and (b) the negative estimation error of that hypothesis. Consequently, MEX naturally combines the planning and estimation components into a single objective. By choosing the hypothesis that maximizes the weighted sum and executing the optimal policy with respect to the chosen hypothesis, MEX automatically balances between exploration and exploitation. We highlight that the objective of MEX is _not_ obtained through the Lagrangian duality of the constrained optimization objective within data-dependent level-sets (Jin et al., 2021; Du et al., 2021). This is because the coefficient of the weighted sum, which remains fixed, is data-independent and predetermined for all episodes. Contrary to the Lagrangian duality, MEX does not necessitate an inner loop of optimization for dual variables, thereby circumventing the complications associated with minimax optimization. As a maximization-only framework, MEX is friendly to implementations with neural networks and does not rely on sampling or ensembles. In the theory part, we prove that MEX achieves a sublinear \(\widetilde{\mathcal{O}}(\texttt{Poly}(H)d_{\text{GEC}}^{1/2}(1/\sqrt{HK})K^{1/2})\) regret under mild structural assumptions and is thus sample-efficient. Here \(K\) is the number of episodes, \(H\) is the horizon length, and \(d_{\text{GEC}}(\cdot)\) is the **G**eneralized **E**luder **C**oefficient (GEC) (Zhong et al., 2022) that characterizes the complexity of learning the underlying MDP using general function approximations in the online setting. Because the class of low-GEC MDPs includes almost all known theoretically tractable MDP instances, our result can be tailored to a multitude of specific settings with either a model-free or a model-based hypothesis, such as MDPs with low Bellman eluder dimension (Jin et al., 2021), MDPs of bilinear class (Du et al., 2021), and MDPs with low witness rank (Sun et al., 2019). Thanks to the flexibility of the MEX framework, we further extend it to online RL in two-player zero-sum Markov games (MGs), for which we also generalize the definition of GEC to two-player zero-sum MGs and establish the sample efficiency with general function approximations. Finally, as the low-GEC class also contains many tractable Partially Observable MDP (POMDP) classes (Zhong et al., 2022), MEX can also be applied to these POMDPs. Moving beyond theory and into practice, we adapt the popular RL baselines TD3 (Fujimoto et al., 2018) and MBPO (Janner et al., 2019) to design practical versions of MEX in model-free and model-based fashion, respectively.
On various MuJoCo environments (Todorov et al., 2012) with sparse rewards, experimental results show that MEX outperforms baselines steadily and significantly. Compared with other deep RL algorithms, MEX has low computational overhead and easy implementation while maintaining a theoretical guarantee. ### Main Contributions We conclude our main contributions from the following three perspectives. 1. We propose an easy-to-implement RL algorithm framework MEX that _unconstrainedly_ maximizes a single objective to fuse estimation and planning, automatically trading off between exploration and exploitation. Under mild structural assumptions, we prove that MEX achieves a sublinear regret \[\widetilde{\mathcal{O}}\Big{(}\texttt{Poly}(H)\cdot d_{\mathrm{GEC}}(1/\sqrt{ HK})^{\frac{1}{2}}\cdot K^{\frac{1}{2}}\Big{)}\] with general function approximators, and thus is sample-efficient. Here \(K\) denotes the number of episodes, \(\texttt{Poly}(H)\) is a polynomial term in horizon length \(H\) which is specified in Section 5, \(d_{\mathrm{GEC}}(\cdot)\) is the Generalized Eluder Coefficient (GEC) (Zhong et al., 2022) of the underlying MDP. 2. We instantiate the generic MEX framework to solve several model-free and model-based MDP instances and establish corresponding theoretical results. Beyond MDPs, we further extend the MEX framework to two-player zero-sum MGs and also prove the sample efficiency with an extended definition of GEC. 3. We design deep RL implementations of MEX in both model-free and model-based styles. Experiments on various MuJoCo environments with sparse rewards demonstrate the effectiveness of MEX framework. ### Related Works Sample-efficient RL with function approximation.The success of DRL methods has motivated a line of works focused on function approximation scenarios. This line of works is originated in the linear function approximation case (Wang et al., 2019; Yang and Wang, 2019; Cai et al., 2020; Jin et al., 2020; Zanette et al., 2020; Ayoub et al., 2020; Yang et al., 2020; Modi et al., 2020; Zhou et al., 2021; Zhong and Zhang, 2023) and is later extended to general function approximations. Wang et al. (2020) first study the general function approximation using the notion of eluder dimension (Russo and Van Roy, 2013), which takes the linear MDP (Jin et al., 2020) as a special case but with inferior results. Zanette et al. (2020) consider a different type of framework based on Bellman completeness, which assumes that the class used for approximating the optimal Q-functions is closed in terms of the Bellman operator and improves the results for linear MDP. After this, Jin et al. (2021) consider the eluder dimension of the class of Bellman residual associated with the RL problems, which captures more solvable problems (low Bellman eluder (BE) dimension). Another line of works focuses on the low-rank structures of the problems, where Jiang et al. (2017) propose the Bellman rank for model-free RL and Sun et al. (2019) propose the witness rank for model-based RL. Following these two works, Du et al. (2021) propose the bilinear class, which contains more MDP models with low-rank structures (Azar et al., 2017; Sun et al., 2019; Jin et al., 2020; Modi et al., 2020; Cai et al., 2020; Zhou et al., 2021) by allowing a flexible choice of discrepancy function class. However, it is known that neither BE nor bilinear class captures each other. Dann et al. (2021) first consider eluder-coefficient-type complexity measure on the Q-type model-free RL. It was later extended by Zhong et al. 
(2022) to cover all the above-known solvable problems in both model-free and model-based manners. Foster et al. (2021, 2023) study another notion of complexity measure, the decision-estimation coefficient (DEC), which also unifies the BE dimension and bilinear class and is appealing due to the matching lower bound in some decision-making problems but may not be applied to the classical optimism-based or sampling-based methods due to the presence of a minimax subroutine in the definition. Chen et al. (2022); Foster et al. (2022) extend the vanilla DEC by incorporating an optimistic modification. Chen et al. (2022) extend the GOLF algorithm (Jin et al., 2021) and the Bellman completeness in model-free RL by considering more general (vector-form) discrepancy loss functions and obtaining sharper bounds in some problems. Xie et al. (2022) connect the online RL with the coverage condition in the offline RL, and also study the GOLF algorithm proposed in Jin et al. (2021). Algorithmic design in sample-efficient RL with function approximation.The most prominent approach in this area is based on the principle of "Optimism in the Face of Uncertainty" (OFU), which dates back to Auer et al. (2002). For instance, for linear function approximation, Jin et al. (2020) propose an optimistic variant of Least-Squares Value Iteration (LSVI), which achieves optimism by adding a bonus at each step. For the general case, Jiang et al. (2017) first propose an elimination-based algorithm with optimism in model-free RL and is extended to model-based RL by Sun et al. (2019). After these, Du et al. (2021); Jin et al. (2021) propose two OFU-based algorithms, which are more similar to the lin-UCB algorithm (Abbasi-Yadkori et al., 2011) studied in the linear contextual bandit literature. The model-based counterpart (Optimistic Maximum Likelihood Estimation (OMLE)) is studied in Liu et al. (2022); Chen et al. (2022). Specifically, these algorithms explicitly maintain a confidence set that contains the ground truth with high probability and conducts a constrained optimization step to select the most optimistic hypothesis in the confidence set. The other line of works studies another powerful algorithmic framework based on posterior sampling. For instance, Zanette et al. (2020) study randomized LSVI (RLSVI), which can be interpreted as a sampling-based algorithm and achieves an order-optimal result for linear MDPs. For general function approximations, the works mainly follow the idea of the "feel-good" modification of the Thompson sampling algorithm (Thompson, 1933) proposed in Zhang (2022). These algorithms start from some prior distribution over the hypothesis space and update the posterior distribution according to the collected samples but with certain optimistic modifications in either the prior or the loglikelihood function. Then the hypothesis for each iteration is sampled from the posterior and guides data collection. In particular, Dann et al. (2021) study the model-free Q-type problem, and Agarwal and Zhang (2022) study the model-based problems, but under different notions of complexity measures. Zhong et al. (2022) further utilize the idea in Zhang (2022) and extend the posterior sampling algorithm in Dann et al. (2021) to be a unified sampling-based framework to solve both model-free and model-based RL problems, which is also shown to apply to the more challenging partially observable setting. In addition to the OFU-based algorithm and the sampling-based framework, Foster et al. 
(2021) propose the Estimation-to-Decisions (E2D) algorithm, which can solve problems with low Decision-Estimation Coefficient (DEC) but requires solving a complicated minimax subroutine to fit in the framework of DEC. Exploration in deep RL.There has also been a long line of works that studies the exploration-exploitation trade-off from a practical perspective, where a prominent approach is referred to as the curiosity-driven method (Pathak et al., 2017). Curiosity-driven method focuses on the intrinsic rewards (Pathak et al., 2017) (to handle the sparse extrinsic reward case) when making decisions, whose formulation can be largely grouped into either encouraging the algorithm to explore "novel" states (Bellemare et al., 2016; Lopes et al., 2012) or encouraging the algorithm to pick actions that reduce the uncertainty in its knowledge of the environment (Houthooft et al., 2016; Mohamed and Jimenez Rezende, 2015; Stadie et al., 2015). These methods share the same theoretical motivation as the OFU principle. In particular, one popular approach in this area is to use ensemble methods, which combine multiple neural networks of the value function and (or) policy (see (Wiering and Van Hasselt, 2008; Osband et al., 2016; Chen et al., 2017; Lu and Van Roy, 2017; Kurutach et al., 2018; Chua et al., 2018; Lee et al., 2021) and reference therein). For instance, Chen et al. (2017) leverage the idea of upper confidence bound by estimating the uncertainty via ensembles to improve the sample efficiency. However, the uncertainty estimation via ensembles is more computationally inefficient as compared to the vanilla algorithm. Meanwhile, these methods lack theoretical guarantees beyond tabular and linear settings. It remains unknown in theory whether they are provably sample-efficient in the context of general function approximations. There is a rich body of literature, and we refer interested readers to Section 4 of Zha et al. (2021) for a comprehensive review. Two-player zero-sum Markov game.There have been numerous works on designing provably efficient algorithms for zero-sum Markov games (MGs). In the tabular case, Bai et al. (2020); Bai and Jin (2020); Liu et al. (2020) propose algorithms with regret guarantees polynomial in the number of states and actions. Xie et al. (2020); Chen et al. (2021) then study the MGs in the linear function approximation case and design algorithms with a \(\widetilde{\mathcal{O}}(\text{poly}(d,H)\sqrt{K})\) regret, where \(d\) is the dimension of the linear features. These approaches are later extended to general function approximations by Jin et al. (2021); Huang et al. (2021); Xiong et al. (2022), where the former two works studied OFU-based algorithms and the last one studied posterior sampling. ### Notations and Outlines For a measurable space \(\mathcal{X}\), we use \(\Delta(\mathcal{X})\) to denote the set of probability measure on \(\mathcal{X}\). For an integer \(n\in\mathbb{N}\), we use \([n]\) to denote the set \(\{1,\cdots,n\}\). For a random variable \(X\), we use \(\mathbb{E}[X]\) and \(\mathbb{V}[X]\) to denote its expectation and variance respectively. For two probability densities on \(\mathcal{X}\), we denote their Hellinger distance \(D_{\mathrm{H}}\) as \[D_{\mathrm{H}}(p\|q)=\frac{1}{2}\int_{\mathcal{X}}\big{(}\sqrt{p(x)}-\sqrt{q( x)}\big{)}^{2}\mathrm{d}x.\] For two functions \(f(x)\) and \(g(x)\), we denote \(f\lesssim g\) if there is a constant \(C\) such that \(f(x)\leq C\cdot g(x)\) for any \(x\). The paper is organized as follows. 
In Section 2, we introduce the basics of online RL in MDPs, where we also define the settings for general function approximations. In Section 3, we propose the MEX framework, and we provide generic theoretical guarantees for MEX in Section 4. In Section 5, we instantiate MEX to solve several model-free and model-based MDP instances, with some details referred to Appendix B. We further extend the algorithm and the theory of MEX to zero-sum two-player MGs in Section 6. In Section 7, we conduct deep RL experiments to demonstrate the effectiveness of MEX in various MuJoCo environments. ## 2 Preliminaries ### Episodic Markov Decision Process and Online Reinforcement Learning We consider an episodic MDP defined by a tuple \((\mathcal{S},\mathcal{A},H,\mathbb{P},r)\), where \(\mathcal{S}\) and \(\mathcal{A}\) are the state and action spaces, \(H\in\mathbb{N}_{+}\) is a finite horizon, \(\mathbb{P}=\{\mathbb{P}_{h}\}_{h\in[H]}\) with \(\mathbb{P}_{h}:\mathcal{S}\times\mathcal{A}\mapsto\Delta(\mathcal{S})\) the transition kernel at the \(h\)-th timestep, and \(r=\{r_{h}\}_{h\in[H]}\) with \(r_{h}\colon\mathcal{S}\times\mathcal{A}\to[0,1]\) the reward function at the \(h\)-th timestep. Without loss of generality, we assume that the reward function \(r\) is both deterministic and known by the learner. We consider _online_ reinforcement learning in the episodic MDP, where the agent interacts with the MDP for \(K\in\mathbb{N}_{+}\) episodes through the following protocol. At the beginning of the \(k\)-th episode, the agent selects a policy \(\pi^{k}=\{\pi^{k}_{h}:\mathcal{S}\mapsto\Delta(\mathcal{A})\}_{h\in[H]}\). Then at the \(h\)-th timestep of this episode, the agent is at some state \(x^{k}_{h}\) and it takes an action \(a^{k}_{h}\sim\pi^{k}_{h}(\cdot\,|\,x^{k}_{h})\). After receiving the reward \(r^{k}_{h}=r_{h}(x^{k}_{h},a^{k}_{h})\), it transits to the next state \(x^{k}_{h+1}\sim\mathbb{P}_{h}(\cdot\,|\,x^{k}_{h},a^{k}_{h})\). When it reaches the state \(x^{k}_{H+1}\), it ends the \(k\)-th episode. Without loss of generality, we assume that the initial state \(x^{k}_{1}=\underline{x}\) is fixed all \(k\in[K]\). Our algorithm and analysis can be directly generalized to the setting where \(x_{1}\) is sampled from a distribution on \(\mathcal{S}\). Policy and value functions.For any given policy \(\pi=\{\pi_{h}:\mathcal{S}\mapsto\Delta(\mathcal{A})\}_{h\in[H]}\), we denote by \(V^{\pi}_{h}:\mathcal{S}\mapsto\mathbb{R}_{+}\) and \(Q^{\pi}_{h}:\mathcal{S}\times\mathcal{A}\mapsto\mathbb{R}_{+}\) its state-value function and its state-action value function at the \(h\)-th timestep, which characterize the expected total rewards received by executing the policy \(\pi\) starting from some \(x_{h}=x\in\mathcal{S}\) (or \(x_{h}=x\in\mathcal{S},a_{h}=a\in\mathcal{A}\), resp.), till the end of the episode. Specifically, for any \((x,a)\in\mathcal{S}\times\mathcal{A}\), \[V^{\pi}_{h}(x):=\mathbb{E}_{\mathbb{P},\pi}\left[\sum_{h^{\prime}=h}^{H}r_{h^{ \prime}}(x_{h^{\prime}},a_{h^{\prime}})\Bigg{|}\,x_{h}=x\right],\quad Q^{\pi}_ {h}(x,a):=\mathbb{E}_{\mathbb{P},\pi}\left[\sum_{h^{\prime}=h}^{H}r_{h^{ \prime}}(x_{h^{\prime}},a_{h^{\prime}})\Bigg{|}\,x_{h}=x,a_{h}=a\right]. \tag{2.1}\] It is known that there exists an optimal policy, denoted by \(\pi^{*}\), which has the optimal state-value function for all initial states (Puterman, 2014). That is, \(V^{\pi^{*}}_{h}(x)=\sup_{\pi}V^{\pi}_{h}(x)\) for all \(h\in[H]\) and \(x\in\mathcal{S}\). 
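For concreteness, here is a minimal tabular sketch (our own illustration, not part of the paper) of how the state-value function \(V^{\pi}_{h}\) in (2.1) can be computed by backward recursion when \(\mathcal{S}\) and \(\mathcal{A}\) are finite and the transition kernel is known:

```python
# Backward recursion for V^pi_h in a finite, known MDP (indices start at 0 here,
# at 1 in the paper). P[h, s, a] is the next-state distribution, r[h, s, a] the
# reward, and pi[h, s] a distribution over actions.
import numpy as np

def policy_value(P, r, pi):
    H, S, A = r.shape
    V = np.zeros((H + 1, S))                    # terminal condition: V_{H+1} = 0
    for h in reversed(range(H)):
        Q = r[h] + P[h] @ V[h + 1]              # Q^pi_h(s, a) = r_h(s, a) + E[V^pi_{h+1}(x_{h+1})]
        V[h] = np.einsum("sa,sa->s", pi[h], Q)  # average over a ~ pi_h(. | s)
    return V[:H]
```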
For simplicity, we abbreviate \(V^{\pi^{*}}\) as \(V^{*}\) and the optimal state-action value function \(Q^{\pi^{*}}\) as \(Q^{*}\). Moreover, the optimal value functions \(Q^{*}\) and \(V^{*}\) satisfy the following Bellman optimality equation (Puterman, 2014), \[V^{*}_{h}(x)=\max_{a\in\mathcal{A}}Q^{*}_{h}(x,a),\quad Q^{*}_{h}(x,a)=( \mathcal{T}_{h}Q^{*}_{h+1})(x,a):=r_{h}(x,a)+\mathbb{E}_{x^{\prime}\sim\mathbb{ P}_{h}(\cdot\,|\,x,a)}\Big{[}\max_{a^{\prime}\in\mathcal{A}}Q^{*}_{h+1}\left(x^{ \prime},a^{\prime}\right)\Big{]}, \tag{2.2}\] with \(Q^{*}_{H+1}(\cdot,\cdot)=0\) for all \((x,a,h)\in\mathcal{S}\times\mathcal{A}\times[H]\). We call \(\mathcal{T}_{h}\) the Bellman optimality operator at timestep \(h\). Also, for any two functions \(Q_{h}\) and \(Q_{h+1}\) on \(\mathcal{S}\times\mathcal{A}\), we define \[\mathcal{E}_{h}(Q_{h},Q_{h+1};x,a):=Q_{h}(x,a)-\mathcal{T}_{h}Q_{h+1}(x,a), \quad\forall(x,a)\in\mathcal{S}\times\mathcal{A}, \tag{2.3}\] as the Bellman residual at timestep \(h\) of \((Q_{h},Q_{h+1})\). Performance metric.We measure the performance of an online RL algorithm after \(K\) episodes by its _regret_. We assume that the learner predicts the optimal policy \(\pi^{*}\) via \(\pi^{k}\) in the \(k\)-th episode for each \(k\in[K]\). Then the regret after \(K\) episodes is defined as the cumulative suboptimality gap of \(\{\pi^{k}\}_{k\in[K]}\)1, defined as Footnote 1: We allow the agent to predict the optimal policy via \(\pi^{k}\) while executing some other exploration policy \(\pi^{k}_{\text{exp}}\) to interact with the environment and collect data, as is considered in the related literature (Sun et al., 2019; Du et al., 2021; Zhong et al., 2022) \[\text{Regret}(K)=\sum_{k=1}^{K}V_{1}^{*}(x_{1})-V_{1}^{\pi^{k}}(x_{1}). \tag{2.4}\] The target of sample-efficient online RL is to achieve sublinear regret (2.4) with respect to \(K\). ### Function Approximation: Model-Free and Model-Based Hypothesis To deal with MDPs with large or even infinite state space \(\mathcal{S}\), we introduce a class of function approximators. In specific, we consider an abstract hypothesis class \(\mathcal{H}=\mathcal{H}_{1}\times\cdots\times\mathcal{H}_{H}\), which can be specified to model-based and model-free settings, respectively. Also, we denote \(\Pi=\Pi_{1}\times\cdots\times\Pi_{H}\) as the space of all Markovian policies. The following two examples show how to specify \(\mathcal{H}\) for model-free and model-based settings. **Example 2.1** (Model-free hypothesis class).: _For model-free setting, \(\mathcal{H}\) contains approximators of the optimal state-action value function of the MDP, i.e., \(\mathcal{H}_{h}\subseteq\{f_{h}:\mathcal{S}\times\mathcal{A}\mapsto\mathbb{R}\}\). For any \(f=(f_{1},\cdots,f_{H})\in\mathcal{H}\):_ 1. _we denote corresponding state-action value function_ \(Q_{f}=\{Q_{h,f}\}_{h\in[H]}\) _with_ \(Q_{h,f}=f_{h}\)_;_ 2. _we denote corresponding state-value function_ \(V_{f}=\{V_{h,f}\}_{h\in[H]}\) _with_ \(V_{h,f}(\cdot)=\max_{a\in\mathcal{A}}Q_{h,f}(\cdot,a)\)_, and we denote the corresponding optimal policy by_ \(\pi_{f}=\{\pi_{h,f}\}_{h\in[H]}\) _with_ \(\pi_{h,f}(\cdot)=\arg\max_{a\in\mathcal{A}}Q_{h,f}(\cdot,a)\)_._ 3. 
_we denote the optimal state-action value function under the true model, i.e.,_ \(Q^{*}\)_, by_ \(f^{*}\)_._ **Example 2.2** (Model-based hypothesis class).: _For the model-based setting, \(\mathcal{H}\) contains approximators of the transition kernel of the MDP, for which we denote \(f=\mathbb{P}_{f}=(\mathbb{P}_{1,f},\cdots,\mathbb{P}_{H,f})\in\mathcal{H}\). For any \((f,\pi)\in\mathcal{H}\times\Pi\):_ 1. _we denote_ \(V_{f}^{\pi}=\{V_{h,f}^{\pi}\}_{h\in[H]}\) _as the state-value function induced by model_ \(\mathbb{P}_{f}\) _and policy_ \(\pi\)_._ 2. _we denote_ \(V_{f}=\{V_{h,f}\}_{h\in[H]}\) _as the optimal state-value function under model_ \(\mathbb{P}_{f}\)_, i.e.,_ \(V_{h,f}=\sup_{\pi\in\Pi}V_{h,f}^{\pi}\)_. The corresponding optimal policy is denoted by_ \(\pi_{f}=\{\pi_{h,f}\}_{h\in[H]}\)_, where_ \(\pi_{h,f}=\arg\sup_{\pi\in\Pi}V_{h,f}^{\pi}\)_._ 3. _we denote the true model_ \(\mathbb{P}\) _of the MDP as_ \(f^{*}\)_._ We remark that the main difference between the model-based hypothesis (Example 2.2) and the model-free hypothesis (Example 2.1) is that model-based RL directly learns the transition kernel of the underlying MDP, while model-free RL learns the optimal state-action value function. Since we do not add any specific structural form to the hypothesis class, e.g., linear function or kernel function, we are in the context of _general function approximations_ (Sun et al., 2019; Jin et al., 2021; Du et al., 2021; Zhong et al., 2022).

## 3 Algorithm Framework: Maximize to Explore (MEX)

In this section, we propose an algorithm framework, named _Maximize to Explore_ (MEX, Algorithm 1), for online RL in MDPs with general function approximations. With a novel single objective, MEX automatically balances the goals of exploration and exploitation in online RL. Since MEX only requires an _unconstrained_ maximization procedure, it is friendly to implement in practice. We first give a generic algorithm framework and then instantiate it to model-free (Example 2.1) and model-based (Example 2.2) hypotheses, respectively.

**Generic algorithm.** At each episode \(k\in[K]\), the agent first estimates a hypothesis \(f^{k}\in\mathcal{H}\) using historical data \(\{\mathcal{D}^{s}\}_{s=1}^{k-1}\) by maximizing a composite objective (3.1). To achieve the goal of exploiting historical knowledge while encouraging exploration, the composite objective (3.1) sums: **(a)** the negative loss \(-L_{h}^{k-1}(f)\) induced by the hypothesis \(f\), which represents the exploration incentive driven by estimation error to improve the agent's knowledge; and **(b)** the optimal expected total return associated with the current hypothesis, i.e., \(V_{1,f}\), which represents the goal of exploitation via planning. With a tuning parameter \(\eta>0\), the agent balances the weight put on the tasks of exploitation and exploration. Then the agent predicts \(\pi^{*}\) via the optimal policy associated with the hypothesis \(f^{k}\), i.e., \(\pi_{f^{k}}\). Also, the agent executes some exploration policy \(\pi_{\exp}(f^{k})\) to collect data \(\mathcal{D}^{k}=\{(x_{h}^{k},a_{h}^{k},r_{h}^{k},x_{h+1}^{k})\}_{h=1}^{H}\) and updates the loss function \(L_{h}^{k}(\cdot)\). The choice of the loss function \(L(\cdot)\) varies between model-free and model-based hypotheses, which we specify in the following. The choice of the exploration policy \(\pi_{\exp}(f^{k})\) depends on the specific MDP structure, and we refer to examples in Section 5 and Appendix B for detailed discussions.
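To make the generic loop concrete, below is a self-contained toy sketch (our own illustration, not the authors' implementation): the hypothesis class is a small finite set of candidate tabular transition kernels (a model-based hypothesis in the sense of Example 2.2), planning is done by backward induction as in (2.2), the loss anticipates the negative log-likelihood choice (3.5) given below, and the exploration policy is simply taken to be \(\pi_{f^{k}}\) itself. The candidate set includes the true kernel, so realizability holds.

```python
# Toy MEX loop: a single unconstrained maximization of V_{1,f}(x_1) - eta * L^{k-1}(f)
# over a finite hypothesis class of tabular transition models.
import numpy as np

rng = np.random.default_rng(0)
S, A, H, K, eta = 2, 2, 3, 200, 0.5
r = rng.uniform(0.0, 1.0, (H, S, A))                  # known deterministic rewards r_h(x, a)
P_true = rng.dirichlet(np.ones(S), size=(H, S, A))    # true kernel, unknown to the agent
hypotheses = [rng.dirichlet(np.ones(S), size=(H, S, A)) for _ in range(20)] + [P_true]

def plan(P):
    """Backward induction under model P: optimal value V_1 and greedy policy."""
    V, pi = np.zeros((H + 1, S)), np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = r[h] + P[h] @ V[h + 1]
        pi[h], V[h] = Q.argmax(axis=1), Q.max(axis=1)
    return V[0], pi

def nll(P, data):
    """Negative log-likelihood of observed transitions under model P."""
    return -sum(np.log(P[h][s, a, s2] + 1e-12) for (h, s, a, s2) in data)

data = []
for k in range(K):
    scores = [plan(P)[0][0] - eta * nll(P, data) for P in hypotheses]  # objective (3.1)
    f_k = hypotheses[int(np.argmax(scores))]
    _, pi_k = plan(f_k)                                # predicted optimal policy pi_{f^k}
    s = 0                                              # fixed initial state x_1
    for h in range(H):                                 # execute the policy to collect D^k
        a = pi_k[h, s]
        s_next = rng.choice(S, p=P_true[h, s, a])
        data.append((h, s, a, s_next))
        s = s_next
```

With more data, the likelihood term dominates for hypotheses far from the truth, so the selected model concentrates on the true kernel, while the value term keeps the choice optimistic among models that fit the data comparably well.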
We need to highlight that \(\mathtt{MEX}\) is not a Lagrangian duality of the constrained optimization objectives within data-dependent level-sets proposed by previous works (Jin et al., 2021; Du et al., 2021). In fact, \(\mathtt{MEX}\) only needs to fix the parameter \(\eta\) across each episode \(k\). Thus \(\eta\) is independent of data and predetermined, which contrasts with Lagrangian methods that involve an inner loop of optimization for the dual variables. We also remark that we can rewrite (3.1) as a joint optimization \((f,\pi)=\operatorname*{argsup}_{f\in\mathcal{H},\pi\in\Pi}V_{1,f}^{\pi}(x_{1})-\eta\sum_{h=1}^{H}L_{h}^{k-1}(f).\) When \(\eta\) tends to infinity, \(\mathtt{MEX}\) coincides with the vanilla actor-critic framework (Konda and Tsitsiklis, 1999), where the critic \(f\) minimizes the estimation error and the actor \(\pi\) acts greedily with respect to the critic \(f\). In the following two parts, we instantiate Algorithm 1 to model-based and model-free hypotheses respectively by specifying the loss function \(L_{h}^{k}(f)\). Model-free algorithm.For model-free hypothesis (Example 2.1), the composite objective (3.1) becomes \[f^{k}=\operatorname*{argsup}_{f\in\mathcal{H}}\left\{\max_{a_{1}\in\mathcal{A}}Q_{1,f}(x_{1},a_{1})-\eta\cdot\sum_{h=1}^{H}L_{h}^{k-1}(f)\right\}. \tag{3.2}\] Regarding the choice of the loss function, for the sake of theoretical analysis, to deal with MDPs with low Bellman eluder dimension (Jin et al., 2021) and MDPs of bilinear class (Du et al., 2021), we assume the existence of a certain function \(l\), which generalizes the notion of the Bellman residual. **Assumption 3.1**.: _The function \(l:\mathcal{H}\times\mathcal{H}_{h}\times\mathcal{H}_{h+1}\times(\mathcal{S}\times\mathcal{A}\times\mathbb{R}\times\mathcal{S})\mapsto\mathbb{R}\) satisfies2:_ Footnote 2: For simplicity we drop the dependence of \(l\) on the index \(h\) since this causes no confusion. Similar simplifications are used later. 1. (Generalized Bellman completeness) _(Zhong et al., 2022; Chen et al., 2022). There exists a functional operator_ \(\mathcal{P}_{h}:\mathcal{H}_{h+1}\mapsto\mathcal{H}_{h}\) _such that for any_ \((f^{\prime},f_{h},f_{h+1})\in\mathcal{H}\times\mathcal{H}_{h}\times\mathcal{H}_{h+1}\) _and_ \(\mathcal{D}_{h}=(x_{h},a_{h},r_{h},x_{h+1})\in\mathcal{S}\times\mathcal{A}\times\mathbb{R}\times\mathcal{S}\)_,_ \[l_{f^{\prime}}\big{(}(f_{h},f_{h+1});\mathcal{D}_{h}\big{)}-l_{f^{\prime}}\big{(}(\mathcal{P}_{h}f_{h+1},f_{h+1});\mathcal{D}_{h}\big{)}=\mathbb{E}_{x_{h+1}\sim\mathbb{P}_{h}(\cdot|x_{h},a_{h})}\big{[}l_{f^{\prime}}\big{(}(f_{h},f_{h+1});\mathcal{D}_{h}\big{)}\big{]},\] _where we require that_ \(\mathcal{P}_{h}f_{h+1}^{*}=f_{h}^{*}\) _and that_ \(\mathcal{P}_{h}f_{h+1}\in\mathcal{H}_{h}\) _for any_ \(f_{h+1}\in\mathcal{H}_{h+1}\) _and_ \(h\in[H]\)_;_ 2. (Boundedness). _It holds that_ \(|l_{f^{\prime}}((f_{h},f_{h+1});\mathcal{D}_{h})|\leq B_{l}\) _for some_ \(B_{l}>0\) _and any_ \((f^{\prime},f_{h},f_{h+1})\in\mathcal{H}\times\mathcal{H}_{h}\times\mathcal{H}_{h+1}\) _and_ \(\mathcal{D}_{h}=(x_{h},a_{h},r_{h},x_{h+1})\in\mathcal{S}\times\mathcal{A}\times\mathbb{R}\times\mathcal{S}\)_._ Intuitively, the operator \(\mathcal{P}_{h}\) can be considered as a generalization of the Bellman optimality operator. We set the choice of \(l\) and \(\mathcal{P}\) for concrete model-free examples in Section 5. 
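As a point of reference, one standard choice of \(l\) for \(Q\)-type hypotheses is the empirical Bellman residual, which does not use the reference hypothesis \(f^{\prime}\); the snippet below is only an illustration of this choice (the formal choices for each MDP class are deferred to Section 5 and the appendix), and the argument names are assumptions of the sketch.

```python
# One illustrative choice of the residual function l in Assumption 3.1:
# the empirical Bellman residual of a Q-type hypothesis pair (f_h, f_{h+1}).

def bellman_residual_l(f_prime, f_h, f_hplus1, transition, actions):
    """l_{f'}((f_h, f_{h+1}); D_h) = f_h(x,a) - r - max_{a'} f_{h+1}(x', a').

    f_prime is accepted but unused in this particular choice; `actions` is the
    finite action set over which the maximum is taken.
    """
    x, a, r, x_next = transition
    return f_h(x, a) - r - max(f_hplus1(x_next, a_next) for a_next in actions)
```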
We then set the loss function \(L_{h}^{k}\) as an empirical estimation of the generalized squared Bellman error \(|\mathbb{E}_{x_{h+1}\sim\mathbb{P}_{h}(\cdot|x_{h},a_{h})}[l_{f^{*}}((f_{h},f_{h+1 }),\mathcal{D}_{h}^{s})]|^{2}\), given by \[L_{h}^{k}(f)=\sum_{s=1}^{k}l_{f^{*}}\big{(}(f_{h},f_{h+1});\mathcal{D}_{h}^{s} \big{)}^{2}-\inf_{f_{h}^{*}\in\mathcal{H}_{h}}\sum_{s=1}^{k}l_{f^{*}}\big{(}(f_ {h}^{\prime},f_{h+1});\mathcal{D}_{h}^{s}\big{)}^{2}. \tag{3.3}\] We remark that the subtracted infimum term in (3.3) is for handling the variance terms in the estimation to achieve a fast theoretical rate. Similar essential ideas are also adopted by Jin et al. (2021); Xie et al. (2021); Dann et al. (2021); Jin et al. (2022); Lu et al. (2022); Agarwal and Zhang (2022); Zhong et al. (2022). Model-based algorithm.For model-based hypothesis (Example 2.2), the composite objective (3.1) becomes \[f^{k}=\operatorname*{argsup}_{f\in\mathcal{H}}\left\{\sup_{\pi\in\Pi}V_{1, \mathbb{P}_{f}}^{\pi}(x_{1})-\eta\cdot\sum_{h=1}^{H}L_{h}^{k-1}(f)\right\}, \tag{3.4}\] which gives a joint optimization over the model \(\mathbb{P}_{f}\) and the policy \(\pi\). In the model-based algorithm, we choose the loss function \(L_{h}^{k}\) as the negative log-likelihood loss, defined as \[L_{h}^{k}(f)=-\sum_{s=1}^{k}\log\mathbb{P}_{h,f}(x_{h+1}^{s}|x_{h}^{s},a_{h}^{s }). \tag{3.5}\] ## 4 Regret Analysis for MEX Framework In this section, we analyze the regret of the MEX framework (Algorithm 1). Specifically, we give an upper bound of its regret which holds for both model-free (Example 2.1) and model-based (Example 2.2) settings. To derive the theorem, we first present three key assumptions needed. In Section 5, we specify the generic upper bound to specific examples of MDPs and hypothesis classes that satisfy these assumptions. We first assume that the hypothesis class \(\mathcal{H}\) is well-specified, containing the true hypothesis \(f^{*}\). **Assumption 4.1** (Realizablity).: _We assume that the true hypothesis \(f^{*}\in\mathcal{H}\)._ Moreover, we make a structural assumption on the underlying MDP to ensure sample-efficient online RL. Inspired by Zhong et al. (2022), we require the MDP to have low **G****Generalized****Eluder****Coefficient** (GEC). In MDPs with low GEC, the agent can effectively mitigate out-of-sample prediction error by minimizing in-sample prediction error based on the historical data. Therefore, the GEC can be used to measure the difficulty inherent in generalization from the observation to the unobserved trajectory, thus further quantifying the hardness of learning the MDP. We refer the readers to Zhong et al. (2022) for a detailed discussion of GEC. To define GEC, we introduce a discrepancy function \[\ell_{f^{\prime}}(f;\xi_{h}):\mathcal{H}\times\mathcal{H}\times(\mathcal{S} \times\mathcal{A}\times\mathbb{R}\times\mathcal{S})\mapsto\mathbb{R},\] which characterizes the error incurred by hypothesis \(f\in\mathcal{H}\) on data \(\xi_{h}=(x_{h},a_{h},r_{h},x_{h+1})\). Specific choices of \(\ell\) are given in Section 5 for concrete model-free and model-based examples. 
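A minimal sketch of the two loss functions just defined, for a finite hypothesis class: `l` is a residual function in the spirit of Assumption 3.1 (for instance, the Bellman-residual choice sketched above, with one fixed reference hypothesis for simplicity), `candidates_h` plays the role of \(\mathcal{H}_{h}\) in the subtracted infimum, and `log_prob_h` is a user-supplied transition likelihood; all of these names are assumptions of the illustration.

```python
# Sketch of the model-free loss (3.3) and the model-based loss (3.5).

def model_free_loss(f, f_prime, data_h, l, candidates_h):
    """L_h^k(f): squared residuals minus the infimum over f'_h in H_h (Eq. 3.3).

    f:            the pair (f_h, f_{h+1}) relevant at timestep h
    data_h:       list of transitions D_h^s = (x_h, a_h, r_h, x_{h+1}), s = 1..k
    l:            residual function with signature l(f_prime, f_h, f_{h+1}, D_h)
    candidates_h: the (finite) class H_h over which the infimum is taken
    """
    f_h, f_hp1 = f
    first = sum(l(f_prime, f_h, f_hp1, d) ** 2 for d in data_h)
    second = min(
        sum(l(f_prime, g_h, f_hp1, d) ** 2 for d in data_h)
        for g_h in candidates_h
    )
    return first - second


def model_based_loss(log_prob_h, data_h):
    """L_h^k(f): negative log-likelihood of the observed transitions (Eq. 3.5),
    where log_prob_h(x_next, x, a) = log P_{h,f}(x_next | x, a)."""
    return -sum(log_prob_h(x_next, x, a) for (x, a, r, x_next) in data_h)
```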
**Assumption 4.2** (Low generalized eluder coefficient (Zhong et al., 2022)).: _We assume that given an \(\epsilon>0\), there exists \(d(\epsilon)\in\mathbb{R}_{+}\), such that for any sequence of \(\{f^{k}\}_{k\in[K]}\subseteq\mathcal{H}\), \(\{\pi_{\exp}(f^{k})\}_{k\in[K]}\subseteq\Pi\),_ \[\sum_{k=1}^{K}V_{1,f^{k}}-V_{1}^{\pi_{f^{k}}}\leq\inf_{\mu>0}\left\{\frac{\mu} {2}\sum_{h=1}^{H}\sum_{k=1}^{K}\sum_{s=1}^{k-1}\mathbb{E}_{\xi_{h}\sim\pi_{ \exp}(f^{s})}[\ell_{f^{s}}(f^{k};\xi_{h})]+\frac{d(\epsilon)}{2\mu}+\sqrt{d( \epsilon)HK}+\epsilon HK\right\}.\] _We denote the smallest number \(d(\epsilon)\in\mathbb{R}_{+}\) satisfying this condition as \(d_{\mathrm{GEC}}(\epsilon)\)._ As is shown by Zhong et al. (2022), the low-GEC MDP class covers almost all known theoretically tractable MDP instances, such as linear MDP (Yang and Wang, 2019; Jin et al., 2020), linear mixture MDP (Ayoub et al., 2020; Modi et al., 2020; Cai et al., 2020), MDPs of low witness rank (Sun et al., 2019), MDPs of low Bellman eluder dimension (Jin et al., 2021), and MDPs of bilinear class (Du et al., 2021). Finally, we make a concentration-style assumption which characterizes how the loss function \(L_{h}^{k}\) is related to the expectation of the discrepancy function \(\mathbb{E}[\ell]\) appearing in the definition of GEC. For ease of presentation, we assume that \(\mathcal{H}\) is finite, i.e., \(|\mathcal{H}|<\infty\), but our result can be directly extended to an infinite \(\mathcal{H}\) using covering number arguments (Wainwright, 2019; Jin et al., 2021; Liu et al., 2022b; Jin et al., 2022). **Assumption 4.3** (Generalization).: _We assume that \(\mathcal{H}\) is finite, i.e., \(|\mathcal{H}|<+\infty\), and that with probability at least \(1-\delta\), for any episode \(k\in[K]\) and hypothesis \(f\in\mathcal{H}\), it holds that_ \[\sum_{h=1}^{H}L_{h}^{k-1}(f^{*})-L_{h}^{k-1}(f)\lesssim-\sum_{h=1}^{H}\sum_{s= 1}^{k-1}\mathbb{E}_{\xi_{h}\sim\pi_{\text{exp}}(f^{s})}[\ell_{f^{s}}(f;\xi_{h} )]+\texttt{Poly}(H,B_{l})\cdot\log(HK|\mathcal{H}|/\delta),\] _Here we use \(\texttt{Poly}(H,B_{l})\) to denote polynomials of \(H\) (and \(B_{l}\) for model-free hypothesis, see Assumption 3.1)._ As we will show in Proposition 5.1 and Proposition 5.3, Assumption 4.3 holds for both the model-free and model-based settings. With Assumptions 4.1, 4.2, and 4.3, we can present our main theoretical result. **Theorem 4.4** (Online regret of \(\mathtt{MEX}\) (Algorithm 1)).: _Under Assumptions 4.1, 4.2, and 4.3, by setting_ \[\eta=\sqrt{\frac{d_{\mathrm{GEC}}(1/\sqrt{HK})}{\log(HK|\mathcal{H}|/\delta) \cdot\texttt{Poly}(H,B_{l})\cdot K}},\] _then the regret of Algorithm 1 after \(K\) episodes is upper bounded by_ \[\mathrm{Regret}(K)\lesssim\sqrt{\texttt{Poly}(H,B_{l})\cdot d_{\mathrm{GEC}}( 1/\sqrt{HK})\cdot\log(HK|\mathcal{H}|/\delta)\cdot K},\] _with probability at least \(1-\delta\). Here \(d_{\mathrm{GEC}}(\cdot)\) is defined in Assumption 4.2._ Proof of Theorem 4.4.: See Appendix A.1 for a detailed proof. By Theorem 4.4, the regret of Algorithm 1 scales with the square root of the number of episodes \(K\) and the polynomials of the horizon \(H\), the GEC \(d_{\mathrm{GEC}}(1/\sqrt{K})\), and the log of the hypothesis class cardinality \(\log|\mathcal{H}|\). When the number of episodes \(K\) tends to infinity, the average regret \(\mathrm{Regret}(K)/K\) vanishes, meaning that the output policy of Algorithm 1 is approximately optimal. Thus Algorithm 1 is provably sample-efficient. 
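As a quick numerical illustration of how the choice of \(\eta\) and the bound in Theorem 4.4 scale with \(K\), the snippet below evaluates both while suppressing absolute constants and treating \(d_{\mathrm{GEC}}\), \(H\), \(|\mathcal{H}|\), and the polynomial factor as fixed inputs; these simplifications and the example values are assumptions of the sketch.

```python
import math

def theorem_4_4_scaling(d_gec, H, K, card_H, delta=0.01, poly=1.0):
    """Evaluate the eta choice and regret bound of Theorem 4.4 up to absolute
    constants (suppressing constants is an assumption of this sketch)."""
    log_term = math.log(H * K * card_H / delta)
    eta = math.sqrt(d_gec / (log_term * poly * K))
    regret_bound = math.sqrt(poly * d_gec * log_term * K)
    return eta, regret_bound

# The average regret shrinks as K grows, matching the sqrt(K) rate.
for K in (10**3, 10**4, 10**5):
    eta, reg = theorem_4_4_scaling(d_gec=20.0, H=10, K=K, card_H=10**6)
    print(f"K={K:>6}  eta={eta:.4f}  regret_bound/K={reg / K:.4f}")
```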
Besides, as we can see in Theorem 4.4 and its specifications in Section 5, \(\mathtt{MEX}\) matches existing theoretical results in the literature of online RL under general function approximations (Jiang et al., 2017; Sun et al., 2019; Du et al., 2021; Jin et al., 2021; Dam et al., 2021; Agarwal and Zhang, 2022; Zhong et al., 2022). But meanwhile, \(\mathtt{MEX}\) does not require explicitly solving a constrained optimization problem within data-dependent level-sets or performing a complex sampling procedure, as is required by previous theoretical algorithms. This advantage makes \(\mathtt{MEX}\) a principled approach with much easier practical implementations. We conduct deep RL experiments for \(\mathtt{MEX}\) in Section 7 to demonstrate its power in complicated online tasks. Finally, thanks to the simple and flexible form of \(\mathtt{MEX}\), in Section 6, we further extend this framework and its analysis to two-player zero-sum Markov games (MGs), for which we also extend the definition of generalized eluder coefficient (GEC) to two-player zero-sum MGs. Moreover, a vast variety of tractable partially observable problems also enjoy low GEC (Zhong et al., 2022), including regular PSR (Zhan et al., 2022), weakly revealing POMDPs (Jin et al., 2020), low rank POMDPs (Wang et al., 2022), and PO-bilinear class POMDPs (Uehara et al., 2022). We believe that our proposed \(\mathtt{MEX}\) framework can also be applied to solve these POMDPs. ## 5 Examples of \(\mathtt{MEX}\) Framework In this section, we specify Algorithm 1 to model-based and model-free hypothesis classes for various examples of MDPs of low GEC (Assumption 4.2), including MDPs with low witness rank (Sun et al., 2019), MDPs with low Bellman eluder dimension (Jin et al., 2021), and MDPs of bilinear class (Du et al., 2021). Meanwhile, we show that Assumption 4.3 (generalization) holds for both model-free and model-based settings. It is worth highlighting that for both model-free and model-based hypotheses, we provide generalization guarantees in a neat and unified manner, independent of specific MDP examples. ### Model-free Online RL in Markov Decision Processes In this subsection, we specify Algorithm 1 for model-free hypothesis (Example 2.1). For a model-free hypothesis class, we choose the discrepancy function \(\ell\) as, given \(\mathcal{D}_{h}=(x_{h},a_{h},r_{h},x_{h+1})\), \[\ell_{f^{\prime}}(f;\mathcal{D}_{h})=\left(\mathbb{E}_{x_{h+1}\sim\mathbb{P}_{ h}(\cdot|x_{h},a_{h})}[l_{f^{\prime}}((f_{h},f_{h+1});\mathcal{D}_{h})]\right)^{2}. \tag{5.1}\] where the function \(l:\mathcal{H}\times\mathcal{H}_{h}\times\mathcal{H}_{h+1}\times(\mathcal{S} \times\mathcal{A}\times\mathbb{R}\times\mathcal{S})\mapsto\mathbb{R}\) satisfies Assumption 3.1. We specify the choice of \(l\) in concrete examples of MDPs later. In the following, we check and specify Assumptions 4.2 and 4.3 for model-free hypothesis classes. **Proposition 5.1** (Generalization: model-free RL).: _We assume that \(\mathcal{H}\) is finite, i.e., \(|\mathcal{H}|<+\infty\). Under Assumption 3.1, with probability at least \(1-\delta\), for any \(k\in[K]\) and \(f\in\mathcal{H}\), it holds that_ \[\sum_{h=1}^{H}L_{h}^{k-1}(f^{*})-L_{h}^{k-1}(f)\lesssim-\sum_{h=1}^{H}\sum_{s =1}^{k-1}\mathbb{E}_{\xi_{h}\sim\pi_{\text{exp}}(f^{*})}[\ell_{f^{*}}(f;\xi_{ h})]+HB_{l}^{2}\log(HK|\mathcal{H}|/\delta),\] _where \(L\) and \(\ell\) are defined in (3.3) and (5.1) respectively. Here \(B_{l}\) is specified in Assumption 3.1._ Proof of Proposition 5.1.: See Appendix B.3 for detailed proof. 
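Note that the discrepancy in (5.1) squares the conditional expectation of the residual rather than averaging squared residuals; the sketch below makes this distinction explicit for a finite next-state space, and it assumes access to the transition probabilities because (5.1) is a population quantity rather than an estimator.

```python
def discrepancy_model_free(f_prime, f_h, f_hp1, x, a, r, next_state_probs, l):
    """ell_{f'}(f; D_h) = ( E_{x' ~ P_h(.|x,a)}[ l_{f'}((f_h, f_{h+1}); (x,a,r,x')) ] )^2.

    next_state_probs: dict mapping next states x' to P_h(x'|x, a); the true
    kernel is assumed available here because (5.1) is a population quantity.
    """
    expected_residual = sum(
        p * l(f_prime, f_h, f_hp1, (x, a, r, x_next))
        for x_next, p in next_state_probs.items()
    )
    return expected_residual ** 2
```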
Proposition 5.1 specifies Assumption 4.3 with \(\texttt{Poly}(H,B_{l})=B_{l}^{2}H\). For Assumption 4.2, we need structural assumptions on the MDP. Given an MDP with GEC \(d_{\text{GEC}}\), we have the following corollary of Theorem 4.4. **Corollary 5.2** (Online regret of MEX: model-free hypothesis).: _Given an MDP with generalized eluder coefficient \(d_{\text{GEC}}(\cdot)\) and a finite model-free hypothesis class \(\mathcal{H}\) with \(f^{*}\in\mathcal{H}\), under Assumption 3.1, setting_ \[\eta=\sqrt{\frac{d_{\text{GEC}}(1/\sqrt{HK})}{\log(HK|\mathcal{H}|/\delta) \cdot B_{l}^{2}HK}}, \tag{5.2}\] _then the regret of Algorithm 1 after \(K\) episodes is upper bounded by_ \[\text{Regret}(T)\lesssim B_{l}\cdot\sqrt{d_{\text{GEC}}(1/\sqrt{HK})\cdot\log( HK|\mathcal{H}|/\delta)\cdot HK}, \tag{5.3}\] _with probability at least \(1-\delta\). Here \(B_{l}\) is specified in Assumption 3.1._ Corollary 5.2 can be directly specified to MDPs with low GEC, including MDPs with low Bellman eluder dimension (Jin et al., 2021) and MDPs of bilinear class (Du et al., 2021). We refer the readers to Appendix B.1 for a detailed discussion of these two examples. ### Model-based Online RL in Markov Decision Processes In this part, we specify Algorithm 1 to model-based hypothesis (Example 2.2). For a model-based hypothesis class, we choose the discrepancy function \(\ell\) as the _Hellinger distance_. Given \(\mathcal{D}_{h}=(x_{h},a_{h},r_{h},x_{h+1})\), we let \[\ell_{f^{\prime}}(f;\mathcal{D}_{h})=D_{\text{H}}(\mathbb{P}_{h,f}(\cdot|x_{h},a_{h})\|\mathbb{P}_{h,f^{*}}(\cdot|x_{h},a_{h})), \tag{5.4}\] where \(D_{\text{H}}(\cdot\|\cdot)\) denotes the Hellinger distance. According to (5.4), the discrepancy function \(\ell\) does not depend on the input \(f^{\prime}\in\mathcal{H}\). In the following, we check and specify Assumptions 4.2 and 4.3. **Proposition 5.3** (Generalization: model-based RL).: _We assume that \(\mathcal{H}\) is finite, i.e., \(|\mathcal{H}|<+\infty\). Then with probability at least \(1-\delta\), for any \(k\in[K]\), \(f\in\mathcal{H}\), it holds that_ \[\sum_{h=1}^{H}L_{h}^{k-1}(f^{*})-L_{h}^{k-1}(f)\lesssim-\sum_{h=1}^{H}\sum_{s =1}^{k-1}\mathbb{E}_{\xi_{h}\sim\pi_{\text{exp}}(f^{*})}[\ell_{f^{*}}(f;\xi_{ h})]+H\log(H|\mathcal{H}|/\delta),\] _where \(L\) and \(\ell\) are defined in (3.5) and (5.4) respectively._ Proof of Proposition 5.3.: See Appendix B.4 for detailed proof. Proposition 5.3 specifies Assumption 4.3 with \(\mathsf{Poly}(H)=H\). For Assumption 4.2, we also need structural assumptions on the MDP. Given an MDP with GEC \(d_{\mathrm{GEC}}\), we have the following corollary of Theorem 4.4. **Corollary 5.4** (Online regret of \(\mathtt{MEX}\): model-based hypothesis).: _Given an MDP with generalized eluder coefficient \(d_{\mathrm{GEC}}(\cdot)\) and a finite model-based hypothesis class \(\mathcal{H}\) with \(f^{*}\in\mathcal{H}\), by setting_ \[\eta=\sqrt{\frac{d_{\mathrm{GEC}}(1/\sqrt{HK})}{\log(H|\mathcal{H}|/\delta) \cdot HK}},\] _then the regret of Algorithm 1 after \(K\) episodes is upper bounded by, with probability at least \(1-\delta\),_ \[\mathrm{Regret}(K)\lesssim\sqrt{d_{\mathrm{GEC}}(1/\sqrt{HK})\cdot\log(H| \mathcal{H}|/\delta)\cdot HK}, \tag{5.5}\] Corollary 5.4 can be directly specified to MDPs having low GEC, including MDPs with low witness rank (Sun et al., 2019). We refer the readers to Appendix B.2 for a detailed discussion of this example. 
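For a finite next-state space, the Hellinger discrepancy in (5.4) can be computed directly from the two probability vectors; the small sketch below uses the squared Hellinger distance \(\frac{1}{2}\sum_{i}(\sqrt{p_{i}}-\sqrt{q_{i}})^{2}\), which is the usual convention in this line of work (treating the squared form as the discrepancy here is an assumption of the illustration).

```python
import numpy as np

def hellinger_squared(p, q):
    """Squared Hellinger distance 0.5 * sum_i (sqrt(p_i) - sqrt(q_i))^2
    between two discrete distributions given as probability vectors."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

# Example: discrepancy between a candidate kernel P_{h,f}(.|x,a) and the true
# kernel P_{h,f*}(.|x,a) over three next states.
p_model = [0.6, 0.3, 0.1]
p_true = [0.5, 0.4, 0.1]
print(hellinger_squared(p_model, p_true))
```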
## 6 Extensions to Two-player Zero-sum Markov Games In this section, we extend the definition of GEC to the two-player zero-sum MG setting and adapt \(\mathtt{MEX}\) to this setting in both model-free and model-based styles. Then we provide the theoretical guarantee for our proposed algorithms and specify the results in concrete examples such as linear two-player zero-sum MG. ### Online Reinforcement Learning in Two-player Zero-sum Markov Games Markov games (MGs) generalize the standard Markov decision process to the multi-agent setting. We consider the episodic two-player zero-sum MG, which is denoted as \((H,\mathcal{S},\mathcal{A},\mathcal{B},\mathbb{P},r)\). Here \(\mathcal{S}\) is the state space shared by both players, \(\mathcal{A}\) and \(\mathcal{B}\) are the action spaces of the two players (referred to as the max-player and the min-player) respectively, \(H\in\mathbb{N}_{+}\) denotes the length of each episode, \(\mathbb{P}=\{\mathbb{P}_{h}\}_{h\in[H]}\) with \(\mathbb{P}_{h}:\mathcal{S}\times\mathcal{A}\times\mathcal{B}\mapsto\Delta(\mathcal{S})\) the transition kernel of the next state given the current state and two actions from the two players at timestep \(h\), and \(r=\{r_{h}\}_{h\in[H]}\) with \(r_{h}:\mathcal{S}\times\mathcal{A}\times\mathcal{B}\mapsto[0,1]\) the reward function at timestep \(h\). We consider _online_ reinforcement learning in the episodic two-player zero-sum MG, where the two players interact with the MG for \(K\in\mathbb{N}_{+}\) episodes through the following protocol. Each episode \(k\) starts from an initial state \(x_{1}^{k}\). At each timestep \(h\), the two players observe the current state \(x_{h}^{k}\), take joint actions \((a_{h}^{k},b_{h}^{k})\) individually, and observe the next state \(x_{h+1}^{k}\sim\mathbb{P}_{h}(\cdot\mid x_{h}^{k},a_{h}^{k},b_{h}^{k})\). The \(k\)-th episode ends after step \(H\) and then a new episode starts. Without loss of generality, we assume each episode has a common fixed initial state \(x_{1}^{k}=x_{1}\), which can be easily generalized to having \(x_{1}\) sampled from a fixed but unknown distribution. Policies and value functions.We consider Markovian policies for both the max-player and the min-player. A Markovian policy of the max-player is denoted by \(\mu=\{\mu_{h}:\mathcal{S}\mapsto\Delta(\mathcal{A})\}_{h\in[H]}\). Similarly, a Markovian policy of the min-player is denoted by \(\nu=\{\nu_{h}:\mathcal{S}\mapsto\Delta(\mathcal{B})\}_{h\in[H]}\). Given a joint policy \(\mathbf{\pi}=(\mu,\nu)\), its state-value function \(V_{h}^{\mu,\nu}:\mathcal{S}\mapsto\mathbb{R}_{+}\) and state-action value function \(Q_{h}^{\mu,\nu}:\mathcal{S}\times\mathcal{A}\times\mathcal{B}\mapsto\mathbb{R}_{+}\) at timestep \(h\) are defined as \[V_{h}^{\mu,\nu}(x) :=\mathbb{E}_{\mathbb{P},(\mu,\nu)}\left[\sum_{h^{\prime}=h}^{H}r_{h^{\prime}}(x_{h^{\prime}},a_{h^{\prime}},b_{h^{\prime}})\Bigg{|}\,x_{h}=x\right], \tag{6.1}\] \[Q_{h}^{\mu,\nu}(x,a,b) :=\mathbb{E}_{\mathbb{P},(\mu,\nu)}\left[\sum_{h^{\prime}=h}^{H}r_{h^{\prime}}(x_{h^{\prime}},a_{h^{\prime}},b_{h^{\prime}})\Bigg{|}\,(x_{h},a_{h},b_{h})=(x,a,b)\right], \tag{6.2}\] where the expectations are taken over the randomness of the transition kernel and the policies. In the game, the max-player wants to maximize the value functions, while the min-player aims at minimizing the value functions. 
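As a small illustration of the definition in (6.1), the state value of a joint policy can be estimated by Monte Carlo rollouts when a simulator of the game is available; the `env_step`, `mu`, and `nu` callables below are placeholders assumed for the sketch.

```python
def mc_state_value(env_step, mu, nu, x, h, H, num_rollouts=1000):
    """Monte Carlo estimate of V_h^{mu,nu}(x) from Eq. (6.1): the average
    cumulative reward of rollouts starting at state x at timestep h.

    env_step(h, x, a, b) -> (reward, next_state) simulates (r_h, P_h), while
    mu[h](x) and nu[h](x) return sampled actions of the two players.
    """
    total = 0.0
    for _ in range(num_rollouts):
        state, ret = x, 0.0
        for step in range(h, H + 1):
            a, b = mu[step](state), nu[step](state)
            reward, state = env_step(step, state, a, b)
            ret += reward
        total += ret
    return total / num_rollouts
```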
Best response, Nash equilibrium, and Bellman equations.Given a max-player's policy \(\mu\), the _best response policy_ of the min-player, denoted by \(\nu^{\dagger}(\mu)\), is the policy that minimizes the total rewards given that the max-player uses \(\mu\). According to this definition, and for notational simplicity, we denote \[V^{\mu,\dagger}_{h}(x) :=V^{\mu,\nu^{\dagger}(\mu)}_{h}(x)=\inf_{\nu}V^{\mu,\nu}_{h}(x),\] \[Q^{\mu,\dagger}_{h}(x,a,b) :=Q^{\mu,\nu^{\dagger}(\mu)}_{h}(x,a,b)=\inf_{\nu}Q^{\mu,\nu}_{h}(x,a,b), \tag{6.3}\] for any \((x,a,b,h)\in\mathcal{S}\times\mathcal{A}\times\mathcal{B}\times[H]\). Similarly, given a min-player's policy \(\nu\), there is a _best response policy_\(\mu^{\dagger}(\nu)\) for the max-player that maximizes the total rewards given \(\nu\). According to the definition, we denote \[V^{\dagger,\nu}_{h}(x) :=V^{\mu^{\dagger}(\nu),\nu}_{h}(x)=\sup_{\mu}V^{\mu,\nu}_{h}(x),\] \[Q^{\dagger,\nu}_{h}(x,a,b) :=Q^{\mu^{\dagger}(\nu),\nu}_{h}(x,a,b)=\sup_{\mu}Q^{\mu,\nu}_{h} (x,a,b), \tag{6.4}\] for any \((x,a,b,h)\in\mathcal{S}\times\mathcal{A}\times\mathcal{B}\times[H]\). Furthermore, there exists a _Nash equilibrium_ (NE) joint policy \((\mu^{*},\nu^{*})\) (Filar and Vrieze, 2012) such that both players are optimal against their best responses. That is, \[V^{\mu^{*},\dagger}_{h}(x)=\sup_{\mu}V^{\mu,\dagger}_{h}(x),\quad V^{\dagger, \nu^{*}}_{h}(x)=\inf_{\nu}V^{\dagger,\nu}_{h}(x), \tag{6.5}\] for any \((x,h)\in\mathcal{S}\times[H]\). For the NE joint policy, we have the following minimax equation, \[\sup_{\mu}\inf_{\nu}V^{\mu,\nu}_{h}(x)=V^{\mu^{*},\nu^{*}}_{h}(x)=\inf_{\nu} \sup_{\mu}V^{\mu,\nu}_{h}(x). \tag{6.6}\] for any \((x,h)\in\mathcal{S}\times[H]\). This shows that: i) the for two-player zero-sum MG, the sup and the inf exchanges; ii) the NE policy has a unique state-value (state-action value) function, which we denote as \(V^{*}\) and \(Q^{*}\) respectively. Finally, we introduce two sets of Bellman equations for best response value functions and NE value functions. In specific, for the min-player's best response value functions given max-player policy \(\mu\), i.e., (6.3), we have the following Bellman equation,3 Footnote 3: For simplicity, we define \(\mathbb{D}_{(\mu_{h},\nu_{h})}:=\mathbb{E}_{a\sim\mu_{h}(\cdot|x),b\sim\mu_{h} (\cdot|x)}[Q(x,a,b)]\) for any \(\mu_{h}\), \(\nu_{h}\), and function \(Q\). \[Q^{\mu,\dagger}_{h}(x,a,b)=(\mathcal{T}^{\mu}_{h}Q^{\mu,\dagger}_{h+1})(x,a,b) :=r_{h}(x,a,b)+\mathbb{E}_{x^{\prime}\sim\mathbb{P}_{h}(\cdot|x,a,b)}\bigg{[} \inf_{\nu_{h+1}}\mathbb{D}_{(\mu_{h+1},\nu_{h+1})}Q^{\mu,\dagger}_{h+1}(x^{ \prime})\bigg{]}, \tag{6.7}\] for any \((x,a,b,h)\in\mathcal{S}\times\mathcal{A}\times\mathcal{B}\times[H]\). We name \(\mathcal{T}^{\mu}_{h}\) as the _min-player best response Bellman operator_ given max-player policy \(\mu\), and we define \[\mathcal{E}^{\mu}_{h}(Q_{h},Q_{h+1};x,a,b):=Q_{h}(x,a,b)-\mathcal{T}^{\mu}_{h} Q_{h+1}(x,a,b), \tag{6.8}\] as the _min-player best response Bellman residual_ given max-player policy \(\mu\) at timestep \(h\) of any functions \((Q_{h},Q_{h+1})\). Also, for the NE value functions, i.e., (6.1), we also have the following NE Bellman equation, \[Q^{*}_{h}(x,a,b)=(\mathcal{T}^{\text{NE}}_{h}Q^{*}_{h+1})(x,a,b):=r_{h}(x,a,b)+ \mathbb{E}_{x^{\prime}\sim\mathbb{P}_{h}(\cdot|x,a,b)}\bigg{[}\sup_{\mu_{h+1} }\inf_{\nu_{h+1}}\mathbb{D}_{(\mu_{h+1},\nu_{h+1})}Q^{*}_{h+1}(x^{\prime}) \bigg{]}, \tag{6.9}\] for any \((x,a,b,h)\in\mathcal{S}\times\mathcal{A}\times\mathcal{B}\times[H]\). 
We call \(\mathcal{T}^{\text{NE}}_{h}\) the NE Bellman operator, and we define \[\mathcal{E}^{\text{NE}}_{h}(Q_{h},Q_{h+1};x,a,b):=Q_{h}(x,a,b)-\mathcal{T}^{ \text{NE}}_{h}Q_{h+1}(x,a,b), \tag{6.10}\] as the _NE Bellman residual_ at timestep \(h\) of any functions \((Q_{h},Q_{h+1})\). Performance metric.We say a max-player's policy \(\mu\) is \(\epsilon\)-close to Nash equilibrium if \(V^{*}(x_{1})-V^{\mu,\dagger}(x_{1})<\epsilon\). The goal of this section is to find such a max-player policy. The corresponding regret after \(K\) episodes is, \[\text{Regret}_{\text{MG}}(K)=\sum_{k=1}^{K}V^{*}_{1}(x_{1})-V^{\mu^{k},\dagger} _{1}(x_{1}), \tag{6.11}\] where \(\mu^{k}\) is the policy used by the max-player for the \(k\)-th episode. Such a problem setting is also considered by Jin et al. (2022); Huang et al. (2021); Xiong et al. (2022). Actually, the roles of two players can be exchanged, so that the goal turns to learning a min-player policy \(\nu\) which is \(\epsilon\)-close to the Nash equilibrium. ### Function Approximation: Model-Free and Model-Based Hypothesis Parallel to the MDP setting, we study two-player zero-sum MGs in the context of general function approximations. In specific, we assume access to an abstract hypothesis class \(\mathcal{H}=\mathcal{H}_{1}\times\cdots\times\mathcal{H}_{H}\), which can be specified to model-based and model-free settings, respectively. Also, we denote \(\boldsymbol{\Pi}=\mathbf{M}\times\mathbf{N}\) with \(\mathbf{M}=\mathbf{M}_{1}\times\cdots\times\mathbf{M}_{H}\) and \(\mathbf{N}=\mathbf{N}_{1}\times\cdots\times\mathbf{N}_{H}\) as the space of Markovian joint policies. The following two examples show how to specify \(\mathcal{H}\) for model-free and model-based settings. **Example 6.1** (Model-free hypothesis class: two-player zero-sum Markov game).: _For the model-free setting, \(\mathcal{H}\) contains approximators of the state-action value functions of the MG, i.e., \(\mathcal{H}_{h}\subseteq\{f_{h}:\mathcal{S}\times\mathcal{A}\times\mathcal{B} \mapsto\mathbb{R}\}\). Specifically, for any \(f=(f_{1},\cdots,f_{H})\in\mathcal{H}\):_ 1. _we denote the corresponding state-action value function_ \(Q_{f}=\{Q_{h,f}\}_{h\in[H]}\) _with_ \(Q_{h,f}=f_{h}\)_;_ 2. _we denote the corresponding NE state-value function_ \(V_{f}=\{V_{h,f}\}_{h\in[H]}\) _with_ \[V_{h,f}(\cdot)=\sup_{\mu_{h}\in\mathbf{M}_{h}}\inf_{\nu_{h}\in\mathbf{N}_{h}} \mathbb{D}_{(\mu_{h},\nu_{h})}Q_{h,f}(\cdot),\] _and we denote the corresponding NE max-player policy by_ \(\mu_{f}=\{\mu_{h,f}\}_{h\in[H]}\) _with_ \[\mu_{h,f}(\cdot)=\operatorname*{argsup}_{\mu_{h}\in\mathbf{M}_{h}}\inf_{\nu_{ h}\in\mathbf{N}_{h}}\mathbb{D}_{(\mu_{h},\nu_{h})}Q_{h,f}(\cdot).\] 3. _given a policy of the max-player_ \(\mu\in\mathbf{M}\)_, we define_ \(V_{f}^{\mu,\dagger}=\{V_{h,f}^{\mu,\dagger}\}_{h\in[H]}\) _as the state-value function induced by_ \(Q_{f}\)_,_ \(\mu\) _and its best response, i.e.,_ \(V_{h,f}^{\mu,\dagger}(\cdot)=\inf_{\nu_{h}\in\mathbf{N}_{h}}\mathbb{D}_{(\mu_{h },\nu_{h})}Q_{h,f}(\cdot)\)_, and we denote the corresponding best response min-player policy as_ \(\nu_{f,\mu}=\{\nu_{h,f,\mu}\}_{h\in[H]}\)_, i.e.,_ \(\nu_{h,f}=\operatorname*{arginf}_{\nu_{h}\in\mathbf{N}_{h}}\mathbb{D}_{(\mu_{h },\nu_{h})}Q_{h,f}(\cdot)\)_._ 4. 
_we denote the NE state-action value function under the true model, i.e.,_ \(Q^{*}\)_, by_ \(f^{*}\)_._ **Example 6.2** (Model-based hypothesis class: two-player zero-sum Markov game).: _For the model-based setting, \(\mathcal{H}\) contains approximators of the transition kernel of the MG, for which we denote \(f=\mathbb{P}_{f}=(\mathbb{P}_{1,f},\cdots,\mathbb{P}_{H,f})\in\mathcal{H}\). For any \((f,\boldsymbol{\pi})\in\mathcal{H}\times\boldsymbol{\Pi}\) with \(\boldsymbol{\pi}=(\mu,\nu)\):_ 1. _we denote_ \(V_{f}^{\mu,\nu}=\{V_{h,f}^{\mu,\nu}\}_{h\in[H]}\) _as the state-value function induced by model_ \(\mathbb{P}_{f}\) _and joint policy_ \((\mu,\nu)\)_._ 2. _we denote_ \(V_{f}=\{V_{h,f}\}_{h\in[H]}\) _as the NE state-value function induced by model_ \(\mathbb{P}_{f}\)_, and we denote the corresponding NE max-player policy as_ \(\mu_{f}=\{\mu_{h,f}\}_{h\in[H]}\)_._ 3. _given a policy of the max-player_ \(\mu\in\mathbf{M}\)_, we define_ \(V_{f}^{\mu,\dagger}=\{V_{h,f}^{\mu,\dagger}\}_{h\in[H]}\) _as the state-value function induced by model_ \(\mathbb{P}_{f}\)_,_ \(\mu\)_, and its best response, i.e.,_ \(V_{h,f}^{\mu,\dagger}(\cdot)=\inf_{\nu\in\mathbf{N}}V_{h,f}^{\mu,\nu}(\cdot)\)_, and we denote the corresponding best response min-player policy as_ \(\nu_{f,\mu}=\{\nu_{h,f,\mu}\}_{h\in[H]}\)_, i.e.,_ \(\nu_{f,\mu}=\operatorname*{arginf}_{\nu\in\mathbf{N}}V_{h,f}^{\mu,\nu}(\cdot)\)_._ 4. _we denote the true model_ \(\mathbb{P}\) _of the two-player zero-sum MG as_ \(f^{*}\)_._ ### Algorithm Framework: Maximize to Explore (MEX-MG) In this section, we extend the _Maximize to Explore_ framework (MEX, Algorithm 1) proposed in Section 3 to the two-player zero-sum MG setting, resulting in \(\mathtt{MEX-MG}\) (Algorithm 2). \(\mathtt{MEX-MG}\) controls the max-player and the min-player in a centralized manner. The min-player is aimed at assisting the max-player to achieve low regret. This kind of _self-play_ algorithm framework has received considerable attention recently in the theoretical study of two-player zero-sum MGs (Jin et al., 2022; Huang et al., 2021; Xiong et al., 2022). We first give a generic algorithm framework and then instantiate it to model-free (Example 6.1) and model-based (Example 6.2) hypotheses respectively. #### 6.3.1 Generic algorithm \(\mathtt{MEX-MG}\) leverages the asymmetric structure between the max-player and the min-player to achieve sample-efficient learning. In specific, it picks two different hypotheses for the two players respectively, so that the max-player is aimed at approximating the NE max-player policy and the min-player is aimed at approximating the best response of the max-player, assisting its regret minimization. Max-player.At each episode \(k\in[K]\), \(\mathtt{MEX-MG}\) first estimates a hypothesis \(f^{k}\in\mathcal{H}\) for the max-player using historical data \(\{\mathcal{D}^{s}\}_{s=1}^{k-1}\) by maximizing objective (6.12). Parallel to \(\mathtt{MEX}\), to achieve the goal of exploiting historical knowledge while encouraging exploration, the composite objective (6.12) sums: **(a)** the negative loss \(-L_{h}^{k-1}(f)\) induced by the hypothesis \(f\); and **(b)** the Nash equilibrium value associated with the current hypothesis, i.e., \(V_{1,f}\). \(\mathtt{MEX-MG}\) balances exploration and exploitation via a tuning parameter \(\eta>0\). With the hypothesis \(f^{k}\), \(\mathtt{MEX-MG}\) sets the max-player's policy \(\mu^{k}\) as the NE max-player policy with respect to \(f^{k}\), i.e., \(\mu_{f^{k}}\). 
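Computing the NE max-player policy \(\mu_{f^{k}}\) requires solving a max-min problem under the hypothesis \(f^{k}\). For a single stage with finitely many actions and a given payoff matrix \(Q_{h,f}(x,\cdot,\cdot)\), this reduces to a zero-sum matrix game, which can be solved by linear programming; the sketch below uses SciPy purely as an implementation convenience and is an illustration of the planning subroutine rather than the paper's implementation.

```python
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(Q):
    """Max-player NE strategy and value of a zero-sum matrix game.

    Q: (|A| x |B|) payoff matrix Q_{h,f}(x, ., .) that the max-player maximizes.
    Returns (mu, value) with value = max_mu min_nu mu^T Q nu.
    """
    n_a, n_b = Q.shape
    # Variables: [mu_1, ..., mu_{|A|}, v]; maximize v  <=>  minimize -v.
    c = np.zeros(n_a + 1)
    c[-1] = -1.0
    # For every pure column b: v - sum_a mu_a Q[a, b] <= 0.
    A_ub = np.hstack([-Q.T, np.ones((n_b, 1))])
    b_ub = np.zeros(n_b)
    # Probability simplex constraint: sum_a mu_a = 1, mu_a >= 0.
    A_eq = np.hstack([np.ones((1, n_a)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0.0, None)] * n_a + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    mu, value = res.x[:n_a], res.x[-1]
    return mu, value

# Example: matching pennies has value 0 and a uniform NE strategy.
mu, value = solve_matrix_game(np.array([[1.0, -1.0], [-1.0, 1.0]]))
print(mu, value)
```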
Min-player.After obtaining the max-player policy \(\mu^{k}\), \(\mathtt{MEX-MG}\) goes to estimate another hypothesis for the min-player in order to approximate the best response of the max-player. In specific, \(\mathtt{MEX-MG}\) estimates \(g^{k}\in\mathcal{H}\) using historical data \(\{\mathcal{D}^{s}\}_{s=1}^{k-1}\) by maximizing objective (6.13), which also sums two objectives: 1 the negative loss \(-L_{h,\mu}^{k-1}(g)\) induced by the hypothesis \(g\). Here the loss function depends on \(\mu^{k}\) since we aim to approximate the best response of \(\mu^{k}\); 2 the negative best response min-player value associated with the current hypothesis \(g\) and \(\mu^{k}\), i.e., \(-V_{1,g}^{\mu^{k},\dagger}\). The negative sign is due to the goal of min-player, i.e., minimization of the total rewards. With \(g^{k}\), \(\mathtt{MEX-MG}\) sets the min-player's policy \(\nu^{k}\) as the best response policy of \(\mu^{k}\) under \(g^{k}\), i.e., \(\nu_{g^{k},\mu^{k}}\). Data collection.Finally, the two agents execute the joint policy \(\mathbf{\pi}^{k}=(\mu^{k},\nu^{k})\) to collect new data \(\mathcal{D}^{k}=\{(x_{h}^{k},a_{h}^{k},b_{h}^{k},r_{h}^{k},x_{h+1}^{k})\}_{h=1 }^{H}\) and update their loss functions \(L(\cdot)\). The choice of the loss functions varies between model-free and model-based hypotheses, which we specify in the following. #### 6.3.2 Model-free algorithm For model-free hypothesis (Example 6.1), the composite objectives (6.12) and (6.13) becomes \[f^{k} =\operatorname*{argsup}_{f\in\mathcal{H}}\left\{\sup_{\mu_{1} \in\mathbf{M}_{1}}\inf_{\nu_{1}\in\mathbf{N}_{1}}\mathbb{D}_{(\mu_{1},\nu_{ 1})}Q_{1,f}(x_{1})-\eta\cdot\sum_{h=1}^{H}L_{h}^{k-1}(f)\right\}, \tag{6.14}\] \[g^{k} =\operatorname*{argsup}_{g\in\mathcal{H}}\left\{-\inf_{\nu_{1} \in\mathbf{N}_{1}}\mathbb{D}_{(\mu_{1}^{k},\nu_{1})}Q_{1,g}(x_{1})-\eta\cdot \sum_{h=1}^{H}L_{h,\mu^{k}}^{k-1}(g)\right\}. \tag{6.15}\] In the model-free algorithm, we choose the loss functions as empirical estimates of squared Bellman residuals. For the max-player who wants to approximate the NE max-player policy, we choose the loss function \(L_{h}^{k}(f)\) as an estimation of the squared NE Bellman residual, given by \[L_{h}^{k}(f)=\sum_{s=1}^{k}\Big{(}Q_{h,f}(x_{h}^{s},a_{h}^{s},b_{h}^ {s})-r_{h}^{s}-V_{h+1,f}(x_{h+1}^{s})\Big{)}^{2}\] \[\qquad\qquad-\inf_{f_{h}^{\prime}\in\mathcal{H}_{h}}\sum_{s=1}^{k }\Big{(}Q_{h,f^{\prime}}(x_{h}^{s},a_{h}^{s},b_{h}^{s})-r_{h}^{s}-V_{h+1,f}(x_{ h+1}^{s})\Big{)}^{2}. \tag{6.16}\] For the min-player who aims at approximating the best response policy of \(\mu^{k}\), we set the loss function \(L_{h,\mu}^{k}(g)\) as an estimation of the squared best-response Bellman residual given max-player policy \(\mu\), \[L_{h,\mu}^{k}(g)=\sum_{s=1}^{k}\Big{(}Q_{h,g}(x_{h}^{s},a_{h}^{s},b_{h}^{s})-r_{h}^{s}-V_{h+1,g}^{\mu,\dagger}(x_{h+1}^{s})\Big{)}^{2}\] \[\qquad\qquad-\inf_{g_{h}^{\prime}\in\mathcal{H}_{h}}\sum_{s=1}^{k }\Big{(}Q_{h,g^{\prime}}(x_{h}^{s},a_{h}^{s},b_{h}^{s})-r_{h}^{s}-V_{h+1,g}^{ \mu,\dagger}(x_{h+1}^{s})\Big{)}^{2}\,. \tag{6.17}\] We remark that the subtracted infimum term in both (6.16) and (6.17) is for handling the variance terms in the estimation to achieve a fast theoretical rate, as we do for \(\mathtt{MEX}\) with model-free hypothesis in Section 3. #### 6.3.3 Model-based algorithm. 
For model-based hypothesis (Example 6.2), the composite objectives (6.12) and (6.13) becomes \[f^{k}=\operatorname*{\arg\!\sup}_{f\in\mathcal{H}}\left\{\sup_{ \mu\in\mathbf{M}}\inf_{\nu\in\mathbf{N}}V_{1,\mathbb{P}_{f}}^{\mu,\nu}(x_{1}) -\eta\cdot\sum_{h=1}^{H}L_{h}^{k-1}(f)\right\}, \tag{6.18}\] \[g^{k}=\operatorname*{\arg\!\sup}_{g\in\mathcal{H}}\left\{-\inf_ {\nu\in\mathbf{N}}V_{1,\mathbb{P}_{g}}^{\mu^{k},\nu}(x_{1})-\eta\cdot\sum_{h= 1}^{H}L_{h,\mu^{k}}^{k-1}(g)\right\}, \tag{6.19}\] which can be understood as a joint optimization over model \(\mathbb{P}_{f}\) and the joint policy policy \(\boldsymbol{\pi}=(\mu,\nu)\). In the model-based algorithm, we choose the loss function \(L_{h}^{k}(f)\) as the negative log-likelihood loss, \[L_{h}^{k}(f)=-\sum_{s=1}^{k}\log\mathbb{P}_{h,f}(x_{h+1}^{s}|x_{h}^{s},a_{h}^{s },b_{h}^{s}). \tag{6.20}\] Meanwhile, we choose the loss function \(L_{h,\mu}^{k}(g)=L_{h}^{k}(g)\), i.e., (6.20), regardless of the max-player policy \(\mu\). But we remark that despite \(L_{h}^{k}=L_{h,\mu}^{k}\), \(f^{k}\) and \(g^{k}\) are still different since the exploitation component in (6.18) and (6.19) are not the same due to the different targets of the max-player and the min-player. ### Regret Analysis for \(\mathtt{MEX}\)-MG Framework In this section, we establish the regret of the \(\mathtt{MEX}\)-MG framework (Algorithm 2). Specifically, we give an upper bound of its regret which holds for both model-free (Example 6.1) and model-based (Example 6.2) settings. We first present several key assumptions needed for the main result. We first assume that the hypothesis class \(\mathcal{H}\) is well-specified, containing certain true hypotheses. **Assumption 6.3** (Realizablity).: _We make the following realizability assumptions for the model-free and model-based hypotheses respectively:_ * _For model-free hypothesis (Example_ 6.1_), we assume that the true Nash equilibrium value_ \(f^{*}\in\mathcal{H}\)_. Moreover, for any_ \(f\in\mathcal{F}\)_, it holds that_ \(Q^{\mu_{f},\dagger}\in\mathcal{H}\)_._ * _For model-based hypothesis (Example_ 6.2_), we assume that the true transition_ \(f^{*}\in\mathcal{H}\)_._ Also, we make the following completeness and boundedness assumption on \(\mathcal{H}\). **Assumption 6.4** (Completeness and Boundedness).: _For model-free hypothesis (Example 6.1), we assume that for any \(f,g\in\mathcal{H}\), it holds that \(\mathcal{T}_{h}^{\mu_{f}}g_{h}\in\mathcal{H}_{h}\), for any timestep \(h\in[H]\). Also, we assume that there exists \(B\geq 1\) such that for any \(f_{h}\in\mathcal{H}_{h}\), it holds that \(f_{h}(x,a,b)\in[0,B]\) for any \((x,a,b,h)\in\mathcal{S}\times\mathcal{A}\times\mathcal{B}\times[H]\)._ Assumptions 6.3 and 6.4 are standard assumptions in studying two-player zero-sum MGs (Jin et al., 2022; Huang et al., 2021; Xiong et al., 2022). Moreover, we make a structural assumption on the underlying MG to ensure sample-efficient online RL. Inspired by the single-agent analysis, we require that the MG has a low **Two-player Generalized****Eluder****Coefficient** (TGEC), which generalizes the GEC defined in Section 4. We provide specific examples of MGs with low TGEC, both model-free and model-based, in Section 6.5. 
To define TGEC, we introduce two discrepancy functions \(\ell\) and \(\ell_{\mu}\), \[\ell_{f^{\prime}}(f;\xi_{h}) :\mathcal{H}\times\mathcal{H}\times(\mathcal{S}\times\mathcal{A} \times\mathbb{R}\times\mathcal{S})\mapsto\mathbb{R},\] \[\ell_{f^{\prime},\mu}(f;\xi_{h}) :\mathcal{H}\times\mathbf{N}\times\mathcal{H}\times(\mathcal{S} \times\mathcal{A}\times\mathbb{R}\times\mathcal{S})\mapsto\mathbb{R},\] which characterizes the error incurred by a hypothesis \(f\in\mathcal{H}\) on data \(\xi_{h}=(x_{h},a_{h},b_{h},r_{h},x_{h+1})\). Intuitively, \(\ell\) aims at characterizing the NE Bellman residual (6.10), while \(\ell_{\mu}\) aims at characterizing the min-player best response Bellman residual given max-player policy \(\mu\) (6.8). Specific choices of \(\ell\) are given in Section 6.5 for concrete model-free and model-based examples. **Assumption 6.5** (Low Two-Player Generalized Eluder Coefficient (TGEC)).: _We assume that given an \(\epsilon>0\), there exists a finite \(d(\epsilon)\in\mathbb{R}_{+}\), such that for any sequence of hypotheses \(\{(f^{k},g^{k})\}_{k\in[K]}\subset\mathcal{H}\) and policies \(\{\boldsymbol{\pi}^{k}=(\mu_{f^{k}},\nu_{g^{k},\mu_{f^{k}}})\}_{k\in[K]} \subset\boldsymbol{\Pi}\), it holds that_ \[\sum_{k=1}^{K}V_{1,f^{k}}(x_{1})-V_{1}^{\boldsymbol{\pi}^{k}}(x_ {1}) \leq\inf_{\zeta>0}\left\{\frac{\zeta}{2}\sum_{h=1}^{H}\sum_{k=1}^{K} \sum_{s=1}^{k-1}\mathbb{E}_{\xi_{h}\sim\boldsymbol{\pi}^{k}}[\ell_{f^{s}}(f^ {k};\xi_{h})]+\frac{d(\epsilon)}{2\zeta}+\sqrt{d(\epsilon)HK}+\epsilon HK\right\},\] _and it also holds that_ \[\sum_{k=1}^{K}V_{1}^{\boldsymbol{\pi}^{k}}(x_{1})-V_{1,g^{k}}^{ \mu^{k},\dagger}(x_{1}) \leq\inf_{\zeta>0}\left\{\frac{\zeta}{2}\sum_{h=1}^{H}\sum_{k=1}^{ K}\sum_{s=1}^{k-1}\mathbb{E}_{\xi_{h}\sim\boldsymbol{\pi}^{k}}[\ell_{g^{s},\mu^{k}} (g^{k};\xi_{h})]+\frac{d(\epsilon)}{2\zeta}+\sqrt{d(\epsilon)HK}+\epsilon HK \right\},\] _where \(\mu_{k}=\mu_{f^{k}}\). We denote the smallest \(d(\epsilon)\in\mathbb{R}_{+}\) satisfying this condition as \(d_{\mathrm{TGEC}}(\epsilon)\)._ Finally, we make a concentration-style assumption on loss functions, parallel to Assumption 4.3 for MDPs. For ease of presentation, we also assume that the hypothesis class \(\mathcal{H}\) is finite. **Assumption 6.6** (Generalization).: _We assume that \(\mathcal{H}\) is finite, i.e., \(|\mathcal{H}|<+\infty\), and that with probability at least \(1-\delta\), for any episode \(k\in[K]\) and hypotheses \(f,g\in\mathcal{H}\), it holds that_ \[\sum_{h=1}^{H}L_{h}^{k-1}(f^{*})-L_{h}^{k-1}(f) \lesssim-\sum_{h=1}^{H}\sum_{s=1}^{k-1}\mathbb{E}_{\xi_{h}\sim \boldsymbol{\pi}^{k}}[\ell_{f^{s}}(f;\xi_{h})]+\mathtt{Poly}(H,B)\cdot\log(HK| \mathcal{H}|/\delta).\] _and it also holds that, with \(\star=Q^{\mu^{k},\dagger}\) for model-free hypothesis and \(\star=f^{*}\) for model-based hypothesis,_ \[\sum_{h=1}^{H}L_{h,\mu^{k}}^{k-1}(\star)-L_{h,\mu^{k}}^{k-1}(g) \lesssim-\sum_{h=1}^{H}\sum_{s=1}^{k-1}\mathbb{E}_{\xi_{h}\sim\boldsymbol{\pi}^ {k}}[\ell_{g^{s},\mu^{k}}(g;\xi_{h})]+\mathtt{Poly}(H,B)\cdot\log(HK|\mathcal{H }|/\delta),\] _Here we use \(\mathtt{Poly}(H,B)\) to denote polynomials of \(H\) (and \(B\) for model-free hypothesis, see Assumption 6.4)._ As we show in Proposition 6.13 and Proposition 6.8, Assumption 6.6 holds for both model-free and model-based settings. With Assumptions 6.3, 6.4 (model-free only), 6.5, and 6.6, we can present our main theoretical result. 
**Theorem 6.7** (Online regret of **Mex-Mg** (Algorithm 2)).: _Under Assumptions 6.3, 6.4 (model-free only), 6.5, and 6.6, by setting_ \[\eta=\sqrt{\frac{d_{\mathrm{TGEC}}(1/\sqrt{HK})}{\log(HK|\mathcal{H}| /\delta)\cdot\mathtt{Poly}(H,B)\cdot K}},\] the regret of Algorithm 2 after \(K\) episodes is upper bounded by_ \[\mathrm{Regret}(K)\lesssim\sqrt{d_{\mathrm{TGEC}}(1/\sqrt{K})\cdot\log(HK|\mathcal{ H}|/\delta)\cdot\mathtt{Poly}(H,B)\cdot K},\] _with probability at least \(1-\delta\). Here \(d_{\mathrm{TGEC}}(\cdot)\) is given by Assumption 6.5._ Proof of Theorem 6.7.: See Appendix A.2 for detailed proof. ### Examples of MEX-MG Framework #### 6.5.1 Model-free Online RL in Two-player Zero-sum Markov Games In this subsection, we specify MEX-MG (Algorithm 2) for model-free hypothesis class (Example 6.1). In specific, we choose the discrepancy functions \(\ell\) and \(\ell_{\mu}\) as, given \(\xi_{h}=(x_{h},a_{h},b_{h},r_{h},x_{h+1})\), \[\ell_{f^{\prime}}(f;\xi_{h}) =\Big{(}Q_{h,f}(x_{h},a_{h},b_{h})-r_{h}-\mathbb{E}_{x_{h+1}\sim \mathbb{P}_{h}(\cdot|x_{h},a_{h},b_{h})}[V_{h+1,f}(x_{h+1})]\Big{)}^{2}, \tag{6.21}\] \[\ell_{f^{\prime},\mu}(g;\xi_{h}) =\Big{(}Q_{h,g}(x_{h},a_{h},b_{h})-r_{h}-\mathbb{E}_{x_{h+1}\sim \mathbb{P}_{h}(\cdot|x_{h},a_{h},b_{h})}[V_{h+1,g}^{\mu,\dagger}(x_{h+1})] \Big{)}^{2}. \tag{6.22}\] By (6.21) and (6.22), both \(\ell_{f^{\prime}}\) and \(\ell_{f^{\prime},\mu}\) do not depend on the input \(f^{\prime}\). In the following, we check and specify Assumptions 6.5 and 6.6 in Section 6.4 for model-free hypothesis class. **Proposition 6.8** (Generalization: model-free RL).: _We assume that \(\mathcal{H}\) is finite, i.e., \(|\mathcal{H}|<+\infty\). Then with probability at least \(1-\delta\), for any \(k\in[K]\) and \(f,g\in\mathcal{H}\), it holds simultaneously that_ \[\sum_{h=1}^{H}L_{h}^{k-1}(f^{*})-L_{h}^{k-1}(f) \lesssim-\sum_{h=1}^{H}\sum_{s=1}^{k-1}\mathbb{E}_{\xi_{h}\sim \overline{\pi}^{k}}[\ell_{f^{*}}(f;\xi_{h})]+HB^{2}\log(HK|\mathcal{H}|/\delta),\] \[\sum_{h=1}^{H}L_{h,\mu^{k}}^{k-1}(Q^{\mu^{k},\dagger})-L_{h,\mu^{ k}}^{k-1}(g) \lesssim-\sum_{h=1}^{H}\sum_{s=1}^{k-1}\mathbb{E}_{\xi_{h}\sim \overline{\pi}^{k}}[\ell_{g^{*},\mu^{k}}(g;\xi_{h})]+HB^{2}\log(HK|\mathcal{H} |/\delta),\] _where \(L\), \(L_{\mu}\), \(\ell\), and \(\ell_{\mu}\) are defined in (6.15), (6.16), (6.21), and (6.22), respectively._ Proof of Proposition 6.8.: See Appendix C.3 for a detailed proof. Proposition 6.8 specifies Assumption 6.6 for abstract model-free hypothesis with \(\mathtt{Poly}(H,B)=HB^{2}\). Now given a two-player zero-sum MG with TGEC \(d_{\mathrm{TGEC}}\), we have the following corollary of Theorem 6.7. **Corollary 6.9** (Online regret of MEX-MG: model-free hypothesis).: _Given a two-player zero-sum MG with two-player generalized eluder coefficient \(d_{\mathrm{TGEC}}(\cdot)\) and a finite model-free hypothesis class \(\mathcal{H}\) satisfying Assumptions 6.3 and 6.4, by setting_ \[\eta=\sqrt{\frac{d_{\mathrm{TGEC}}(1/\sqrt{HK})}{\log(HK|\mathcal{H}|/\delta) \cdot B^{2}HK}}, \tag{6.23}\] _then the regret of Algorithm 2 after \(K\) episodes is upper bounded by_ \[\mathrm{Regret}(T)\lesssim B\cdot\sqrt{d_{\mathrm{TGEC}}(1/\sqrt{HK})\cdot \log(HK|\mathcal{H}|/\delta)\cdot HK}, \tag{6.24}\] _with probability at least \(1-\delta\). Here \(B\) is specified in Assumption 6.4._ Linear two-player zero-sum Markov game.Next, we introduce the linear two-player zero-sum MG (Xie et al., 2020) as a concrete model-free example, for which we can explicitly specify its TGEC. 
Linear MG is a natural extension of linear MDPs (Jin et al., 2020) to the two-player zero-sum MG setting, whose reward and transition kernels are modeled by linear functions. **Definition 6.10** (Linear two-player zero-sum Markov game).: _A \(d\)-dimensional two-player zero-sum linear Markov game satisfies that \(r_{h}(x,a,b)=\phi_{h}(x,a,b)^{\top}\alpha_{h}\) and \(\mathbb{P}_{h}(x^{\prime}\,|\,x,a,b)=\phi_{h}(x,a,b)^{\top}\psi_{h}^{\star}(x^{ \prime})\) for some known feature mapping \(\phi_{h}(x,a,b)\in\mathbb{R}^{d}\) and some unknown vector \(\alpha_{h}\in\mathbb{R}^{d}\) and some unknown function \(\psi_{h}(x^{\prime})\in\mathbb{R}^{d}\) satisfying \(\|\phi_{h}(x,a,b)\|_{2}\leq 1\) and \(\max\{\|\alpha_{h}\|_{2},\|\psi_{h}^{\star}(x^{\prime})\|_{2}\}\leq\sqrt{d}\) for any \((x,a,b,x^{\prime},h)\in\mathcal{S}\times\mathcal{A}\times\mathcal{B}\times \mathcal{S}\times[H]\)._ Linear two-player zero-sum MG covers the tabular two-player zero-sum MG as a special case. For a linear two-player zero-sum MG, we choose the model-free hypothesis class as, for each \(h\in[H]\), \[\mathcal{H}_{h}=\Big{\{}\phi_{h}(\cdot,\cdot,\cdot)^{\top}\theta_{h}:\|\theta _{h}\|_{2}\leq(H+1-h)\sqrt{d}\Big{\}}. \tag{6.25}\] The following proposition gives the TGEC of a linear two-player zero-sum MG with hypothesis class (6.25). **Proposition 6.11** (TGEC of linear two-player zero-sum MG).: _For a linear two-player zero-sum MG, with model-free hypothesis (6.25), it holds that_ \[d_{\mathrm{TGEC}}(1/K)\lesssim d\log(K),\quad\log\mathcal{N}(\mathcal{H},1/K, \|\cdot\|_{\infty})\lesssim dH\log(dK), \tag{6.26}\] _where \(\mathcal{N}(\mathcal{H},1/K,\|\cdot\|_{\infty})\) denotes the \(1/K\)-covering number of \(\mathcal{H}\) under \(\|\cdot\|_{\infty}\)-norm._ Proof of Proposition 6.11.: See Appendix C.1 for a detailed proof. As proved by Huang et al. (2021), a linear two-player zero-sum MG with model-free hypothesis class (6.25) also satisfies the realizability and completeness assumptions (Assumptions 6.3 and 6.4, with \(B=H\)). Thus we can specify Theorem 6.7 for linear two-player zero-sum MGs as follows. **Corollary 6.12** (Online regret of MEX-MG: linear two-player zero-sum MG).: _By setting \(\eta=\widetilde{\Theta}(\sqrt{1/H^{3}K})\), the regret of Algorithm 2 for linear two-player zero-sum MG after \(K\) episodes is upper bounded by_ \[\mathrm{Regret}_{\mathrm{MG}}(K)\lesssim dH^{2}K^{1/2}\log(HKd/\delta),\] _with probability at least \(1-\delta\), where \(d\) is the dimension of the linear two-player zero-sum MG._ Proof of Corollary 6.12.: Using Corollary 6.9, Proposition 6.11, and a covering number argument. #### 6.5.2 Model-based Online RL in Two-player Zero-sum Markov Games In this subsection, we specify Algorithm 2 for model-based hypothesis class \(\mathcal{H}\) (Example 6.2). In specific, we choose the discrepancy function \(\ell\) as the Hellinger distance. Given data \(\xi_{h}=(x_{h},a_{h},b_{h},x_{h+1})\), we let \[\ell_{f^{\prime}}(f;\xi_{h})=\ell_{f^{\prime},\mu}(f;\xi_{h})=D_{\mathrm{H}}( \mathbb{P}_{h,f}(\cdot|x_{h},a_{h},b_{h})\|\mathbb{P}_{h,f^{\star}}(\cdot|x_{h },a_{h},b_{h})), \tag{6.27}\] where \(D_{\mathrm{H}}(\cdot\|\cdot)\) denotes the Hellinger distance. We note that due to (6.27), the discrepancy function \(\ell\) does not depend on the input \(f^{\prime}\in\mathcal{H}\) and the max-player policy \(\mu\). In the following, we check and specify Assumptions 6.5 and 6.6 in Section 6.4 for model-based hypothesis classes. 
**Proposition 6.13** (Generalization: model-based RL).: _We assume that \(\mathcal{H}\) is finite, i.e., \(|\mathcal{H}|<+\infty\). Then with probability at least \(1-\delta\), for any \(k\in[K]\), \(f\in\mathcal{H}\), it holds that_ \[\sum_{h=1}^{H}L_{h}^{k-1}(f^{*})-L_{h}^{k-1}(f)\lesssim-\sum_{h=1}^{H}\sum_{s=1}^{k-1}\mathbb{E}_{\xi_{h}\sim\pi^{k}}[\ell_{f^{*}}(f;\xi_{h})]+H\log(H|\mathcal{H}|/\delta),\] _where \(L\) and \(\ell\) are defined in (6.20) and (6.27) respectively._ Proof of Proposition 6.13.: This proposition follows from the same proof as Proposition 5.3. Since \(L_{h}^{k}=L_{h,\mu}^{k}\) and \(\ell_{f}=\ell_{f,\mu}\), Proposition 6.13 means that Assumption 6.6 holds with \(\mathtt{Poly}(H)=H\). Now given a two-player zero-sum MG with TGEC \(d_{\mathrm{TGEC}}\), we have the following corollary of Theorem 6.7. **Corollary 6.14** (Online regret of \(\mathtt{MEX-MG}\): model-based hypothesis).: _Given a two-player zero-sum MG with two-player generalized eluder coefficient \(d_{\mathrm{TGEC}}(\cdot)\) and a finite model-based hypothesis class \(\mathcal{H}\) with \(f^{*}\in\mathcal{H}\), by setting_ \[\eta=\sqrt{\frac{d_{\mathrm{TGEC}}(1/\sqrt{HK})}{\log(H|\mathcal{H}|/\delta)\cdot HK}}, \tag{6.28}\] _then the regret of Algorithm 2 after \(K\) episodes is upper bounded by_ \[\mathrm{Regret}_{\mathrm{MG}}(K)\lesssim\sqrt{d_{\mathrm{TGEC}}(1/\sqrt{HK})\cdot\log(H|\mathcal{H}|/\delta)\cdot HK}, \tag{6.29}\] _with probability at least \(1-\delta\)._ Linear mixture two-player zero-sum Markov game.Next, we introduce the linear mixture two-player zero-sum MG as a concrete model-based example, for which we can explicitly specify its TGEC. Linear mixture MG is a natural extension of linear mixture MDPs (Ayoub et al., 2020; Modi et al., 2020; Cai et al., 2020) to the two-player zero-sum MG setting, whose transition kernels are modeled by linear kernels. But just as in the single-agent setting, the linear mixture MG and the linear MG (Definition 6.10) do not cover each other as special cases (Cai et al., 2020). **Definition 6.15** (Linear mixture two-player zero-sum Markov game).: _A \(d\)-dimensional two-player zero-sum linear mixture Markov game satisfies that \(\mathbb{P}_{h}(x^{\prime}\,|\,x,a,b)=\phi_{h}(x,a,b,x^{\prime})^{\top}\theta_{h}^{*}\) for some known feature mapping \(\phi_{h}(x,a,b,x^{\prime})\in\mathbb{R}^{d}\) and some unknown vector \(\theta_{h}^{*}\in\mathbb{R}^{d}\) satisfying \(\|\theta_{h}^{*}\|_{2}\leq\sqrt{d}\) for any \((x,a,b,x^{\prime},h)\in\mathcal{S}\times\mathcal{A}\times\mathcal{B}\times\mathcal{S}\times[H]\)._ Linear mixture two-player zero-sum MG also covers the tabular two-player zero-sum MG as a special case. For a linear mixture two-player zero-sum MG, we choose the model-based hypothesis class as, for each \(h\in[H]\), \[\mathcal{H}_{h}=\Big{\{}\phi_{h}(\cdot,\cdot,\cdot,\cdot)^{\top}\theta_{h}:\|\theta_{h}\|_{2}\leq\sqrt{d}\Big{\}}. \tag{6.30}\] The following proposition gives the TGEC of a linear mixture two-player zero-sum MG. **Proposition 6.16** (TGEC of linear mixture two-player zero-sum MG).: _For a linear mixture two-player zero-sum MG, with model-based hypothesis (6.30), it holds that_ \[d_{\mathrm{TGEC}}(1/K)\lesssim d\log(K),\quad\log\mathcal{N}(\mathcal{H},1/K,\|\cdot\|_{\infty})\lesssim dH\log(dK), \tag{6.31}\] _where \(\mathcal{N}(\mathcal{H},1/K,\|\cdot\|_{\infty})\) denotes the \(1/K\)-covering number of \(\mathcal{H}\) under the \(\|\cdot\|_{\infty}\)-norm._ Proof of Proposition 6.16.: See Appendix C.2 for a detailed proof. Then we can specify Theorem 6.7 for linear mixture two-player zero-sum MGs as follows. **Corollary 6.17** (Online regret of \(\mathtt{MEX-MG}\): linear mixture two-player zero-sum MG).: _By setting \(\eta\) as in Corollary 6.14, the regret of Algorithm 2 for linear mixture two-player zero-sum MG after \(K\) episodes is upper bounded by_ \[\mathrm{Regret}_{\mathrm{MG}}(K)\lesssim dHK^{1/2}\log(HKd/\delta), \tag{6.32}\] _with probability at least \(1-\delta\), where \(d\) is the dimension of the linear mixture two-player zero-sum MG._ Proof of Corollary 6.17.: Using Corollary 6.14, Proposition 6.16, and a covering number argument. ## 7 Experiments In this section, we propose practical versions of \(\mathtt{MEX}\) in both model-free and model-based fashion. We aim to answer the following two questions: 1. What are the practical approaches to implementing \(\mathtt{MEX}\) in both model-based and model-free settings via deep RL methods? 2. Can \(\mathtt{MEX}\) handle challenging exploration tasks, especially those that involve sparse reward scenarios? 
### Experiment Setups We evaluate the effectiveness of MEX by assessing its performance in both standard gym locomotion tasks and sparse reward locomotion and navigation tasks within the MuJoCo (Todorov et al., 2012) environment. For sparse reward tasks, we select cheetah-vel, walker-vel, hopper-vel, ant-vel, and ant-goal adapted from Yu et al. (2020), where the agent receives a reward _only_ when it successfully attains the desired velocity or goal. To adapt to deep RL settings, we consider infinite-horizon \(\gamma\)-discounted MDPs and corresponding MEX variants. We report the results averaged over five random seeds. In the sparse-reward tasks, the agent only receives a reward when it achieves the desired velocity or position. Regarding the model-based sparse-reward experiments, we assign a target value of \(1\) to the vel parameter for the walker-vel task and \(1.5\) for the hopper-vel, cheetah-vel, ant-vel tasks. For the model-free sparse-reward experiments, we set the target vel to \(3\) for the hopper-vel, walker-vel, cheetah-vel tasks, and the target goal to \((2,0)\) for ant-goal task. ### Implementation Details Model-free algorithm.For the model-free variant MEX-MF, we observe from (3.2) that adding a maximization bias term to the standard TD error is sufficient for provably efficient exploration. However, this may lead to instabilities as the bias term only involves the state-action value function of the current policy, and thus the policy may be ever-changing. To address this issue, we adopt a similar treatment as in CQL(Kumar et al., 2020) by subtracting a baseline state-action value from random policy \(\mu=\text{Unif}(\mathcal{A})\) and obtain the following objective, \[\min_{\theta}\max_{\pi}\,\mathbb{E}_{\mathcal{D}}\left[\big{(}r+\gamma Q_{ \theta}(x^{\prime},a^{\prime})-Q_{\theta}(x,a)\big{)}^{2}\right]-\eta^{\prime} \cdot\mathbb{E}_{\mathcal{D}}\big{[}\mathbb{E}_{a\sim\pi}Q_{\theta}(x,a)- \mathbb{E}_{a\sim\mu}Q_{\theta}(x,a)\big{]}. \tag{7.1}\] We update \(\theta\) and \(\pi\) in objective (7.1) iteratively in an actor-critic fashion. To stabilize training, we adopt a similar entropy regularization \(\mathcal{H}(\mu)\) over \(\mu\) as in CQL(Kumar et al., 2020). By incorporating such a regularization, we obtain the following soft constrained variant of MEX-MF, i.e. \[\min_{\theta}\max_{\pi}\mathbb{E}_{\beta}\left[\big{(}r+\gamma Q_{\theta}(x^{ \prime},a^{\prime})-Q_{\theta}(x,a)\big{)}^{2}\right]-\eta^{\prime}\cdot \mathbb{E}_{\beta}\bigg{[}\mathbb{E}_{a\sim\pi}Q_{\theta}(x,a)-\log\sum_{a \in\mathcal{A}}\exp\big{(}Q_{\theta}(x,a)\big{)}\bigg{]}.\] Model-based algorithm.For the model-based variant MEX-MB, we use the following objective: \[\max_{\phi}\max_{\pi}\,\mathbb{E}_{(x,a,r,x^{\prime})\sim\mathcal{D}}\left[ \log\mathbb{P}_{\phi}(x^{\prime},r\,|\,x,a)\right]+\eta^{\prime}\cdot\mathbb{E }_{x\sim\sigma}\big{[}V^{\pi}_{\mathbb{P}_{\phi}}(x)\big{]}, \tag{7.2}\] where we denote by \(\sigma(\cdot)\) the initial state distribution, \(\mathcal{D}\) the replay buffer, and \(\eta^{\prime}\) corresponds to \(1/\eta\) in the previous theory sections. We leverage the _score function_ to obtain the model value gradient \(\nabla_{\phi}V^{\pi}_{\mathbb{P}_{\phi}}\) in a similar way to likelihood ratio policy gradient (Sutton et al., 1999), with the gradient of action log-likelihood replaced by the gradient of state and reward log-likelihood in the model. 
Specifically, \[\nabla_{\phi}\mathbb{E}_{x\sim\sigma}\big{[}V^{\pi}_{\mathbb{P}_{\phi}}(x)\big{]}=\mathbb{E}_{\tau^{\pi}_{\phi}}\Big{[}\big{(}r+\gamma V^{\pi}_{\mathbb{P}_{\phi}}(x^{\prime})-Q^{\pi}_{\mathbb{P}_{\phi}}(x,a)\big{)}\cdot\nabla_{\phi}\log\mathbb{P}_{\phi}(x^{\prime},r\,|\,x,a)\Big{]}, \tag{7.3}\] where \(\tau^{\pi}_{\phi}\) is the trajectory under policy \(\pi\) and transition \(\mathbb{P}_{\phi}\), starting from \(\sigma\). We refer the readers to previous works (Rigter et al., 2022; Wu et al., 2022) for a derivation of (7.3). The model \(\phi\) and policy \(\pi\) in (7.2) are updated iteratively in a Dyna (Sutton, 1990) style, where model-free policy updates are performed on model-generated data. Particularly, we adopt SAC (Haarnoja et al., 2018) to update the policy \(\pi\) and estimate the value \(Q^{\pi}_{\mathbb{P}_{\phi}}\) using the model data generated by the model \(\mathbb{P}_{\phi}\). We also follow Rigter et al. (2022) to update the model using mini-batches from \(\mathcal{D}\) and normalize the advantage \(r+\gamma V^{\pi}_{\mathbb{P}_{\phi}}-Q^{\pi}_{\mathbb{P}_{\phi}}\) within each mini-batch. We refer the readers to Appendix E.2 for more implementation details of MEX-MB. ### Experimental Results We report the performance of MEX-MF and MEX-MB in Figures 1 and 2, respectively. Results for MEX-MF.We compare MEX-MF with the model-free baseline TD3 (Fujimoto et al., 2018). We observe that TD3 fails in many sparse reward tasks, while MEX-MF can significantly boost the performance. In standard MuJoCo gym tasks, MEX-MF also steadily outperforms TD3 with faster convergence and higher final returns. Results for MEX-MB.We compare MEX-MB with MBPO (Janner et al., 2019), where our method differs from MBPO _only_ in the inclusion of the value gradient in (7.3) during model updates. We find that MEX-MB offers an easy implementation with minimal computational overhead and yet remains highly effective across sparse and standard MuJoCo tasks. Notably, in the sparse reward settings, MEX-MB excels at achieving the goal velocity and outperforms MBPO by a stable margin. In standard gym tasks, MEX-MB showcases greater sample efficiency in challenging high-dimensional tasks with higher asymptotic returns. ## 8 Conclusions In this paper, we propose a novel online RL algorithm framework _Maximize to Explore_ (MEX), aimed at striking a balance between exploration and exploitation in online learning scenarios. MEX is provably sample-efficient with general function approximations and is easy to implement. Theoretically, we prove that under mild structural assumptions (low generalized eluder coefficient (GEC)), MEX achieves \(\widetilde{\mathcal{O}}(\sqrt{K})\)-online regret for Markov decision processes. We further extend the definition of GEC and the MEX framework to two-player zero-sum Markov games and also prove the \(\widetilde{\mathcal{O}}(\sqrt{K})\)-online regret. In practice, we adapt MEX to deep RL methods in both model-based and model-free styles and apply them to sparse-reward MuJoCo environments, outperforming baselines significantly. We hope that our work can shed light on future research of designing both statistically efficient and practically effective RL algorithms with powerful function approximations. Figure 1: Model-free MEX-MF in sparse and standard MuJoCo locomotion tasks.
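To connect the soft-constrained MEX-MF critic objective of Section 7.2 (the display following (7.1)) to code, the following is one possible PyTorch rendering for a discrete action space, where the log-sum-exp over actions is exact; for the continuous-control MuJoCo tasks considered above, this term would have to be approximated with sampled actions as in CQL. The network interfaces, batch layout, and the `eta_prime` coefficient are assumptions of the sketch, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def mex_mf_critic_loss(q_net, q_target, policy, batch, gamma=0.99, eta_prime=1.0):
    """Soft-constrained MEX-MF critic loss (discrete-action sketch).

    q_net, q_target: map states (B, s_dim) -> Q-values (B, n_actions)
    policy:          map states -> action probabilities (B, n_actions)
    batch:           dict with float tensors 'obs', 'act', 'rew', 'next_obs', 'done'
    """
    q_all = q_net(batch["obs"])                                   # (B, n_actions)
    q_sa = q_all.gather(1, batch["act"].long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_q = q_target(batch["next_obs"])
        next_pi = policy(batch["next_obs"])
        next_v = (next_pi * next_q).sum(dim=1)                    # E_{a'~pi} Q(x', a')
        td_target = batch["rew"] + gamma * (1.0 - batch["done"]) * next_v
    td_loss = F.mse_loss(q_sa, td_target)

    # Maximization-bias term: push up Q under the current policy relative to
    # the log-sum-exp baseline, mirroring the display that follows Eq. (7.1).
    pi = policy(batch["obs"])
    policy_q = (pi * q_all).sum(dim=1)
    baseline = torch.logsumexp(q_all, dim=1)
    bias_term = (policy_q - baseline).mean()

    return td_loss - eta_prime * bias_term
```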
2307.01170
Online nearest neighbor classification
We study an instance of online non-parametric classification in the realizable setting. In particular, we consider the classical 1-nearest neighbor algorithm, and show that it achieves sublinear regret - that is, a vanishing mistake rate - against dominated or smoothed adversaries in the realizable setting.
Sanjoy Dasgupta, Geelon So
2023-07-03T17:29:58Z
http://arxiv.org/abs/2307.01170v1
# Online nearest neighbor classification ###### Abstract We study an instance of online non-parametric classification in the realizable setting. In particular, we consider the classical \(1\)-_nearest neighbor_ algorithm, and show that it achieves sublinear regret--that is, a vanishing mistake rate--against dominated or smoothed adversaries in the realizable setting. ## 1 Introduction In _online classification_, a learner observes a stream of data points \(x_{t}\) from an instance space \(\mathcal{X}\), and it is tasked with sequentially making predictions \(\hat{y}_{t}\) about their classes \(y_{t}\) coming from some label space \(\mathcal{Y}\). At each point in time \(t=1,2,\ldots\) * the learner is presented with an instance \(x_{t}\in\mathcal{X}\) * the learner makes a prediction \(\hat{y}_{t}\in\mathcal{Y}\) * the label \(y_{t}\) is revealed, and the learner incurs some loss \(\ell(x_{t},y_{t},\hat{y}_{t})\), where \(\ell(x,y,\hat{y})\) is a non-negative, bounded loss function satisfying \(\ell(x,y,y)=0\) (there is no penalty for a correct prediction). The learner's performance is given by its _regret_ at any time \(T\), defined as the difference between the learner's cumulative loss and that of the best fixed classifier \(h:\mathcal{X}\rightarrow\mathcal{Y}\) that the learner would have chosen in hindsight from some comparator class \(\mathcal{H}\), \[\text{regret}_{T}:=\sum_{t=1}^{T}\ell\big{(}x_{t},y_{t},\hat{y}_{t}\big{)}- \inf_{h\in\mathcal{H}}\,\sum_{t=1}^{T}\ell\big{(}x_{t},y_{t},h(x_{t})\big{)}.\] Learning in the online setting means achieving sublinear regret, \(\text{regret}_{T}=o(T)\), for then the average loss of the online learner is asymptotically no worse than the average loss of the offline learner who had access to the data \((x_{1},y_{1}),\ldots,(x_{T},y_{T})\) all at once. While in the worst-case setting, this sequence of instances and labels may be completely arbitrary, we consider the more restrictive _realizable setting_, in which a concept \(c:\mathcal{X}\rightarrow\mathcal{Y}\) is fixed at the onset (though it may be chosen adversarially) and describes the labels \(y_{t}=c(x_{t})\) for all time. In this paper, we further let \((\mathcal{X},\rho)\) be a metric space, and we consider online classification through the \(1\)-_nearest neighbor_ rule. This algorithm, first introduced by Fix and Hodges (1951), is a particularly appealing learning algorithm due to its simplicity: this learner memorizes everything it sees. Then, given some instance \(x\), it searches for the nearest neighbor among previously seen instances \(x_{1},\ldots,x_{t}\), returning the corresponding label as the prediction \(\hat{y}\). We ask: **Question** What are general conditions under which the \(1\)-_nearest neighbor_ rule achieves sublinear regret in the realizable _smoothed online classification_ setting? In our setting, when \(\mathcal{H}\) is the family of all nearest-neighbor classifiers, the best hindsight classifier in \(\mathcal{H}\) makes no mistakes, and so the regret consists only of the cumulative loss term; we simply aim to understand when the average loss of the nearest neighbor rule converges to zero: \[\text{average loss}_{T}:=\frac{1}{T}\sum_{t=1}^{T}\ell(x_{t},y_{t},\hat{y}_{t}) \to 0. \tag{1}\] ### A negative result: the worst-case adversary When the comparator class \(\mathcal{H}\) can interpolate the sequence of data, learning in the worst-case setting is generally intractable--even in the realizable setting. 
Unless the learner exactly recovers the underlying concept, a worst-case adversary (or indeed, a best-case teacher) can at each time step find test instances on which the learner errs; the average loss fails to converge to zero. **Example 1** (Failing to learn the sign function).: _Consider the sign function \(\operatorname{sign}(x):=\mathbb{1}_{x\geq 0}\) on \(\mathcal{X}=\mathbb{R}\). The nearest neighbor makes a mistake every round on the sequence of instances:_ \[x_{t}=\big{(}-1/3\big{)}^{t}.\] _At time \(t+1\), the nearest neighbor for \(x_{t+1}\) is \(x_{t}\), which has the opposite sign (see Figure 1)._ The above negative example relies on the worst-case adversary's ability to select instances with arbitrary precision in order to construct a hard sequence. For _nearest neighbor_, the hardness of a point can be related to its separation from points of different classes--constructing a hard sequence like the one above is possible precisely whenever the classes are not separated: **Proposition 2** (Non-convergence in the worst-case).: _Let \((\mathcal{X},\rho)\) be a totally bounded metric space and \(c\) be a concept. Let \(\ell\) be the zero-one loss \(\ell(x,y,\hat{y})=\mathbb{1}\left\{y\neq\hat{y}\right\}\). There is a sequence of instances \((x_{t})_{t}\) on which the nearest neighbor rule fails to achieve sublinear regret on \(c\) if and only if there is no positive separation between classes:_ \[\inf_{c(x)\neq c(x^{\prime})}\,\rho(x,x^{\prime})=0.\] This makes sense, since the inductive bias built into the nearest neighbor rule is that most points are surrounded by other points of the same class (though one might have to zoom in very close to a point before the labels of its surrounding neighbors become pure). Boundary points are not amenable to the nearest neighbor rule since their labels can't be learned from neighbors, nor do their labels consistently generalize to nearby points. Intuitively, the nearest neighbor learner fares poorly if faced with an adversary that can take advantage of boundary points by selecting instances with arbitrary precision. However, it may be able to perform well if its adversary doesn't have unbounded power to find these hard points near the boundary. In this paper, we make this intuition precise through the smoothed analysis of nearest neighbors. ### Smoothed analysis of online learning While the nearest neighbor algorithm does not perform well in all worlds, we might reasonably expect to not live in the worst-case world. In that case, the worst-case analysis of nearest neighbor does not necessarily help elucidate the behavior of the algorithm in practice. This motivates the _smoothed analysis_ of online learning algorithms, in which the adversary does not directly select instances, but rather distributions \(\mu_{t}\) from which the instances \(x_{t}\) are then drawn. If the distributions are fixed for all time, we recover the i.i.d. setting. If they may be point masses, we recover the worst-case setting. But somewhere in between, the smoothed online setting might also capture more tractable and realistic learning settings, and has been previously studied by Rakhlin et al. (2011); Haghtalab et al. (2020, 2022); Block et al. (2022). 
The following interaction protocol formalizes the _smoothed online classification_ setting: **Interaction Protocol** (_smoothed online classification_): the learner selects a prediction strategy \(\mathcal{A}\); the adversary selects a ground-truth concept \(c\) with knowledge of \(\mathcal{A}\); then, **for** \(t=1,2,\ldots\): * the adversary selects a data distribution \(\mu_{t}\) on \(\mathcal{X}\) and draws a sample \(x_{t}\sim\mu_{t}\) * the learner makes a prediction \(\hat{y}_{t}\) given \(x_{t}\) according to \(\mathcal{A}\) * the learner incurs loss \(\ell(x_{t},y_{t},\hat{y}_{t})\), where \(y_{t}=c(x_{t})\). By the end of each round, both the adversary and the learner see the full triple \((x_{t},y_{t},\hat{y}_{t})\). The distribution \(\mu_{t}\) remains hidden from the learner. One common smoothed setting is the Gaussian perturbation model (Spielman and Teng, 2009), where the adversary selects \(\mu_{t}\) in the form of a Gaussian \(\mathcal{N}(\tilde{x}_{t},\sigma^{2}I)\). Another natural setting is the \(\sigma\)-smoothed adversary model (Haghtalab et al., 2020), where there is some base distribution \(\nu\) on the instance space \(\mathcal{X}\), and the adversary is constrained to not boost the probability mass of any region \(A\subset\mathcal{X}\) by more than a multiplicative factor \(\sigma^{-1}\), so \(\mu_{t}(A)\leq\sigma^{-1}\cdot\nu(A)\). We distill the key property of these smoothed adversaries through the notion of a _dominated adversary_ on a measure space \((\mathcal{X},\nu)\). A dominated adversary is simply one that cannot place constant probability mass \(\mu(A)\) on regions \(A\subset\mathcal{X}\) of arbitrarily small \(\nu\)-mass. We define: **Definition 3** (Dominated adversary).: _Let \((\mathcal{X},\nu)\) be a measure space. The measure \(\nu\) uniformly dominates a family \(\mathcal{M}\) of probability distributions on \(\mathcal{X}\) if for all \(\varepsilon>0\) there exists \(\delta>0\) such that:_ \[\nu(A)<\delta\quad\Longrightarrow\quad\mu(A)<\varepsilon,\] _for all \(A\subset\mathcal{X}\) measurable and distribution \(\mu\in\mathcal{M}\). A smoothed online classification adversary is \(\nu\)-dominated if at all times \(t\) it selects \(\mu_{t}\) from a family of distributions uniformly dominated by \(\nu\)._ To see why this is helpful, let's say that \(A_{t}\subset\mathcal{X}\) is the set of points on which the learner makes mistakes at time \(t\). For a learner's error rate to converge to zero against a dominated adversary, it suffices to prove that the sequence \(\nu(A_{t})\) converges to zero: the probability that the dominated adversary induces a mistake, \(\mu_{t}(A_{t})\), must also converge to zero, since the \(\mu_{t}\)'s are uniformly dominated by \(\nu\). Convergence of the average loss then follows from the law of large numbers for martingales. Of course, this captures only a narrow set of scenarios where learning succeeds--in general, the convergence of the mistake region to a null set is much stronger than the convergence of the mistake rate to zero. For example, if the adversary never tests on some region of the space, the average loss could still converge to zero even though the size of the mistake region might not. Instead, we shall argue that under mild boundary conditions, all but finitely many mistakes that a nearest neighbor learner makes must come from a very small set of 'hard points' (small with respect to \(\nu\)). But as the adversary is \(\nu\)-dominated, those instances can come only very infrequently.
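To make the protocol and Definition 3 concrete, here is a small self-contained simulation (an editorial sketch, not code from the paper): a \(1\)-nearest-neighbor learner on \(\mathcal{X}=[-1,1]\) learning \(c(x)=\mathbb{1}_{x\geq 0}\), run against the worst-case sequence of Example 1 and against a \(\nu\)-dominated adversary built, for illustration, as a mixture of a boundary-concentrated proposal with the uniform base measure. All numerical choices are assumptions made for the demonstration.

```python
# Illustrative sketch (not from the paper): the 1-nearest-neighbor rule on X = [-1, 1]
# learning c(x) = 1{x >= 0}, against (a) the worst-case sequence of Example 1 and
# (b) a nu-dominated adversary realized as a mixture of a boundary-concentrated
# proposal with the uniform base measure.
import numpy as np


def nearest_neighbor_run(instances, concept):
    """Run the 1-NN learner on a sequence of instances; return the number of mistakes."""
    memory_x, memory_y, mistakes = [], [], 0
    for x in instances:
        y = concept(x)
        if memory_x:
            pred = memory_y[int(np.argmin(np.abs(np.asarray(memory_x) - x)))]
        else:
            pred = 0  # arbitrary default before any data is seen
        mistakes += int(pred != y)
        memory_x.append(x)
        memory_y.append(y)
    return mistakes


concept = lambda x: int(x >= 0)
T = 2000
rng = np.random.default_rng(0)

# (a) Worst-case adversary: x_t = (-1/3)^t alternates sides of the decision boundary.
worst_case = [(-1.0 / 3.0) ** t for t in range(1, T + 1)]

# (b) Dominated adversary: with probability 0.1 draw from the base measure Unif[-1, 1],
# otherwise from a proposal concentrated near the boundary.  The mixture boosts the mass
# of any region by at most a fixed factor, so it is uniformly dominated by the base measure.
dominated = [rng.uniform(-1, 1) if rng.random() < 0.1 else rng.uniform(-1e-3, 1e-3)
             for _ in range(T)]

print("worst-case mistake rate:", nearest_neighbor_run(worst_case, concept) / T)  # ~1
print("dominated mistake rate: ", nearest_neighbor_run(dominated, concept) / T)   # small, decaying in T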
### Main results Let \((\mathcal{X},\rho,\nu)\) be a metric measure space. Assume that \(\rho\) is a separable metric and \(\nu\) is a finite Borel measure. We prove that under mild boundary conditions, the _nearest neighbor_ rule achieves sublinear regret in the _smoothed online classification_ setting against \(\nu\)-dominated adversaries. To state the boundary condition, let's formalize the notion of boundary points. Given a concept \(c:\mathcal{X}\to\mathcal{Y}\), define the margin \(m_{c}(x)\) of a point \(x\in\mathcal{X}\) as its distance to points of different classes: \[m_{c}(x):=\inf_{c(x)\neq c(x^{\prime})}\,\rho(x,x^{\prime}).\] We say that \(x\) is a _boundary point_ of \(c\) if \(m_{c}(x)=0\), which is to say that it is arbitrarily close to points of other classes. Denote the set of boundary points by \(\partial\mathcal{X}\). The condition we require is this: **Assumption A** (Boundary condition).: _The set of boundary points \(\partial\mathcal{X}\) is essentially countable. That is, it is the union of a countable set and a \(\nu\)-measure zero set._ This boundary condition is the same condition required by Cover and Hart (1967) to prove the consistency of \(1\)-_nearest neighbor_ in the i.i.d. setting. We can now state our main result: **Theorem 4** (Convergence of nearest neighbor).: _Let \((\mathcal{X},\rho,\nu)\) be a metric measure space, where \(\rho\) is a separable metric and \(\nu\) is a finite Borel measure. Let \(c:\mathcal{X}\to\mathcal{Y}\) satisfy Assumption A. Then, the nearest neighbor rule achieves sublinear regret when learning \(c\) against a \(\nu\)-dominated adversary. In particular, the average loss converges to zero:_ \[\lim_{T\to\infty}\frac{1}{T}\sum_{t=1}^{T}\ell(x_{t},y_{t},\hat{y}_{t})=0\quad\mathrm{a.s.}\] To show this, we prove a general condition in Section 3 under which online learning is possible against a dominated adversary. Section 4 shows that _nearest neighbor_ satisfies this condition. We also derive rates of convergence in Section 5. Here is a simple instantiation of more general rates: **Theorem 5** (Rate of convergence for nearest neighbor).: _Let \(\mathcal{X}\subset\mathbb{R}^{d}\) be the unit ball. Assume \(d>1\). Let the set of boundary points of \(c:\mathcal{X}\to\mathcal{Y}\) have finite Minkowski content with respect to the Lebesgue measure and let the adversary be \(\sigma\)-smoothed. Let \(p>0\). With probability at least \(1-p\), the nearest neighbor rule satisfies the error rate bound simultaneously for all time \(T\):_ \[\frac{1}{T}\sum_{t=1}^{T}\ell(x_{t},y_{t},\hat{y}_{t})\leq\left(\frac{T}{\sigma}\right)^{(-1+o(1))/(d+1)}.\]
Figure 1: Learning the sign function \(\mathbb{1}_{\{x\geq 0\}}\) on \(\mathbb{R}\). The nearest neighbor classifier makes a mistake every single round on the sequence \(x_{t}=(-1/3)^{t}\), where each subsequent test point alternates sign.
### Related works The \(1\)-nearest neighbor rule (Fix and Hodges, 1951) was shown by Cover and Hart (1967) to be consistent when the instances come i.i.d. under Assumption A. On the other hand, in the online learning setting where the sequence of instances can be arbitrary (Littlestone, 1988; Cesa-Bianchi and Lugosi, 2006), there is no learning algorithm that can achieve sublinear regret in the worst case, even in the case of learning a threshold function. However, worst-case analyses of algorithms can fail to explain the observed behavior of algorithms, especially if hard instances are extremely rare in practice (Spielman and Teng, 2009; Roughgarden, 2021).
This motivates the smoothed analysis of algorithms, first introduced by Spielman and Teng (2004). The setting of smoothed online learning was first studied by Rakhlin et al. (2011), and has recently been followed up by a series of work (Haghtalab et al., 2020, 2022; Block et al., 2022, and references therein). Our work fills in the gap between the i.i.d. and worst-case analysis of nearest neighbor, while also giving the first convergence result in smoothed non-parametric online learning. Appendix A further expands on related works. ## 2 Preliminaries In _smoothed online classification_, the learner incrementally updates its prediction rule as it receives more data. It does so according to a prediction strategy \(\mathcal{A}\), which constructs each subsequent hypothesis \(h_{t+1}:\mathcal{X}\to\mathcal{Y}\) based on previously seen data: \[\mathcal{A}:\big{\{}(x_{\tau},y_{\tau})\big{\}}_{\tau=1}^{t}\mapsto h_{t+1}.\] Suppose that \(c\) is the underlying concept to be learned. Then, every hypothesis \(h:\mathcal{X}\to\mathcal{Y}\) induces an error function \(\mathcal{E}:\mathcal{X}\to\mathbb{R}\), which is the loss that \(h\) achieves at any particular instance \(x\), \[\mathcal{E}(x):=\ell\big{(}x,c(x),h(x)\big{)}\] When the prediction strategy \(\mathcal{A}\) and concept \(c\) are clear from context, it shall be fruitful to let \(\mathcal{E}_{t}\) be the associated error function to \(h_{t}\) generated by \(\mathcal{A}\). Rewriting Equation (1), we say that the strategy \(\mathcal{A}\) learns if it achieves a vanishing error rate: \[\text{average loss}_{T}=\frac{1}{T}\sum_{t=1}^{T}\mathcal{E}_{t}(x_{t})\to 0.\] ### Online local consistency We introduce the _online local consistency_ (OLC) condition for learning against dominated adversaries. This is a condition that depends on both the learning algorithm and the concept to be learned. For intuition, let \(\mathcal{X}\) be composed of (countably many) known clusters, and suppose that we are guaranteed that points in the same cluster have the same label. A natural learning algorithm is to remember a single label from each cluster, and to return that label if a point from the same cluster is queried. In this setting, the learner makes at most one mistake per cluster. If \(\nu\) is a finite measure over \(\mathcal{X}\), then over time, a \(\nu\)-dominated adversary will find it increasingly harder to pick points from previously unseen clusters; the mistake rate will eventually converge to zero. We generalize these easily-learned clusters through the notion of _locally-learned sets_ for a learner. In the following, if \(U\subset\mathcal{X}\) is a locally-learned set, we can think of the online learning problem restricted to \(U\) as easy for the learner: no matter what sequence of points an adversary chooses, the learner will eventually incur arbitrarily small loss from \(U\). **Definition 6** (Locally-learned set).: _Let \(c:\mathcal{X}\to\mathcal{Y}\) be a concept. We say that \(c\) is locally learned on a subset \(U\subset\mathcal{X}\) by the prediction strategy \(\mathcal{A}\) when, for any sequence of instances \((x_{t})_{t}\), either:_ 1. \(x_{t}\) _falls into_ \(U\) _finitely often, or_ 2. _the error function_ \(\mathcal{E}_{t}\big{|}_{U}\to 0\) _restricted to_ \(U\) _uniformly converges to zero._ _In this case, we say that \(U\) is a locally-learned set for \(c\)._ For example, singleton sets are locally-learned by _consistent learners_, which are learners that exactly interpolate past data. 
But in general, if \(\mathcal{X}\) is uncountable, this family of locally-learned sets is too granular to work with, as the family also becomes uncountably large. The OLC condition ensures that there is a way to cut up the problem into a countable collection of 'easy' problems. **Definition 7** (Online local consistency).: _A prediction strategy \(\mathcal{A}\) is online locally consistent (OLC) for a concept \(c\) if there exists a countable collection \(\mathcal{U}_{c}:=\{U_{n}\}_{n}\) of locally learned sets for \(c\) that covers all but a \(\nu\)-negligible subset of \(\mathcal{X}\)._ The argument for why an OLC learner can perform well against a dominated adversary is not unlike the earlier example of learning labels for pure clusters. We can restrict the learning problem to a finite collection of locally-learned sets that covers all but a small part of \(\mathcal{X}\). Because the part of \(\mathcal{X}\) we covered consists only of finitely many easy learning problems, the learner's error rate will eventually converge to zero here. The uncovered portion of \(\mathcal{X}\) can be made sufficiently small so that its contribution to the error rate is made arbitrarily small--the adversary cannot test the learner with instances from this region very frequently because it is \(\nu\)-dominated. ### Mutually-labeling sets For the analysis of nearest neighbor, we introduce the notion of a _mutually-labeling set_. It is a set defined so that, upon receiving a label for any point within the set, the nearest neighbor learner will never make a subsequent mistake on any other point in that set (see Figure 2). **Definition 8** (Mutually-labeling set).: _A set \(U\subset\mathcal{X}\) is a mutually-labeling set for a concept \(c\) if:_ \[\rho(x,x^{\prime})<m_{c}(x),\qquad\forall x,x^{\prime}\in U.\] Naturally, mutually-labeling sets are locally learned (Lemma 11). The proof of convergence for OLC learners using locally-labeling sets generalizes the following proof sketch for _nearest neighbor_: Proof Sketch of Theorem 4.: For simplicity, let's assume a stronger boundary condition: the set of boundary points of \(c\) has \(\nu\)-measure zero. It turns out that if \(x\) is not a boundary point, then sufficiently small open balls centered at \(x\) are mutually-labeling sets (see Lemma 12). Thus, \(\mathcal{X}\) is covered almost everywhere by open mutually-labeling sets. By separability of \(\rho\) and finiteness of \(\nu\), all but an arbitrarily small region of \(\mathcal{X}\) can be covered by a finite number of such sets. Because the _nearest neighbor_ learner makes at most one mistake on each mutually-labeling set, eventually all mistakes must come from the uncovered hard region. The average rate at which a \(\nu\)-dominated adversary can test the learner with these hard instances can almost surely be bounded above by any \(\varepsilon>0\), by selecting a sufficiently small hard region for our analysis. Thus, the average loss converges to zero almost surely, by the law of large numbers for martingales.
Figure 2: The instance space \(\mathcal{X}\) is divided into two classes, the solid region in the lower left and the dotted region in the upper right. The orange ball is an example of a mutually-labeling set. Suppose _nearest neighbor_ previously received the label for \(x^{\prime}\). Then, it shall always classify \(x\) correctly in the future; \(x\) can never have a nearer neighbor of a different class.
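The margin and the mutually-labeling condition are easy to probe numerically on a finite sample. The sketch below is an editorial illustration (not from the paper): the concept, the sample, and the slightly conservative radius \(m_{c}(x)/4\) (instead of the \(m_{c}(x)/3\) of Lemma 12, to absorb finite-sample error in the margin estimate) are all assumptions made for the demonstration.

```python
# Illustrative sketch: estimate margins m_c(x) on a finite sample and check the
# mutually-labeling condition of Definition 8 for a small ball around a sample point.
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(4000, 2))              # finite stand-in for X = [-1, 1]^2
labels = (pts[:, 0] + pts[:, 1] >= 0).astype(int)     # concept c with a linear boundary


def margin(i):
    """Empirical margin: distance from pts[i] to the nearest sample point of another class."""
    other = pts[labels != labels[i]]
    return float(np.min(np.linalg.norm(other - pts[i], axis=1)))


def mutually_labeling(indices):
    """Check rho(x, x') < m_c(x) for all pairs x, x' drawn from the given sample indices."""
    idx = np.array(indices)
    for i in indices:
        m_i = margin(i)
        dists = np.linalg.norm(pts[idx] - pts[i], axis=1)
        if np.any((dists >= m_i) & (idx != i)):
            return False
    return True


x0 = 100                                               # an arbitrary sample point
r = margin(x0) / 4.0                                   # conservative version of Lemma 12's m/3
ball = np.where(np.linalg.norm(pts - pts[x0], axis=1) < r)[0]
print("empirical margin m_c(x0):", round(margin(x0), 4))
print("ball around x0 is mutually labeling on the sample:", mutually_labeling(list(ball)))
```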
## 3 Convergence of OLC learners **Theorem 9** (Convergence of error rate).: _Given a smoothed online classification problem on the measure space \((\mathcal{X},\nu)\) where \(\nu\) is a finite measure. Suppose the learner is online locally consistent with respect to \(c\) and that the adversary is \(\nu\)-dominated. Then, the learner's error rate converges:_ \[\lim_{T\to\infty}\,\frac{1}{T}\sum_{t=1}^{T}\ell(x_{t},y_{t},\hat{y}_{t})=0\quad\text{a.s.}\] Before commencing the proof, recall that \(\mathcal{E}_{t}(x_{t})\) is the error incurred by the learner at time \(t\). Given an error function \(\mathcal{E}\) and a test distribution \(\mu\), let's also define the notation \(\mathcal{E}(\mu)\) to be the expected error, \[\mathcal{E}(\mu):=\operatorname*{\mathbb{E}}_{x\sim\mu}[\mathcal{E}(x)].\] If \(A\subset\mathcal{X}\) is measurable, let \(\mathcal{E}\mathbbm{1}_{A}\) denote the pointwise product of \(\mathcal{E}\) and the indicator on \(A\). Proof of Theorem 9.: We show that for any \(\varepsilon>0\), the following error rate bound holds: \[\lim_{T\to\infty}\,\frac{1}{T}\sum_{t=1}^{T}\mathcal{E}_{t}(x_{t})<2\varepsilon\quad\text{a.s.} \tag{2}\] If so, then this statement holds simultaneously for any countable sequence of \(\varepsilon\) converging to zero, implying that the error rate converges to zero almost surely. To prove Equation (2), fix \(\varepsilon>0\). Because the loss function is bounded above, say by \(C>0\), we have for any error function \(\mathcal{E}\) and any measurable \(A\subset\mathcal{X}\), \[(\mathcal{E}\mathbbm{1}_{A})(\mu)\leq(C\mathbbm{1}_{A})(\mu)=C\cdot\mu(A).\] The right-hand side can be bounded in terms of \(\nu(A)\) whenever \(\mu\) is chosen by a \(\nu\)-dominated adversary. In particular, we may select \(\delta>0\) such that: \[\nu(A)<\delta\quad\Longrightarrow\quad(\mathcal{E}\mathbbm{1}_{A})(\mu)<\varepsilon. \tag{3}\] Let us do so: any region whose \(\nu\)-mass is less than \(\delta\) contributes no more than \(\varepsilon\) to the error rate. We claim that there exists a subset \(V\subset\mathcal{X}\) with the properties that (a) there exists a random time \(T_{\varepsilon}\) such that the learner incurs less than \(\varepsilon\) error for any further instance \(x_{t}\) that lands in \(V\), \[\mathcal{E}_{t}(x_{t})<\varepsilon,\qquad\forall t>T_{\varepsilon}\text{ and }x_{t}\in V,\] and that (b) \(V\) covers all but a \(\delta\)-mass of \(\mathcal{X}\), so that \(\nu(V^{c})<\delta\). Assume this for now--we decompose \(\mathcal{E}_{t}\) into its pieces on \(V\) and \(V^{c}\), with \(\mathcal{E}_{t}=\mathcal{E}_{t}\mathbbm{1}_{V}+\mathcal{E}_{t}\mathbbm{1}_{V^{c}}\). We have: * By property (a) of \(V\), the sequence \((\mathcal{E}_{t}\mathbbm{1}_{V})(x_{t})\) eventually remains less than \(\varepsilon\), in particular when we have \(t>T_{\varepsilon}\). Because \(T_{\varepsilon}\) is almost surely finite, we have that: \[\lim_{T\to\infty}\,\frac{1}{T}\sum_{t=1}^{T}(\mathcal{E}_{t}\mathbbm{1}_{V})(x_{t})<\varepsilon. \tag{4}\] * By property (b) of \(V\), the mass of \(V^{c}\) is less than \(\delta\).
Equation (3) implies: \[(\mathcal{E}_{t}\mathbbm{1}_{V^{c}})(\mu_{t})<\varepsilon.\] By the law of large numbers for martingales (Theorem C.1), this implies that almost surely: \[\lim_{T\to\infty}\,\frac{1}{T}\sum_{t=1}^{T}(\mathcal{E}_{t}\mathbbm{1}_{V^{c}})(x_{t})=\lim_{T\to\infty}\,\frac{1}{T}\sum_{t=1}^{T}(\mathcal{E}_{t}\mathbbm{1}_{V^{c}})(\mu_{t})<\varepsilon. \tag{5}\] Because the loss function is bounded, the error rates within the limits in Equations (4) and (5) are also bounded. Thus, we can sum the two equations and apply dominated convergence, interchanging limits and sum, to yield Equation (2). To finish the proof, we show that \(V\) exists. The learner is OLC, so there is a countable family \(\{U_{n}\}_{n\in\mathbb{N}}\) of locally-learned sets covering \(\mathcal{X}\) almost everywhere. Let \(V\) satisfying \(\nu(V^{c})<\delta\) be chosen as a finite union: \[V:=\bigcup_{n=1}^{N}U_{n}.\] Such an \(N<\infty\) exists by the continuity of measure, since \(\bigcup_{n=1}^{\infty}U_{n}\) is essentially all of \(\mathcal{X}\). By now, we have constructed \(V\) in such a way that property (b) holds. To show property (a), we use the fact that each \(U_{n}\) is locally learned: either (i) \((x_{t})_{t}\) eventually never returns to \(U_{n}\), which is to say that \(\mathbb{1}\left\{x_{t}\in U_{n}\right\}\) converges to zero over time, or (ii) for sufficiently large \(t\), \(\mathcal{E}_{t}\big{|}_{U_{n}}<\varepsilon\). Thus, almost surely, there exists some \(T_{n}\) such that for all \(t>T_{n}\), \[\mathcal{E}_{t}(x_{t})\cdot\mathbb{1}\left\{x_{t}\in U_{n}\right\}<\varepsilon.\] Property (a) follows by defining \(T_{\varepsilon}:=\max\{T_{1},\ldots,T_{N}\}\). ## 4 Nearest neighbor is an OLC learner **Theorem 10** (Nearest neighbor is OLC).: _Let \((\mathcal{X},\rho,\nu)\) be a metric measure space, where \(\rho\) is a separable metric and \(\nu\) is a finite Borel measure. If \(c\) is a concept whose boundary points satisfy Assumption A, then nearest neighbor is OLC with respect to \(c\)._ To show that _nearest neighbor_ is OLC, we need to prove that any concept \(c\) with essentially countable boundary also has a countable family of locally-learned sets. We define two types of locally-learned sets for _nearest neighbor_: singleton sets for the boundary points and mutually-labeling sets for everything else. Recall that mutually-labeling sets \(U\) satisfy: \[\rho(x,x^{\prime})<m_{c}(x),\qquad\forall x,x^{\prime}\in U,\] where \(m_{c}(x)\) is the margin between \(x\) and the boundary of \(c\). Note that all points in \(U\) share the same label. If this weren't the case, then there would exist \(x,x^{\prime}\in U\) with different labels such that: \[\rho(x,x^{\prime})<\underbrace{\inf_{c(x)\neq c(\tilde{x})}\rho(x,\tilde{x})}_{m_{c}(x)}\leq\rho(x,x^{\prime}),\] a contradiction. The following lemma further shows that these are locally-learned sets: **Lemma 11** (Mutually labeling property).: _Consider learning the concept \(c\) via the nearest neighbor rule. If \(U\) is a mutually-labeling set for \(c\) and \(x_{t}\in U\), then for all time \(\tau>t\), the predictor \(h_{\tau}\) is correct on all of \(U\). Thus, \(U\) is locally learned._ Proof.: Let \(x\in U\) so that \(c(x)=c(x_{t})\). When \(\tau>t\), the nearest neighbor classifier errs on \(x\) only if the closest point to \(x\) among \(x_{1},\ldots,x_{\tau}\) is of the opposite class. But this is impossible, since the closest point must be no more than a distance \(\rho(x,x_{t})\) away and \(U\) is mutually labeling.
Sufficiently small balls around any non-boundary point \(x\) are mutually-labeling sets. **Lemma 12** (Mutually labeling balls).: _Let \(c:\mathcal{X}\to\mathcal{Y}\) be a concept, and suppose that \(x\) has positive margin \(m_{c}(x)>0\). Then, the open ball \(B\big{(}x,m_{c}(x)/3\big{)}\) is mutually labeling._ Proof.: Let \(x_{1},x_{2}\in B\big{(}x,m_{c}(x)/3\big{)}\). By the triangle inequality, \[\rho(x_{1},x_{2})\leq\rho(x_{1},x)+\rho(x,x_{2})<2m_{c}(x)/3.\] We also know for \(i\in\{1,2\}\) and for all \(\tilde{x}\) that \(\rho(x_{i},\tilde{x})\geq\rho(x,\tilde{x})-\rho(x_{i},x)\), by the reverse triangle inequality. Since \(c(x_{i})=c(x)\), we take infimums on both sides over \(\tilde{x}\) where \(c(\tilde{x})\neq c(x)\), so: \[\underbrace{\inf_{c(x_{i})\neq c(\tilde{x})}\rho(x_{i},\tilde{x})}_{m_{c}(x_{i})}\geq\underbrace{\inf_{c(x)\neq c(\tilde{x})}\rho(x,\tilde{x})}_{m_{c}(x)}-\rho(x_{i},x)\geq 2m_{c}(x)/3.\] This implies that \(\rho(x_{1},x_{2})<m_{c}(x_{1})\), so that \(B\big{(}x,m_{c}(x)/3\big{)}\) is mutually labeling. Proof of Theorem 10.: Given a concept \(c\) with essentially countable boundary, we construct a countable cover of all of \(\mathcal{X}\), except for a \(\nu\)-measure zero set, by locally-learned sets of \(c\). Let us denote by \(\partial\mathcal{X}\) the set of boundary points \(\{x:m_{c}(x)=0\}\). By Lemma 12, the non-boundary points \(\mathcal{X}\setminus\partial\mathcal{X}\) can be covered by the family of open mutually-labeling sets, \[\big{\{}B(x,m_{c}(x)/3):x\in\mathcal{X}\setminus\partial\mathcal{X}\big{\}}.\] By the separability of \(\mathcal{X}\), there is a countable subcover of \(\mathcal{X}\setminus\partial\mathcal{X}\) by mutually-labeling sets. These are locally-learned sets, by Lemma 11. As for the boundary points, the set \(\partial\mathcal{X}\) is essentially countable: \(\partial\mathcal{X}=\mathcal{N}\cup\mathcal{Z}\), where \(\mathcal{N}\) is countable and \(\mathcal{Z}\) is \(\nu\)-measure zero. Then, each \(\{x\}\) for \(x\in\mathcal{N}\) is a locally-learned set because nearest neighbor is a consistent learner. Together, these two collections of locally-learned sets form a countable cover of all of \(\mathcal{X}\) except for a measure zero set; thus, _nearest neighbor_ is OLC. ## 5 Rates of convergence for nearest neighbor Rates of convergence for _nearest neighbor_ arise almost immediately out of the proof technique for asymptotic convergence. Recall that the proof technique consisted of decomposing \(\mathcal{X}\) into \(V\) and \(V^{c}\), where (i) \(V\) can be covered by finitely many mutually-labeling sets and (ii) \(V^{c}\) has small \(\nu\)-mass. The proof can be adapted to yield rates by quantifying (i) the number of mutually-labeling sets required to cover \(V\), and (ii) the rate at which a \(\nu\)-dominated adversary can boost the probability of selecting points from \(V^{c}\). To bound these, we respectively define the following: **Definition 13** (Mutually-labeling covering number).: _Let \(V\subset\mathcal{X}\). The mutually-labeling covering number \(\mathcal{N}_{\mathrm{ML}}(V)\) given a concept \(c\) is the size of a minimal covering of \(V\) by mutually-labeling sets._ **Definition 14** (Smoothness rate).: _An adversary has smoothness rate \(\varepsilon:\mathbb{R}_{\geq 0}\to[0,1]\) whenever all distributions \(\mu\) it can select satisfy:_ \[\mu(A)\leq\varepsilon\big{(}\nu(A)\big{)},\qquad\forall A\subset\mathcal{X}\ \text{measurable}.\] An adversary is \(\nu\)-dominated if \(\lim_{\delta\to 0}\,\varepsilon(\delta)=0\).
It is \(\sigma\)-smooth if \(\varepsilon\) is further \(\frac{1}{\sigma}\)-Lipschitz. For simplicity, let us assume that the boundary \(\partial\mathcal{X}\) has \(\nu\)-measure zero. Then, the following mistake rate is obtained by separately counting mistakes on \(V\) and \(V^{c}\): \[\mathbb{E}\left[\#\text{mistakes by time }T\right]\leq\min\left\{T\,,\,\inf_{V \subset\mathcal{X}}\,\mathcal{N}_{\mathrm{ML}}(V)+T\varepsilon\big{(}\nu(V^{c}) \big{)}\right\}.\] By a standard application of Azuma-Hoeffding's, we can convert this into a high-probability bound: **Theorem 15** (Convergence rate).: _Let \((\mathcal{X},\rho,\nu)\) be a metric measure space with separable metric \(\rho\) and finite Borel measure \(\nu\). Let \(c\) be a concept with measure zero boundary. Let the \(\nu\)-dominated adversary have smoothness rate \(\varepsilon\). Fix \(p>0\). Then, with probability at least \(1-p\), the following mistake bound holds for nearest neighbor simultaneously for all \(T\in\mathbb{N}\) :_ \[\#\mathrm{mistakes}_{T}\leq\min\left\{T\,,\,\inf_{V\subset\mathcal{X}}\, \mathcal{N}_{\mathrm{ML}}(V)+T\varepsilon\big{(}\nu(V^{c})\big{)}+\sqrt{2T\log \frac{2T}{p}}\right\}.\] ### Convergence rate for length metric spaces In this section, we instantiate the convergence rate when \(\mathcal{X}\) is a length metric space. The appealing property of length spaces is that the margin of a point \(x\) is simply its distance to boundary points: **Lemma 16** (Margin in length spaces).: _Let \((\mathcal{X},\rho)\) be a length space. Let \(c\) be a classifier. Then,_ \[m_{c}(x)=\rho(x,\partial\mathcal{X}).\] In this case, it is natural to restrict \(V\subset\mathcal{X}\) in Theorem 15 to the sets of the form: \[V_{r}:=\big{\{}x\in\mathcal{X}:m_{c}(x)\geq r\big{\}}.\] These are the set of points whose margin is at least \(r\). Then, we need to control the mutual-labeling covering number of \(V_{r}\) and the \(\nu\)-masses of \(V_{r}^{c}\). When \(\mathcal{X}\) is a length space, these can be bounded in terms of the geometry of the boundary \(\partial\mathcal{X}\). The reason is that in length spaces, points with small margins are also close to boundary points: here, \(V_{r}^{c}\) precisely coincides with the \(r\)-_expansion_\(\partial\mathcal{X}^{r}\) of the boundary. And when \(\mathcal{X}\) is a doubling space, we can quantify the bounds in terms of the _box-counting dimension_\(d(\partial\mathcal{X})\) and the _Minkowski content_\(\mathfrak{m}(\partial\mathcal{X})\) of the boundary. In particular, Proposition E.9 shows that for small \(r\), \[\mathcal{N}_{\mathrm{ML}}\big{(}V_{r}\big{)}\lesssim r^{-d}\qquad\text{ and }\qquad\nu\big{(}V_{r}^{c}\big{)}\lesssim\mathfrak{m}\cdot r, \tag{6}\] where the hand-waving inequality can be made rigorous by replacing \(d=d+o(1)\) and \(\mathfrak{m}=\mathfrak{m}+o(1)\). For example, this yields convergence rates of _nearest neighbor_ against \(\sigma\)-smoothed adversaries, by plugging Equation (6) into Theorem 15. After optimizing \(r\), we obtain the following result: \[\#\mathrm{mistakes}_{T}\lesssim\left(\frac{\mathfrak{m}T}{\sigma}\right)^{d/( d+1)}.\] **Theorem 17** (Convergence rate against \(\sigma\)-smoothed adversaries).: _Let \((\mathcal{X},\rho,\nu)\) be a bounded length space with finite doubling dimension and Borel measure. Suppose the concept \(c\) satisfies \(\nu(\partial\mathcal{X})=0\). Let the adversary be \(\sigma\)-smooth for \(\sigma>0\). 
Denote the box-counting dimension and Minkowski content of \(\partial\mathcal{X}\) by \(d:=d(\partial\mathcal{X})\) and \(\mathfrak{m}:=\mathfrak{m}(\partial\mathcal{X})\) respectively. Assume \(d>1\)._ _The following holds for nearest neighbor: given \(c_{1},c_{2},p>0\), there exist constants \(C_{0},C_{1}>0\) such that with probability at least \(1-p\), the mistake bound holds simultaneously for all \(T\):_ \[\#\mathrm{mistakes}_{T}\leq C_{0}+C_{1}\left(\frac{(\mathfrak{m}+c_{2})T}{ \sigma}\right)^{(d+c_{1})/(d+1)}.\] See Appendix E for proofs.
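As a rough numerical illustration of the tradeoff behind Theorem 15 (an editorial sketch, not a computation from the paper), one can plug the length-space estimates from Equation (6) and the \(\sigma\)-smooth rate \(\varepsilon(\delta)=\delta/\sigma\) into the bound and sweep over \(r\); all constants below are arbitrary assumptions.

```python
# Rough numerical sketch of the Theorem 15 tradeoff, under the length-space estimates
# N_ML(V_r) ~ C * r**(-d) and nu(V_r^c) ~ m * r from Equation (6), with eps(delta) = delta / sigma.
# All constants are arbitrary assumptions for illustration.
import numpy as np

C, m, d, sigma = 1.0, 1.0, 2.0, 0.1     # covering constant, Minkowski content, boundary dimension, smoothness


def mistake_bound(T, r):
    return C * r ** (-d) + T * (m * r) / sigma   # covering term + smoothed-adversary term


for T in [10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6]:
    rs = np.logspace(-4, 0, 2000)
    best = min(mistake_bound(T, r) for r in rs)
    scaling = (m * T / sigma) ** (d / (d + 1))   # the (mT/sigma)^(d/(d+1)) rate of Theorem 17
    # The optimized bound tracks the predicted scaling up to a constant factor.
    print(f"T={T:>8}  optimized bound ~ {best:12.1f}   (mT/sigma)^(d/(d+1)) ~ {scaling:12.1f}")
```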
2308.07384
Representation of convex geometries of convex dimension 3 by spheres
A convex geometry is a closure system satisfying the anti-exchange property. This paper, following the work of Adaricheva and Bolat (2019) and the Polymath REU (2020), continues to investigate representations of convex geometries with small convex dimension by convex shapes on the plane and in spaces of higher dimension. In particular, we answer in the negative the question raised by Polymath REU (2020): whether every convex geometry of $cdim=3$ is representable by the circles on the plane. We show there are geometries of $cdim=3$ that cannot be represented by spheres in any $\mathbb{R}^k$, and this connects to posets not representable by spheres from the paper of Felsner, Fishburn and Trotter (1999). On the positive side, we use the result of Kincses (2015) to show that every finite poset is an ellipsoid order.
Kira Adaricheva, Arav Agarwal, Na'ama Nevo
2023-08-14T18:08:28Z
http://arxiv.org/abs/2308.07384v1
# Representation of convex geometries of convex dimension 3 by spheres ###### Abstract. A convex geometry is a closure system satisfying the anti-exchange property. This paper, following the work of Adaricheva and Bolat (2019) and the Polymath REU (2020), continues to investigate representations of convex geometries with small convex dimension by convex shapes on the plane and in spaces of higher dimension. In particular, we answer in the negative the question raised by Polymath REU (2020): whether every convex geometry of \(cdim=3\) is representable by the circles on the plane. We show there are geometries of \(cdim=3\) that cannot be represented by spheres in any \(\mathbb{R}^{k}\), and this connects to posets not representable by spheres from the paper of Felsner, Fishburn and Trotter (1999). On the positive side, we use the result of Kincses (2015) to show that every finite poset is an ellipsoid order. Key words and phrases:Convex geometry, anti-exchange closure operator, convex hull operator, convex dimension, poset dimension, sphere order, ellipsoid order 2020 Mathematics Subject Classification: 06A07, 06A15, 52A37, 52C05, 52C07 The work on this paper was initiated while the second and third authors attended the New York Discrete Mathematics REU in the summer of 2022 and were mentored by the first author. The REU was supported by NSF grant # 2051026 and led by PI Adam Sheffer and Co-PI Pablo Soberon Bravo (both CUNY Baruch College). We appreciate the welcoming atmosphere of the Mathematics Department at Baruch College that generously hosted the REU, as well as the support of other mentors and student participants. Indeed, the question was solved in the negative by Felsner, Fishburn and Trotter [14] by presenting a 3-dimensional poset that cannot be a sphere order in any space \(\mathbb{R}^{k}\). Note that _poset dimension_ is defined as the smallest number \(t\) such that the poset embeds into a product (with component-wise ordering) of \(t\) chains. Equivalently, the partial order is recovered as an intersection of at most \(t\) of its linear extensions. It is well-known that 2-dimensional posets are representable by closed intervals in \(\mathbb{R}\), i.e., by spheres in one-dimensional space. We observe that the closure space of any convex geometry can be thought of as a poset of closed sets of its associated closure operator. However, poset dimension and convex dimension of a convex geometry are fairly different parameters. Recently, the relation was studied in Knauer and Trotter [19], who presented a series of convex geometries which have poset dimension \(=3\), while their convex dimension grows unboundedly. In the Propositions of Section 3, we show that the poset from [14] which is not a sphere order can be made isomorphic to the poset of join-irreducible elements of some convex geometry of \(cdim=3\). In Proposition 2.18 we connect the elements of the base set of the convex geometry with join-irreducible elements of its closure space, which allows to conclude that the convex geometry will not be represented by the spheres in any \(\mathbb{R}^{k}\), proving our main result in Theorem 3.1. Note that the result of [14] does not identify the smallest size of a poset which is not a sphere order. 
This suggests a question about _the smallest size \(\rho\) of (the base set of) a convex geometry of \(cdim=3\) that is not representable by spheres in any \(\mathbb{R}^{n}\)._ We propose that at least \(\rho>6\): **Hypothesis 1.2**.: _Every convex geometry of \(cdim=3\) on a 6-element set is representable by circles on the plane._ Note that according to the Online Encyclopedia of Integer Sequences (OEIS.org), there are almost 200,000 non-isomorphic convex geometries (equivalently, _antimatroids_) on a 6-element set, with the exact number being given in sequence A224913. Much less was known about _ellipsoid orders_, i.e., finite posets that are representable by ellipsoids in some \(\mathbb{R}^{k}\). We use a result of Kincses [18] regarding the representation of convex geometries by ellipsoids using the convex hull operator for ellipsoids to conclude that every finite poset is an ellipsoid order. ## 2. Terminology and Known Results A convex geometry is a special case of a closure system. It can be defined through a closure operator, or by means of a closure space. **Definition 2.1**.: Let \(X\) be a set. A mapping \(\varphi\colon 2^{X}\to 2^{X}\) is called a _closure operator_, if for all \(Y,Z\in 2^{X}\): 1. \(Y\subseteq\varphi(Y)\), 2. if \(Y\subseteq Z\) then \(\varphi(Y)\subseteq\varphi(Z)\), 3. \(\varphi(\varphi(Y))=\varphi(Y)\). A subset \(Y\subseteq X\) is _closed_ if \(\varphi(Y)=Y\). The pair \((X,\varphi)\), where \(\varphi\) is a closure operator, is called a _closure system_. **Definition 2.2**.: Given any (finite) set \(X\), a _closure space_ on \(X\) is a family \(\mathcal{F}\) of subsets of \(X\) which satisfies two properties: 1. \(X\in\mathcal{F}\), 2. if \(Y,Z\in\mathcal{F}\) then \(Y\cap Z\in\mathcal{F}\). Closure systems are dual to closure spaces in the following sense. If \((X,\varphi)\) is a closure system, one can define a family of closed sets \(\mathcal{F}_{\varphi}:=\{Y\subseteq X:\varphi(Y)=Y\}\). Then \(\mathcal{F}_{\varphi}\) is a closure space. If \(\mathcal{F}\) is a closure space, then define \(\varphi_{\mathcal{F}}:2^{X}\to 2^{X}\) in the following manner: for all \(Y\subseteq X\), let \(\varphi_{\mathcal{F}}(Y):=\bigcap\{Z\in\mathcal{F}:Y\subseteq Z\}\). Then \((X,\varphi)\) is a closure system. **Definition 2.3**.: A closure system \((X,\varphi)\) is called a _convex geometry_ if 1. \(\varphi(\emptyset)=\emptyset\), 2. for any closed set \(Y\subseteq X\) and any distinct points \(x,y\in X\setminus Y\), if \(x\in\varphi(Y\cup\{y\})\) then \(y\not\in\varphi(Y\cup\{x\})\). Property (2) above is called the _Anti-Exchange Property_. We can use duality between closure operators and closure spaces to provide another definition of a convex geometry. **Definition 2.4**.: A closure system \((X,\varphi)\) is a convex geometry iff the corresponding closure space \(\mathcal{F}_{\varphi}\) satisfies the following two properties: 1. \(\emptyset\in\mathcal{F}_{\varphi}\), 2. if \(Y\in\mathcal{F}_{\varphi}\) and \(Y\neq X\), then there exists \(a\in X\setminus Y\) such that \(Y\cup\{a\}\in\mathcal{F}_{\varphi}\). We now need to discuss an important parameter of convex geometries known as _convex dimension_, first introduced in [12]. **Definition 2.5**.: A closure space \(\mathcal{F}\) is called _monotone_ if sets of \(\mathcal{F}\) form a chain under inclusion. (See Definition 2.12 for a chain or linear order.) 
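The definitions above lend themselves to direct brute-force verification on small examples. The following sketch is an editorial illustration, not taken from the paper: the three-element base set and the family of closed sets are assumptions chosen for the demonstration, and the code checks the closure-operator axioms of Definition 2.1 and the Anti-Exchange Property of Definition 2.3.

```python
# Illustrative brute-force check of Definitions 2.1-2.4 on a tiny example; the base set
# and the family of closed sets below are assumptions chosen for the demonstration.
from itertools import combinations

X = frozenset({1, 2, 3})
# A closure space on X: contains X and the empty set, and is closed under intersection.
F = [frozenset(), frozenset({1}), frozenset({1, 2}), frozenset({1, 3}), X]


def closure(Y):
    """phi_F(Y): the intersection of all closed sets containing Y (the duality of Section 2)."""
    return frozenset.intersection(*[Z for Z in F if Y <= Z])


all_subsets = [frozenset(S) for r in range(len(X) + 1) for S in combinations(X, r)]

# Closure operator axioms (Definition 2.1).
assert all(Y <= closure(Y) for Y in all_subsets)
assert all(closure(Y) <= closure(Z) for Y in all_subsets for Z in all_subsets if Y <= Z)
assert all(closure(closure(Y)) == closure(Y) for Y in all_subsets)


def anti_exchange():
    """Definition 2.3(2): for closed Y and distinct x, y outside Y, not both exchanges hold."""
    for Y in F:
        for x in X - Y:
            for y in X - Y:
                if x != y and x in closure(Y | {y}) and y in closure(Y | {x}):
                    return False
    return True


print("phi(emptyset) is empty:", closure(frozenset()) == frozenset())
print("anti-exchange property holds:", anti_exchange())
```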
**Remark 2.6**.: Note that, due to Definition 2.4, a monotone closure space \(\mathcal{F}\) is a convex geometry iff \(\mathcal{F}\) has exactly \(|X|+1\) subsets of \(X\): \(\emptyset\subset\{x_{1}\}\subset\{x_{1},x_{2}\}\subset\{x_{1},x_{2},x_{3}\} \subset\ldots\{x_{1},x_{2},\ldots,x_{n-1}\}\subset X=\{x_{1},\ldots x_{n-1},x_ {n}\}\). Therefore, every monotone convex geometry, or _linear geometry_, on \(X\) is uniquely associated with some linear order on \(X\): \(x_{1}<x_{2}<\cdots<x_{n-1}<x_{n}\). **Definition 2.7**.: Given two closure spaces \(\mathcal{F},\mathcal{K}\) on the same base set \(X\), their _join_\(\mathcal{F}\vee\mathcal{K}\) is defined to be the smallest closure space \(\mathcal{T}\) such that \(\mathcal{F},\mathcal{K}\subseteq\mathcal{T}\). It can be easily verified that \(\mathcal{F}\vee\mathcal{K}=\{F\cap K:F\in\mathcal{F}\text{ and }K\in\mathcal{K}\}\). A known result in [12] about joins of convex geometries follows. **Theorem 2.8**.: _Let \(X\) be a finite set._ 1. _If closure spaces_ \(\mathcal{F},\mathcal{K}\) _on_ \(X\) _are convex geometries, then_ \(\mathcal{F}\vee\mathcal{K}\) _is a convex geometry as well._ 2. _Closure space_ \(\mathcal{F}\) _of any convex geometry on set_ \(X\) _can be expressed as the join of some collection of monotone convex geometries on the same base set._ This motivates the following definition. **Definition 2.9**.: [12] The _convex dimension_ of a convex geometry \(\mathbf{G}=(X,\varphi)\) is the minimal number \(k\) such that closure space \(\mathcal{F}_{\varphi}\) can be expressed as the join of \(k\) monotone convex geometries on set \(X\). To compute the convex dimension of a convex geometry, we can examine maximal cardinality antichains of meet-irreducibles in its closure space \(\mathcal{F}_{\varphi}\), as discussed in Edelman and Saks [13]. Thus, informally, the \(cdim\) parameter of a convex geometry represents the diversity of closed sets with respect to the closure operator \(\varphi\). A particular example of a closure operator on a set is the _convex hull operator_, where the base set \(X\) is a set of points in Euclidean space \(\mathbb{R}^{k}\). **Definition 2.10**.: 1. A set \(S\) in \(\mathbb{R}^{k}\) is called _convex_ if for any two points \(p,q\in S\), the line segment connecting \(p\) and \(q\) is also contained in \(S\). 2. Given a set \(S\) of points in \(\mathbb{R}^{k}\), the _convex hull_ of \(S\), denoted \(\operatorname{CH}(S)\), is the intersection of all convex sets in \(\mathbb{R}^{k}\) which contain \(S\). That is, it is the smallest convex set containing \(S\). Comparing with Definition 2.1, we see that CH is a closure operator acting on \(\mathbb{R}^{k}\). Finally, we recall the definition of the convex hull operator for spheres introduced in [10]. If \(x\) is a sphere in \(\mathbb{R}^{k}\), then by \(\tilde{x}\) we denote the set of points belonging to \(x\). It is allowed that a sphere has a radius \(0\), in which case it is a point. **Definition 2.11**.: Let \(X\) be a finite set of spheres in \(\mathbb{R}^{k}\). Define the convex hull operator for spheres, \(\mathrm{ch}_{s}:2^{X}\to 2^{X}\), as follows: \[\mathrm{ch}_{s}(Y)=\{x\in X:\tilde{x}\subseteq\mathrm{CH}\left(\bigcup_{y\in Y }\tilde{y}\right)\},\] for any \(Y\in 2^{X}\). See the figure below for an illustration of the \(\mathrm{ch}_{s}\) operator in \(\mathbb{R}^{2}\). Observe that \(\mathrm{ch}_{s}(\{a,b,c\})=\{a,b,c,d,e\}\) and \(\mathrm{ch}_{s}(\{a,c\})=\{a,c,e\}\). 
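Definition 2.11 can also be explored numerically for circles in the plane. The sketch below is an editorial approximation, not code from [10]: circles are discretized by boundary samples, containment in the convex hull of a union of circles is tested with a Delaunay triangulation, and the circle data are made-up assumptions chosen so that the configuration loosely resembles the one described above.

```python
# Approximate numerical sketch of the ch_s operator of Definition 2.11 for circles in R^2.
# Circles are discretized by boundary samples; the circle data are made-up assumptions.
import numpy as np
from scipy.spatial import Delaunay

circles = {                          # name: (center, radius)
    "a": ((0.0, 0.0), 1.0),
    "b": ((6.0, 0.0), 1.0),
    "c": ((3.0, 5.0), 1.0),
    "d": ((3.0, 1.5), 0.5),          # small circle inside the triangle spanned by a, b, c
    "e": ((3.0, 0.0), 0.4),          # small circle between a and b
}


def boundary(center, radius, n=200):
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.column_stack([center[0] + radius * np.cos(t),
                            center[1] + radius * np.sin(t)])


def ch_s(Y):
    """Approximate ch_s(Y): all circles of X whose sampled boundary lies in CH(union of Y)."""
    hull_points = np.vstack([boundary(*circles[name]) for name in Y])
    tri = Delaunay(hull_points)      # find_simplex < 0 means "outside the convex hull"
    return {name for name, (center, radius) in circles.items()
            if np.all(tri.find_simplex(boundary(center, radius)) >= 0)}


print(sorted(ch_s({"a", "b", "c"})))   # with this configuration: ['a', 'b', 'c', 'd', 'e']
print(sorted(ch_s({"a", "b"})))        # with this configuration: ['a', 'b', 'e']
```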
It was established in [10] that the closure operator \(\mathrm{ch}_{s}\) satisfies the Anti-exchange Property. Therefore, the closure system \((X,\mathrm{ch}_{s})\) on the set of spheres \(X\) in \(\mathbb{R}^{k}\) is a convex geometry. We say that a finite convex geometry \((X,\varphi)\) is _represented by spheres in \(\mathbb{R}^{k}\)_ when there are \(|X|\) spheres in \(\mathbb{R}^{k}\) such that the action of \(\varphi\) is identical to \(\mathrm{ch}_{s}\). We can similarly define convex hull operators in \(\mathbb{R}^{k}\) on convex shapes different from spheres, such as ellipsoids. The survey paper [12] was instrumental in starting the study of finite convex geometries, which are also the dual systems to _antimatroids_. The study of infinite convex geometries was initiated in Adaricheva, Gorbunov and Tumanov [4]. To see the development of the topic, including infinite convex geometries, one needs to consult the more recent survey by Adaricheva and Nation [2]. In [10] the representation of finite convex geometries was proposed by interpreting elements of the base set \(X\) as circles in the plane, and the closure operator \(\varphi\) as a convex hull operator acting on circles. The main result of that paper is that all finite convex geometries with _convex dimension_ at most \(2\) can be represented by circles on the plane. In [1] it was found that there is an obstruction to representation of convex geometries by circles on the plane, which allowed the authors to build an example of a convex geometry on a \(5\)-element set of \(cdim=6\). More obstructions to representation of geometries by circles were found in two papers written by subgroups of the PolyMath-2020 team [9, 5]. Passing to the terminology of posets, we recall that a set \(X\) with a binary relation \(\leq\) is called a partially ordered set (poset) if \(\leq\) is reflexive, anti-symmetric and transitive. An important example of a poset is the family \(\mathcal{F}\) of closed sets of a closure operator with relation \(\subseteq\). **Definition 2.12**.: A _linear order_, or _chain_, is a partial order \((X,\leq)\) where any two elements of \(X\) are comparable, that is, either \(u\leq v\) or \(v\leq u\), for any \(u,v\in X\). We freely use the terms linear order and chain interchangeably.
Figure 1. Convex hull operator for spheres.
A relevant parameter of a poset is the _order dimension_, also known as the Dushnik-Miller dimension. Recall that a linear extension \((X,\leq^{*})\) of a poset \((X,\leq)\) is a poset with \(\leq\subseteq\leq^{*}\), where \(\leq^{*}\) is a linear order. **Definition 2.13**.: The order dimension of a poset \((X,\leq)\) is the least integer \(t\) for which we have a family of \(t\) linear extensions \(\leq_{1},\ldots,\leq_{t}\) of \(\leq\) such that \(\leq=\bigcap_{i=1}^{t}\leq_{i}\). Equivalently, the order dimension is the minimal number of chains such that the poset embeds into their direct product. For a comprehensive monograph on the topic, see Trotter [17]. **Definition 2.14**.: Given a poset \(\mathbf{P}=(X,\leq)\), a function \(F\) which assigns to each \(x\in X\) a set \(F(x)\) is called an inclusion representation of \(\mathbf{P}\) when \(x\leq y\) if and only if \(F(x)\subseteq F(y)\). In other words, an inclusion representation is a mapping which realizes \(\mathbf{P}\) as an inclusion order of some objects. **Example 2.15**.: Any poset has an inclusion representation.
Given \(\mathbf{P}=(X,\leq)\), we can associate to each element \(x\in X\) the set \[F(x)=\downarrow x=\{y\in X\,|\,y\leq x\}.\] This is the _down-set_ generated by the element \(x\). A down-set in general is a set \(S\subseteq X\) such that if \(s\in S\) and \(x\leq s\), then \(x\in S\). The transitivity of \(\leq\) tells us that \[\downarrow a\subseteq\downarrow b\text{ iff }a\leq b.\] Indeed, the idea of the above example is fundamental, and key to Birkhoff's representation theorem, often referred to as the fundamental theorem of finite distributive lattices. **Theorem 2.16**.: _[_7_]_ _The lattice of down-sets of a poset is distributive. Any finite distributive lattice \(L\) is isomorphic to the lattice of down sets of the partial order of the join-irreducible elements of \(L\)._ In the theorem, the join-irreducible elements of the lattice of all down-sets of the poset are precisely the down-sets generated by singletons as illustrated in Example 2.15. Note that mapping \(\varphi:Y\mapsto\downarrow Y\), that maps any subset \(Y\) of partially ordered set \((X,\leq)\) into smallest down-set \(\downarrow Y\) containing \(Y\) is a closure operator on \(X\) satisfying the Anti-Exchange Property. Thus, the lattice in Theorem 2.16 is also the lattice of closed sets of this closure operator, and finite distributive lattices are convex geometries. Similar to the representation of convex geometries by convex shapes, we have the concept of representation of posets by spheres. **Definition 2.17**.: A poset \(\mathbf{P}=(X,\leq)\) is a sphere order if there exists \(k\geq 1\) such that \(\mathbf{P}\) has an inclusion representation using spheres in \(\mathbb{R}^{k}\). That is, \(F(x)\) for any \(x\in X\) is required to be a sphere in \(\mathbb{R}^{k}\). Before discussing results concerning representation of posets as sphere orders, we establish the key connection between representation of convex geometries by spheres and posets as sphere orders. **Proposition 2.18**.: _Suppose a convex geometry \((X,\varphi)\) is represented by spheres in some \(\mathbb{R}^{k}\). Then, the poset of join-irreducible elements of the associated closure space \(\mathcal{F}_{\varphi}\) is a sphere order in \(\mathbb{R}^{k}\)._ Proof.: By the definition of representation of convex geometries by spheres, each element \(x\in X\) is given by some sphere \(F(x)\) in \(\mathbb{R}^{k}\) so that \(\varphi\) acts on spheres as the \(\mathrm{ch}_{s}\) operator. It is well-known that in _standard_ closure systems \((X,\varphi)\) there is a one-to-one correspondence between elements of \(X\) and \(\mathrm{Ji}(\mathcal{F}_{\varphi})\) (see, for example, [3, Lemma 4-2.8]), where \(\mathrm{Ji}(\mathcal{F}_{\varphi})\) denotes the set of join-irreducible elements of the closure space \(\mathcal{F}_{\varphi}\) associated to \(X\). Convex geometries are standard closure systems, so this correspondence works in convex geometries. Specifically, the one-to-one correspondence between \(X\) and \(\operatorname{Ji}(\mathcal{F}_{\varphi})\) is given by \(x\mapsto\varphi(\{x\})\). Ordering by set inclusion, we obtain the poset \((\operatorname{Ji}(\mathcal{F}_{\varphi}),\leq)=(\{\varphi(\{u\}):u\in X\},\leq)\), and we now show it is a sphere order. Consider \(\varphi(\{x\})\in\mathcal{F}_{\varphi}\). Since we represented the geometry by spheres, in terms of the sphere representation \(\varphi(\{x\})\) is exactly \(\operatorname{ch}_{s}(F(x))\). 
By definition of \(\operatorname{ch}_{s}\), we obtain \[\operatorname{ch}_{s}(F(x))=\{F(y)\,|\,F(y)\subseteq F(x),y\in X\}.\] Indeed, we have arrived at the case of Example 2.15, for if we consider the inclusion order of the spheres, we observe that \[\operatorname{ch}_{s}(F(x))=\downarrow F(x),\] and we have \(\downarrow F(u)\subseteq\downarrow F(v)\) iff \(F(u)\subseteq F(v)\) for any \(u,v\in X\). We can now write for any \(u,v\in X\): \[\varphi(\{u\})\leq\varphi(\{v\})\iff\operatorname{ch}_{s}(F(u))\subseteq \operatorname{ch}_{s}(F(v))\iff\downarrow F(u)\subseteq\downarrow F(v)\iff F (u)\subseteq F(v).\] Hence, we have shown that \(\varphi(\{u\})\leq\varphi(\{v\})\) iff \(F(u)\subseteq F(v)\). Therefore, the sphere representation of a convex geometry \((X,\varphi)\) simultaneously provides us the representation of the poset of join-irreducibles \((\{\varphi(\{u\}):u\in X\},\leq)\) of the closure space \(\mathcal{F}_{\varphi}\), proving the poset is a sphere order. As per the record in [14], the question of whether every finite \(3\)-dimensional poset has an inclusion representation using circles in \(\mathbb{R}^{2}\) was raised by Fishburn and Trotter at the Banff conference of 1984. In Sidney et al. [21] a finite \(4\)-dimensional poset was found that was not a _circle order_ (i.e., not represented by spheres on the plane), and in Scheinerman and Wierman [20] it was shown by a Ramsey theoretic argument that the countably infinite \(3\)-dimensional poset \(\mathbb{Z}^{3}\) is not a circle order. These results prompted the following more general question: **Question 2.19**.: _[_8_]_ _Is every finite 3-dimensional poset representable as an inclusion order of spheres in some \(k\)-dimensional space?_ In [14] we find the culmination of the search to answer this question, in the negative. With the next simple definition in hand, we can formally state the main theorem from [14]. **Definition 2.20**.: For positive integers \(n,t\), let \(\mathbf{n}\) denote the chain \[0<1<\cdots<n-1\] and \(\mathbf{n}^{t}\) the cartesian chain product of \(t\) copies of \(\mathbf{n}\), so that we obtain the following canonical partial ordering on \(\mathbf{n}^{t}\): \[(a_{1},a_{2},\ldots,a_{t})\leq(b_{1},b_{2},\ldots,b_{t})\text{ iff }a_{i}\leq b _{i}\text{ for all }i.\] **Theorem 2.21** (\(2.1\) of [14]).: _There exists an integer \(n_{0}\) such that if \(n\geq n_{0}\), the finite 3-dimensional poset \(\mathbf{n}^{3}\) is not a sphere order._ In Section 3, we will use this theorem and Proposition 2.18 to show that not every convex geometry of \(cdim=3\) is representable by spheres. We note that a convex geometry \((X,\varphi)\) can be thought of as a poset \((\mathcal{F}_{\varphi},\subseteq)\), which could be measured using order dimension. The relationship between order dimension and the convex dimension pertinent to convex geometries was studied in [19]. While in \(2\)-dimensional geometries the convex dimension is the same and equals \(2\), the picture is quite different for \(3\)-dimensional geometries. In particular, there are convex geometries \(\mathbf{P}_{n}\) with the \(dim(\mathbf{P}_{n})=3\) and \(cdim(\mathbf{P}_{n})=n+1\). ## 3. Existence of Convex Geometry with cdim=3 Not Representable by Spheres Our goal in this section is to prove the following result. 
**Theorem 3.1**.: _There exists a convex geometry with \(cdim=3\) that is not representable by spheres in any \(\mathbb{R}^{t}\)._ To achieve this, we provide here an explicit construction of a convex geometry with \(cdim=3\) such that its poset of join-irreducibles is isomorphic to \(\mathbf{n}^{3}\). Representing this convex geometry by spheres in \(\mathbb{R}^{t}\) would also result in an inclusion representation of \(\mathbf{n}^{3}\) by those spheres, as shown in Proposition 2.18, which we know for large enough \(n\) is not possible by Theorem 2.21. Hence, this suffices to prove the existence of a convex geometry not representable by spheres in the space of any dimension. For the remainder of this section, we set the following notation. Fix \(X=\{0,1,2,\ldots,n-1\}^{3}\subseteq\mathbb{Z}^{3}\), and set \(\mathbf{P}\) to be the poset \(\mathbf{P}=(X,\leq)\), where \(\leq\) is the natural ordering induced from \(\mathbf{n}\) in the manner of Definition 2.20. We now consider linear extensions of this natural ordering in \(\mathbf{P}\). In particular, we examine lexicographic orderings on the set \(X\). **Definition 3.2** (123-lex-ordering).: We define the poset \((X,\preceq_{1})\). Given any two points \(x=(x_{1},x_{2},x_{3})\) and \(y=(y_{1},y_{2},y_{3})\) in \(X\), we say \[(x_{1},x_{2},x_{3})\preceq_{1}(y_{1},y_{2},y_{3})\text{ iff }x_{1}<y_{1}\] \[\text{OR }x_{1}=y_{1},x_{2}<y_{2}\] \[\text{OR }x_{1}=y_{1},x_{2}=y_{2},x_{3}\leq y_{3}.\] This is precisely the lexicographic ordering where we prioritize the values of the first coordinates, and if those are equal then the second coordinate, and finally the third. We can change the order of axes comparisons to get analogous but different lexicographic orderings as follows. **Definition 3.3** (231-lex-ordering).: We define the poset \((X,\preceq_{2})\). Given any two points \(x=(x_{1},x_{2},x_{3})\) and \(y=(y_{1},y_{2},y_{3})\) in \(X\), we say \[(x_{1},x_{2},x_{3})\preceq_{2}(y_{1},y_{2},y_{3})\text{ iff }x_{2}<y_{2}\] \[\text{OR }x_{2}=y_{2},x_{3}<y_{3}\] \[\text{OR }x_{2}=y_{2},x_{3}=y_{3},x_{1}\leq y_{1}.\] This time we look first at the second coordinates, then the third, and finally the first. **Definition 3.4** (312-lex-ordering).: We define the poset \((X,\preceq_{3})\). Given any two points \(x=(x_{1},x_{2},x_{3})\) and \(y=(y_{1},y_{2},y_{3})\) in \(X\), we say \[(x_{1},x_{2},x_{3})\preceq_{3}(y_{1},y_{2},y_{3})\text{ iff }x_{3}<y_{3}\] \[\text{OR }x_{3}=y_{3},x_{1}<y_{1}\] \[\text{OR }x_{3}=y_{3},x_{1}=y_{1},x_{2}\leq y_{2}.\] This time we look first at the third coordinates, then the first, and finally the second. It is not hard to see that the lexicographic orderings are not just partial orderings, but rather organize the elements of \(X\) into a chain. For example, if we consider \(\mathbf{2}^{3}\), the 123-lex-ordering gives us the following chain: \[(0,0,0)\prec_{1}(0,0,1)\prec_{1}(0,1,0)\prec_{1}(0,1,1)\prec_{1}(1,0,0)\prec_{1}(1,0,1)\prec_{1}(1,1,0)\prec_{1}(1,1,1).\] Each linear order \(\preceq_{i}\) determines a monotone (linear) convex geometry on \(X\) as in Remark 2.6; denote its closure space by \(\mathcal{F}_{i}\), and let \(\mathcal{F}:=\mathcal{F}_{1}\vee\mathcal{F}_{2}\vee\mathcal{F}_{3}\). By Theorem 2.8 and Definition 2.9, \((X,\mathcal{F})\) is a convex geometry with \(cdim\leq 3\). It is well known that any join-irreducible in a standard closure system is precisely the minimal closed set generated by some singleton \(x\) from the base set \(X\). Indeed, in a standard closure system there is a one-to-one correspondence between elements of the base set and join-irreducibles. Denote by \(J(x)\) the join-irreducible convex set in \(\mathcal{F}\) corresponding to \(x\).
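The construction just described is easy to verify computationally for small \(n\). The sketch below is an editorial check, not code from the paper: for \(n=2\) it intersects the three lexicographic down-sets to obtain \(J(x)\) (exactly as Proposition 3.5 below makes precise) and confirms that \(J(x)\subseteq J(y)\) precisely when \(x\leq y\) componentwise, as established by Propositions 3.6 and 3.7.

```python
# Illustrative check, for n = 2, that intersecting the three lexicographic down-sets
# recovers the componentwise order on n^3, i.e. J(x) is contained in J(y) iff x <= y.
from itertools import product

n = 2
X = list(product(range(n), repeat=3))

# Each lex order compares coordinates in a fixed priority; tuple comparison does the rest.
priorities = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]        # 123-, 231-, and 312-lex-orderings
keys = [lambda p, perm=perm: tuple(p[i] for i in perm) for perm in priorities]


def J(x):
    """Intersection over the three lex orders of the down-set of x (cf. Proposition 3.5)."""
    return set.intersection(*[{y for y in X if key(y) <= key(x)} for key in keys])


componentwise = lambda x, y: all(a <= b for a, b in zip(x, y))

ok = all((J(x) <= J(y)) == componentwise(x, y) for x in X for y in X)
print("J(x) contained in J(y)  <=>  x <= y componentwise:", ok)   # True
```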
Recall the procedure of generating a convex geometry of \(cdim\leq m\) as the intersection of closed sets of \(m\) linear convex geometries on \(X\): this is how we constructed \((X,\mathcal{F})\) from our three linear lexicographic geometries. It follows that the smallest closed set generated by \(x\in X\), equivalently \(J(x)\), is the intersection of minimal closed sets generated by \(x\) in each linear geometry. In summary, our observations in this paragraph state the following: **Proposition 3.5**.: \(J(x)\) _will be obtained by intersecting the smallest convex set containing \(x\) in each of the three generating linear geometries._ Note importantly that the smallest convex set containing \(x\) in the linear geometry \((X,\mathcal{F}_{i})\) is precisely the down-set with respect to \(\preceq_{i}\). That is, the minimal convex set containing \(x\) in the \(i\)th of the three generating geometries is exactly \(\{y\in X:y\preceq_{i}x\}\). The intersection over \(i\) gives us \(J(x)\). So, we know there is a one-to-one correspondence between elements of \(X\) and \(\operatorname{Ji}(\mathcal{F})\), and we also understand the nature of this correspondence. The final two propositions show that the correspondence preserves ordering, and hence conclude proving that \((\operatorname{Ji}(\mathcal{F}),\subseteq)\) is isomorphic to \(\mathbf{P}=(X,\leq)\). **Proposition 3.6**.: _If \(x\leq y\) in \(\mathbf{P}\), then \(J(x)\subseteq J(y)\)._ Proof.: It suffices to show that \(x\preceq_{i}y\) for each \(i\), because of how we obtain \(J(x)\) and \(J(y)\) this would clearly imply \(J(x)\subseteq J(y)\). Write \(x=(x_{1},x_{2},x_{3})\) and \(y=(y_{1},y_{2},y_{3})\). We are told \(x\leq y\), so we must have \(x_{i}\leq y_{i}\), but this directly implies \(x\preceq_{i}y\) by definition of the lex-ordering. This holds for any \(i=1,2,3\), and so finishes our proof. **Proposition 3.7**.: _If \(x,y\) are incomparable in \(\mathbf{P}\), then \(J(x),J(y)\) are incomparable (in terms of set-inclusion)._ Proof.: Write \(x=(x_{1},x_{2},x_{3})\) and \(y=(y_{1},y_{2},y_{3})\). For \(x,y\) to be incomparable in \(P\), we must precisely have the state that \(x_{i}<y_{i}\), and \(y_{j}<x_{j}\), for some \(i\neq j\). Without loss of generality, assume that \(x_{1}<y_{1}\) and \(y_{2}<x_{2}\). Now, because \(x_{1}<y_{1}\), we have \(x\prec_{1}y\) by definition of 123-lex-ordering. This means \(y\notin J(x)\). Similarly, because \(y_{2}<x_{2}\), we have \(y\prec_{2}x\). This further implies that \(x\notin J(y)\). So, we have \(y\notin J(x)\) and \(x\notin J(y)\), but of course \(x\in J(x)\) and \(y\in J(y)\), and we have hence proved that \(J(x),J(y)\) are incomparable, when ordered by set-inclusion. Finally, we are ready to provide a quick proof of Theorem 3.1. Proof of Theorem 3.1.: For a given \(n\), we constructed in this section a convex geometry \((X,\mathcal{F})\). We showed that the join-irreducibles of \(\mathcal{F}\), denoted \(\operatorname{Ji}(F)\), are in bijection with elements of \(P\), and Propositions 3.6 and 3.7 further show that they are in fact isomorphic as posets. Now, by Proposition 2.18 we know that any representation by spheres of \((X,\mathcal{F})\) will result in \(\operatorname{Ji}(\mathcal{F})\), and hence, \(\mathbf{P}\), being a sphere order. But Theorem 2.21 tells us that for some \(n_{0}\), if \(n>n_{0}\) then \(\mathbf{P}\) is not a sphere order. So, for large enough \(n\), \((X,\mathcal{F})\) cannot be representable by spheres. 
Finally, \((X,\mathcal{F})\) by construction has \(cdim\leq 3\). However, in [10] it is shown that all convex geometries of \(cdim\leq 2\) are representable by circles on the plane. So, in fact for \(n>n_{0}\), \((X,\mathcal{F})\) must have \(cdim=3\), which proves Theorem 3.1. ## 4. Ellipsoid Orders In the survey on geometric containment orders, Fishburn and Trotter [16] discuss various results related to poset representations via containment orders of different geometric objects such as angles, polygons and spheres. They also mention that representations by ellipsoids were not intensely studied. In [15] they have shown that any 2-dimensional poset can be represented as a containment order by a family of _similar_ ellipsoids that share the same center. As for to convex geometries, the following result was shown by J. Kincses. **Theorem 4.1**.: _[_18_]_ _Any finite convex geometry with convex dimension \(t\) can be represented in \(\mathbb{R}^{t}\) with ellipsoids which are arbitrary close to a sphere._ The construction of ellipsoids in Theorem 4.1 does not assume similarity of ellipsoids, but all of them contain a unit sphere in \(\mathbb{R}^{t}\) and are themselves contained in the sphere of radius \(s\). Taking \(s\) close to 1 allows to make them close to the unit sphere. Another result in the same paper [18] formulates an Erdos-Szekeres type of obstruction from Dobbins et al. [11] that shows that not all convex geometries are represented by ellipses on the plane. We also mention the result in [5] about representation of all geometries on 5-element set by ellipses on the plane. Thus, it would be interesting to learn what is the smallest size of base set of a geometry so that representation by ellipsoids exists only in \(\mathbb{R}^{k}\) with \(k>2\). We can now connect representation of geometries by ellipsoids and the notion of ellipsoid order. We mention that Proposition 2.18 could be formulated for ellipsoids in place of spheres, which connects representation of convex geometries and posets. **Theorem 4.2**.: _Every finite poset \(\mathbf{P}=(X,\leq)\) is an ellipsoid order._ Proof.: First, we show that every poset is realized as the poset of join-irreducible elements of some convex geometry. Indeed, starting from \(\mathbf{P}=(X,\leq)\) we can build the lattice \(\mathbf{D}=\operatorname{Down}(X,\leq)\) of down-sets of \(\mathbf{P}\), which is a distributive lattice by Birkhoff's Theorem 2.16. Every finite distributive lattice is a convex geometry, since related closure operator satisfies the the Anti-Exchange Property. (See more general description of lattice properties of finite convex geometries in [2, Theorem 5-2.1]). Since the join-irreducibles of this convex geometry are precisely the down-sets of singletons, i.e. down-sets of the form \(\downarrow x\) for \(x\in X\), we can conclude in the manner of Example 2.15 that \(\mathbf{P}\) is realized by join-irreducible elements of this convex geometry. By Theorem 4.1 this geometry can be represented by ellipsoids in some space \(\mathbb{R}^{t}\), where \(t=cdim(\mathbf{D})\). In particular, the set of join-irreducible elements of \(\mathbf{D}\), which is isomorphic to \(\mathbf{P}\), will provide the ellipsoid containment representation of \(\mathbf{P}\).
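The first step in the proof of Theorem 4.2, namely that every finite poset is recovered as the poset of join-irreducibles of its lattice of down-sets, is easy to confirm computationally on a toy example. The sketch below is our own illustration (it is not taken from [18] or from this paper): it enumerates \(\operatorname{Down}(X,\leq)\) for a small poset and checks that the join-irreducible elements are exactly the principal down-sets \(\downarrow x\).

```python
from itertools import combinations

# A toy poset P = (X, <=): the 2x2 grid ordered componentwise.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
le = lambda a, b: a[0] <= b[0] and a[1] <= b[1]

def is_down_set(S):
    # S is a down-set if it contains everything below each of its elements.
    return all(a in S for b in S for a in X if le(a, b))

# Down(X, <=): all down-sets, forming a distributive lattice under inclusion (Birkhoff's theorem).
down_sets = [S for r in range(len(X) + 1)
             for S in map(frozenset, combinations(X, r)) if is_down_set(S)]

def is_join_irreducible(D):
    # A nonempty D is join-irreducible iff it is not the union of the down-sets strictly below it.
    below = [E for E in down_sets if E < D]
    return bool(D) and (frozenset().union(*below) if below else frozenset()) != D

join_irreducibles = {D for D in down_sets if is_join_irreducible(D)}
principal = {frozenset(a for a in X if le(a, x)) for x in X}   # the down-sets of singletons

assert join_irreducibles == principal
print("Join-irreducibles of Down(X) are exactly the principal down-sets,")
print("so ordering them by inclusion recovers the original poset P.")
```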
2305.16275
CENSUS-HWR: a large training dataset for offline handwriting recognition
Progress in Automated Handwriting Recognition has been hampered by the lack of large training datasets. Nearly all research uses a set of small datasets that often cause models to overfit. We present CENSUS-HWR, a new dataset consisting of full English handwritten words in 1,812,014 gray scale images. A total of 1,865,134 handwritten texts from a vocabulary of 10,711 words in the English language are present in this collection. This dataset is intended to serve handwriting models as a benchmark for deep learning algorithms. This huge English handwriting recognition dataset has been extracted from the US 1930 and 1940 censuses taken by approximately 70,000 enumerators each year. The dataset and the trained model with their weights are freely available to download at https://censustree.org/data.html.
Chetan Joshi, Lawry Sorenson, Ammon Wolfert, Mark Clement, Joseph Price, Kasey Buckles
2023-05-25T17:31:39Z
http://arxiv.org/abs/2305.16275v1
# CENSUS-HWR: a large training dataset for offline handwriting recognition ###### Abstract Progress in Automated Handwriting Recognition has been hampered by the lack of large training datasets. Nearly all research uses a set of small datasets that often cause models to overfit. We present CENSUS-HWR, a new dataset consisting of full English handwritten words in 1,812,014 gray scale images. A total of 1,865,134 handwritten texts from a vocabulary of 10,711 words in the English language are present in this collection. This dataset is intended to serve handwriting models as a benchmark for deep learning algorithms. This huge English handwriting recognition dataset has been extracted from the US 1930 and 1940 censuses taken by approximately 70,000 enumerators each year. The dataset and the trained model with their weights are freely available to download at [http://censustree.org/data.html](http://censustree.org/data.html). Keywords:Handwriting recognition Information Extraction Big Data ## 1 Introduction With the advent of deep learning, researchers have made significant progress in handwriting recognition and transcription. The two most common dataset for the Handwriting Recognition (HWR) are the IAM [12] and RIMES [9] dataset. They both contain Latin characters with English and French handwritten sentences respectively. Although these datasets have been useful in creating handwriting models, additional training data is necessary to create more accurate models that can be generalized to more diverse handwritten documents. Limited training data has resulted in very complex models that usually lack explainability and end up overfitting. Such complex models are hard to replicate and need large GPUs to effectively train. This is not ideal for an average researcher or a student as usually they do not have access to such resources. Additionally, the current datasets for handwriting recognition consist of carefully written text with uniform spacing and do not reflect real-world handwriting. To better train models that are robust to real-world handwriting imperfections, a more diverse and natural dataset is needed. This concern is met by the dataset developed in this research. ## 2 Related Work The IAM (Institut fur Informatik und Angewandte Mathematik/Department of Computer Science and Applied Mathematics) dataset [12] was created to serve as a basis for a variety of offline handwriting recognition tasks. This English Handwriting dataset was particularly useful for recognition tasks where linguistic knowledge beyond the lexicon level is used. Linguistic knowledge can be derived from the underlying corpus [12]. To create this dataset, large collections of corpora with different appearances and contents were used. The Lancaster-Oslo/(LOB) [10] collection of 500 English texts, having 2000 words was used as a basis for the dataset. The texts in the LOB corpus were quite diverse in nature, ranging from review, religion, biography, and fiction to humour, romance and love stories and adventure. The texts were split into fragments of about 3 to 6 sentences with at Figure 1: A bar graph of word count comparing the CENSUS-HWR dataset (BYUHWR) with IAM and RIMES dataset. Figure 2: A bar graph of vocabulary size comparing the CENSUS-HWR (BYUHWR) dataset with IAM and RIMES dataset. least 50 words each which were then printed into forms and several people were asked to write the text printed on the forms using their everyday handwriting. 
To make the image processing part easy, the writers were asked to use rulers as the guiding lines with 1.5 cm space between them. The handwritten words did not contain any compression or deformed words, as they were asked to stop writing if they ran out of space. The handwritten text was written using a ballpoint pen or pencil. These forms were scanned and then labelling was performed. The labels were created by copying the text of the forms and the line feeds were filled manually. For perfect label creation, some manual corrections were made such as deletion, insertion or changes in the text. For text extraction, the skew of the document was corrected, and then a \begin{table} \begin{tabular}{|l|l|l|l|} \hline & **IAM** & **RIMES** & **CENSUS-HWR** \\ \hline Word count & 82,227 & 300,000 & **1,865,134** \\ Vocabulary size & **10,841** & 8,110 & 10,711 \\ Image/form count & 1,066 & 60,000 & **1,812,014** \\ Authors count & 400 & 1,300 & **70,000** \\ \hline \end{tabular} \end{table} Table 1: Comparison of number of words, vocabulary, number of authors and number of images/forms of CENSUS-HWR (BYUHWR) dataset with the IAM and the RIMES dataset. Figure 4: Handwriting data samples from the census images. Figure 3: A bar graph of number of authors comparing the CENSUS-HWR (BYUHWR) dataset with IAM and RIMES dataset. projection method was used to find the position of horizontal lines in the form. With this positional information, the handwritten section was extracted. Later on, the handwritten text was segmented into text lines and each of the text lines was segmented into individual words. The French RIMES(Reconnaissance et Indexation de donnees Manuscrites et de fac similES / Recognition and Indexing of handwritten documents and faxes) project created a training set for the handwriting recognition and document analysis communities [9]. The process of creating this dataset consisted of asking volunteers to write mail in exchange for gift vouchers. The writers were given a fictional identity with their own gender and up to five scenarios one at a time. Each of the scenarios consisted of nine realistic themes including change of personal information, information request, opening/closing of the customer account, modification of contracts, complaints, payment difficulties, reminders, damage declaration and destination providers. Each page was scanned and precisely annotated to extract the maximum information which could be useful for evaluation such as layout structure and textual content for transcription. 300,000 handwritten word snippets were extracted from the letters, where each snippet and corresponding ground truth were generated automatically but controlled manually to create an accurate training set. The ground truths obtained were faithful even to the spelling and grammatical errors. The other datasets for handwriting recognition are KOHTD [13], BanglaLekha [4] and the Kurdish dataset [1] by Rebin M. Ahmed. The KOHTD, written in the Kazakh language, has 3 different scripts which are Cyrillic, Latin and Arabic providing the diverse script with around 900,000 samples. The BanglaLekha, written in Bengali has around 166,000 samples. Ahmed's Kurdish dataset has 40,095 images written by 390 native writers. However, these datasets also suffer from the same issues, having fewer natural handwritten styles, fewer writers, and fewer training samples. The datasets described above do not reflect natural handwriting. 
They are intentionally collected for the aim of handwriting recognition and document analysis. They contain texts that were written carefully in a straight line and special care was taken so the words present in a line or sentence are almost equidistant from each other. Real historical documents are much more messy. For instance, the words in real documents may not be in a straight line, might consist of spelling and grammatical errors, words are overwritten or crossed out and rewritten, the same author might write cursive for a while and then transition to print handwritten words, or the characters might be equidistant in the beginning and then congested at the end due to lack of space. The handwriting community will benefit from a training set that is natural, contains all these errors and is also large enough to avoid over-fitting. The dataset developed in this research meets these criteria and can be used to train models that are more robust to such flaws. ## 3 Corpus and Forms The CENSUS-HWR dataset has been extracted from the US 1910 census, 1930 census and 1940 census. It includes entries for approximately 92 million people who are enumerated in 1910, 123 million people in 1930 and 132 million people in 1940 as described in National Archives microfilm publications for their respective years: T624, T626, T627. This collection, which is a part of Record Group 29 from the Bureau of the Census, includes the 48 states as well as Alaska, Hawaii, American Samoa, Guam, Consult Services, Panama Canal Zone, Puerto Rico, and the Virgin Islands. The census can be used to identify the place of residence on April 1, 1930, for each person that was enumerated. A manual transcription of these images was created by FamilySearch [6][7] and Ancestry.com [2][3]. The census forms consist of large sheets with rows and columns. The schedules were arranged by states, counties, place and enumeration districts, which were not always filed in sequential order. The census takers were asked to record information about all the people in a household. A county was the basic enumeration unit, which was divided into an enumeration district, one for each enumerator. Once the Census forms were completed, they were sent to the Census Office of the Commerce Department in Washington, D.C. 95-97% of the population was covered in the schedule. The information on these Federal Censuses was dependent on the informant and the care taken by the enumerator, and hence they are usually reliable. Some of the information in the forms may be incorrect or deliberately falsified. ## 4 Text extraction and segmentation The handwritten texts have been extracted from the scanned census images using an approach that utilizes scale-invariant feature transform (SIFT) [11] and Random sample consensus (RANSAC) [8]. SIFT is generally used to extract the Figure 5: Example of US 1930 Census key points of objects from reference images. The objects are recognized in a new image by comparing each feature from the new image to key points from the reference images. It finds matching features based on the Euclidean distance of their feature vectors. Subsets of key points that agree on the object and its location, scale & orientation in the new image are identified to filter and find better matches. Object matches that pass all the tests are identified as correct with high confidence [11]. In our implementation, we use a template (reference) image for each form type; where we label the points of its cells. 
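The alignment just described, together with the RANSAC filtering discussed in the next paragraph, can be sketched with standard OpenCV primitives. The snippet below is a simplified illustration rather than the project's actual pipeline: the file names, the example cell coordinates, and the matching thresholds are placeholders, and the real templates carry manually labeled cell corner points for every form type.

```python
import cv2
import numpy as np

# Hypothetical inputs: a scanned census page and the labeled template for its form type.
template = cv2.imread("template_1930.png", cv2.IMREAD_GRAYSCALE)
page = cv2.imread("scanned_page.png", cv2.IMREAD_GRAYSCALE)

# 1. SIFT keypoints and descriptors on both images.
sift = cv2.SIFT_create()
kp_t, des_t = sift.detectAndCompute(template, None)
kp_p, des_p = sift.detectAndCompute(page, None)

# 2. Match descriptors by Euclidean distance and keep good matches via Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des_t, des_p, k=2)
good = [m for m, second in matches if m.distance < 0.7 * second.distance]

# 3. RANSAC-filtered homography mapping template coordinates to page coordinates.
src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_p[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# 4. Map a labeled template cell (x, y, w, h) onto the page and crop the word snippet.
x, y, w, h = 120, 340, 260, 48          # placeholder cell, e.g. one "surname" field
corners = np.float32([[x, y], [x + w, y], [x + w, y + h], [x, y + h]]).reshape(-1, 1, 2)
warped = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
x0, y0 = warped.min(axis=0).astype(int)
x1, y1 = warped.max(axis=0).astype(int)
cv2.imwrite("cell_snippet.png", page[y0:y1, x0:x1])
```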
We compare each sample to its respective template image to find the matching key points. RANSAC uses repeated random sub-sampling to correlate features in the images with the lines in a template image. The method is tolerant of changes in scale or rotation between the template and image. RANSAC is then used to filter out outliers [8]. Based on its findings, it becomes easy to align the images and infer the locations of cells in the table. During the segmentation process, each cell of the census is extracted and saved to an image file for that word. Although the vast majority of census pages were segmented using this method, some pages had severe degradation due to physical damage or image-scanning artefacts, and hence this method could not produce segmented word snippets for them. Figure 6: The pipeline of text extraction using segmentation on the census image and then applying handwriting recognition. ## 5 Labeling Each snippet that was extracted from the image by segmentation was assigned a unique image identifier, the row number, and the field or column. This information was crucial and helped in matching each snippet with the corresponding FamilySearch transcription. This provided us with a labeled training set of millions of images, an unprecedented size for a handwriting training set. These human transcriptions of the names and dates were then used to train a deep learning model provided along with the CENSUS-HWR dataset. This automatic transcription model then generated estimated transcriptions of the other fields in the census. These fields include profession and other fields that were not transcribed by Ancestry and FamilySearch. Two tools have been developed to correct transcriptions that were automatically generated by the deep learning model. Both are novel ways to involve humans in transcription and take advantage of the fact that while reading handwriting is a challenging task, recognizing patterns is not. Thus, the first application that was developed is designed to take advantage of the natural human skill of noticing cases that differ from those around them. The Reverse Indexing Application groups images based on the transcription produced by the deep learning model. Up to 12 of these images are presented to the user at once with the transcribed version of the name at the top of the screen. This process allows the user to see multiple versions of the same name and identify any that look different from the rest of the group. All transcriptions are converted to lowercase letters to remove complexity from the process, which allows the user to validate only the text of the images rather than their capitalization. The images that are marked by the human as likely to be incorrect are loaded into a second application that allows individuals to submit a free-form response to correct the transcription. These two applications are used to create labeled training data with an unprecedented level of efficiency and accuracy. Figure 7: Reverse indexing, where humans spot errors in transcriptions. This training set will allow us to continue to improve the results of the deep learning model, and progressively more accurate versions of this training set will be shared with other researchers. These applications also have the potential to utilize a large number of volunteers in crowd-sourced citizen science projects. A version of the Reverse Indexing application is being used on tablets in the prison system for inmates who are willing to provide service hours in exchange for tablet use time.
There are over 500,000 tablets in prisons that may be used for Reverse Indexing as this application is rolled out in more states. ## 6 Further characteristics of the dataset Unlike the intentionally curated English IAM and French RIMES dataset, which were very clean, correctly spaced and without distortions, the CENSUS-HWR dataset contains various distortions, imperfections, mistakes and errors. The images have crossed out words, rewritten or overwritten words, spelling mistakes, congested words, etc. Such a diverse handwritten and natural text truly represents the style people write in a real world setting. This real world representation of handwritten text with various imperfections will allow researchers to develop models which are more robust and can still work efficiently even with imperfect documents. This kind of training data is needed so that researchers can explore new algorithms in the realm of handwriting recognition. Note that all labels in the dataset are given in lowercase letters, due to the reverse indexing process. This was done to remove complexity from the indexing validation process, since almost all images are lowercase single words with only the first letter capitalized. Since the dataset is validated by crowd-sourced volunteers, it is expected that there are more errors than in professionally annotated datasets. We plan to explore systems for identifying and fixing the mislabeled data in future work, but the current dataset is provided as is. Table 2 details the composition and source of images in the dataset. ## 7 Handwriting model Along with the dataset, we also provide source code and weights for a handwriting model trained on the CENSUS-HWR dataset. The model architecture is based on Bluche et al., 2017 [5]. \begin{table} \begin{tabular}{|l|l|l|} \hline Source & Type & Count \\ \hline 1930 Census & Surname & 1,178,102 \\ 1940 Census & Education Level & 495,183 \\ 1940 Census & Occupation & 114,301 \\ 1940 Census & Industry & 24,428 \\ \hline \end{tabular} \end{table} Table 2: Sources and types of images in the CENSUS-HWR dataset. The model takes gray scale images as input, which it resizes to 64 x 512 pixels. Original image aspect ratios are preserved, and padding is added as necessary. Six gated convolution blocks reduce the image to 1 x 64 with 512 features. Two bidirectional RNN blocks then map the features to the output character set. The provided model defaults to the same character set as Start, Follow, Read [14]. The model is trained with CTC loss. Note that the provided model is limited to predicting 64 characters per image based on the input image size, since most of the dataset images are single words. We trained preliminary models using 10-fold cross validation on the full dataset without augmentation. The training set was randomly split into training/validation sets with an 85%-15% split. Models were trained for twelve epochs, and the weights with the lowest loss on the validation set were saved. Each model was then evaluated on the withheld validation section of the dataset. Table 3 contains the results of each test. We then trained a model on the full dataset under the same conditions. We estimate the character error rate (CER) of the model as the mean of the cross-validation tests to be 4.6478%. This model is provided as a proof of concept for using this dataset. ## 8 Conclusion This paper presents the CENSUS-HWR, which provides a large and natural dataset that represents a diverse variety of natural handwritten styles. 
This is the largest handwriting dataset with 1.8 million handwritten samples and 70,000 authors that was extracted from the US 1930 census and 1940 census. This dataset is intended to assist the handwriting recognition community in developing robust models. \begin{table} \begin{tabular}{|l|l|} \hline Section withheld & CER \\ \hline 1 & 4.6827\% \\ 2 & 4.6726\% \\ 3 & 4.6830\% \\ 4 & **4.7115\%** \\ 5 & 4.6662\% \\ 6 & 4.5915\% \\ 7 & 4.6460\% \\ 8 & 4.6218\% \\ 9 & **4.5682\%** \\ 10 & 4.6346\% \\ \hline Mean & **4.6478\%** \\ \hline \end{tabular} \end{table} Table 3: Character Error Rates (CER) from cross validation tests. The highest and lowest error rates are bolded.
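To make the description of the released model in Section 7 concrete, here is a minimal PyTorch sketch of the same family of architecture: gated convolution blocks that collapse a 64 x 512 grayscale image to a 1 x 64 feature map with 512 channels, two bidirectional recurrent layers, and a CTC objective. It is an illustration only; the exact channel counts, gating, strides, and character set of the released CENSUS-HWR model follow Bluche et al. [5] and the published code, not this snippet.

```python
import torch
import torch.nn as nn

class GatedConvBlock(nn.Module):
    """Convolution whose output is modulated by a learned sigmoid gate."""
    def __init__(self, c_in, c_out, stride):
        super().__init__()
        self.feat = nn.Conv2d(c_in, c_out, 3, stride=stride, padding=1)
        self.gate = nn.Conv2d(c_in, c_out, 3, stride=stride, padding=1)

    def forward(self, x):
        return torch.tanh(self.feat(x)) * torch.sigmoid(self.gate(x))

class HandwritingRecognizer(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        chans = [1, 32, 64, 128, 256, 512, 512]
        strides = [(2, 2), (2, 2), (2, 2), (2, 1), (2, 1), (2, 1)]  # 64x512 -> 1x64
        self.conv = nn.Sequential(*[GatedConvBlock(chans[i], chans[i + 1], strides[i])
                                    for i in range(6)])
        self.rnn = nn.LSTM(512, 256, num_layers=2, bidirectional=True)
        self.head = nn.Linear(512, num_classes)        # num_classes includes the CTC blank

    def forward(self, images):                          # images: (B, 1, 64, 512), grayscale
        f = self.conv(images)                           # (B, 512, 1, 64)
        f = f.squeeze(2).permute(2, 0, 1)               # (T=64, B, 512)
        f, _ = self.rnn(f)                              # (T, B, 512)
        return self.head(f).log_softmax(-1)             # (T, B, num_classes) for CTCLoss

# Minimal training step with CTC loss; labels and lengths are placeholders.
model = HandwritingRecognizer(num_classes=80)
ctc = nn.CTCLoss(blank=0, zero_infinity=True)
images = torch.randn(4, 1, 64, 512)
targets = torch.randint(1, 80, (4, 12))                 # avoid the blank index 0
log_probs = model(images)                               # (64, 4, 80)
input_lengths = torch.full((4,), log_probs.size(0), dtype=torch.long)
target_lengths = torch.full((4,), 12, dtype=torch.long)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
```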
2306.07628
Comment on "Gravitational Pair Production and Black Hole Evaporation"
We scrutinize the recent Letter "Gravitational pair production and black hole evaporation" by M.F. Wondrak, W.D. van Suijlekom and H. Falcke [Phys. Rev. Lett. 130, 221502 (2023); arXiv:2305.18521]. We show that some consequences based on the proposed imaginary part of the lowest order effective action are in sharp tension with exact results on pair creation in electrodynamics and cosmology. This casts serious doubt on their claims for particle production in a Schwarzschild spacetime.
Antonio Ferreiro, Jose Navarro-Salas, Silvia Pla
2023-06-13T08:55:32Z
http://arxiv.org/abs/2306.07628v2
# Comment on "Gravitational Pair Production and Black Hole Evaporation" ###### Abstract In the Letter [1], the authors claim to explore a new avenue to address black hole radiation within the general framework of quantum field theory interacting with a prescribed external background. This is a well-established field and one of its main predictions is the spontaneous creation of particles. The first fundamental example is the creation of pairs by a constant electric field [2]. The production rate is derived from the imaginary part of the "effective action" \(W\), where \(\langle 0_{+}|0_{-}\rangle=e^{iW}\) is the vacuum persistence amplitude. Gravitational pair creation was first derived in a cosmological scenario [3], where the frequency-mixing mechanism was introduced as the new basic tool [4; 5]. The frequency-mixing mechanism fully reproduce the Schwinger effect [6; 7], showing the self-consistency of the general framework. The last fundamental example is the Hawking effect. The late-time thermal radiation predicted by the frequency-mixing mechanism requires a time-dependent process (gravitational collapse) [8]. During the collapse a transient and tiny contribution to particle creation could be expected. However, at late times, only the steady thermal radiation remains due to the event horizon. Within the framework of quantum field theory in curved spacetime [4; 5; 9], all derivations of particle production agree with Hawking's result. Only the introduction of new ingredients such as backreaction effects or quantum gravity may lead to deviations. However, the approach of [1] is still in the realm of this framework. Hence, it raises questions regarding the authors' conclusions, which appear to deviate from Hawking's findings. In the rest of this comment we will point out physical inconsistencies of the main formula of [1] \[\mathrm{Im}(W) = \frac{\hbar N}{64\pi}\int\mathrm{d}^{4}x\sqrt{-g}\Big{[}\frac{1} {2}\left(\xi-\frac{1}{6}\right)^{2}R^{2}\] \[+ \frac{1}{180}\left(R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}-R_{ \mu\nu}R^{\mu\nu}\right)+\frac{1}{12}\Omega_{\mu\nu}\Omega^{\mu\nu}\Big{]}\,\] which is assumed to be the imaginary part of the effective action \(-\)obtained in the "weak field approximation" for massless fields\(-\) and the origin of the predicted local pair production. First, we stress that (1) is obtained to lowest order in a perturbative expansion, while the standard way to obtain the non-perturbative Schwinger effect using the "weak field approximation" is to perform a _resummation of all terms_ of the proper-time expansion for constant backgrounds [10; 11]. This infinite sum gives the Euler-Heisenberg action, from which the Schwinger pair production rates can be derived [12]. In contrast, the authors claim to recover the Schwinger effect just from the second order of the proper-time series \(\mathcal{O}(s^{2})\). The perturbative formula (1) predicts results contradicting, in general, the Schwinger pair creation rates. This tension can be easily shown for a constant electromagnetic background such that \(|\vec{E}|=|\vec{B}|\) (\(\Omega_{\mu\nu}\Omega^{\mu\nu}=0\)). (1) gives zero particle creation, irrespective of the value of the second electromagnetic invariant. However, _if \(\vec{E}\) and \(\vec{B}\) are parallel_ this result is in contradiction with the non-perturbative solution [12; 13] \[\mathrm{Im}\,L_{\mathrm{eff}}=\frac{q^{2}E^{2}}{16\hbar\pi^{2}}\sum_{n=1}^{ \infty}\frac{(-)^{n+1}}{n\sinh n\pi}\,. 
\tag{2}\] For fermions, this non-vanishing result is enforced by the axial anomaly. Moreover, it is well known that the particle production is _exactly zero for purely magnetic configurations_[2; 14], in sharp tension with the result derived from (1), which predicts a non-unitary outcome (\(\mathrm{Im}(W)<0\) and \(|\langle 0_{+}|0_{-}\rangle|^{2}>1\)). The inconsistencies of (1) also extend to gravitational pair creation. Consider a _conformally invariant_ neutral scalar field (\(\xi=1/6\)) in a FLRW space-time. It is a well-established exact result that _there is no particle creation_[3; 4; 5]. The same is true for any conformally invariant field theory (Maxwell theory or massless spin-1/2 fields). However, for this background configuration the second term in (1) is non-zero (\(R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}-R_{\mu\nu}R^{\mu\nu}\neq 0\)), and according to [1] it would lead to particle creation, which is not consistent with the exact result just mentioned. This term is just the gravitational conformal anomaly, and in a FLRW space-time it only accounts for vacuum polarization [5; 15]. The above discussion shows the incompleteness of the imaginary part of the (lowest-order) effective action (1) used to derive pair creation. Its perturbative predictions differ significantly from well-known exact results. It is very unlikely that finite next-to-leading-order terms correct the predictions of the lowest-order term (1). Note that, according to [1] (supplementary material), these higher-order contributions to (1) are inverse powers of the mass, such that the massless limit may even be problematic. A more careful analysis including, e.g., non-local terms could improve the predictions for particle creation consistently with well-established results. In summary, although the application to a Schwarzschild spacetime might appear appealing, the inconsistencies found in electrodynamics and cosmology raise serious doubts regarding its main claim. _Acknowledgements._ We thank C. Schubert for very useful comments. This work is supported by the Spanish Grants PID2020-116567GBC2-1 and PROMETEO/2020/079 (Generalitat Valenciana). A.F. is supported by the Margarita Salas fellowship MS21-085 of the University of Valencia. S.P. is supported by the Leverhulme Trust, Grant No. RPG-2021-299.
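For orientation, the alternating series in Eq. (2) converges very rapidly and is easy to evaluate numerically. The short script below is only illustrative: it evaluates the dimensionless sum and leaves the prefactor \(q^{2}E^{2}/16\hbar\pi^{2}\) symbolic, showing that the non-perturbative imaginary part is small but manifestly non-zero for parallel fields with \(|\vec{E}|=|\vec{B}|\), in contrast with the vanishing prediction obtained from (1).

```python
import math

def schwinger_sum(terms=20):
    """Dimensionless series sum_{n>=1} (-1)^(n+1) / (n * sinh(n*pi)) from Eq. (2)."""
    return sum((-1) ** (n + 1) / (n * math.sinh(n * math.pi)) for n in range(1, terms + 1))

for terms in (1, 2, 3, 5, 20):
    print(f"first {terms:2d} term(s): {schwinger_sum(terms):.10f}")

# The series converges quickly to about 0.0848, so
#   Im L_eff = (q^2 E^2 / 16 hbar pi^2) * 0.0848... > 0,
# i.e. pair creation does not vanish even though Omega_{mu nu} Omega^{mu nu} = 0.
```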
2307.08632
ITS3: A truly cylindrical inner tracker for ALICE
After the successful installation and first operation of the new Inner Tracking System (ITS2), which consists of about 10 m$^2$ of monolithic silicon pixel sensors, ALICE is pioneering the usage of bent, wafer-scale pixel sensors for the ITS3 for Run 4 at the LHC in 2029. Sensors larger than typical reticle sizes can be produced using the technique of stitching. At thicknesses of about 30 $\mu$m, the silicon is flexible enough to be bent to radii of the order of 1 cm. By cooling such sensors with a forced air flow, it becomes possible to construct a detector with minimal material budget. The reduction of the material budget and the improved pointing resolution will allow new measurements, in particular of heavy-flavor decays and electromagnetic probes. Mechanical studies have shown the sensors to be unaffected by bending, and bent sensors have been shown to be fully efficient in test beams. New sensor developments for the ITS3 have shown promising results for fluences even beyond those expected for ITS3.
Jory Sonneveld
2023-07-17T16:46:23Z
http://arxiv.org/abs/2307.08632v1
# ITS3: A truly cylindrical inner tracker for ALICE ###### Abstract: After the successful installation and first operation of the new Inner Tracking System (ITS2), which consists of about 10 m\({}^{2}\) of monolithic silicon pixel sensors, ALICE is pioneering the usage of bent, wafer-scale pixel sensors for the ITS3 for Run 4 at the LHC in 2029. Sensors larger than typical reticle sizes can be produced using the technique of stitching. At thicknesses of about 30 \(\mu\)m, the silicon is flexible enough to be bent to radii of the order of 1 cm. By cooling such sensors with a forced air flow, it becomes possible to construct a detector with minimal material budget. The reduction of the material budget and the improved pointing resolution will allow new measurements, in particular of heavy-flavor decays and electromagnetic probes. Mechanical studies have shown the sensors to be unaffected by bending, and bent sensors have been shown to be fully efficient in test beams. New sensor developments for the ITS3 have shown promising results for fluences even beyond those expected for ITS3. The current ALICE inner tracking system (ITS2) is the first with monolithic active pixel sensors (MAPS) at the Large Hadron Collider (LHC) at CERN [1]. It consists of 3 inner layers at distances from 22 mm to 42 mm from the interaction point (IP) that have a material budget of only 0.36% \(X_{0}\). The outer tracker consists of 4 layers of MAPS at 194 mm to 395 mm from IP with a material budget of 1.1% \(X_{0}\). It features ALICE PIxel DEtectors (ALPDEs) [2] with \(27\times 29\)\(\mu\)m\({}^{2}\) pixels. With 12.5 Gigapixels and 10 square meters active area, it is the largest pixel detector built to date. It has successfully taken data since September 2021. ## 1 The upgrade of the ALICE inner tracking system for Run 4 In 2027, it is foreseen to replace the inner barrel layers of ITS2 with new, truly cylindrical bent sensors, the ITS3. A model is shown on the right in Fig. 1. The 24120 chips from 200 mm wafers placed at distances down to 22 mm from IP will be replaced with stitched, wafer-scale sensors from 300 mm wafers (see Fig. 2) bent in half-cylindrical shapes (see Fig. 3), with a minimum radius of the innermost layer of 18 mm from the IP. The material budget will be decreased even further down from 0.36% \(X_{0}\) per layer to about 0.05% \(X_{0}\) per layer, as shown on the right in Fig. 3. To improve the proximity to IP, the current beam pipe with an outer radius of 18 mm [4] will be replaced with one of only 16 mm radius and 500 \(\mu\)m beryllium corresponding to 0.14% \(X_{0}\). An example of a pad wafer from one of the chip submissions, engineering run 1 (ER1), that uses stitching on a 30 cm wafer, is shown in Fig. 2. In the process of stitching, design blocks are put together during the processing of the silicon. This can make a chip larger than the field of view of the lithographic equipment. The ALICE ITS3 upgrade for Run 4 will feature 6 half-layer sensors of 26 cm long in the \(z\) direction, two in each layer, that will be thinned to 40-50 \(\mu\)m. Each sensor will consist of 3-5 wafer-scale stitched MAPS, with one layer 0 sensor spanning 53.3 mm in \(r\varphi\). These will be mechanically held in place by carbon foam. The structure that is foreseen is shown on the left in Fig. 3. The new beam pipe will allow to place the ITS3 innermost layer 0 at only 18 mm from the interaction point. Figure 1: Left: layout of a monolithic stitched sensor prototype. 
Right: ITS3 engineering model 1 made of 3 layers of 40-50 \(\mu\)m thick dummy silicon. Figure from [3]. Figure 2: A 30 cm pad wafer using stitching (middle lanes) from engineering run 1. ## 2 ITS3 requirements and performance The ITS3 vertex detector has to withstand a fluence resulting from a non-ionizing energy loss of \(\Phi_{\rm eq}=10^{13}\) 1 MeV \(n_{\rm eq}/\)cm\({}^{2}\) and a total ionizing dose of 10 kGy. The particle rates will be up to 2.2 MHz/cm\({}^{2}\) in the innermost layer for a Pb-Pb interaction rate of 50 kHz. The ITS3 is expected to improve the tracking efficiency compared to the current vertex detector ITS2, especially at low transverse momentum, as shown on the left in Fig. 4. It will also allow a factor of two improvement in the pointing resolution in the \(r\phi\) plane over the full range of transverse momenta, as can be seen in the middle of Fig. 4. To simulate this, a fast Monte Carlo tool that includes multiple scattering, secondary interactions and detector occupancy was used [5]. This tool ignores the particles' energy loss in the beampipe and in the detector. Figure 4: Comparison of the track-reconstruction efficiency (left panel) and pointing resolution in the transverse plane (middle panel) with the current ITS2 and the planned upgrade ITS3 detector using a fast simulation Monte Carlo Tool as well as a full Monte Carlo simulation (circles, middle panel only). Figures from [5]. Right: ratio of the statistical significances expected with the ITS3 and ITS2 for the reconstruction of the \(\rm B^{0}_{s}\to D^{-}_{s}\pi^{+}\) decay in Pb–Pb collisions at \(\sqrt{s_{\rm NN}}=5.5\) TeV. At transverse momenta of around 3 GeV and 14 GeV, the measurement is only possible with the ITS3 upgrade detector. Figure 3: Left: ITS3 layout with 3 half-layer sensors (green) held in place by carbon foam (gray) and surrounded by a cylindrical support structure (beige). The beam pipe (orange) has 16 mm radius to allow layer 0 to be placed at 18 mm from the interaction point. Right: Material budget in current ALICE ITS2 layer 0. By removing material such as kapton and aluminum for circuit boards, water for cooling, and and carbon and glue for mechanical support, wafer-size bent, stitched silicon sensors would result in a material budget of only 0.05% \(X_{0}\) (the bottom yellow area in this graph). Figures from [5]. The comparison of the production yields of strange and non-strange hadrons in the heavy-flavor sector is a necessary measurement to understand heavy-flavor hadronization. In the beauty sector, both ALICE and CMS made first measurements sensitive to the \(B_{\mathrm{s}}^{0}\)-to-non-strange B meson yield ratio, but the uncertainties prevented to conclude about a possible enhancement with respect to pp collisions [6, 7]. The ITS3 will allow for a large improvement on this measurement as well as for an extension of the measurement to much lower transverse momenta, as shown on the right in Fig. 4. The large improvements in tracking, vertexing, and physics performance expected with ITS3 are all results of the reduced pixel pitch, the very close proximity to the interaction point - a mere 18 mm - and the very low material budget. ## 3 Reduced material budget The very low material budget, which contributes to the excellent expected performance of the ITS3 vertex detector, results from removing all the "unnecessary" material in the current ALICE inner tracking system. 
Circuit boards with kapton and aluminum are, for example, not required if power distribution and data transmission are integrated into the silicon. This is achieved with long, stitched, wafer-scale sensors. These same large-area sensors that are bent around the beam pipe also need less mechanical support, reducing the amount of carbon and glue needed. Finally, if the power consumption of these sensors can be kept below 20 mW/cm\({}^{2}\), water cooling can be replaced by air cooling, further reducing the material budget. Overall, as can be deduced from Fig. 3, the material budget per layer of 0.35% \(X_{0}\) for the current ITS2 can be reduced to 0.05% for the ITS3. To test the concept of air cooling, a setup with a wind tunnel and a laser measurement system has been commissioned, as shown in Fig. 5. Mechanical and stability tests are ongoing. Figure 5: Left: an ALPIDE monolithic active pixel sensor bent by hand. Right: air cooling is being extensively studied in a commissioned setup at CERN. The bending of the silicon sensors is studied extensively: first tests of 40 \(\mu\)m thick "superALPIDEs" (multiple ALPIDEs from a wafer that were not cut apart) have demonstrated successful bending to 18 mm, as shown in Fig. 6. Subsequent beam tests have proven that ALPIDEs in an ITS3 mock-up called the \(\mu\)ITS3, with sensors bent to the ITS3 radii of 18, 24, and 30 mm, show an efficiency and resolution consistent with flat ALPIDEs [8]. The spatial resolution was also uniform across the different radii. ## 4 Sensor R&D Large-scale wafers of 30 cm and the process of stitching are available in a Tower Partners Semiconductor Co (TPSCo) 65 nm technology. As the ALPIDEs were produced in a 180 nm CMOS imaging process provided by TowerJazz [2], active sensor research and development is ongoing for the new 65 nm technology. There are several submissions planned, and a prototype of a final wafer-scale chip for the detector is expected at the end of 2024. Characterization of many small prototypes from a first prototype run on a multi-layer reticle showed that a digital pixel test structure, operated at a room temperature of 20 \({}^{\circ}\)C, remains 100% efficient after being irradiated with the ITS3 expected fluence and dose, and is still operable at 99% efficiency after a fluence of 100 times that expected load, \(\Phi_{\rm eq}=10^{15}\) 1 MeV \(n_{\rm eq}/\)cm\({}^{2}\) [9], as shown in Fig. 7. This digital pixel test structure has \(32\times 32\) pixels of 15 \(\mu\)m pitch whose position is time-encoded in an asynchronous digital readout. Charge loss occurs in the corners of a pixel far from the collection electrode, as expected from the very little to no charge sharing. The spatial track resolution was determined to be 2.4 \(\mu\)m. The first structures bent to a radius of 18 mm, as shown on the right in Fig. 6, were successfully tested in the laboratory with an \({}^{55}\)Fe source. The first stitched sensor prototypes are now being investigated. A layout is shown on the left in Fig. 1. There are two different structures, the monolithic stitched sensor (MOSS) of 14 \(\times\) 259 mm\({}^{2}\) with \(6.72\times 10^{6}\) pixels, and the monolithic stitched sensor with timing (MOST) of 2.5 \(\times\) 259 mm\({}^{2}\) with \(0.9\times 10^{6}\) pixels. The full structure for the ITS3 will be 2.5 times as large. Pixels of both 18 and 22.5 \(\mu\)m pitch are available. The structures will be tested for uniformity and yield.
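As a rough, purely illustrative cross-check of the air-cooling target quoted in Section 3 (our own back-of-the-envelope estimate, not a number from the collaboration), the 20 mW/cm\({}^{2}\) limit can be combined with the approximate layer geometry given above (radii of 18, 24, and 30 mm and sensors 26 cm long in \(z\)):

```python
import math

power_density = 20e-3           # W/cm^2, air-cooling target quoted in Section 3
length_z = 26.0                 # cm, sensor length along the beam axis
radii = [1.8, 2.4, 3.0]         # cm, approximate radii of layers 0, 1, 2

# Full cylinders, i.e. both half-layer sensors of each layer.
total_area = sum(2 * math.pi * r * length_z for r in radii)
print(f"active area  ~ {total_area:6.0f} cm^2")
print(f"power budget ~ {power_density * total_area:6.1f} W at 20 mW/cm^2")
# Roughly 1180 cm^2 and 24 W in total, which is the scale of heat load
# that the forced-air-flow cooling concept is intended to remove.
```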
Figure 6: Bent “SuperALPIDEs” (left), or multiple ALPIDEs that were produced for ITS2 from a single wafer (schematic, middle right). Bending tests have proven to be successful with this chip as well as with regular ALPIDEs bent and studied in beam test facilities. Right: a 65 nm prototype for ITS3 has also been successfully bent and tested. Figure 7: Detection efficiency and fake-hit rate for non-irradiated sensors and for sensors irradiated with different fluences and dose. This 65 nm digital pixel test structure was shown to be 100% efficient at room temperature after the ITS3 expected radiation load and still operable at 99% efficiency after a fluence of 100 times this expected load. Figure from [9]. ## 5 Summary and outlook The ALICE collaboration plans the installation of new inner layers (ITS3) in 2027 for the ALICE inner tracking system for LHC Run 4. The aim is to use truly cylindrical wafer-scale monolithic active pixel sensors. Silicon flexibility and bending have been proven with routine tests and a full mock-up of the ITS3 was shown to be efficient when bent to the ITS3 target radii. For access to wafer-scale stitched sensors, a new 65 nm CMOS imaging technology is used. The first prototypes reach 100% detection efficiency at room temperature at the ALICE ITS3 expected fluence of \(\Phi_{\rm eq}=10^{13}\) 1 MeV \(n_{\rm eq}/{\rm cm}^{2}\), and the sensor is still operable at room temperature after \(\Phi_{\rm eq}=10^{15}\) 1 MeV \(n_{\rm eq}/{\rm cm}^{2}\). The first stitched sensors are being tested now. The ITS3 R&D will pave the way to thin, low-power sensors that could be used in future experiments like ALICE3 [10] and will enable a wealth of new precision measurements.
2308.06924
FedEdge AI-TC: A Semi-supervised Traffic Classification Method based on Trusted Federated Deep Learning for Mobile Edge Computing
As a typical entity of MEC (Mobile Edge Computing), 5G CPE (Customer Premise Equipment)/HGU (Home Gateway Unit) has proven to be a promising alternative to traditional Smart Home Gateway. Network TC (Traffic Classification) is a vital service quality assurance and security management method for communication networks, which has become a crucial functional entity in 5G CPE/HGU. In recent years, many researchers have applied Machine Learning or Deep Learning (DL) to TC, namely AI-TC, to improve its performance. However, AI-TC faces challenges, including data dependency, resource-intensive traffic labeling, and user privacy concerns. The limited computing resources of 5G CPE further complicate efficient classification. Moreover, the "black box" nature of AI-TC models raises transparency and credibility issues. The paper proposes the FedEdge AI-TC framework, leveraging Federated Learning (FL) for reliable Network TC in 5G CPE. FL ensures privacy by employing local training, model parameter iteration, and centralized training. A semi-supervised TC algorithm based on Variational Auto-Encoder (VAE) and convolutional neural network (CNN) reduces data dependency while maintaining accuracy. To optimize model light-weight deployment, the paper introduces XAI-Pruning, an AI model compression method combined with DL model interpretability. Experimental evaluation demonstrates FedEdge AI-TC's superiority over benchmarks in terms of accuracy and efficient TC performance. The framework enhances user privacy and model credibility, offering a comprehensive solution for dependable and transparent Network TC in 5G CPE, thus enhancing service quality and security.
Pan Wang, Zeyi Li, Mengyi Fu, Zixuan Wang, Ze Zhang, MinYao Liu
2023-08-14T04:03:24Z
http://arxiv.org/abs/2308.06924v1
FedEdge AI-TC: A Semi-supervised Traffic Classification Method based on Trusted Federated Deep Learning for Mobile Edge Computing ###### Abstract As a typical entity of MEC (**Mobile Edge Computing**), 5G CPE (**C**ustomer **P**remise **E**uipment) has proven to be a promising alternative to traditional HGU (**H**ome **G**ateway **U**nit). Network TC (**T**raffic **C**assification) is a vital service quality assurance and security management method for communication networks, which has become a crucial functional entity in 5G CPE/HGU. In recent years, many researchers have applied Machine Learning (ML) or Deep Learning (DL) to TC, namely AI-TC, to improve its performance. However, AI-TC methods face significant challenges, including high data dependency, exhaustively costly traffic labeling, and network subscribers' privacy. Besides, as the AI-TC carrier, 5G CPE/HGU's limited computing resources often become the bottleneck of models for efficient classification. Furthermore, the long-standing problem of the "black box" for AI-TC models has always perplexed network operators regarding the model's transparency and credibility, i.e., AI model interpretability. Therefore, how to achieve an efficient and trusted classification carried on the "weak computing power" network entity while protecting user privacy has become the key to ensuring the service quality and security of the home network. This paper presents the FedEdge AI-TC framework, a novel AI-TC approach for implementing trusted Federated Learning (FL) based efficient Network TC in 5G CPE/HGU. First, FedEdge AI-TC effectively protects the data privacy of network subscribers by proposing an FL based framework of local training, model parameters iterating, and centralized training. Second, a semi-supervised TC algorithm based on Variational Auto-Encoder (VAE) and convolutional neural network (CNN) is designed to reduce data dependence while keeping the TC accuracy. Finally, XAI-Pruning, an AI model compression method, combined with the DL model interpretability, is proposed to condense the model and interpret it globally and locally to achieve light-weighted AI-TC model deployment while building the trust in their decision of network operators. To demonstrate the efficiency of the proposed method, we conducted some experimental evaluations on commonly used public benchmark datasets and real network datasets. The results show that FedEdge AI-TC can outperform the benchmarking methods regarding the accuracy and achieve excellent TC performance of model inference on 5G CPE/HGU with limited computing resources, which effectively protects the users' privacy and improve the model's credibility. traffic classification, edge computing, federated learning, variational auto-encoder, semi-supervised, model interpretability. ## I Introduction As a distinct entity of MEC, 5G CPE has gradually become an alternative to HGU. Network traffic classification has played a crucial role in ensuring service quality and managing security for home networks. It is a critical functional element within 5G CPE. It finds extensive applications in QoS (**O**uality of **S**ervice) / QoE (**Quality** of **E** Experience) management, network resource optimization, congestion control, and intrusion detection. With the popularity of smart homes, many applications such as video surveillance, fire and smoke detection, smart appliances, VR/AR, and others have emerged alongside traditional internet services like high-definition videos and online gaming. 
These applications impose demanding requirements on the network's QoS, including fast and flexible customization of services, real-time responsiveness, and high reliability. Thus, home networks exhibit four significant trends: "Terminals Heterogeneity, Applications Diversity, High Privacy, and Rapid Evolution." Traffic classification in home networks, as an important prerequisite for fine-grained network resource management, has become one of the crucial security measures for smart homes. As shown in Fig. 1, the 5G CPE/Edge Gateway serves as the "connection point" between the smart home and the wide area network and is crucial for the reliable forwarding of household application traffic. AI-TC based on the 5G CPE/Edge Gateway is therefore the key to achieving fine-grained network resource management, QoE assurance, and intrusion detection in home networks. Fig. 1: The Scenario of AI-TC for MEC. The development of Network TC has generally gone through three stages. In the first stage, TC methods were mainly based on port matching or DPI (Deep Packet Inspection). However, this type of technology quickly became ineffective with the increasing use of techniques such as tunneling, encryption, and random ports, and because of security issues such as user privacy breaches. The second stage primarily leveraged machine learning techniques to extract the underlying patterns of the traffic features of different services/applications/attacks and achieved TC by discriminating between them in the data space. However, such methods require the extraction of high-quality traffic features as the training inputs for ML. The extraction and selection of these features heavily rely on the domain expertise of network specialists and are time-consuming and labor-intensive. In the third stage, with the rapid development of cloud computing, big data, and especially deep learning and high-performance computing technology, feature learning from massive traffic data has become feasible, opening up new possibilities for improving TC performance. DL has three notable strengths: automatic feature extraction, the exploration of deep nonlinear features, and the availability of many classical models from computer vision/image/text/speech that can be reused. These advantages are all lacking in ML-based TC methods. Several DL-based TC technologies have been proposed recently, including CNN/AE/MLP/LSTM/GAN-based methods, which have achieved better classification performance than ML-TC[1, 2, 3, 4, 5]. However, applying DL technology to smart home network traffic classification faces three major challenges. Firstly, DL models heavily rely on a large volume of online behavior data from home users, which raises concerns about highly sensitive user privacy. Additionally, the collection and labeling of traffic samples are time-consuming and labor-intensive. Secondly, the limited computing resources of the 5G CPE/edge gateway often become the bottleneck for efficient classification with AI-TC models. Thirdly, the "black box" problem of DL classification models has long undermined their trustworthiness for home users and network operators. Therefore, efficiently achieving trusted classification of home network traffic on a "weak computing power" gateway device while protecting user privacy is crucial to ensuring service quality and security. Federated Learning is a distributed machine learning technology providing privacy protection, presenting a novel application paradigm that balances data privacy protection and shared computing[6].
FL constructs a global model based on virtual fusion data through distributed model training among multiple data sources with local data, without exchanging local data but model parameters or intermediate results. In recent years, FL has been widely used in industries with high sensitivity to data privacy, such as finance and medical care, and has made significant progress[7]. Inspired by this, this paper proposes FedEdge AI-TC, an AI traffic classification method based on federated learning of 5G CPE/edge gateway. This method uses the FL framework to train the traffic classification model of the home network without uploading home network data to a centralized server but executing local distributed model training on a 5G CPE/edge gateway. The global traffic classification model is constructed by exchanging model parameters with the centralized server while protecting the privacy of home users. In addition, considering that traffic sample collection and labeling are time-consuming and labor-intensive, we design a semi-supervised traffic classification algorithm based on VAE and CNN to reduce dependency on traffic sample data. Finally, XAI-Pruning, an AI model compression method, combined with the DL model interpretability, is proposed to condense the model and interpret it globally and locally to achieve light-weighted AI-TC model deployment while building the trust in their decision of network operators. To demonstrate the efficiency of the proposed method, we conducted some experimental evaluations on commonly used public benchmark datasets and real network datasets. The results show that FedEdge AI-TC can outperform the benchmarking methods regarding the accuracy and achieve excellent TC performance of model inference on 5G CPE/HGU with limited computing resources, which effectively protects the users' privacy and improve the model's credibility. The contributions of this paper are as follows: 1. We propose a 5G CPE traffic classification method FedEdge AI-TC based on federated learning, which effectively protects the privacy of home user data by constructing the FL framework of local training, parameter updating, and centralized training; 2. A semi-supervised traffic classification algorithm based on VAE and CNN is designed to reduce the dependence on traffic sample data; 3. A pruning method based on DL model interpretability (XAI-Pruning) is proposed for model compression, and the model is globally and locally explained to increase model transparency and credibility; 4. Experiments on public and self-built datasets show this method can achieve high traffic classification accuracy under limited computing resources. The chapter organization of this paper is as follows: Section I is an overall introduction; Section II is related research works; Section III presents the framework for the proposed approach; Section IV describes the FedEdge AI-TC method; Section V evaluates the proposed method through experiments and provides a comprehensive discussion of the results.; Section VI concludes the contributions and outlines potential directions for future research. Table I below is the list of abbreviations in alphabetical order. 
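Before turning to related work, the parameter-exchange loop described above (local training on each 5G CPE/HGU, upload of model parameters only, centralized aggregation, and re-broadcast) can be illustrated in a few lines of Python. This is a generic FedAvg-style sketch under our own simplifying assumptions: a logistic-regression stand-in replaces the local classifier, synthetic data replaces real traffic features, and the encryption, semi-supervised VAE-CNN model, and aggregation details of FedEdge AI-TC described later are not reproduced here.

```python
import numpy as np

def local_update(global_w, data, lr=0.05, epochs=1):
    """One client (a 5G CPE/HGU): train locally; only the updated weights leave the device."""
    w = global_w.copy()
    X, y = data                                     # private traffic features / labels stay local
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))            # logistic-regression predictions
        w -= lr * X.T @ (p - y) / len(y)            # gradient step
    return w, len(y)

def aggregate(updates):
    """Server: weighted average of the uploaded client parameters (FedAvg-style)."""
    total = sum(n for _, n in updates)
    return sum(n * w for w, n in updates) / total

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(200, 8)), rng.integers(0, 2, 200).astype(float)) for _ in range(5)]

global_w = np.zeros(8)                              # initialize the global model and broadcast it
for rnd in range(10):                               # federated rounds
    updates = [local_update(global_w, data) for data in clients]   # local training on each client
    global_w = aggregate(updates)                   # upload parameters only, aggregate centrally
print("global model after 10 rounds:", np.round(global_w, 3))
```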
## II Related works ### _Deep Learning based Traffic Classification_ Deep learning, also referred to as deep structured learning or hierarchical learning, is achieved by acquiring the repre \begin{table} \begin{tabular}{l l} \hline Acronym & Explanation \\ \hline AE & Auto Encoders \\ CPE & Customer Premise Equipment \\ CNN & Convolutional Neural Networks \\ DL & Deep Learning \\ DPI & Deep Packet Inspection \\ FL & Federated Learning \\ FAM & Flow Attribute Matrix \\ GAN & Generative Adversarial Network \\ HGU & Home Gateway Unit \\ LSTM & Long Short Term Memory \\ MSE & Mean Square Error \\ MLP & Multilayer Perceptron \\ QoE & Quality of Experience \\ QoS & Quality of Service \\ SSL & Semi-Supervised Learning \\ TC & Traffic Classification \\ VAE & Variational Auto Encoder \\ XAI & Model Explanation/Interpretability \\ ML & Machine Learning \\ GNN & Graph Neural Network \\ \hline \end{tabular} \end{table} TABLE I: List of abbreviations in alphabetical order sentation of data. In contrast to traditional machine learning algorithms, deep learning can automatically extract features without human intervention, rendering it an ideal approach for traffic classification. The application of deep learning techniques to Network TC[12, 13, 14, 15] involves three steps: firstly, characterizing the input data by defining and designing the model input using data packets, PCAP files, or traffic statistics vectors as features; secondly, selecting suitable models and algorithms based on classifier objectives and model characteristics; finally extracting traffic features automatically through training a DL-based classifier and associating input data with corresponding category labels. Recent research has demonstrated deep learning methods' superiority in traffic classification. For example, CNNs[16, 17] are widely used in traffic classification. They can automatically extract features from raw network traffic data and have end-to-end learning capabilities during training. In addition, RNNs and LSTMs[18, 19] are used to process traffic sequence data and can capture the temporal dependencies in the data. These types of networks are often used in traffic classification to identify persistent attacks such as DDoS (Distributed Denial of Service) attacks. GNN has proven to be a novel information representation method for DL, which has been applied in TC or IDS[20]. ### _Semi-supervised Learning based Traffic Classification_ Semi-supervised traffic classification[14, 15, 16, 17] is an approach that utilizes a small set of labeled data along with a substantial amount of unlabeled data to distinguish various network traffic types. There are four primary methods for semi-supervised traffic classification: cluster-based methods, generative models, GANs (Generative Adversarial Networks), and discriminative models. Cluster-based methods[18, 19] have low computational complexity but can be influenced by data distribution and may exhibit instability in practical usage. Generative model-based methods[21] are effective for unknown or dynamically changing network applications; however, they require prior knowledge to select appropriate statistical features and clustering parameters which might limit the generalization of classification results. GAN-based methods[22, 23] can enhance dataset diversity and quality, thereby improving the model's generalization performance. 
Nevertheless, GAN models have complex structures and numerous parameters that pose challenges in training and render them impractical for deployment on edge devices. Therefore, this study primarily adopts a discriminative model-based approach by directly learning the mapping function from feature space to class space. Model parameters are optimized by minimizing the classification error of labeled data while incorporating a regularization term for unlabeled data. Subsequently, the unlabeled data is classified based on predicted results. ### _FL and its applications in Traffic Classification_ Federated Learning (FL) is a distributed ML/DL framework that focuses on decentralized training data, aiming to obtain ML/DL models by distributing the data across numerous nodes while ensuring privacy and security. FL allows local clients to retain their data, sharing only the model parameters with a central server, thereby reducing communication overhead and preserving client data privacy[10, 11]. Recently, two main approaches have emerged for traffic classification tasks by combining federated learning with deep learning techniques. The first approach[18, 19, 20, 21] empowers child nodes to annotate the data through various means. The second approach[17, 18, 19, 22, 23, 24] involves transforming the model structure and training objectives so that sub-nodes can train the model using unlabeled data; subsequently, fine-tuning is performed by the server using labeled data to achieve semi-supervised traffic classification. We propose a semi-supervised traffic classification model based on VAE-CNN that incorporates a federated learning paradigm enabling edge devices' semi-supervised training of the model. ## III The Overall Framework ### _The Workflow of FedEdge AI-TC_ Smart home networks face four major challenges: terminal heterogeneity, application diversity, high privacy, and rapid evolution. A network TC system must continuously learn through long-term iterative optimization to overcome these challenges. The process follows the full life cycle of federated learning, as shown in Fig. 2. The edge-side AI-TC classification system workflow based on federated learning includes _initialization, broadcast, training, parameter uploading, model aggregation and evaluation, edge deployment, and model monitoring_. Here are the steps for implementing an AI-powered network traffic classification system: 1. **Initialization:** Provide the client node with an initialization model for efficient local/global model training. 2. **Broadcast:** The centralized server broadcasts the initialization model to all the client nodes like 5G CPE/HGU. 3. **Local Training:** The client node performs feature engineering, model construction, and local training. 1. Feature Engineering: Extract, select, represent, and compress network traffic features to build an optimal feature subset for the AI-TC classification system. 2. Model Construction and Local Training: This step involves selecting what kind of learning methods (supervised/semi-supervised/unsupervised/weakly supervised), training methods (centralized/distributed training, federated learning), whether to pre-train, whether to use classical models for transfer learning, to form a local model (Local Model) on the local client node. 4. **Parameters Uploading:** Upload encrypted parameter information obtained from local training to the centralized server. 5. 
**Aggregation:** The centralized server performs'secure aggregation' on the parameter information uploaded by each client node (such as using the _FedAvg_ algorithm) and performs global training. 6. **Model Evaluation:** Evaluate the global model obtained from the centralized server's global training. If the training process converges, it will enter the model deployment step; otherwise, it will inform the client node to continue training and iterative optimization. In addition, the model evaluation also needs to consider its computational complexity, time complexity, and the computing resources and time required for training/inference. 7. **Edge deployment:** Deploy the model on the edge or terminal side using the pull/push/subscribe model deploy method and update strategy. Model compression and interpretation are two important tasks in this step. 1. **Model Compression:** Compress the inference/classification model small enough to meet the fast classification under limited computing power. 2. **XAI(Model Interpretation/Explanation):** Solve the "black box" problem of the AI-TC model to make the classification model users trust the model. 8. **Model Monitoring:** Monitor the status of the classification system, including model, and real-time network flow, to report some key issues such as classification system failures and model degradation. 9. **Continuous Learning:** Initiate iterative optimizations and continuous learning from initialization to maintain high adaptability, robustness, and reliability of the classification system. The following sections of this article will focus on _initialization, model training, compression, and explanation._ ### _The Architecture of FedEdge AI-TC_ The overall architecture of FedEdge AI-TC is illustrated in Fig. 3, which is divided into the client node and central server/central aggregator based on our previous work[37]. HGU, functioning as the client node, performs local training and inference classification tasks[38]. On the other hand, the central aggregator acts as a centralized server responsible for aggregated training, model evaluation, compression, interpretation, and deployment of inference models. The workflow can be summarized as follows: Initially, the central aggregator broadcasts the initial classification model to HGU. Subsequently, HGU collects real-time packets and performs redundant/invalid packet filtering, network flow attribute calculation, and normalization to formulate a FAM (Flow Attribute Matrix), depicted in Fig.4. This matrix is then utilized for training an initialization model locally. Afterward, encrypted gradient information, loss, and other parameter details are uploaded to the central aggregator for aggregate training. Since each HGU exhibits similar flow characteristics, Horizontal Federated Learning (HFL) is employed for aggregate training by standard secure aggregation algorithms like FedAvg. Then the model performance metrics will be evaluated, including accuracy, precision, recall, and F1-Score for assessing model convergence. If convergence occurs successfully, the aggregated global model undergoes compression to meet computing resource constraints of HGU (including CPU/memory/Flash). Once an available inference model is obtained, XAI-based methods are applied to provide both global and local explanations of the model. Finally, the resulting inference model will be distributed across all HGUs. 
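As a concrete illustration of the aggregation step, the following is a minimal, unencrypted FedAvg-style sketch (not the system's actual secure-aggregation code): each HGU is assumed to upload its locally trained layer weights as numpy arrays together with its local sample count, and the server returns their sample-weighted average. All names are illustrative.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: sample-count-weighted average of client parameters.

    client_weights: list (one entry per HGU) of lists of numpy arrays (one per layer)
    client_sizes:   list of local training-set sizes, one per HGU
    """
    total = float(sum(client_sizes))
    num_layers = len(client_weights[0])
    aggregated = []
    for layer in range(num_layers):
        # weighted sum over clients for this layer
        layer_avg = sum(w[layer] * (n / total)
                        for w, n in zip(client_weights, client_sizes))
        aggregated.append(layer_avg)
    return aggregated

# Example with two HGUs and a two-layer model, weighted 3:1 towards hgu_a.
hgu_a = [np.ones((4, 4)), np.ones(4)]
hgu_b = [np.zeros((4, 4)), np.zeros(4)]
global_model = fedavg([hgu_a, hgu_b], client_sizes=[300, 100])
```

In the deployed system the uploaded parameters are additionally encrypted, and this weighted averaging is carried out inside the secure-aggregation protocol.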
Otherwise, if convergence fails, the global model will be issued to each HGU, initiating a new epoch of iterative training and optimization until it converges. ## IV The Methodology of FedEdge AI-TC ### _Initial Model_ The initial model plays a crucial role in determining the convergence speed and performance of federated aggregation training, which serves as the initial stage of federated learning. While randomly initialized models can be used within the FL framework, it is essential to construct an initial model that enhances local/global model training efficiency. Unlike traditional AI domains like images and text, network traffic classification lacks pre-trained models, necessitating the development of the initial model by ourselves. In this study, Fig. 2: The Workflow of the End-to-End Traffic Classification based on Trusted Federated Learning for Edge Gateway. we utilize three benchmarking datasets including (ISCIX[39], UNSW-NB15[40], and MIRAGE[41]) as baseline datasets for constructing our initial model. We adopt CNN as the supervised training algorithm, and further details about the initial model can be found in our previous work[37]. ### _Federated Semi-Supervised Learning Traffic Classification Method using VAE+CNN_ #### Iii-B1 **The Introduction of FSSL** There are two types of network traffic data in the FedEdge AI-TC system. The first type, labeled data, is stored in the central aggregator. The second type, unlabeled data, is located in the local HGU, i.e., real-time traffic data. From the perspective of FL, it belongs to the disjoint scenario. This alignment with the home network scenario arises due to the absence of labels for real-time traffic forwarded by the HGU, making it impractical to annotate such data. Conversely, the central aggregator possesses powerful computing resources and can effectively accomplish this work. Semi-supervised Learning (SSL) aims at leveraging unlabeled data to enhance model training by learning classification boundaries within these unlabeled samples and evaluating their proximity to labeled ones. Consequently, this approach strengthens both the robustness and generalization ability of models. The advantages of SSL primarily are in two aspects: (1) enhancing the robustness of TC classification models; (2) mitigating loss in model generalization caused by domain differences--for instance, traffic forwarded by different HGU devices may exhibit non-independent and identically distributed (non-IID) characteristics. In the subsequent section, we will propose an SSL-based method for traffic classification using FSSL. #### Iii-B2 **Problem Formulation** 1. 
Basic definitions: \(D=[D_{l},D_{u}]\); \(D\) refers to the network traffic dataset, which consists of a labeled dataset \(D_{l}\) and an unlabeled dataset \(D_{u}\), \(D_{l}\cap D_{u}=\emptyset\); \(M\) and \(N\) represent the total number of records in the labeled and unlabeled datasets, respectively; \(L=\{l_{1},l_{2},\cdots,l_{c}\}\) refers to the set of traffic classification labels, where \(l\in L,0\leq c\leq K\), and \(K\) is the total number of application types; \(F_{l}=[F_{1},F_{2},\cdots,F_{M}]\) refers to the set of labeled flow feature vectors; \(F_{u}=[F_{1}^{\prime},F_{2}^{\prime},\cdots,F_{N}^{\prime}]\) refers to the set of unlabeled flow feature vectors; \(F=F_{l}\cup F_{u}=\{f_{l}^{1},\cdots,f_{l}^{M},f_{u}^{1},\cdots,f_{u}^{N}\}\) is the collection of all flow feature vectors, where labeled/unlabeled flow feature vectors are denoted by \(f_{l}^{i}\) and \(f_{u}^{j}\), \(0\leq i\leq M\), \(0\leq j\leq N\). 2. Definitions related to network traffic: * **flow:** A flow is identified by a five-tuple consisting of the source address, destination address, source port, destination port, and TCP/UDP protocol. \(T=\{t_{1},t_{2},\cdots,t_{m+n}\}\) represents a set of flows. * **flow feature:** It includes packet-level features, flow-level features, and statistical features, formally defined as \(f=\{f^{1},f^{2},\cdots,f^{78}\}\). \(f\) is the flow feature vector, composed of a total of 78 feature sub-items [13], which consist of the following three types of features: * Packet-level features: The temporal and spatial features with packets as the granularity, including packet payload characteristics, packet length-related features, and time-related features. For example, packet length, inter-arrival time between packets, etc. * Flow-level features: The temporal and spatial features of flows, with flows as the granularity, including flow length, flow duration, number of packets in a flow, and so on. * Statistical features: The expectation, variance, maximum value, and minimum value of the relevant feature. Fig. 3: The Architecture of FedEdge AI-TC. Table II presents an example collection of traffic features for FedEdge AI-TC. From the example, it can be observed that specific feature entries involve a high level of user privacy. #### Iii-B3 VAE An Autoencoder (AE) is specifically designed to acquire a low-dimensional latent representation of samples by constructing an encoder and a decoder. It is commonly employed for tasks such as data compression or generation. However, due to its sole focus on learning the encoding of the sample itself, AE-based models usually show weak generalization. Therefore, AE's capacity to capture the underlying data distribution needs to be improved. In addition, network traffic consistently exhibits characteristics such as large scale, dynamism, and heterogeneity. A conventional AE model usually fails to fully reconstruct comprehensive network traffic even when provided with extensive datasets. Consequently, it struggles to classify accurately when encountering traffic samples beyond the dataset. Variational Autoencoder (VAE) is an extension of Autoencoder (AE). More precisely, it is a generative model widely employed for unsupervised pre-training of unlabeled data. Its excellent ability to learn the latent distribution enables the model to acquire strong generalization capabilities during subsequent fine-tuning. As depicted in Fig. 
5, the core concept behind VAE lies in learning the implicit representation of actual samples \(X\) and the implicit distribution from these samples to generated samples \(X^{\prime}\). This approach enhances the robustness of the model in learning implicit feature representations. It estimates the overall data distribution by constructing a generative model \(p_{\varphi}(X|Z)\) based on data samples. However, commonly used methods for estimating data distributions rely on maximum likelihood estimation through parameter estimation techniques. Assuming that the distribution of data samples \(P(X)\) follows a Gaussian distribution \(N(\mu,\sigma^{2})\), this transforms the statistical problem of generating models into a parameter estimation problem. The remaining challenge lies in fitting the distributions of the encoder \(q_{\phi}(Z|X)\) and the decoder \(p_{\varphi}(X|Z)\). While autoencoders (AE) typically learn data distributions by minimizing a reconstruction loss like the Mean Square Error (MSE), this type of fitting primarily occurs at a sample level within datasets. It fails to capture underlying data distributions effectively. In contrast, VAE employs the KL divergence, also known as relative entropy, as a measure for quantifying the difference between two distributions. Therefore, we can define our loss function as Eq. 1. \[Loss=L(X,X^{\prime})+\sum_{j}KL(q_{j}(Z|X)\parallel p(Z)) \tag{1}\] The loss function consists of two components: the first component is the reconstruction loss, and the second component is the KL divergence between the approximate posterior \(q(Z|X)\) and the prior \(p(Z)\). VAE aims to minimize this relative entropy, as expressed in Eq. 2. \[\begin{split} L(x)&=E_{z\sim q(z|x)}\log\frac{p(z,x)}{q(z|x)}\\ &=\log p(x)-KL(q(z|x)\parallel p(z|x))\end{split} \tag{2}\] \(L(x)\) is called the Variational Lower Bound. We aim to optimize this lower bound, as the closer it is to \(\log p(x)\), the smaller the KL divergence. In this case, \(q_{\phi}(Z|X)\) can estimate the true posterior \(p_{\varphi}(Z|X)\) more accurately. The VAE further decomposes the sampling of \(z\) into two parts: one consists of fixed values such as the standard deviation \(\sigma\) and the mean \(\mu\), and the other is a random Gaussian noise \(\epsilon\). After applying the reparameterization trick, we can rewrite \(L(x)\) as Eq. 3: \[L(x)=\frac{1}{2}\sum_{j=1}^{J}\left(1+\log(\sigma_{j}^{2})-\mu_{j}^{2}-\sigma_{j}^{2}\right)+\frac{1}{L}\sum_{i=1}^{L}\log p(x|z_{i}) \tag{3}\] The optimization of the variational lower bound \(L(x)\) implies that, while ensuring that the \(Z\) values generated by the encoder conform to a prior Gaussian distribution, the decoder can maximize the possibility of reconstructing the original \(X\). Fig. 4: The Example of L_FAM. Fig. 5: The General Architecture of VAE. #### Iii-B4 VAE based Unsupervised Learning Algorithm for Network TC As shown in Alg.1, the entire algorithm mainly consists of the following three steps: 1. **Define the hyperparameters:** Input dimension is \(input\_dim\); Hidden layer dimensions are \(h_{i}\_dim\) for \(0<i<L\), where \(L\) is the number of hidden layers; Dependent variable dimension is \(z\_dim\); Batch size is \(batch\_size\); Number of epochs for training is \(num\_epochs\). 2. **Dataset:**\(X\in D_{u}\), \(X\) is equivalent to a \(U\_FAM\) with \(input\_dim\times batch\_size\). 3. **Model construction and training:** * _Define the exact architectures of Encoder and Decoder._ This includes determining the number of layers, \(input\_dim\), \(h_{i}\_dim\), \(z\_dim\), and the loss function. 
The Encoder maps \(X\) into the latent space \(Z\), while the Decoder maps the randomly sampled \(z\) from \(Z\) into the data space \(X^{\prime}\). The ultimate goal is to make \(X\) and \(X^{\prime}\) as close as possible. 4. _The feed-forward propagation process from Encoder to Decoder._ * In this context, the input data \(X\), referred to as \(U\_FAM\), is fed into the Encoder. After sequential computations, the mean and variance of the posterior distribution \(Z\) in the latent space are obtained. * The technique of reparameterization is used to sample a latent variable \(z\) from \(Z\), i.e., \(Z=\mu+\sigma\odot\varepsilon\), where \(\varepsilon\sim N(0,1)\), and it is then fed into the Decoder. * The decoder performs layer-by-layer computations to obtain the reconstructed output \(X^{\prime}\) of the input data \(X\), which can be expressed as \(U\_FAM^{\prime}\). * Calculate the reconstruction error and KL divergence based on Eq. 1 to obtain. * _Backpropagation and Optimization._ By iterating through \(num\_epochs\) and utilizing the optimizer defined, the VAE model is trained in a loop to optimize the model parameters with the goal of minimizing \(L(x)\). The structure of the unsupervised model for network traffic based on VAE is shown in Fig. 6, and detailed parameters are provided in Table III. #### Iv-B5 **VAE+CNN based Semi-supervised Learning Algorithm for Network TC** As illustrated in Fig.7, there are three parts in the semi-supervised based network traffic classifier: the encoder of VAE model, CNN, and softmax classifier. The labeled data, i.e., L_FAM is fed into the Encoder of the VAE model obtained from Section.IV-B4, then the output of the VAE encoder is subsequently fed into a three layers CNN model with a softmax classifier, which is concatenated with the VAE encoder. Finally, the decision results will be outputted for classification. The overall process is commonly referred to as _Fine-Tuning_. Due to the limited space of this paper, we do not provide the detail of the CNN classifier, which can be found in our previous work[37]. ### _The Model Compression Method Based on Interpretation_ #### Iv-B1 Model Interpretation We propose to implement an interpretable framework for deep learning traffic classification models based on SHAP values, which are mainly used to quantify the contribution of each feature to the model prediction. The basic design idea is to calculate the marginal contribution SHAP value when features are added to the model so that the importance of the features can be interpreted according to \begin{table} \begin{tabular}{l l l l} \hline Flow Attributes & Definition & Category & Description \\ \hline Domain Name & DNS/SNI in TLS & Payload related & domain.com, which is applicable to applications such as HTTP/HTTPS. \\ TCP slide\_win & TCP Slide Window & Packet related & TCP flow control parameters \\ TLS\_handshake & TLS handshake packet information & Payload related & Handshake types, cipher suites, content types, key length, etc. \\ Total Fwd Pts & Packet length sequence & Packet related & The sequence of packet lengths in the flow. \\ Pkt IAT Min & Packet arrival time & Packet related & The sequence of arrival times of packets in the flow. \\ Flow Len & Flow length correlation & Flow related & The total number of bytes in the flow per unit of time. \\ Flow Duration & Flow duration & Flow related & The duration of the TCP flow. 
\\ \hline \end{tabular} \end{table} TABLE II: A typical example (partial) of a network traffic feature set \begin{table} \begin{tabular}{l l l} \hline \hline Parameter name & Parameter & Parameter Interpretation \\ \hline input\_dim & 78 & Model input dimension \\ Layers of Encoder/Decoder & 3 & Number of layers in the encoder and decoder \\ \(h_{1}\_dim\), \(h_{2}\_dim\), \(h_{3}\_dim\) & 78,64,32 & Dimension of each layer \\ loss function for Encoder & ReLU & Loss function \\ loss function for Decoder & ReLU & Loss function \\ loss function for Decoder's output & Sigmoid & Loss function \\ batch\_size & 128 & Batch size \\ learning rate & 0.01 & Learning rate \\ \hline \hline \end{tabular} \end{table} TABLE III: Detailed parameters of FedEdge AI-TC the SHAP value, which is calculated as Eq. 4 in this paper. Suppose the \(i\)-th sample of the sample set \(M\) is \(x_{i}\), the \(j\)-th feature of sample \(x_{i}\) is \(x_{ij}\), and \(f(x_{ij})\) is the Shapley value of \(x_{ij}\). \[f(x_{ij})=\sum_{S}\frac{|S|!(p-|S|-1)!}{p!}(v_{x}(S\cup\{x_{j}\})-v_{x}(S)) \tag{4}\] where \(S\subseteq\{x_{1},\cdots,x_{p}\}\setminus\{x_{j}\}\) ranges over the subsets of input features excluding \(x_{j}\), \(\{x_{1},\cdots,x_{p}\}\) is the set of all input features, \(p\) is the number of features of the sample, and \(v_{x}(S)\) is the prediction result of the feature subset \(S\). The architecture of Model Interpretation is shown in Fig. 8. The left part is the traditional structure of the traffic classification model, and the process shown on the right allows for the interpretability of the traffic classification model and the optimization of its structure and parameters. This framework is divided into a local interpretation and a global interpretation. _Local interpretation_ means that for each data instance, the contribution of each feature to its predicted outcome is calculated and presented visually. The formula for calculating the local interpretation is as follows: \[y_{i}=y_{base}+f(x_{i1})+f(x_{i2})+\cdots+f(x_{ij}) \tag{5}\] where \(y_{i}\) is the predicted value of the model for sample \(x_{i}\), and \(y_{base}\) is the mean of all sample evaluated values. As for _Global interpretation_, firstly, a matrix of feature SHAP values is calculated, with one instance per row and one feature per column. Secondly, in the traditional global interpretation, feature \(j\)'s contribution is obtained by summing the Shapley values of feature \(j\) over all samples with Eq. 6. And then, the SHAP values are sorted in descending order to obtain the importance of the model features. \[f(x_{j})=\sum_{i=1}^{M}f(x_{ij}) \tag{6}\] Fig. 8: The Architecture of Model Interpretation. Fig. 6: The VAE Model Architecture of FedEdge AI-TC. Fig. 7: The VAE+CNN Semi-supervised Model Architecture of FedEdge AI-TC. #### Iv-B2 Model Pruning To address the issue of how to compress models to make them suitable for training and inference on resource-limited devices, in particular, we will focus on pruning, an easier-to-implement model compression technique. Model pruning is based on an underlying assumption: 'weight contribution,' which means that not every weight contributes equally to the output prediction. Therefore, the basic design idea of model pruning in this paper is to rank the feature importance by global interpretation, after which the importance ranking of the convolutional kernels is calculated using the causal evaluation mechanism. The convolutional kernels with importance below a threshold are filtered out and pruned. 
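To make the ranking-and-threshold idea concrete, the following is a minimal sketch (not the paper's actual XAI-Pruning implementation, and omitting the causal evaluation of kernels): it aggregates a per-sample SHAP matrix into per-feature scores as in Eq. 6 and builds a binary keep/prune mask. All names and the `keep_ratio` parameter are illustrative.

```python
import numpy as np

def global_importance(shap_values):
    """Eq. (6): aggregate an M x p matrix of per-sample SHAP values
    (one row per sample, one column per feature) into one score per feature."""
    return shap_values.sum(axis=0)  # a common alternative is np.abs(shap_values).mean(axis=0)

def prune_mask(importance, keep_ratio=0.5):
    """Rank features/kernels by importance and keep only the top fraction;
    entries below the threshold are marked for pruning."""
    k = max(1, int(len(importance) * keep_ratio))
    threshold = np.sort(importance)[-k]   # k-th largest score
    return importance >= threshold        # boolean keep-mask

# Example: prune the least important half of 8 convolutional kernels.
scores = global_importance(np.random.randn(100, 8))
mask = prune_mask(scores, keep_ratio=0.5)
```

In a real deployment the mask would be applied to the convolutional kernels of the trained CNN (for example by zeroing or removing the corresponding filters) before the compressed model is pushed to the HGU.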
## V Experimental Evaluation and Discussion ### _Evaluation Settings and Chosen Datasets_ We conducted the experimental evaluation on two datasets. One is a benchmarking public dataset, ISCX-VPN2016. The other is a private dataset built by ourselves. The latter comprises six popular applications and background traffic from terminals collected in the campus network scenario, including Bilibili, QQ music, Honor of Kings, Teamfight Tactics, and Game for peace. These apps cover the five popular app categories of video, music, Moba games, First Person Shooting (FPS) and Role-playing game (RPG). To collect the data, we used a semi-automatic web traffic generation tool. We leveraged an automated traffic generator to collect traffic from Bilibili and QQ music, with 13,314 flows from Bilibili and 20,703 flows from QQ music. For the other interactive games, we chose to collect them manually. We used PCAPDroid to mark the network traffic as it occurred at the endpoint and also at the router, where the two were compared and filtered. The network flow features were calculated by CICFlowMeter for each application's PCAP files. In addition, this experiment also provides the statistics of the background traffic information, which contains information about network location services, security components, and syslog services. Table IV shows the exact number of flows. The experimental environment is an AMD Ryzen 3600, 16GB RAM, NVIDIA GTX 1660, CUDA 7.5, cuDNN 10.5. In this paper, Python3 is the primary programming language. The following is a description of the evaluation metrics: Precision, Recall, F1, Accuracy, and AUC. Time-complexity-related metrics about training, like training time, are not included in this paper because we think those are highly dependent on the hardware resources. ### _Performance Evaluation_ The training process of the VAE part in the VAE+CNN (E-CNN) model is divided into two parts. Firstly, the labels were removed from the datasets so that the records can act as unlabeled flows for the unsupervised learning training. After training, one can save the trained encoder in the VAE model for further semi-supervised learning. The training process for the CNN part mainly aims to convert the labeled data into digital encoding for fine-tuning. In the single CNN model, we process the dataset and divide it into training and testing sets according to a given ratio, which is then trained and evaluated in the CNN. For the E-CNN and single CNN models, we obtained the following two diagrams by adjusting the ratio of the dataset to the training and testing sets. The horizontal axis in the diagram represents the partitioning ratio. For example, 0.2 means that 80\(\%\) of the entire dataset is allocated to the training set, and the remaining 20\(\%\) is allocated to the testing set. The vertical axis represents the accuracy of the model training results under this partition. Fig. 9 displays the model results of E-CNN and single CNN in real-life scenarios, while Fig. 10 shows the results in the public dataset. 
According to the figures, as the partition ratio increases, meaning that the training set data decreases, the accuracy of both models tends to decrease and becomes similar, both at around 0.7. \begin{table} \begin{tabular}{c c c} \hline Applications Name & Type & Number \\ \hline bilibili & Video & 13314 \\ QQ music & Music & 20703 \\ Honor of Kings & Moba & 9475 \\ Teamfight Tactics & RPG & 14005 \\ Game for peace & FPS & 7763 \\ Background & Log & 13017 \\ \hline \end{tabular} \end{table} TABLE IV: The private experimental dataset Fig. 10: Accuracy of different rates in public dataset. Fig. 9: Accuracy of different rates in real-life scenarios. Within the partition ratio interval [0.2, 0.8], E-CNN achieves good results with higher accuracy than a single CNN. We have set the partition ratio to 0.45. After training and testing, we obtained the respective Confusion Matrix and Classification Report for E-CNN and single CNN using this ratio. The Confusion matrix is shown in Fig. 11 and Fig. 12. In the figures, the x-axis represents the prediction labels, the y-axis represents the actual labels, and the color intensity indicates the count of correct and incorrect predictions. According to the figure, under the partition with this ratio, E-CNN shows relatively accurate predictions for 'QQmusic,' 'Teamfight Tactics,' and 'Bilibili.' On the other hand, compared to CNN, E-CNN's predictions for 'QQmusic,' 'IQiyi,' and 'Teamfight Tactics' are relatively accurate, but there are also several errors. The XAI-Pruning model effectively filters out unimportant weights and parameters, substantially reducing the number of parameters and the overall model size. As a result, the model becomes more compact, enabling significantly faster inference and quicker predictions. On the other hand, the baseline model, which includes all weights and parameters, generally exhibits higher prediction accuracy due to its capacity to capture more intricate model details and complexity. As depicted in Table VII, the baseline model size is approximately three times larger than the XAI-Pruning model, and the inference time is doubled. However, despite these changes, both models still exhibit comparable prediction accuracy without significant differences. Consequently, the pruning method proposed in this paper effectively reduces the model's size and inference time to some extent while having a relatively minor impact on prediction accuracy. ## VI Conclusion and Future work This paper presents the FedEdge AI-TC approach for trusted Federated Learning (FL) based efficient Network TC in 5G CPE/HGU. Firstly, FedEdge AI-TC effectively protects the data privacy of network subscribers by proposing an FL-based framework of local training, model parameters iterating, and centralized training. Secondly, a semi-supervised TC algorithm based on Variational Auto-Encoder (VAE) and convolutional neural network (CNN) is designed to reduce data dependence while keeping the TC accuracy. Finally, XAI-Pruning, an AI model compression method, combined with the DL model interpretability, is proposed to condense the model and interpret it globally and locally to achieve lightweight AI-TC model deployment while building network operators' trust in its decisions. To demonstrate the efficiency of the proposed method, we conducted experimental evaluations on commonly used public benchmark datasets and real network datasets. 
The results show that FedEdge AI-TC can outperform the benchmarking methods in terms of accuracy and achieve efficient TC model inference on 5G CPE/HGU with limited computing resources, which effectively protects the users' privacy and improves the model's credibility. However, besides reliability, robustness and generalization remain two important topics when handling network traffic classification, especially using ML/DL. In the future, we will continuously focus on how to leverage ML/DL algorithms like generative models or large language models to enhance the reliability, robustness, and generalization of network traffic classification models. ## Acknowledgment This paper is supported by the National Natural Science Foundation (General Program) of China under Grant 61972211
2306.10432
Universal quantification makes automatic structures hard to decide
Automatic structures are structures whose universe and relations can be represented as regular languages. It follows from the standard closure properties of regular languages that the first-order theory of an automatic structure is decidable. While existential quantifiers can be eliminated in linear time by application of a homomorphism, universal quantifiers are commonly eliminated via the identity $\forall{x}. \Phi \equiv \neg (\exists{x}. \neg \Phi)$. If $\Phi$ is represented in the standard way as an NFA, a priori this approach results in a doubly exponential blow-up. However, the recent literature has shown that there are classes of automatic structures for which universal quantifiers can be eliminated by different means without this blow-up by treating them as first-class citizens and not resorting to double complementation. While existing lower bounds for some classes of automatic structures show that a singly exponential blow-up is unavoidable when eliminating a universal quantifier, it is not known whether there may be better approaches that avoid the na\"ive doubly exponential blow-up, perhaps at least in restricted settings. In this paper, we answer this question negatively and show that there is a family of NFA representing automatic relations for which the minimal NFA recognising the language after eliminating a single universal quantifier is doubly exponential, and deciding whether this language is empty is \expspace-complete. The techniques underlying our \expspace lower bound further enable us to establish new lower bounds for some fragments of B\"uchi arithmetic with a fixed number of quantifier alternations.
Christoph Haase, Radosław Piórkowski
2023-06-17T22:48:21Z
http://arxiv.org/abs/2306.10432v2
# Universal quantification makes automatic structures hard to decide ###### Abstract Automatic structures are structures whose universe and relations can be represented as regular languages. It follows from the standard closure properties of regular languages that the first-order theory of an automatic structure is decidable. While existential quantifiers can be eliminated in linear time by application of a homomorphism, universal quantifiers are commonly eliminated via the identity \(\forall x\,.\,\Phi\equiv\neg(\exists x\,.\,\neg\Phi)\). If \(\Phi\) is represented in the standard way as an NFA, a priori this approach results in a doubly exponential blow-up. However, the recent literature has shown that there are classes of automatic structures for which universal quantifiers can be eliminated by different means without this blow-up by treating them as first-class citizens and not resorting to double complementation. While existing lower bounds for some classes of automatic structures show that a singly exponential blow-up is unavoidable when eliminating a universal quantifier, it is not known whether there may be better approaches that avoid the naive doubly exponential blow-up, perhaps at least in restricted settings. In this paper, we answer this question negatively and show that there is a family of NFA representing automatic relations for which the minimal NFA recognising the language after eliminating a single universal quantifier is doubly exponential, and deciding whether this language is empty is ExpSpace-complete. automatic structures, universal projection, state complexity, tiling problems 
Computing an automaton whose language encodes the complement of \(\Phi\) is computationally difficult and may lead to an automaton with \(2^{\Omega(|\mathcal{A}|)}\) many states. In particular, due to double complementation, eliminating a universal quantifier may _a priori_ lead to an automaton with \(2^{2^{\Omega(|\mathcal{A}|)}}\) many states. Notable examples of automatic structures are Presburger arithmetic [15], the first-order theory of the structure \(\langle\mathbb{N},0,1,+,=\rangle\), and its extension Buchi arithmetic [5, 3, 4]. Tool suites such as Lash[1], Tapas[13] and Walnut[14] are based on the automata-theoretic approach and have successfully been used to decide challenging instances of Presburger arithmetic and Buchi arithmetic from various application domains. Those tools eliminate universal quantifiers via double complementation. Yet another approach to deciding Presburger arithmetic is based on manipulating semilinear sets [9, 7], which are generalisations of ultimately periodic sets to arbitrary tuples of integers in \(\mathbb{N}^{d}\). They are similar to automata-based methods in terms of the computational difficulty of existential projection and complementation: the former is easy whereas the latter is difficult. Neither syntactic quantifier elimination nor automata-based quantifier elimination methods seem to suffice to obtain optimal complexity bounds for deciding fragments of Presburger or Buchi arithmetic with, e.g., a fixed number of variables, quantifier alternations or further structural restrictions. For example, it was shown in [6] that deciding sentences of quantified integer programming \(\exists\bar{x}_{1}\,\forall\,\bar{x}_{2}\ldots\exists\bar{x}_{n}\,A\cdot\bar{x}\geq\bar{b}\) is complete for the \(n\)-th level of the polynomial hierarchy. The upper bound was obtained by manipulating so-called hybrid linear sets, which characterise the sets of integer solutions of systems of linear inequalities \(A\cdot\bar{x}\geq\bar{b}\). A key technique introduced in [6] is called _universal projection_ and enables directly eliminating universal quantifiers instead of resorting to double complementation and existential projection. 
Given \(S\subseteq\mathbb{N}^{d+k}\), the universal projection of \(S\) onto the first \(d\) coordinates is defined as \[\pi_{d}^{\forall}(S)\coloneqq\left\{\bar{u}\in\mathbb{N}^{d}\,\middle|\,(\bar{u},\bar{v})\in S\text{ for all }\bar{v}\in\mathbb{N}^{k}\right\}.\] It is shown in [6] that if \(S\) is a hybrid linear set then \(\pi_{d}^{\forall}(S)\) is a hybrid linear set that can be obtained as a finite intersection of hybrid linear sets. Moreover, the growth of the constants in the description of the hybrid linear set is only polynomial. Neither syntactic quantifier elimination nor automata-based methods are powerful enough to derive those tight upper bounds for quantified integer programming. While instances of quantified integer programming allow for an unbounded number of variables in a quantifier block, it follows from the results established in [7] that, if the number of variables of an arbitrary Presburger formula is fixed, then the number of hybrid linear sets representing the complement of such a formula, as well as the bit size of the constants appearing in the description of those hybrid linear sets, is only polynomial. Those positive algorithmic and structural results are specific to Presburger arithmetic and leave open the possibility that it may be possible to establish analogous results for general automatic structures. The starting point of this paper is the question of whether, given a non-deterministic finite automaton \(\mathcal{A}\) whose language \(\mathcal{L}(\mathcal{A})\subseteq(\Sigma^{d+k})^{*}\) encodes the set of solutions of some quantifier-free formula \(\Phi\), there is a more efficient way to eliminate a (block of) universally quantified variable(s) than to first complement \(\mathcal{A}\), next to perform an existential projection step, and finally to complement the resulting automaton again, especially in the light of the results of [6, 7]. Such a method would have direct consequences for tools such as Walnut, which perform the aforementioned sequence of operations in order to eliminate universal quantifiers. In particular, Walnut is not restricted to automata resulting from formulas of linear arithmetic and allows users to directly specify a finite-state automaton when desired. For better or worse, however, as the main result of this paper, we show that deciding whether the universal projection \(\pi_{d}^{\forall}(\mathcal{L}(\mathcal{A}))\) of some regular language \(\mathcal{L}(\mathcal{A})\subseteq\left(\Sigma^{d+k}\right)^{*}\) is empty is complete for ExpSpace. In particular, the lower bound already holds for \(d=k=1\), meaning that, in general, even for fixed-variable fragments of automatic structures, there is no algorithmically more efficient way to eliminate a single universal quantifier than the naive one. The challenging part is to show the ExpSpace lower bound, which requires an involved reduction from a tiling problem. This reduction also enables us to show that there is a family of non-deterministic finite automata \(\mathcal{A}_{n}\) such that the smallest non-deterministic finite automaton recognising the universal projection of \(\mathcal{L}(\mathcal{A}_{n})\) has \(\Omega\left(2^{2^{n}}\right)\) many states. ## 2 Preliminaries ### Regular languages and their compositions For a word \(w=a_{1}a_{2}\cdots a_{n}\in\Sigma^{*}\), we write \(w[i]\) to denote its \(i\)-th letter \(a_{i}\), and \(w[i,j]\) to denote the infix \(a_{i}a_{i+1}\cdots a_{j}\) (\(i\leq j\)). We write \(|w|\) for the length of \(w\). 
A _proper suffix_ of \(w\) is any infix \(w[i,n]\) for some \(1<i\leq n\). Regular expressionsA _regular expression_ over the alphabet \(\Sigma\) is a term featuring Kleene star, concatenation and union operations, as well as \(\emptyset\) and all symbols from \(\Sigma\) as constants: \[\mathcal{E},\mathcal{E}^{\prime}\mathrel{\mathop{:}}=\mathcal{E}^{*}\mid \mathcal{E}\cdot\mathcal{E}^{\prime}\mid\mathcal{E}+\mathcal{E}^{\prime}\mid \emptyset\mid a\text{ for every }a\in\Sigma\] For notational convenience, we also use sets of symbols \(A\subseteq\Sigma\) as constants, and a \(k\)-fold concatenation \(\mathcal{E}^{k}\) for every \(k\in\mathbb{N}\); we also drop the concatenation dot most of the time. The language \(\mathcal{L}(\mathcal{E})\subseteq\Sigma^{*}\) is defined by structural induction, by interpreting constants as \(\mathcal{L}(\emptyset)\mathrel{\mathop{:}}=\emptyset\) and \(\mathcal{L}(a)\mathrel{\mathop{:}}=\{a\}\), and using the standard semantics of the three operations. The class of languages definable by regular expressions is called _regular languages_. The size \(|\mathcal{E}|\) of a regular expression \(\mathcal{E}\) is defined recursively as \(1\) plus the size of its subexpressions. For \(\rho:\Sigma\to\Gamma\) and a regular expression \(\mathcal{E}\), \(\rho(\mathcal{E})\) is a regular expression over \(\Gamma\) obtained through substituting every constant \(a\in\Sigma\) appearing in \(\mathcal{E}\) by \(\rho(a)\). Finite-state automataRegular languages can also be represented by _non-deterministic finite-state automata_ (nfa). Such an automaton is a tuple \(\mathcal{A}=(Q,\Sigma,\delta,Q_{\mathrm{F}})\), where \(Q\) is a finite non-empty set of _states_, \(\Sigma\) is a finite _alphabet_, \(\delta\subseteq Q\times\Sigma\times Q\) is the _transition relation_, \(Q_{\mathrm{I}}\subseteq Q\) is the set of _initial states_, and \(Q_{\mathrm{F}}\subseteq Q\) is the set of _final states_. A triple \((p,a,q)\in Q\times\Sigma\times Q\) is called a _transition_ and denoted as \(p\xrightarrow{a}q\). A _run_ of \(\mathcal{A}\) from a state \(q_{0}\) to a state \(q_{n}\) (\(n\in\mathbb{N}\)) on a word \(w=a_{1}a_{2}\cdots a_{n}\in\Sigma^{*}\) is a finite sequence of transitions \(\left(q_{i-1}\xrightarrow{a_{i}}q_{i}\right)_{1\leq i\leq n}\) such that \(q_{i-1}\xrightarrow{a_{i}}q_{i}\in\delta\) for every \(i\). A word \(w\in\Sigma^{*}\) is _accepted_ by \(\mathcal{A}\) if there exists a run of \(\mathcal{A}\) from some \(q_{\mathrm{I}}\in Q_{\mathrm{I}}\) to \(q_{\mathrm{F}}\in Q_{\mathrm{F}}\) over \(w\). The _language_ of \(\mathcal{A}\) is defined as \(\mathcal{L}(\mathcal{A})\mathrel{\mathop{:}}=\{w\in\Sigma^{*}\mid w\text{ is accepted by }\mathcal{A}\}\). We define the size of \(\mathcal{A}\) as \(|\mathcal{A}|\mathrel{\mathop{:}}=|Q|+|Q|^{2}\cdot|\Sigma|\). This definition only depends on \(Q\) and \(\Sigma\) and ensures that \(|\mathcal{A}|\mathrel{\mathop{:}}=|Q|+|\delta|\cdot|\Sigma|\). Subsequently, we will implicitly apply the well-known fact that the number of states of an nfa accepting the complement of \(\mathcal{L}(\mathcal{A})\) is bounded by \(2^{|Q|}\). 
Below we state, without proofs, a few folklore properties of nfa: [nfa closed under language union]_For any nfa \(\mathcal{A},\mathcal{B}\) over \(\Gamma\), there exists an nfa \(\mathcal{A}\oplus\mathcal{B}\) of size \(O(|\mathcal{A}|+|\mathcal{B}|)\) such that \(\mathcal{L}(\mathcal{A}\oplus\mathcal{B})=\mathcal{L}(\mathcal{A})\cup\mathcal{ L}(\mathcal{B})\)._ **Fact 2** (nfa closed under inverse language homomorphisms).: _For any nfa\(\mathcal{A}\) and a homomorphic mapping \(\rho\colon\Sigma^{*}\to\Gamma^{*}\), there exists an nfa\(\rho^{-1}(\mathcal{A})\) of size \(O(|\mathcal{A}|)\) such that \(\mathcal{L}\big{(}\rho^{-1}(\mathcal{A})\big{)}=\rho^{-1}(\mathcal{L}(\mathcal{ A}))\)._ **Fact 3** (nfa closed under concatenation of languages).: _For any nfa\(\mathcal{A},\mathcal{B}\) there exists an nfa\(\mathcal{A}\odot\mathcal{B}\) of size \(O(|\mathcal{A}|+|\mathcal{B}|)\) such that \(\mathcal{L}(\mathcal{A}\odot\mathcal{B})=\mathcal{L}(\mathcal{A})\cdot\mathcal{ L}(\mathcal{B})\coloneqq\{u\cdot v\:|\:u\in\mathcal{L}(\mathcal{A})\text{ and }v\in\mathcal{L}(\mathcal{B})\}\)._ **Fact 4** (translating regular expressions into nfa).: _For any regular expression \(\mathcal{E}\), there exists an nfa\(\mathcal{A}(\mathcal{E})\) such that \(|\mathcal{A}(\mathcal{E})|=O(|\mathcal{E}|)\) and \(\mathcal{L}(\mathcal{A}(\mathcal{E}))=\mathcal{L}(\mathcal{E})\) (see [17])._ FitersA _filter_ is an auxiliary term introduced to simplify the proofs in Section3, allowing for a modular design of regular languages. Fix a finite alphabet \(\Sigma\) and let \(\Phi\coloneqq\{\top,\bot\}\). Define homomorphisms \(\psi_{\text{in}},\psi_{\text{out}}\colon\left(\Sigma\times\Phi\right)^{*}\to \Sigma^{*}\) by their actions on a single letter \[\psi_{\text{in}}(a,b)\coloneqq a \psi_{\text{out}}(a,\top)\coloneqq a \psi_{\text{out}}(a,\bot)\coloneqq\varepsilon\,.\] \[\text{(output every symbol from $\Sigma$)} \text{(output only symbols paired with $\top$)}\] A filter over an alphabet \(\Sigma\) is any language \(F\subseteq\left(\Sigma\times\Phi\right)^{*}\). It induces a binary _input-output relation_\(\mathcal{R}(F)\subseteq\Sigma^{*}\times\Sigma^{*}\) between input words \(u\) and their subsequences \(v\): \[(u,v)\in\mathcal{R}(F)\ \ \stackrel{{\text{def}}}{{ \Longleftrightarrow}}\ \ u=\psi_{\text{in}}(w)\text{ and }v=\psi_{\text{out}}(w)\text{ for some }w\in F\,.\] We define \(F(u)\coloneqq\{v\:|\:(u,v)\in\mathcal{R}(F)\}\) to be the set of all possible outputs of \(F\) on \(u\). Filtering regular expressionsA _filtering regular expression_\(\mathcal{F}\) over alphabet \(\Sigma\) is any regular expression over \(\Sigma\times\Phi\). We write \(\mathcal{F}(w)\coloneqq\mathcal{L}(\mathcal{F})(w)\). To simplify the notation, we only write the \(\Sigma\) component of the constants, and underline parts of the expression. A symbol \(a\) appearing in an underlined fragment represents a pair \((a,\top)\), and in a fragment which is not underlined--a pair \((a,\bot)\). Intuitively, underlined portions correspond to parts of the words being output. We apply the same notational convention to words \(w\in(\Sigma\times\Phi)^{*}\). Additionally, for \(\rho:\Sigma\to\Gamma\), we abuse the notation and extend it to the naturally defined homomorphism of type \(\Sigma\times\Phi\to\Gamma\times\Phi\), which just preserves the coordinate belonging to \(\Phi\). Fix \(A=\{\mathtt{a},\mathtt{b},\mathtt{c},\ldots,\mathtt{z}\}\). 
For example, the filtering regular expression \(\mathcal{F}=A^{*}\,\underline{\mathtt{a}\mathtt{b}}\,A^{*}\) over \(A\) outputs exactly the word \(\mathtt{ab}\) on every input containing it: \(\mathcal{F}(w)=\{\mathtt{ab}\}\) if \(\mathtt{ab}\) is an infix of \(w\), and \(\mathcal{F}(w)=\emptyset\) otherwise.

**Automatic relations.** Fix \(k\in\mathbb{N}_{+}\) and a padding symbol \(\#\notin\Sigma\); we write \(\Sigma_{\#}\coloneqq\Sigma\cup\{\#\}\). Let \(w_{1},w_{2},\ldots,w_{k}\in\Sigma^{*}\), let \(\ell\coloneqq\max_{1\leq i\leq k}|w_{i}|\), and write \(w_{i}=a_{i,1}a_{i,2}\cdots a_{i,|w_{i}|}\); for every \(j\) with \(|w_{i}|<j\leq\ell\) set \(a_{i,j}\coloneqq\#\). The _convolution_ \(w_{1}\otimes w_{2}\otimes\cdots\otimes w_{k}\) of \(w_{1},\ldots,w_{k}\) is defined as \[w_{1}\otimes w_{2}\otimes\cdots\otimes w_{k}\coloneqq\begin{bmatrix}a_{1,1}\\ \vdots\\ a_{k,1}\end{bmatrix}\begin{bmatrix}a_{1,2}\\ \vdots\\ a_{k,2}\end{bmatrix}\cdots\begin{bmatrix}a_{1,\ell}\\ \vdots\\ a_{k,\ell}\end{bmatrix}\in\left(\Sigma_{\#}^{k}\right)^{*}\,.\] For \(R\subseteq(\Sigma^{*})^{k}\) and \(L\subseteq\left(\Sigma_{\#}^{k}\right)^{*}\) define \[\text{\it Rel2Lang}(R)\coloneqq\left\{w_{1}\otimes w_{2}\otimes\cdots\otimes w_{k}\mid(w_{1},w_{2},\ldots,w_{k})\in R\right\},\qquad\text{\it Lang2Rel}(L)\coloneqq\left\{(w_{1},w_{2},\ldots,w_{k})\mid w_{1}\otimes w_{2}\otimes\cdots\otimes w_{k}\in L\right\}.\] A relation \(R\subseteq(\Sigma^{*})^{k}\) is _automatic_ whenever \(\text{\it Rel2Lang}(R)\) is regular.1 Throughout this paper, we assume that \(\text{\it Rel2Lang}(R)\) is given by some nfa \(\mathcal{A}_{R}=(Q,\Sigma_{\#}^{k},\delta,Q_{\mathrm{I}},Q_{\mathrm{F}})\). Footnote 1: For the purposes of this paper and for presentational convenience, we assume that \(R\) is over the same alphabet as the corresponding regular language. Clearly, not every nfa \(\mathcal{A}=(Q,\Sigma_{\#}^{k},\delta,Q_{\mathrm{I}},Q_{\mathrm{F}})\) is associated with an automatic relation \(R\subseteq(\Sigma^{*})^{k}\) since there are _a priori_ no restrictions on the occurrences of the padding symbol "\(\#\)". The language \(L_{\#}\subseteq(\Sigma_{\#}^{k})^{*}\) of all incorrect words that cannot be obtained as a convolution of words \(w_{1},\ldots,w_{k}\in\Sigma^{*}\) can be characterized by the following regular expression: \[\left(\Sigma_{\#}^{k}\right)^{*}\cdot\left(\{\#\}^{k}+\sum_{1\leq i\leq k}\left(\left(\Sigma_{\#}^{i-1}\times\{\#\}\times\Sigma_{\#}^{k-i}\right)\cdot\left(\Sigma_{\#}^{i-1}\times\Sigma\times\Sigma_{\#}^{k-i}\right)\right)\right)\cdot\left(\Sigma_{\#}^{k}\right)^{*}.\] This regular expression "guesses" that either a letter consisting solely of \(k\) "\(\#\)" symbols occurs, or in some row of a word in \(\left(\Sigma_{\#}^{k}\right)^{*}\) a "\(\#\)" symbol is followed by a symbol in \(\Sigma\). The language of this regular expression can be implemented by an nfa with \(k+2\) many states. Hence, its complement \(L_{\boldsymbol{\nu}}\coloneqq\overline{L_{\#}}\), characterizing all "good" words, can be recognized by an nfa with \(2^{k+2}\) many states. For the sake of readability, we do not parameterize \(L_{\#}\) and \(L_{\boldsymbol{\nu}}\) explicitly with \(k\); the relevant \(k\) will always be clear from the context. The _(existential) projection_ of \(R\subseteq(\Sigma^{*})^{d+k}\) onto the first \(d\) components is defined as \[\pi_{d}^{\exists}(R)\coloneqq\left\{\bar{u}\in(\Sigma^{*})^{d}\,\middle|\,(\bar{u},\bar{w})\in R\text{ for some }\bar{w}\in(\Sigma^{*})^{k}\right\}.\] The dual of existential projection is _universal projection_: \[\pi_{d}^{\forall}(R)\coloneqq\left\{\bar{u}\in(\Sigma^{*})^{d}\,\middle|\,(\bar{u},\bar{w})\in R\text{ for all }\bar{w}\in(\Sigma^{*})^{k}\right\}.\] It is clear that \(\pi_{d}^{\forall}(R)=\overline{\pi_{d}^{\exists}(\overline{R})}\).
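The convolution and the "good word" test are easy to mirror in code. The sketch below (illustrative only; `PAD` stands in for the padding symbol \(\#\)) builds \(w_{1}\otimes\cdots\otimes w_{k}\) as a list of \(k\)-tuples and checks the two defects that the regular expression for \(L_{\#}\) guesses: an all-padding letter, or a padding symbol followed by a symbol of \(\Sigma\) within some row.

```python
PAD = "#"

def convolution(words):
    """Convolution w1 (x) ... (x) wk: pad shorter words with PAD on the right."""
    length = max((len(w) for w in words), default=0)
    padded = [w + PAD * (length - len(w)) for w in words]
    return [tuple(w[j] for w in padded) for j in range(length)]

def is_good(letters, k):
    """True iff a word over (Sigma_#)^k is the convolution of k words over Sigma:
    no all-PAD letter, and no PAD followed by a non-PAD symbol within a row."""
    for column in letters:
        if all(a == PAD for a in column):
            return False
    for i in range(k):
        row = [column[i] for column in letters]
        for j in range(len(row) - 1):
            if row[j] == PAD and row[j + 1] != PAD:
                return False
    return True

w = convolution(["abba", "ab"])
print(w)                                      # [('a','a'), ('b','b'), ('b','#'), ('a','#')]
print(is_good(w, 2))                          # True
print(is_good([("#", "#")], 2))               # False: all-padding letter
print(is_good([("a", "#"), ("a", "a")], 2))   # False: '#' followed by 'a' in row 2
```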
We overload the projection notation for languages \[\pi_{d}^{\exists}(L)\coloneqq\text{\it Rel2Lang}\big{(}\pi_{d}^{\exists}(\text{\it Lang2Rel}(L))\big{)}\qquad\pi_{d}^{\forall}(L)\coloneqq\text{\it Rel2Lang}\big{(}\pi_{d}^{\forall}(\text{\it Lang2Rel}(L))\big{)}\,.\] In this article, given \(\mathcal{A}_{R}\) such that \(\text{\it Rel2Lang}(R)=\mathcal{L}(\mathcal{A}_{R})\subseteq\left(\Sigma_{\#}^{d+k}\right)^{*}\), we are concerned with the computational complexity of deciding whether \(\pi_{d}^{\forall}(R)=\emptyset\), measured in terms of \(\left|\mathcal{A}_{R}\right|\).

**Theorem 7**.: _Deciding whether \(\pi_{d}^{\forall}(R)\neq\emptyset\) for an automatic relation \(R\subseteq(\Sigma^{*})^{d+k}\) with an associated nfa \(\mathcal{A}_{R}\) is_ ExpSpace_-complete. The lower bound already holds for \(d=k=1\)._

## 3 Emptiness after universal projection is ExpSpace-hard

### Tiling problems

Let \(\mathcal{T}\subseteq_{\text{fin}}\mathbb{N}^{4}\) be a set of _tiles_ with colours coded as tuples of numbers in top-right-bottom-left order. We define natural projections \(\mathit{top}\), \(\mathit{right}\), \(\mathit{bottom}\), \(\mathit{left}:\mathbb{N}^{4}\rightarrow\mathbb{N}\) to access individual colours of a tile, and let \(\mathit{colours}(\mathcal{T})\coloneqq\mathit{top}(\mathcal{T})\cup\mathit{right}(\mathcal{T})\cup\mathit{bottom}(\mathcal{T})\cup\mathit{left}(\mathcal{T})\).

**Example 8** (a tile).: A tile \(t=(1,3,2,2)\) is drawn as a unit square whose top, right, bottom and left edges carry the colours \(1\), \(3\), \(2\) and \(2\), respectively, with various auxiliary background shades corresponding to colour values.

A \(\mathcal{T}\)-_tiling of size \((h,w)\in\mathbb{N}^{2}_{+}\)_ is any \(h\times w\) matrix \(T=[t_{i,j}]_{i,j}\in\mathcal{T}^{h\times w}\). It is _valid_, whenever colours of the neighboring tiles match, and outer colours are all \(0\): \[\mathit{bottom}(t_{i,j})=\mathit{top}(t_{i+1,j})\qquad\text{for every }1\leq i\leq h-1\text{ and }1\leq j\leq w, \tag{1}\] \[\mathit{right}(t_{i,j})=\mathit{left}(t_{i,j+1})\qquad\text{for every }1\leq i\leq h\text{ and }1\leq j\leq w-1, \tag{2}\] \[\mathit{left}(t_{i,1})=\mathit{right}(t_{i,w})=0\qquad\text{for every }1\leq i\leq h, \tag{3}\] \[\mathit{top}(t_{1,j})=\mathit{bottom}(t_{h,j})=0\qquad\text{for every }1\leq j\leq w. \tag{4}\] See Appendix A for an example of a valid tiling. A _\(\mathcal{T}\)-tiling of width \(w\in\mathbb{N}_{+}\)_ is any tiling in \(\mathcal{T}^{h\times w}\) for some \(h\in\mathbb{N}_{+}\). We define \[\mathcal{T}^{\star\times w}\coloneqq\bigcup_{h\in\mathbb{N}_{+}}\mathcal{T}^{h\times w}\,.\]

**Problem 9** (CorridorTiling).:
_Input:_ a pair \((\mathcal{T},n)\), where \(\mathcal{T}\subseteq_{\mathrm{fin}}\mathbb{N}^{4}\) is a finite set of tiles and \(n\in\mathbb{N}\) is given in unary.
_Question:_ does there exist a valid \(\mathcal{T}\)-tiling of width \(2^{n}\)?

By \(\mathbb{T}\coloneqq\mathcal{P}_{\mathrm{fin}}(\mathbb{N}^{4})\times\mathbb{N}_{+}\) we denote the set of all valid instances of the above problem.

**Fact 10**.: CorridorTiling _(Problem 9) is_ ExpSpace_-hard._

It is part of the folklore of the theory of computation that tiling problems can simulate the computation of Turing machines, the width of the requested tiling corresponding to the length of tape the machine is allowed to use. ExpSpace-completeness of the variant presented above is shown in [16].

### The reduction

We prove Theorem 7 by a reduction from CorridorTiling. We will show that the ExpSpace-hardness occurs in the simplest case of universal projection--projecting a binary relation to get a unary one.
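Looking back at the tiling conditions for a moment: the validity conditions (1)-(4) can be checked mechanically, as in the following Python sketch (illustrative only; the sample tiles are invented) for a tiling given as a matrix of (top, right, bottom, left) tuples.

```python
# A tile is a tuple (top, right, bottom, left) of colours.
TOP, RIGHT, BOTTOM, LEFT = 0, 1, 2, 3

def is_valid_tiling(T):
    """Check conditions (1)-(4): neighbouring colours match, border colours are 0."""
    h, w = len(T), len(T[0])
    for i in range(h):
        for j in range(w):
            t = T[i][j]
            if i + 1 < h and t[BOTTOM] != T[i + 1][j][TOP]:   # condition (1)
                return False
            if j + 1 < w and t[RIGHT] != T[i][j + 1][LEFT]:   # condition (2)
                return False
    if any(T[i][0][LEFT] != 0 or T[i][w - 1][RIGHT] != 0 for i in range(h)):
        return False                                          # condition (3)
    if any(T[0][j][TOP] != 0 or T[h - 1][j][BOTTOM] != 0 for j in range(w)):
        return False                                          # condition (4)
    return True

# A 1 x 2 example: colours only need to match on the single inner edge.
t_left  = (0, 5, 0, 0)
t_right = (0, 0, 0, 5)
print(is_valid_tiling([[t_left, t_right]]))          # True
print(is_valid_tiling([[t_left, (0, 0, 0, 7)]]))     # False: 5 != 7 on the inner edge
```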
Intuitively, for each instance \(\mathcal{I}\) of CorridorTiling, we want to construct an automaton \(\mathcal{A}_{\mathcal{I}}\) such that \(\pi^{\forall}_{1}(\mathcal{L}(\mathcal{A}_{\mathcal{I}}))\) is not empty if, and only if, \(\mathcal{I}\) is a YES-instance. Formally, we provide a family of LogSpace-constructible nfa \((\mathcal{A}_{\mathcal{I}})_{\mathcal{I}\in\mathbb{T}}\), each over the alphabet \((\Sigma_{\mathcal{I}}\cup\{\#\})^{2}\) for some \(\Sigma_{\mathcal{I}}\) and representing the relation \(\mathit{Lang2Rel}(\mathcal{L}(\mathcal{A}_{\mathcal{I}}))\subseteq(\Sigma_{\mathcal{I}}^{*})^{2}\) s.t. \[\pi^{\forall}_{1}(\mathcal{L}(\mathcal{A}_{\mathcal{I}}))\neq\emptyset\iff\text{there exists a valid $\mathcal{T}$-tiling of width $2^{n}$}. \tag{5}\] For the rest of this section, we fix an instance \(\mathcal{I}=(\mathcal{T},n)\in\mathbb{T}\). Due to technical reasons, we assume that \(n\geq 6\). Note that every instance \((\mathcal{T},n)\) with \(n<6\) can be easily transformed into \((\mathcal{T}^{\prime},6)\), while preserving the (non)existence of a valid tiling. In Section 3.3, we define \(\Sigma_{\mathcal{I}}\), specify a language \(L_{\mathcal{I}}\subseteq\Sigma_{\mathcal{I}}^{*}\), and prove that:

**Lemma 11**.: \(L_{\mathcal{I}}\neq\emptyset\iff\text{there exists a valid $\mathcal{T}$-tiling of width $2^{n}$}\)_._

In turn, in Section 3.4, we construct in LogSpace an nfa \(\mathcal{A}_{\mathcal{I}}\) such that

**Lemma 12**.: \(\pi^{\forall}_{1}(\mathcal{L}(\mathcal{A}_{\mathcal{I}}))=L_{\mathcal{I}}\)_._

This completes the proof of Theorem 7, the correctness of the reduction stemming directly from Lemmas 11 and 12.

### Word encoding of tilings

Here, we provide \(\Sigma_{\mathcal{I}}\) and an encoding \(\mathit{enc}_{\mathcal{I}}\colon\mathcal{T}^{\star\times 2^{n}}\to\Sigma_{\mathcal{I}}^{*}\). Then we define \(L_{\mathcal{I}}\) as an intersection of six conditions, and prove Lemma 11 by showing that it coincides with the language of encodings of valid tilings. Let \(N_{n}\coloneqq\mathbb{N}\cap[0,n]\). Additionally, let \(N_{n}^{?k}\coloneqq\{i\in N_{n}\mid i\mathrel{?}k\}\) for \(?\in\{<,=,>\}\) and \(k\in\mathbb{N}\) (to be used in the next section). The alphabet \(\Sigma_{\mathcal{I}}\) consists of three groups of symbols--tiles from \(\mathcal{T}\), numbers from \(N_{n}\), and auxiliary symbols: \[\Sigma_{\mathcal{I}}\coloneqq\mathcal{T}\cup N_{n}\cup\{\mathtt{A},\llbracket,\rrbracket,\langle,\rangle\}\,.\] Above, the symbol \(\mathtt{A}\) is a mnemonic--it marks places in Section 3.4 where we enforce "for-all"-type properties. In what follows, we print some symbols in colours (e.g., \(\mathtt{3010}\,t\,\mathtt{20103}\)) to assist in understanding the construction--such designations are auxiliary and are not reflected in the alphabet. The encoding of rows makes use of the word \(\textsc{Comb}_{n}\in N_{n}^{*}\) \[\textsc{Comb}_{n}\coloneqq n\,\textsc{Comb}_{n-1}^{\prime}\,n\,,\] where the words \(\big{(}\textsc{Comb}_{i}^{\prime}\big{)}_{0\leq i\leq n}\) are defined recursively as \[\textsc{Comb}_{0}^{\prime}\coloneqq\mathtt{0}\qquad\qquad\textsc{Comb}_{i}^{\prime}\coloneqq\textsc{Comb}_{i-1}^{\prime}\,i\,\textsc{Comb}_{i-1}^{\prime}\qquad\text{for }0<i\leq n.\] Observe that \(\textsc{Comb}_{n}\) has length exactly \(2^{n}+1\).

**Example 13**.: \(\textsc{Comb}_{4}\) is \(\mathtt{40102010301020104}\) and has length \(17\).
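The word \(\textsc{Comb}_{n}\) is easy to generate and to sanity-check; the short Python sketch below (illustrative only) reproduces Example 13 and the length claim \(|\textsc{Comb}_{n}|=2^{n}+1\).

```python
def comb_prime(i):
    """Comb'_0 = 0 and Comb'_i = Comb'_{i-1} i Comb'_{i-1}."""
    if i == 0:
        return [0]
    inner = comb_prime(i - 1)
    return inner + [i] + inner

def comb(n):
    """Comb_n = n Comb'_{n-1} n."""
    return [n] + comb_prime(n - 1) + [n]

print("".join(map(str, comb(4))))          # 40102010301020104 (Example 13)
for n in range(1, 12):
    assert len(comb(n)) == 2 ** n + 1      # the length is exactly 2^n + 1
print("length check passed")
```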
We define the _encoding_ function \(\mathit{enc}_{\mathcal{I}}\colon\mathcal{T}^{\star\times 2^{n}}\to\Sigma_{ \mathcal{I}}^{*}\) in three steps. Let \(T=[t_{i,j}]_{i,j}\in\mathcal{T}^{h\times 2^{n}}\) for some \(h\in\mathbb{N}\). The tile \(t_{i,j}\) in \(T\) is represented as \[\mathit{encCell}_{\mathcal{I}}(T,i,j)\coloneqq\mathtt{\zeta}\,\mathit{\textsc{ Comb}}_{n}[1,j]\,t_{i,j}\,\mathit{\textsc{Comb}}_{n}[j+1,2^{n}+1]\,\mathtt{A} \,\mathtt{\rangle}\,,\] a single row is encoded as \[\mathit{encRow}_{\mathcal{I}}(T,i)\coloneqq\mathtt{I}\prod_{1\leq j\leq 2^{n}} \mathit{encCell}_{\mathcal{I}}(T,i,j)\,\mathtt{I}\,\mathtt{I}\,,\] and finally, the encoding of the entire tiling is defined as \[\mathit{enc}_{\mathcal{I}}(T)\coloneqq\mathtt{A}\prod_{1\leq i\leq h} \mathit{encRow}_{\mathcal{I}}(T,i)\,.\] **Example 14**.: The tiling \(T=[t_{i,j}]_{i,j}\) of size \((2,2^{4})\) is encoded as \[\mathtt{A}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, \mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, \mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, \mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I }\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, \mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I }\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, \mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I }\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, \mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I }\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, \mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I }\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, \mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I }\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, \mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I }\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I }\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I }\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, \mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I }\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, \mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I }\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, \mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I }\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, \mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I }\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, \mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I }\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, \mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I }\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, \mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, \mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, 
\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I }\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, \mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I }\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, \mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, \mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I }\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, \mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I }\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, \mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, \mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, \mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{I}\, \mathtt{I}\,\mathtt{I}\,\mathtt{I}\,\mathtt{ **Condition 1**.: Language \(\textsc{Cond}^{1}_{\mathcal{Y}}\) is given by the regular expression Intuitively, encodings consist of rows bounded by \(\llbracket\!\!\!\rrbracket\) and \(\llbracket\!\!\!\rrbracket\); each row comprised of cells delimited by \(\boldsymbol{\zeta}\) and \(\boldsymbol{\gamma}\); the first cell begins with the number \(n\) followed by a tile, while last one ends with a tile, \(n\) and \(\boldsymbol{\Lambda}\). As \(|\mathcal{E}^{1}_{\mathcal{Y}}|=O(n)\), by Fact 4 the language \(\textsc{Cond}^{1}_{\mathcal{Y}}\) is recognised by an nfa\(\mathcal{B}^{1}_{\mathcal{Y}}\coloneqq\mathcal{A}\big{(}\mathcal{E}^{1}_{ \mathcal{Y}}\big{)}\) of size \(O(n)\). **Condition 2**.: Let \(\mathcal{T}_{\mathcal{Y}},\mathcal{T}_{\Delta}\subseteq\mathcal{T}\) contain tiles \(t\) such that \(\mathit{top}(t)=0\), and \(\mathit{bottom}(t)=0\), respectively. The language \(\textsc{Cond}^{2}_{\mathcal{Y}}\) is defined by the regular expression Intuitively, the first row has tiles with colour \(0\) on their top side, and the last row--on their bottom side. As in Condition 1, \(\textsc{Cond}^{2}_{\mathcal{Y}}\) is recognised by an nfa\(\mathcal{B}^{2}_{\mathcal{Y}}\coloneqq\mathcal{A}\big{(}\mathcal{E}^{2}_{ \mathcal{Y}}\big{)}\) of size \(O(n)\). **Condition 3**.: Let \(\mathcal{B}^{3}_{\mathcal{Y}}=(\mathit{colours}(\mathcal{T}),\Sigma_{\mathcal{ Y}},\delta,\{0\},\{0\})\), where \(\delta\) has transitions \[i\xrightarrow{t}j\] for every \[i,j\in\mathit{colours}(\mathcal{T})\] and \[t\in\mathcal{T}\] s.t. \[\mathit{left}(t)=i\] and \[\mathit{right}(t)=j\] , \[i\xrightarrow{a}i\] for every \[i\in\mathit{colours}(\mathcal{T})\] and \[a\in\Sigma_{\mathcal{Y}}\setminus(\mathcal{T}\cup\{\llbracket\!\!\!\rrbracket)\] , and additionally a single transition \(0\xrightarrow{1}0\). We set \(\textsc{Cond}^{3}_{\mathcal{Y}}\coloneqq\mathcal{L}\big{(}\mathcal{B}^{3}_{ \mathcal{Y}}\big{)}\). Intuitively, the language contains encodings where tile colours match horizontally, also requiring leftmost and rightmost colours in every row to be \(0\). 
**Condition 4** (each cell contains a \(\textsc{Cond}_{\mathcal{Y}}\)).: The definition of \(\textsc{Cond}^{4}_{\mathcal{Y}}\) uses a filtering regular expression \(\mathcal{F}^{4}_{\mathcal{Y}}\): \[\mathcal{F}^{4}_{\mathcal{Y}} \coloneqq\big{\{}\,N^{*}_{\mathcal{Y}}\,N^{*}_{n}\,\boldsymbol{ \Lambda}\,\big{\}}\,\Sigma^{*}_{\mathcal{Y}}\] \[\textsc{Cond}^{4}_{\mathcal{Y}} \coloneqq\big{\{}w\in\Sigma_{\mathcal{Y}}\,\big{|}\,\textsc{Cond }_{\mathcal{Y}}\,N^{*}_{\mathcal{Y}}(v)\text{ for every proper suffix $v$ of $w$ such that $v[1]=\boldsymbol{\zeta}$}\big{\}}\] **Condition 5** (prefix of a cell and first symbols of following cells' suffixes form a \(\textsc{Cond}_{\mathcal{Y}}\)).: \[\mathcal{F}^{5}_{\mathcal{Y}} \coloneqq\big{\{}\,N^{*}_{\mathcal{Y}}\,N^{*}_{n}\,\boldsymbol{ \Lambda}\,\big{\}}\,\big{(}\,N^{*}_{n}\,\mathcal{T}\,N^{*}_{n}\,N^{*}_{n}\, \boldsymbol{\Lambda}\,\big{)}^{*}\,\big{\{}N^{*}_{n}\,\boldsymbol{\Upsilon} \,N_{n}\,N^{*}_{n}\,\boldsymbol{\Lambda}\,\big{\}}\,\big{\}}\,\big{\}}\, \Sigma^{*}_{\mathcal{Y}}\] \[\textsc{Cond}^{5}_{\mathcal{Y}} \coloneqq\big{\{}w\in\Sigma_{\mathcal{Y}}\,\big{|}\,\textsc{Cond }_{\mathcal{Y}}\,N^{*}_{\mathcal{Y}}(v)\text{ for every proper suffix $v$ of $w$ such that $v[1]=\boldsymbol{\zeta}$}\big{\}}\] **Condition 6** (tile colours match vertically).: Let \(\boldsymbol{\Psi}_{t}\coloneqq\{t^{\prime}\in\mathcal{T}\,|\,\mathit{top}(t^{ \prime})=\mathit{bottom}(t)\}\) be the set of tiles with the top colour matching to the bottom of a tile \(t\). Define \[\mathcal{F}^{6}_{\mathcal{Y}} \coloneqq\sum_{t\in\mathcal{T}}\Big{(} \big{\{}\,N^{*}_{n}\,t\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, Let us define \(L_{\_}{\_}{j}\coloneqq\mathbb{A}\bigcap_{\_}{1\leq i\leq 5}\textsc{Cond}_{\_}{j}^{\_}\). To prove Lemma 11, it suffices to show the following statement. \(L_{\_}{\_}{\mathcal{Y}}=\textsc{ValidEnc}_{\_}{\_}{j}\) Proof.: The inclusion \(L_{\_}{\_}{j}\supseteq\textsc{ValidEnc}_{\_}{j}\) is trivial. Inclusion \(L_{\_}{\_}{j}\subseteq\textsc{ValidEnc}_{\_}{\_}{j}\). Take any \(u\in L_{\_}{\_}{j}\). Due to Condition 1, it has the form \(\textsc{A}\,\prod_{\_}{1\leq i\leq h}\textsc{I}\,v_{\_}{i}\,\mathchoice{ \hbox{\hbox to 0.0pt{\kern 2.999968pt\vrule height 6.299904pt width 0.4pt depth -0.299993pt\hss} \hbox{\hbox to 0.0pt{\kern 2.999968pt\vrule height 6.299904pt width 0.4pt depth -0. 
**Lemma 17** (modular design).: _For any six nfa\((\mathcal{C}^{i}_{\mathcal{j}})_{1\leq i\leq 6}\) over \(\Sigma_{\mathcal{j}}\times N_{n}\), there exists an nfa\(\mathcal{A}^{\prime}_{\mathcal{j}}\) over \(\Sigma_{\mathcal{j}}\times N_{n}\) such that_ \[\rho^{\forall}_{\mathcal{j}}(\mathcal{L}(\mathcal{A}^{\prime}_{\mathcal{j}}))= \mathtt{A}\bigcap_{1\leq i\leq 6}\rho^{\forall}_{\mathcal{j}}\big{(}\mathcal{L} \big{(}\mathcal{C}^{i}_{\mathcal{j}}\big{)}\big{)}\,.\] Proof.: Define \[\mathcal{H} \coloneqq\left(\{\mathtt{A}\}\times N_{n}\setminus\{\mathtt{1}, 2,\ldots,\mathtt{6}\}\right)(\Sigma_{\mathcal{j}}\times N_{n})^{*}\] \[\mathcal{A}^{\prime}_{\mathcal{j}} \coloneqq\mathcal{A}((\mathtt{A},\mathtt{1}))\odot\mathcal{C}^{ 1}_{\mathcal{j}}\ \ \oplus\ \ \mathcal{A}\left((\mathtt{A},\mathtt{2})\right)\odot\mathcal{C}^{2}_{ \mathcal{j}}\ \oplus\ \ \cdots\ \ \oplus\ \ \mathcal{A}((\mathtt{A},\mathtt{6}))\odot\mathcal{C}^{6}_{ \mathcal{j}}\ \ \oplus\ \ \ \mathcal{A}(\mathcal{H})\,.\] Observe that \[\mathtt{A}w\in\rho^{\forall}_{\mathcal{j}}(\mathcal{L}(\mathcal{ A}^{\prime}_{\mathcal{j}})) \iff\rho^{-1}_{\mathcal{j}}(\{\mathtt{A}w\})\subseteq\mathcal{L}( \mathcal{A}^{\prime}_{\mathcal{j}})\iff(\{\mathtt{A}\}\times N_{n})\,\rho^{-1 }_{\mathcal{j}}(\{w\})\subseteq\mathcal{L}(\mathcal{A}^{\prime}_{\mathcal{j}}) \iff\] \[\iff\forall i\in N_{n}\,.\,(\mathtt{A},i)\,\rho^{-1}_{\mathcal{j} }(\{w\})\subseteq\mathcal{L}(\mathcal{A}^{\prime}_{\mathcal{j}})\,,\] but trivially \[\mathcal{L}\Big{(}\mathcal{A}((\mathtt{A},j))\odot\mathcal{C}^{ j}_{\mathcal{j}}\Big{)}\cap(\mathtt{A},i)\,\rho^{-1}_{\mathcal{j}}(\{w\}) =\emptyset\] for any \[i\neq j\] \[\mathcal{L}(\mathcal{A}(\mathcal{H}))\cap(\mathtt{A},i)\,\rho^{-1 }_{\mathcal{j}}(\{w\}) =\emptyset\] for any \[i\]. Therefore, \(\mathtt{A}w\in\rho^{\forall}_{\mathcal{j}}(\mathcal{L}(\mathcal{A}^{\prime}_{ \mathcal{j}}))\) if, and only if, \(\rho^{-1}_{\mathcal{j}}(\{w\})\subseteq\rho^{\forall}_{\mathcal{j}}\big{(} \mathcal{L}\big{(}\mathcal{C}^{i}_{\mathcal{j}}\big{)}\big{)}\) for all \(i\), as required. By definition of \(L_{\mathcal{j}}\), it only remains to construct automata \(\mathcal{C}^{i}_{\mathcal{j}}\) such that \(\rho^{\forall}_{\mathcal{j}}\big{(}\mathcal{L}\big{(}\mathcal{C}^{i}_{\mathcal{ j}}\big{)}\big{)}=\textsc{Cond}^{i}_{\mathcal{j}}\) for \(1\leq i\leq 6\). The construction is easy for Conditions 1-3: \[\mathcal{C}^{i}_{\mathcal{j}}\coloneqq\rho^{-1}_{\mathcal{j}}\big{(}\mathcal{ B}^{i}_{\mathcal{j}}\big{)}\] for \[i\in\{1,2,3\}\] as \(\rho^{\forall}_{\mathcal{j}}\big{(}\mathcal{L}\big{(}\rho^{-1}_{\mathcal{j}}( \mathcal{A})\big{)}\big{)}=\mathcal{L}(\mathcal{A})\) for any nfa\(\mathcal{A}\). Observe that the remaining Conditions 4-6 all speak about "every proper suffix" satisfying some simple regular condition. We handle that in a general way. 
For \(L\subseteq\left(\Sigma_{\mathcal{j}}\times N_{n}\right)^{*}\), define \[L_{\forall\text{\rm{suf}}}(L)\coloneqq\big{\{}w\,\big{|}\,v\in \rho^{\forall}_{\mathcal{j}}(L)\text{ for all proper suffixes $v$ of $w$}\big{\}}\] **Lemma 18** (recognising "for all proper suffixes").: _For any nfa\(\mathcal{A}\) over \(\Sigma_{\mathcal{j}}\times N_{n}\), there exists an nfa\(\textsc{AllSuf}(\mathcal{A})\) of size \(O(|\mathcal{A}|)\) such that_ \[\rho^{\forall}_{\mathcal{j}}(\mathcal{L}(\textsc{AllSuf}(\mathcal{A})))=L_{ \forall\text{\rm{suf}}}(\mathcal{L}(\mathcal{A}))\,.\] Proof.: Fix any nfa\(\mathcal{A}=(Q,\Sigma_{\mathcal{j}}\times N_{n},\delta,Q_{\text{\rm{\rm{\rm{\rm {\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\, 0 0 0 0}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}\}\}\}\}\\ {\\ \\\\\\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\}\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ Let \(r\) be an accepting run of \(\textsc{{AllSuf}}(\mathcal{A})\) over \(w^{\prime}\). By construction, the run stays in state \(s\) while reading \(\tau(u)\) and goes to some \(q\in Q_{\mathrm{I}}\) upon reading \((a,1)\). Therefore, the remaining suffix of \(r\) is an accepting run of \(\mathcal{A}\) over \(v^{\prime}\). Inclusion "\(\supseteq\)". Fix \(w\in L_{\forall\mathsf{suf}}(\mathcal{L}(\mathcal{A}))\). Take any \(w^{\prime}\in\rho_{j}^{-1}(\{w\})\). We will show that \(w^{\prime}\in\mathcal{L}(\textsc{{AllSuf}}(\mathcal{A}))\). Let \(u^{\prime}(a,k)v^{\prime}\coloneqq w^{\prime}\) be such that \(u^{\prime}\) is the maximal prefix arising as \(\tau(u)\) for some \(u\) (possibly empty). Note that \(k\neq 0\). By assumption, \(v^{\prime}\in\mathcal{L}(\mathcal{A})\), so there exists an accepting run \(r_{2}\) of \(\mathcal{A}\) over \(v^{\prime}\) starting in some \(q\in Q_{ini}\). By construction, there exists a run \(r_{1}\) from \(s\) to \(q\) over \(u^{\prime}(a,k)\) in \(\textsc{{AllSuf}}(\mathcal{A})\). Hence the run \(r_{1}r_{2}\) accepts \(w^{\prime}\). To handle conditions "beginning with \(\,\mathsf{C}\)" and "containing \(\,\mathsf{I}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! _Define \(\hat{\mathbb{C}}^{i}_{\mathcal{J}}\coloneqq(Q,\Sigma\times N_{n},\delta,Q_{I},Q_{F})\), where_ \[Q \coloneqq Q^{(1)}\times Q^{(2)}, Q_{I} \coloneqq Q^{(1)}_{I} \times Q^{(2)}_{I}, Q_{F} \coloneqq Q^{(1)}_{F} \times Q^{(2)}_{F},\] _and the transition relation is_ \[\delta \coloneqq\left\{(p,q)\xrightarrow{(a,\alpha)}(r,s)\,\Big{|}\,q \xrightarrow{(a,\top)}s\in\delta^{(2)}\wedge p\xrightarrow{(a,\alpha)}r\in \delta^{(1)}\right\}\cup\] \[\left\{(p,q)\xrightarrow{(a,\alpha)}(p,s)\,\Big{|}\,q \xrightarrow{(a,\perp)}s\in\delta^{(2)}\wedge p\in Q^{(1)}\right\}.\] _Intuitively, \(\hat{\mathbb{C}}^{i}_{\mathcal{J}}\) runs \(\mathcal{C}_{n}\) over the fragments of the input which were underlined by \(\mathcal{F}^{i}_{\mathcal{J}}\)._ **Fact 24**.: \(w\in\mathcal{L}\big{(}\hat{\mathbb{C}}^{i}_{\mathcal{J}}\big{)}\) _if, and only if, \(\exists v\in\mathcal{L}\big{(}\rho^{-1}_{\mathcal{J}}\big{(}\mathcal{F}^{i}_{ \mathcal{J}}\big{)}\big{)}\,.\,\psi_{\mathrm{in}}(v)=w\wedge\psi_{\mathrm{out} }(v)\in\mathcal{L}(\mathcal{C}_{n})\)._ To finish the construction, we need to prove that **Lemma 25**.: _For \(i\in\{4,5,6\}\)_ As the proofs for \(i\in\{4,5,6\}\) are analogous, we focus on the hardest one, and then only comment how it can be adapted for \(i\in\{4,5\}\). Proof (\(i=6\)).: A. Inclusion "\(\subseteq\)". Take any \(w\in\rho^{\forall}\!\big{(}\mathcal{L}\big{(}\hat{\mathbb{C}}^{6}_{\mathcal{J} }\big{)}\big{)}\). Define \[U\coloneqq\left\{u\in\mathcal{L}\big{(}\mathcal{F}^{6}_{\mathcal{J}}\big{)} \,\big{|}\,\psi_{\mathrm{in}}(u)=w\right\}\] Note that if \(U=\emptyset\), then \(\mathcal{F}^{6}_{\mathcal{J}}(w)=\emptyset\), so by Fact 24\(\mathcal{L}\big{(}\hat{\mathbb{C}}^{6}_{\mathcal{J}}\big{)}=\emptyset\), and \(\rho^{\forall}\!\big{(}\mathcal{L}\big{(}\hat{\mathbb{C}}^{6}_{\mathcal{J}} \big{)}\big{)}=\emptyset\), a contradiction. Therefore, \(U\neq\emptyset\), and \(w\in\mathcal{L}\big{(}\psi_{\mathrm{in}}\big{(}\mathcal{F}^{6}_{\mathcal{J}} \big{)}\big{)}\), so it has the form \[\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \ Observe that \(w^{\prime}\) is properly defined, as positions \(\alpha_{u}\) are pairwise different (corresponding to the last letters of \(s_{1},s_{2},\ldots,s_{k}\)). Since \(\rho_{j}(w^{\prime})=w\), from assumption \(w\in\rho^{\forall}\!\left(\mathcal{L}\!\left(\mathbb{G}_{j}^{6}\right)\right)\) we have that \(w^{\prime}\in\mathcal{L}\!\left(\mathbb{G}_{j}^{6}\right)\). 
By Fact 24, we obtain \(v\in\mathcal{L}\!\left(\rho_{j}^{-1}\!\left(\mathcal{F}_{j}^{6}\right)\right)\) such that \[\psi_{\text{in}}(v)=w^{\prime}\wedge\psi_{\text{out}}(v)\in\mathcal{L}\!\left( \mathcal{C}_{n}\right)\] However, \(\rho_{j}(\psi_{\text{out}}(v))=\rho_{j}(\psi_{\text{out}}(v_{u}))\) for some \(u\in U\) and last symbols of \(\psi_{\text{out}}(v)\) and \(\psi_{\text{out}}(v_{u})\) are identical. Since by construction \(\mathcal{C}_{n}\) ignores the component \(N_{n}\) of its alphabet \(\Sigma_{I}\times N_{n}\) for all letters but the last one, we get that \[\psi_{\text{out}}(v)\in\mathcal{L}\!\left(\mathcal{C}_{n}\right)\iff\psi_{ \text{out}}(v_{u})\in\mathcal{L}\!\left(\mathcal{C}_{n}\right).\] We conclude that \(\psi_{\text{out}}(v)\notin\mathcal{L}\!\left(\mathcal{C}_{n}\right)\), a contradiction. **B. Inclusion "\(\supseteq\)". Take any \(w\) such that \(\textsc{Comb}_{n}\texttt{A}\in\mathcal{F}_{j}^{6}(w)\). Using definition of \(\mathcal{F}_{j}^{6}(w)\), fix \(v\in\mathcal{L}\!\left(\mathcal{F}_{j}^{6}\right)\) such that \(\psi_{\text{in}}(v)=w\) and \(\psi_{\text{out}}(v)=\textsc{Comb}_{n}\texttt{A}\). We have to show \(\rho_{j}^{-1}(\{w\})\subseteq\mathcal{L}\!\left(\mathbb{G}_{j}^{6}\right)\). Take any \(w^{\prime}\in\rho_{j}^{-1}(\{w\})\). Let \(u\in(\Sigma_{j}\times\mathbb{N}_{n}\times\Phi)^{*}\) be the unique word such that \(\psi_{\text{in}}(u)=w^{\prime}\) and \(\rho_{j}(u)=v\). Observe that \(\psi_{\text{out}}(w^{\prime})\in\rho_{j}^{-1}(\{\psi_{\text{out}}(w^{\prime}) \})\subseteq\mathcal{L}\!\left(\mathcal{C}_{n}\right)\), thus \(w^{\prime}\in\mathcal{L}\!\left(\mathbb{G}_{j}^{6}\right)\), as required. Proof (\(i\in\{4,5\}\)).: The proof is analogous to the case \(i=6\). As the cases are distinguished by the filter \(\mathcal{F}_{j}^{i}\) being used, the only differences are related to the shape of words matched by \(\psi_{\text{in}}\!\left(\mathcal{F}_{j}^{i}\right)\). In particular, the set \(U\) for \(i\in\{4,5\}\) is now a singleton containing \(u_{i}\): \[u_{4} =\textbf{\{}p\,t\,\underline{s}\,\texttt{A}\,\gamma\] \[u_{5} =\textbf{\{}p_{1}\,\underline{t}\,\underline{s^{\prime}}\,s_{1} \texttt{A}\,\texttt{\}\texttt{\}\texttt{\}}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \ \(L(\mathcal{B}^{\prime})=h(\mathcal{L}(\mathcal{B}))\). The homomorphism \(h\) acts almost like existential projection, but in general, we do not have that \(\pi_{d}^{\exists}(S)\) is automatic via \(\mathcal{B}^{\prime}\). For instance, suppose that \[w=\begin{bmatrix}a\\ a\end{bmatrix}\begin{bmatrix}b\\ a\end{bmatrix}\begin{bmatrix}\boldsymbol{\pi}\\ c\end{bmatrix}\begin{bmatrix}\boldsymbol{\pi}\\ a\end{bmatrix}\in\mathcal{L}(\mathcal{B})\,.\] Then \(h(w)=aa\boldsymbol{\#}\not\in L_{\boldsymbol{\nu}}\). In order to remove the superfluous "\(\boldsymbol{\#}\)" symbols, we define \[\textsc{Strip}(L)\coloneqq\left\{w\,\Big{|}\,\text{there exists $v\in(\left\{ \boldsymbol{\#}\right\}^{d})^{*}$ such that $wv\in L$}\right\}.\] It is then the case that \(\pi_{d}^{\exists}(S)\) is automatic via \(\textsc{Strip}(\mathcal{L}(\mathcal{B}^{\prime}))\cap L_{\boldsymbol{\nu}}\). 
Note that an nfa for \(\textsc{Strip}(L)\) can be computed in linear time from an nfa for \(L\), without changing the set of states, by making accepting every state that can reach a final state via a sequence of "\(\{\#\}^{d}\)" letters. Recall that \(\pi_{d}^{\forall}(R)=\overline{\pi_{d}^{\exists}(\overline{R})}\); consequently, an automatic presentation of \(\pi_{d}^{\forall}(R)\) is given by \[\overline{\Big{(}\textsc{Strip}\Big{(}h\Big{(}\overline{\mathcal{L}(\mathcal{A}_{R})}\Big{)}\Big{)}\cap L_{\boldsymbol{\nu}}\Big{)}}\cap L_{\boldsymbol{\nu}}.\] Assuming \(Q\) is the set of states of \(\mathcal{A}_{R}\), and recalling that \(L_{\boldsymbol{\nu}}\subseteq(\Sigma_{\#}^{d})^{*}\) is given by an nfa with \(2^{d+2}\) many states, it can easily be checked that the number of states of an nfa whose language gives the universal projection of \(R\) is bounded by \(2^{\left(2^{|Q|+d+2}\right)+d+2}\). With those characterisations and estimations at hand, the ExpSpace upper bound stated in Theorem 7 can now easily be established; the proof is relegated to Appendix B.

_Deciding whether \(\pi_{d}^{\forall}(R)\neq\emptyset\) is in_ ExpSpace_, measured in terms of the size of its associated nfa \(\mathcal{A}_{R}\)._

## 6 Conclusion

In this paper, we studied the computational complexity of eliminating universal quantifiers in automatic structures. We showed that, in general, this is a computationally challenging problem whose associated decision problem is ExpSpace-complete. It would be interesting to understand whether it is possible to identify natural sufficient conditions on regular languages for which a universal projection step does not result in a doubly-exponential blow-up and only results in, e.g., a polynomial or singly exponential growth. Results of this kind have been obtained in model-theoretic terms for structures of bounded degree [12, 8], but we are not aware of a systematic study of questions of this kind on the level of regular languages.

Figure 1: The unique valid \(\mathcal{T}_{\text{inc}}\)-tiling of width 5 (rotated by 90 degrees counterclockwise).
2304.11155
Commutation relations and stability of switched systems: a personal history
This expository article presents an overview of research, conducted mostly between the mid-1990s and late 2000s, that explores a link between commutation relations among a family of asymptotically stable vector fields and stability properties of the switched system that these vector fields generate. This topic is viewed through the lens of the author's own involvement with it, by interspersing explanations of technical developments with personal reminiscences and anecdotes.
Daniel Liberzon
2023-04-21T17:54:38Z
http://arxiv.org/abs/2304.11155v1
# Commutation relations and stability of switched systems: a personal history

###### Abstract

This expository article presents an overview of research, conducted mostly between the mid-1990s and late 2000s, that explores a link between commutation relations among a family of asymptotically stable vector fields and stability properties of the switched system that these vector fields generate. This topic is viewed through the lens of the author's own involvement with it, by interspersing explanations of technical developments with personal reminiscences and anecdotes.

The author dedicates this article to himself on the occasion of his 50th birthday.

## 1 How it all began for me

In January 1998, having just defended my PhD thesis, I came to Yale to do a postdoc with Steve Morse (for what eventually turned out to be 2.5 very enjoyable years). At that time Steve was getting interested in stability of switched systems, a subject I knew nothing about. A particular result that caught his attention was by Leonid Gurvits who, I believe, had given a seminar at Yale shortly before my arrival. This result--which we will examine in detail in Section 7 below--established a link between stability of a switched linear system and the property that the Lie algebra generated by the individual system matrices is nilpotent. Steve knew that I was trained as a mathematician and that I did my PhD under Roger Brockett, who was a pioneer in the use of Lie algebras in control theory (see, e.g., [10, 11]). Grossly overestimating my knowledge of this subject, Steve suggested that I try to improve and generalize Gurvits' result. To be able to explain what happened next, I first need to more formally introduce switched systems and their stability properties. In what follows, I assume that the reader is familiar with differential equations and understands what the notation \(\dot{x}=f(x)\) means, but not much else.

## 2 Switched systems

Suppose we are given a collection \(f_{p}:\mathbb{R}^{n}\to\mathbb{R}^{n}\), \(p\in\mathcal{P}\) of vector fields or, what is the same, a collection of dynamical systems (which we will call _modes_) \[\dot{x}=f_{p}(x),\qquad p\in\mathcal{P} \tag{1}\] with state \(x\in\mathbb{R}^{n}\). Here \(\mathcal{P}\) is an index set, which we assume for simplicity to be finite throughout this article by setting \(\mathcal{P}=\{1,\ldots,m\}\) for some positive integer \(m\), although this assumption is not really necessary for many of the results that we will discuss. A second ingredient needed to define a switched system is a function of time that specifies which of the above modes is to be activated when. We denote this function by \(\sigma:[0,\infty)\to\mathcal{P}\) and call it a _switching signal_. While the total number of switches (discontinuities of \(\sigma\)) can be infinite, we ask that on every _bounded_ time interval the number of switches be finite (to avoid some unpleasant technical issues). Since \(\mathcal{P}\) is a finite set, it is clear that \(\sigma(\cdot)\) is a piecewise constant function. Therefore, we are dealing with time variation of a fairly restricted type. The resulting dynamics can be written as \[\dot{x}(t)=f_{\sigma(t)}(x(t))\] or, more compactly, as \[\dot{x}=f_{\sigma}(x). \tag{2}\] We call the system (2) a _switched system_, and it will be our main object of study.
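To fix ideas, here is a small numerical sketch in Python of a switched system of the form (2) with two linear modes (the two matrices and the switching schedule are arbitrary illustrative choices, not an example taken from the literature): the state is propagated by the matrix exponential of whichever mode is active on each interval of the switching signal.

```python
import numpy as np
from scipy.linalg import expm

# Two modes x' = A_p x, p in {1, 2}; both matrices below are Hurwitz.
A = {1: np.array([[0.0, 1.0], [-2.0, -1.0]]),
     2: np.array([[-1.0, 0.0], [1.0, -3.0]])}

# A switching signal given as a list of (mode, duration) pairs.
sigma = [(1, 0.7), (2, 0.4), (1, 1.1), (2, 0.8)]

def propagate(x0, schedule):
    """Flow x' = A_sigma x through the given schedule of (mode, duration) pairs."""
    x = np.array(x0, dtype=float)
    for mode, tau in schedule:
        x = expm(A[mode] * tau) @ x   # exact solution of the active linear mode
    return x

print(propagate([1.0, 0.0], sigma))
```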
While in writing (2) I am using the notation that Steve Morse adopted and showed to me in 1998, switched systems had been studied in the control theory literature for some years prior to that and, in view of their proximity to differential inclusions [7] and discontinuous systems [15], their history goes back several decades. ## 3 Stability notions Let us suppose that all the individual modes (1) share an equilibrium at the origin, i.e., \(f_{p}(0)=0\) for all \(p\in\mathcal{P}\), and that we are interested in (asymptotic) stability properties of this equilibrium for the switched system (2). More precisely, we want to know when such stability properties hold uniformly over all (piecewise constant) switching signals \(\sigma(\cdot)\). The simplest such property to define is _global uniform exponential stability (GUES)_, which asks that all solutions of (2) satisfy \[|x(t)|\leq ce^{-\lambda t}|x(0)|\qquad\forall\,t\geq 0,\;\forall\,\sigma(\cdot)\] (GUES) for some fixed positive constants \(c\) and \(\lambda\). (Here \(|\cdot|\) is some chosen norm in \(\mathbb{R}^{n}\), usually the Euclidean norm.) Note that the previous inequality is of the form \[|x(t)|\leq\beta(|x(0)|,t)\qquad\forall\,t\geq 0,\;\forall\,\sigma(\cdot)\] (GUAS) where \(\beta(r,t):=ce^{-\lambda t}r\). The reason we labeled this more general property as "GUAS" is that, with \(\beta:[0,\infty)\times[0,\infty)\to[0,\infty)\) an arbitrary continuous function such that \(\beta(\cdot,t)\) is \(0\) at \(0\) and increasing for each fixed \(t\) and \(\beta(r,\cdot)\) is decreasing to \(0\) for each fixed \(r\), this gives us precisely _global uniform asymptotic stability_, which is GUES but without the requirement that the solution bound decay exponentially in time or grow linearly in the norm of the initial state. Functions \(\beta\) with the above properties are known as "class \(\mathcal{KL}\) functions"; for example, \(\beta(r,t):=r^{2}/(1+t)\) is one such function. For the case of a single (non-switched) system, the above properties (GUES) and (GUAS) reduce to the classical global exponential stability and global asymptotic stability, respectively. If GUAS still seems like too strong a requirement, we can consider its local version (by requiring the bound to hold only for initial states in some neighborhood of \(0\)). We can also drop the uniformity over \(\sigma\), and ask instead that for each _fixed_ switching signal \(\sigma(\cdot)\), the corresponding (time-varying) system be asymptotically stable (globally or locally), or just ask that its solutions converge to \(0\) (and ignore Lyapunov stability). But we should admit that even this latter property--_attractivity for each fixed \(\sigma\)_--is still very strong. Indeed, it is easy to construct examples where all modes are asymptotically stable but switching between them can lead to instability (see, e.g., [31]). It may thus seem more reasonable--and more practically relevant--to ask for asymptotic stability to be preserved only for _some_, but not all, switching signals, and to investigate what these switching signals are. So, why insist on asymptotic stability for _all_ switching signals? I have at least two answers to this question. One is that examining stability under arbitrary switching, and understanding possible instability mechanisms, paves the way to studying the more realistic scenario of stability under constrained switching. 
Another answer is that mathematically speaking, the problem of asymptotic stability under arbitrary switching turns out to be interesting and admits elegant solutions, as I try to demonstrate below. It should be clear that a prerequisite for stability (in the asymptotic, exponential, or any other sense) is that each individual mode possess this stability property, since constant switching signals \(\sigma\equiv p\in\mathcal{P}\) are allowed. This necessary condition will be assumed throughout. ## 4 Common Lyapunov functions Readers familiar with basic stability theory for nonlinear systems can skim or skip the following two paragraphs, while others can consult a textbook such as [21] for further details. Let us go back for a moment to a single (non-switched) system \[\dot{x}=f(x) \tag{3}\] with \(x\in\mathbb{R}^{n}\) and \(f(0)=0\). A classical approach known as _Lyapunov's direct method_ analyzes stability of (3) with the help of a _Lyapunov function_. As a candidate Lyapunov function, one takes a continuously differentiable function \(V:\mathbb{R}^{n}\to\mathbb{R}\) such that \(V(0)=0\) and \(V(x)>0\) for all \(x\neq 0\) (such \(V\) are called _positive definite_). Then one defines a new function, \(\dot{V}(x):=(\partial V/\partial x)\cdot f(x)\). Its significance lies in the fact that the derivative of \(V(x(t))\) along solutions of the system (3) is given by \(\frac{d}{dt}V(x(t))=\dot{V}(x(t))\). The key ideas here are that the function \(\dot{V}\) can be obtained without the knowledge of the system's solutions, and that this allows us to reduce the analysis of the complex behavior of \(x(t)\in\mathbb{R}^{n}\) to that of the scalar-valued quantity \(V(x(t))\). Specifically, Lyapunov's stability theorem asserts that the system (3) is stable (in the sense of Lyapunov) if \(\dot{V}(x)\leq 0\) for all \(x\); asymptotically stable if \(\dot{V}(x)<0\) for all \(x\neq 0\); and globally asymptotically stable if \(V\) also has the property that \(V(x)\to\infty\) whenever \(|x|\to\infty\) (called _radial unboundedness_). Exponential stability can be concluded under some additional structure, for example, if both \(V\) and \(\dot{V}\) are quadratic (or are sandwiched between quadratic bounds). It is important to know that _converse_ Lyapunov theorems also exist, whereby stability of the system (asymptotic or exponential, local or global) can be used to prove the existence of a Lyapunov function with requisite properties. The associated formulas for the Lyapunov functions involve the system's solutions, and are thus not constructive. Let us denote by \(\phi(t,x)\) the solution of (3) at time \(t\) corresponding to the initial condition \(x(0)=x\). Under the assumption that (3) is asymptotically stable, a construction due to Massera produces a Lyapunov function of the form \[V(x)=\int_{0}^{\infty}G(|\phi(t,x)|)dt \tag{4}\] where \(G:[0,\infty)\to[0,\infty)\) is a function with certain properties. In the particular case when (3) is exponentially stable, we can take \(G(r)=r^{2}\), resulting in \[V(x)=\int_{0}^{\infty}|\phi(t,x)|^{2}dt. \tag{5}\] Another construction, due to Kurzweil, does not involve integration and instead works with the function \(g(x):=\inf_{t\leq 0}|\phi(t,x)|\) and then defines \[V(x):=\sup_{t\geq 0}g(\phi(t,x))k(t) \tag{6}\] for an appropriate auxiliary function \(k(\cdot)\). Kurzweil's construction works globally if asymptotic stability is global, while Massera's construction only works on a bounded region. 
On the other hand, Massera's construction is capable of providing an upper bound on the norm of the gradient \(\partial V/\partial x\), which is important for perturbation analysis. Let us return to the switched system (2). Just like in the non-switched case, we can analyze its stability with the help of a Lyapunov function \(V:\mathbb{R}^{n}\to\mathbb{R}\). But now we want \(V\) to decay along solutions of each mode, i.e., we want to have \((\partial V/\partial x)\cdot f_{p}(x)<0\) for all \(x\neq 0\) and all \(p\in\mathcal{P}\). Moreover, we want a uniform lower bound on this decay rate over all modes. For a finite collection of modes (which is the situation considered here) the existence of such a mode-independent decay rate is automatic, but it still helps to write this property explicitly as follows: \[\frac{\partial V}{\partial x}f_{p}(x)\leq-W(x)<0\qquad\forall\,x\neq 0,\ \forall\,p\in\mathcal{P} \tag{7}\] for some continuous function \(W\). A function \(V\) satisfying (7) is referred to as a _common Lyapunov function_ for our family of systems (1) or, with a slight abuse of terminology, for the switched system (2). We see from (7) that when we start switching, \(V(x(t))\) will always decay along solutions of (2) at the rate of at least \(W(x(t))\). This is all that is needed to show uniform asymptotic stability, by arguing in the same way as in the proof of Lyapunov's classical stability theorem (the fact that \(V(x(\cdot))\) is not differentiable at the switching times does not really affect the proof). If \(V\) is also radially unbounded, we conclude GUAS. Converse Lyapunov theorems are also available, stating that the existence of a common Lyapunov function is necessary for GUAS. (Some of these results were actually derived in the more general setting of differential inclusions; see, e.g., [27, p. 188] for further discussion and references.) When trying to prove GUAS (or one of its variants), we typically have two choices: one is to directly analyze the trajectories in the time domain, and the other is to look for a common Lyapunov function. We will soon see examples of successful application of both of these approaches. However, proving GUAS by either method is very challenging in general, and so we need to identify some additional system structure that can help us. ## 5 Switched linear systems and their stability Students of systems and control theory (particularly those in engineering disciplines rather than in mathematics) are typically first introduced to linear systems before being exposed to more general nonlinear systems. What makes the analysis of linear systems more manageable is the availability of an explicit formula for system solutions in terms of matrix exponentials, as well as computationally tractable stability conditions in terms of quadratic Lyapunov functions. Accordingly, a very widely studied special case of the switched system (2) is obtained by taking the individual modes to be linear, of the form \(\dot{x}=A_{p}x\) with each \(A_{p}\) a real-valued \(n\times n\) matrix. This leads to the _switched linear system_ \[\dot{x}=A_{\sigma}x. \tag{8}\] For reasons explained previously, we take the matrices \(A_{p}\), \(p\in\mathcal{P}\) to be Hurwitz (i.e., with eigenvalues having negative real parts). For our purposes, there is one more feature that makes studying switched linear systems convenient. 
As shown by David Angeli around 1999 [3], the different stability notions that we introduced in Section 3 for the switched nonlinear system (2) actually all turn out to be equivalent in the linear case. Namely, for the switched linear system (8) we have \[\mbox{GUES}\ \Leftrightarrow\ \mbox{GUAS}\ \Leftrightarrow\ \mbox{(local) UAS}\ \Leftrightarrow\ \mbox{attractivity for each}\ \sigma(\cdot). \tag{9}\] All the "\(\Rightarrow\)" implications are of course obvious from the definitions (and do not require linearity). The converse implications are deduced with the help of the fact that solutions of (8) scale linearly with initial conditions.1 The middle "\(\Leftarrow\)" implication follows immediately from this observation. The leftmost "\(\Leftarrow\)" implication is also not difficult to justify by noting that, if \(\beta\) is the function on the right-hand side of (GUAS) and if \(T>0\) is such that, say, \(\beta(1,T)\leq 1/2\), then by linearity we must have \(|x(T)|\leq|x(0)|/2\), \(|x(2T)|\leq|x(0)|/4\), and so on, hence the convergence is exponential. The remaining (rightmost) "\(\Leftarrow\)" implication is a deeper result, and relies on a general theorem about uniform attractivity proved by Sontag and Wang in [44]. Footnote 1: It is enough to know that the dependence on initial conditions is homogeneous of degree 1. I learned about the above equivalences from David Angeli when I met him during one of my visits to Rutgers University hosted by Eduardo Sontag, who during my postdoc years at Yale regularly invited me and other young researchers to come and interact. A few years later, when I published my switched systems book [27], David's paper was still cited there as a manuscript submitted for publication. Apparently, the reviewers felt that the result was too simple and should be well known, probably buried somewhere in the Russian literature (although neither David nor the reviewers were able to locate a precise reference). Eventually, this result was added to the paper [4] and this way it was finally published. We know from Section 4 that the equivalent stability notions listed in (9) are also equivalent to the existence of a common Lyapunov function \(V\) satisfying (7). In general, constructing a common Lyapunov function is hard and there is no systematic procedure for doing it. But maybe the linear structure can help us? After all, for a single linear system \(\dot{x}=Ax\), with \(A\) a Hurwitz matrix, finding a Lyapunov function is straightforward. As is well known, for every matrix \(Q=Q^{T}>0\) there is a unique matrix \(P=P^{T}>0\) solving the Lyapunov equation \[PA+A^{T}P=-Q \tag{10}\] and this yields a quadratic Lyapunov function \[V(x)=x^{T}Px \tag{11}\] whose derivative along solutions is \(-x^{T}Qx\). For future reference, we note that \(P\) is given by the explicit formula \[P=\int_{0}^{\infty}e^{A^{T}t}Qe^{At}dt \tag{12}\] although in practice one just solves the linear system of equations (10) directly. Remembering that the flow of \(\dot{x}=Ax\) is given by \(\phi(t,x)=e^{At}x\), we also see that for \(Q=I\) this Lyapunov function construction is a special instance of the one given by the formula (5). Now, for the switched linear system (8) it seems reasonable to ask whether we can always search for a common Lyapunov function within the class of positive definite quadratic forms (11). 
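The Lyapunov equation (10) and the quadratic form (11) are easy to experiment with numerically. The sketch below (illustrative only; the two Hurwitz matrices are arbitrary choices, and the equation is solved with SciPy's continuous Lyapunov solver) computes a Lyapunov matrix for one mode from (10) and then checks whether the resulting \(V(x)=x^{T}Px\) also decreases along the other mode, which is precisely the question just raised.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Two illustrative Hurwitz matrices.
A1 = np.array([[-1.0, 0.5], [0.0, -2.0]])
A2 = np.array([[-3.0, 0.0], [1.0, -1.0]])

def lyap(A, Q):
    """Solve the Lyapunov equation P A + A^T P = -Q for P, as in (10)."""
    return solve_continuous_lyapunov(A.T, -Q)

def decays_along(P, A):
    """True iff V(x) = x^T P x strictly decreases along x' = A x,
    i.e. P A + A^T P is negative definite."""
    return np.max(np.linalg.eigvalsh(P @ A + A.T @ P)) < 0

P1 = lyap(A1, np.eye(2))          # a Lyapunov matrix for mode 1 alone
print(decays_along(P1, A1), decays_along(P1, A2))
# For this particular pair P1 happens to work for both modes, so it is a
# quadratic common Lyapunov matrix; for other pairs it need not be, and a
# suitable P may fail to exist altogether.
```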
This means finding a matrix \(P=P^{T}>0\) that satisfies \[PA_{p}+A_{p}^{T}P<0\qquad\forall\,p\in{\cal P} \tag{13}\] or, more explicitly (but equivalently in view of the finiteness of \({\cal P}\)), \[PA_{p}+A_{p}^{T}P\leq-Q<0\qquad\forall\,p\in{\cal P}. \tag{14}\] What is attractive about this possibility is that (13) is a system of _linear matrix inequalities_ (LMIs), and there are efficient numerical methods for solving them (see [9] for a comprehensive introduction, and [32] for an alternative approach that handles the inequalities sequentially rather than simultaneously). Unfortunately, it turns out that working with quadratic common Lyapunov functions is not sufficient even for switched linear systems. In their 1999 paper [14], Dayawansa and Martin gave an example of a switched linear system that is GUES but does not possess a quadratic common Lyapunov function.2 Their example involves constructing two \(2\times 2\) Hurwitz matrices \(A_{1}\) and \(A_{2}\) for which the pair of inequalities (13) can be directly shown to be infeasible. The proof that the corresponding planar switched linear system is nevertheless GUES is more conceptually interesting, as it relies on an idea that will resurface prominently later in this article. This idea is to analyze the behavior of the system under "worst-case switching," which in this particular case is described as follows. The linear vector fields \(A_{1}x\) and \(A_{2}x\) are collinear on two lines passing through the origin (the dashed lines in Figure 1). In each conic region between these two lines, one of the two vector fields points outwards relative to the other. The worst-case switching strategy thus consists of following the vector field that points outwards, with switches occurring on the two lines. It can be verified that this produces a trajectory converging to the origin, because the distance from the origin after one rotation decreases (see Figure 1). It is easy to see that all other trajectories of the switched system must then also converge to the origin. To conclude GUES, we can use the worst-case trajectory to obtain a uniform upper bound on the norm of all other trajectories (with the same initial state), or alternatively we can appeal to (9). Footnote 2: A much older paper by Pyatnitskiy [38] contains a different example that illustrates the same point, although it takes more effort to extract this observation from [38] because that paper studies absolute stability of a class of time-varying systems and does not explicitly treat switched systems. The fact that GUES of a switched linear system is not always certifiable by a quadratic common Lyapunov function does not mean, of course, that such functions cannot still be useful in specific scenarios. In fact, we will see that they are quite useful when the Hurwitz matrices \(A_{p}\), \(p\in\mathcal{P}\) satisfy certain commutation relations. ## 6 Commuting matrices We are now finally ready to begin talking about what these commutation relations are and why they are relevant to the problem of stability under arbitrary switching. Our starting point is the following simple observation: _If the Hurwitz matrices \(A_{p}\), \(p\in\mathcal{P}\) commute pairwise, then the switched linear system (8) is GUES._ To see why this is true, take for simplicity \(\mathcal{P}=\{1,2\}\). 
Figure 1: Worst-case switching.

The commutativity condition of course just means that \(A_{1}A_{2}=A_{2}A_{1}\), which we can also write as \([A_{1},A_{2}]=0\) with the _commutator_, or _Lie bracket_, of two matrices defined as \[[A_{1},A_{2}]:=A_{1}A_{2}-A_{2}A_{1}. \tag{15}\] For a switching signal that takes the value, say, 1 for \(t_{1}\) units of time, then the value 2 for \(s_{1}\) units of time, then 1 for \(t_{2}\) units of time, and so on, the corresponding solution of (8) at time \(t\) is of the form \[x(t)=e^{A_{2}s_{k}}e^{A_{1}t_{k}}\cdots e^{A_{2}s_{2}}e^{A_{1}t_{2}}e^{A_{2}s_{1}}e^{A_{1}t_{1}}x(0). \tag{16}\] Since \(A_{1}\) and \(A_{2}\) commute, their matrix exponentials commute as well. Hence, we can rearrange the above expression as3 Footnote 3: Here we are also using the fact that, e.g., \(e^{A_{1}t_{1}}e^{A_{1}t_{2}}=e^{A_{1}(t_{1}+t_{2})}\). \[x(t)=e^{A_{2}(s_{k}+\cdots+s_{2}+s_{1})}e^{A_{1}(t_{k}+\cdots+t_{2}+t_{1})}x(0). \tag{17}\] As \(t\to\infty\), at least one of the two sums (which represent the total activation times of modes 1 and 2) must also converge to \(\infty\). Both \(A_{1}\) and \(A_{2}\) being Hurwitz, this ensures that at least one of the matrix exponentials in (17) converges to 0. Therefore, the switched system is attractive for every switching signal, and we know from (9) that this implies GUES. This result (more precisely, the attractivity part) was briefly noted in the paper [37], whose main contribution we are about to discuss, but it was almost certainly known much earlier. The expressions (16) and (17) are equivalent because the flows of the two linear systems \(\dot{x}=A_{1}x\) and \(\dot{x}=A_{2}x\) commute. This equivalence can also be interpreted as follows: _every state that can be reached from \(x(0)\) with an arbitrary number of switches can also be reached (at the same time \(t\)) with at most one switch_. Thinking about the commutativity condition in terms of reachability with a bound on the number of switches suggests a path that will prove very fruitful for us later on, although initially it is easy to overlook it (as I certainly did when I first learned about this result). As we discussed in Section 4, an alternative to a direct trajectory-based stability proof--such as the one just given--is to search for a common Lyapunov function. And when dealing with a switched linear system, we have a natural candidate in the form of a quadratic Lyapunov function (11), even though we cannot be sure a priori that a quadratic common Lyapunov function exists (see Section 5). Fortunately, for the case of commuting matrices such a function does exist, and it can be constructed in an elegant way proposed by Narendra and Balakrishnan in [37]. With \(\mathcal{P}=\{1,\ldots,m\}\) as before, one iteratively solves the sequence of Lyapunov equations \[\begin{split} P_{1}A_{1}+A_{1}^{T}P_{1}&=-I,\\ P_{2}A_{2}+A_{2}^{T}P_{2}&=-P_{1},\\ &\vdots\\ P_{m}A_{m}+A_{m}^{T}P_{m}&=-P_{m-1}\end{split} \tag{18}\] (placing each \(P_{i}\) obtained at step \(i\) on the right-hand side of the equation to be solved at step \(i+1\)). One then defines \[V(x):=x^{T}P_{m}x. \tag{19}\] It is obvious that this is a Lyapunov function for the \(m\)th mode, \(\dot{x}=A_{m}x\), but commutativity implies that it is a Lyapunov function for all the other modes as well.
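A numerical sketch of the iterative construction (18)-(19) is given below (illustrative only; the commuting matrices are arbitrary diagonal, hence commuting, choices, and the equations are solved with SciPy's Lyapunov routine). It computes \(P_{1},\ldots,P_{m}\) sequentially and then verifies that \(V(x)=x^{T}P_{m}x\) indeed decreases along every mode, before we turn to why commutativity guarantees this in general.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Three pairwise commuting Hurwitz matrices (diagonal, for simplicity).
modes = [np.diag([-1.0, -2.0]), np.diag([-3.0, -0.5]), np.diag([-0.7, -4.0])]

# Iteratively solve P_1 A_1 + A_1^T P_1 = -I and P_i A_i + A_i^T P_i = -P_{i-1}.
rhs = np.eye(2)
for Ai in modes:
    P = solve_continuous_lyapunov(Ai.T, -rhs)
    rhs = P          # P_i becomes the right-hand side of the next equation

# P now equals P_m; check that V(x) = x^T P x is a common Lyapunov function.
for Ai in modes:
    M = P @ Ai + Ai.T @ P
    assert np.max(np.linalg.eigvalsh(M)) < 0
print("x^T P_m x is a common Lyapunov function; P_m =")
print(P)
```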
One way to see this is to repeatedly apply the formula (12) for the solution of the Lyapunov equation (10) to express \(P_{m}\) as a nested integral involving products of matrix exponentials: \[P_{m}=\int_{0}^{\infty}e^{A_{m}^{T}t_{m}}\ldots\Big{(}\int_{0}^{\infty}e^{A_{1 }^{T}t_{1}}e^{A_{1}t_{1}}dt_{1}\Big{)}\ldots e^{A_{m}t_{m}}dt_{m}.\] Since these matrix exponentials commute, we can arbitrarily reorder the integrals. This shows that the final matrix \(P_{m}\) does not depend on the ordering of the modes in (18), giving the desired claim. It is also fairly easy (especially for the case \(m=2\)) to reach the same conclusion by manipulating the Lyapunov equations in (18), as done in [37]. It is interesting to note that the work [37] was done at Yale, by a different group and a few years before I arrived there to work with Steve Morse. I guess one could say that switched systems were in the air there during that time period. ## 7 Matrices generating a nilpotent Lie algebra We have now arrived at the point in our story where Leonid Gurvits comes in, with the result that I mentioned at the very beginning of the article. I first need to define the terms appearing in the title of this section. A (matrix) _Lie algebra_ generated by our collection of real-valued \(n\times n\) matrices \(A_{p}\), \(p\in{\cal P}\) is the smallest set of matrices containing \(\{A_{p},p\in{\cal P}\}\) which is closed under addition and scalar multiplication (hence it is a vector space of matrices) and also under the Lie bracket operation \([\cdot,\cdot]\). In the commuting case considered in Section 6, this Lie algebra is simply the linear span of the original matrices. In general, however, it must also contain their pairwise Lie brackets \([A_{i},A_{j}]\), the iterated (second-order) Lie brackets \([A_{i},[A_{j},A_{k}]]\), and so on. Of course, even though the number of Lie brackets to account for is potentially infinite, the resulting vector space has dimension at most \(n^{2}\), so the process of adding new Lie brackets linearly independent from the previous ones will terminate after finitely many steps. The Lie algebra is _nilpotent_ if there is a positive integer \(k\) such that all \(k\)th-order Lie brackets are \(0\). For example, for \(m=2\) (two matrices) and \(k=2\) (second-order nilpotency) this means that \[[A_{1},[A_{1},A_{2}]]=[A_{2},[A_{1},A_{2}]]=0. \tag{20}\] This scenario represents the next natural step after the commuting case, in the sense that the first-order Lie bracket \([A_{1},A_{2}]\) is the only one that needs to be added. In the 1995 paper [17], Gurvits made the following conjecture:4 Footnote 4: Actually Gurvits worked in discrete time, so here we are paraphrasing his statements in the continuous-time setting. The difference disappears if we restrict the switching times to be integer multiples of a fixed positive number. _If the Hurwitz matrices \(A_{p}\), \(p\in{\cal P}\) generate a nilpotent Lie algebra, then the switched linear system (8) is GUES._ Gurvits only proved this claim for the special case of two matrices generating a second-order nilpotent Lie algebra, represented by the condition (20). His method was the following. We saw that for two commuting matrices, an arbitrary product of matrix exponentials as in (16) representing a solution of the switched system at time \(t\) can be equivalently written as a product of the form \(e^{A_{2}\tau_{2}}e^{A_{1}\tau_{1}}\) with \(\tau_{1}+\tau_{2}=t\), appearing in (17). 
For the second-order nilpotent case, Gurvits showed that an arbitrary product from (16) can be similarly represented as a shorter product, but this time of _five_ rather than two exponentials: \[x(t)=e^{A_{2}\tau_{5}}e^{A_{1}\tau_{4}}e^{A_{2}\tau_{3}}e^{A_{1}\tau_{2}}e^{A_{2}\tau_{1}}x(0). \tag{21}\] In the terminology of [17], the semigroup generated by the two matrices can be factored into a product of five semigroups, each generated by one of the matrices. (In the commuting case, we have a product of two semigroups.) Here the numbers \(\tau_{1},\ldots,\tau_{5}\) do not necessarily add up to \(t\), but still have the property that at least one of them must go to \(\infty\) as \(t\to\infty\). We can thus conclude GUES in the same way as in the commuting case. We can also interpret the decomposition (21) in terms of reachability with a bound on the number of switches, similarly to how we did in the commuting case: _every state that can be reached from \(x(0)\) with an arbitrary number of switches can also be reached (not necessarily at the same time \(t\)) with at most four switches._ To prove (21), Gurvits used the _Baker-Campbell-Hausdorff formula_ \[e^{A_{1}}e^{A_{2}}=e^{A_{1}+A_{2}+\frac{1}{2}[A_{1},A_{2}]+\frac{1}{12}([A_{1},[A_{1},A_{2}]]+[A_{2},[A_{2},A_{1}]])+\ldots}\] which, in the case of (20), simplifies to just \(e^{A_{1}}e^{A_{2}}=e^{A_{1}+A_{2}+\frac{1}{2}[A_{1},A_{2}]}.\) It follows that an arbitrary product of matrix exponentials as in (16) can be written as \(e^{m_{1}A_{1}+m_{2}A_{2}+\frac{1}{2}m_{3}[A_{1},A_{2}]}\) for suitable coefficients \(m_{1},m_{2},m_{3}\). Using this fact, Gurvits analyzed the effect of an "elementary shuffling", i.e., of switching the order of two adjacent matrix exponentials in a product, and showed that products of five terms as in (21) are sufficient to generate all possible values of \(m_{1},m_{2},m_{3}\). As for the general nilpotent case (Lie algebra nilpotent of order \(k\geq 2\) generated by \(m\geq 2\) matrices), Gurvits wrote that it "looks quite hopeful, but requires more sophisticated combinatorics". This is exactly what Steve Morse suggested I try to work out. Of course, it is not important that the factorization have 5 terms as in (21); as long as we have some fixed bound \(N\) on the number of terms in the product, the stability proof goes through. Initially I was optimistic about finding such an \(N\) by using manipulations similar to the ones employed by Gurvits. After spending some time trying to do this myself, without success, I asked for help. One person to whom I described the problem was Gerry Schwarz, an expert in Lie theory at Brandeis University's math department, where I received my PhD. After a few weeks Gerry sent me an email, announcing that he was very close to a solution and asking if I was still interested in one. I responded with an emphatic "yes" and then never heard from him again. Another mathematician I consulted was George Seligman at Yale, who was kind enough to meet with me several times and educate me about various aspects of Lie algebras. While he did not have a direct solution to my problem, he eventually helped me stumble upon a different approach which not only led to a positive result, but in fact applied to a larger class of systems than the one studied by Gurvits.

## 8 Matrices generating a solvable Lie algebra

A well-known class of Lie algebras that contains all nilpotent Lie algebras is that of _solvable_ Lie algebras.
While nilpotent Lie algebras are characterized by all Lie brackets of sufficiently high order being 0, in a solvable Lie algebra only Lie brackets of sufficiently high order having a certain structure must be 0. To make this a bit more precise, recall that 2nd-order nilpotency means that Lie brackets of the form \([A_{i},[A_{j},A_{k}]]\) vanish; 3rd-order nilpotency means that Lie brackets of the form \([A_{i},[A_{j},[A_{k},A_{\ell}]]]\) vanish; and so on. Here at step \(k\) one considers Lie brackets of matrices obtained at step \(k-1\) with matrices from the original Lie algebra. By contrast, when defining solvability, at step \(k\) one only considers Lie brackets among the matrices from step \(k-1\); for example, for \(k=2\) one looks at Lie brackets of the form \([[A_{i},A_{j}],[A_{k},A_{\ell}]]\). The latter approach singles out a subset of the Lie brackets included when using the former approach. Therefore, every nilpotent Lie algebras is solvable, while some solvable Lie algebras are not nilpotent. During one of my conversations with George Seligman, he called my attention to solvable Lie algebras and pointed out their classical characterization known as _Lie's theorem_. This theorem says that matrices in a solvable Lie algebra can be simultaneously brought to an upper-triangular form by some linear (generally complex-valued) change of coordinates. At this point I must bring in another major character in this story: Joao Hespanha, who at that time was finishing up his PhD studies under Steve Morse. We overlapped at Yale for only one semester, but that was enough to establish collaboration on a range of topics which continues on and off to this day. Stability of switched systems was an area in which Joao had already been working with Steve for some time before I arrived, motivated primarily by problems in switching adaptive control. So, when I mentioned to Joao solvability and triangular structure, in the context of trying to generalize Gurvits' approach, this immediately rang a bell for him. He had previously encountered switching among stable linear systems with triangular matrices, and he knew that such switched linear systems are always stable. To see why this is true, let us consider the case when \(\mathcal{P}=\{1,2\}\) and \(x\in\mathbb{R}^{2}\). Let the two matrices be \[A_{1}:=\begin{pmatrix}-a_{1}&b_{1}\\ 0&-c_{1}\end{pmatrix},\qquad A_{2}:=\begin{pmatrix}-a_{2}&b_{2}\\ 0&-c_{2}\end{pmatrix}. \tag{22}\] Suppose for simplicity that their entries are real (the case of complex entries requires some care but the extension is not difficult). Since the eigenvalues of these matrices have negative real parts, we have \(a_{i},c_{i}>0\), \(i=1,2\). Now, consider the switched linear system \(\dot{x}=A_{\sigma}x\). The second component of \(x\) satisfies the equation \(\dot{x}_{2}=-c_{\sigma}x_{2}.\) Therefore, \(x_{2}\) decays to zero exponentially fast for every \(\sigma(\cdot)\), at the rate corresponding to \(\min\{c_{1},c_{2}\}\). The first component of \(x\) satisfies the equation \(\dot{x}_{1}=-a_{\sigma}x_{1}+b_{\sigma}x_{2}.\) This can be viewed as the exponentially stable system \(\dot{x}_{1}=-a_{\sigma}x_{1}\) perturbed by the exponentially decaying input \(b_{\sigma}x_{2}\). Thus \(x_{1}\) also converges to zero exponentially fast. It is not hard to extend this argument to more than two matrices of arbitrary dimension, proceeding from the bottom component of \(x\) upward. As before, GUES follows by virtue of (9). 
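The cascade argument above is easy to probe in simulation. The sketch below uses two arbitrarily chosen upper-triangular Hurwitz matrices of the form (22) (illustrative values, not taken from any of the cited papers) and propagates the state through a long random switching signal by multiplying matrix exponentials; the norm collapses toward zero, as the triangular structure predicts.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

# Two upper-triangular Hurwitz matrices of the form (22); entries are
# arbitrary illustrative values.
A = [np.array([[-1.0,  5.0], [0.0, -2.0]]),
     np.array([[-3.0, -4.0], [0.0, -0.5]])]

def switched_trajectory(x0, n_switches=200):
    """Propagate x(t_{k+1}) = exp(A[sigma_k] * tau_k) x(t_k) along a random
    switching signal with random dwell times tau_k."""
    x = np.array(x0, dtype=float)
    for _ in range(n_switches):
        sigma = rng.integers(2)        # which mode is active
        tau = rng.uniform(0.05, 1.0)   # how long it stays active
        x = expm(A[sigma] * tau) @ x
    return x

for x0 in ([1.0, 1.0], [100.0, -50.0], [-7.0, 3.0]):
    print(x0, "->", np.linalg.norm(switched_trajectory(x0)))
```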
An alternative to the above direct stability proof, as we know, consists in constructing a common Lyapunov function. It turns out that in the present case of triangular matrices, it is possible to find a quadratic common Lyapunov function of the form (11), with \(P\) a diagonal matrix. We illustrate this again on the example of the two matrices (22). Let us look for \(P\) taking the form \[P=\begin{pmatrix}d_{1}&0\\ 0&d_{2}\end{pmatrix}\] where \(d_{1},d_{2}>0\). A straightforward calculation gives \[-A_{i}^{T}P-PA_{i}=\begin{pmatrix}2d_{1}a_{i}&-d_{1}b_{i}\\ -d_{1}b_{i}&2d_{2}c_{i}\end{pmatrix},\qquad i=1,2.\] To ensure that this matrix is positive definite, we can first pick an arbitrary \(d_{1}>0\), and then choose \(d_{2}>0\) large enough to have \(4d_{2}d_{1}a_{i}c_{i}-d_{1}^{2}b_{i}^{2}>0\), \(i=1,2\). Again, it is easy to see how this iterative construction can be extended to higher dimensions.5 Footnote 5: A slightly different approach is to rescale the basis vectors to make the off-diagonal elements (the \(b_{i}\)’s in the present example) as small as desired, so that \(P\) can be taken to be the identity matrix (see, e.g., [6, pp. 203–204]). We arrive at the following result: _If the Hurwitz matrices \(A_{p}\), \(p\in\mathcal{P}\) generate a solvable Lie algebra, then the switched linear system (8) is GUES._ Of course, it is clear from the preceding discussion that this is just a straightforward combination of two ingredients. The first one is the classical Lie's theorem, found in any textbook on Lie algebras, which gives the triangular form. The second ingredient is the observation that triangular form guarantees stability under arbitrary switching. Although I was initially unaware of this second result, Joao knew about it and in fact it had been documented in the literature. In particular, the paper [12], published one year before our investigation, and the paper [42], written at about the same time (and the same place) as ours, both mention essentially the same Lyapunov function construction as the one given above. On the other hand, it can be argued that the above result is greater than the sum of its parts. Indeed, the Lie-algebraic condition can be checked by performing a finite number of computations with the original matrices. It tells us that there exists a basis in which these matrices take the triangular form, but we do not need to actually find such a basis. And compared with Gurvits' approach described in Section 7, the present argument is much simpler and gives a stronger claim. We summarized the above findings in a paper that we submitted to Systems and Control Letters. At first it was rejected--the reviewers thought that the result was trivial. Although we did not entirely disagree, we insisted that the paper still had value, and eventually it was published [30]. In spite of (or maybe thanks to?) its almost embarrassing simplicity, the paper became quite highly cited. Unbeknown to me at the time, there was another reason to feel less than proud about this paper, but more on that later. It was noted in [19] that the Lie-algebraic stability condition discussed in this section can also be used for control design purposes. Namely, given a family of linear control systems \(\dot{x}=A_{p}x+B_{p}u\), \(p\in{\cal P}\), one can search for feedback gains \(K_{p}\), \(p\in{\cal P}\) such that the closed-loop matrices \(A_{p}+B_{p}K_{p}\) are Hurwitz and generate a solvable Lie algebra or, equivalently, are simultaneously triangularizable. 
The closed-loop switched linear system will then be GUES. The paper [19] describes an algorithm for finding such stabilizing feedback gains. ## 9 More general matrix Lie algebras The above line of research was continued in my joint work with Andrei Agrachev. Before describing this work, let me briefly digress to explain the role that Andrei Agrachev has played in my academic life. Third-year undergraduate students of mathematics at Moscow University had to choose an area of specialization. The process involved us listening to presentations made by professors in the department about their research. One of them was Agrachev, who spoke about a geometric approach to nonlinear controllability. I was immediately captivated, and this led me to choose control theory as my specialization area and Agrachev as my undergraduate research advisor. After working with him for about a year I went to graduate school in the United States, where I was still studying control theory but the specific topics were different, and for a while Agrachev and I lost contact. Then in 1998, after I defended my thesis and started attending international control conferences, we met again and reconnected. Having just recently written the paper [30], I showed a preprint to Agrachev. The next day, he told me that he had read it and could see how to generalize it. Once I was back at Yale, our collaboration proceeded mostly by Agrachev sending me very short emails and me spending hours in the Yale math library deciphering them. The results of this work are documented in our 2001 paper [2]. That paper makes use of deeper results from the theory of Lie algebras compared to [30], and so here I will limit myself to a brief informal summary. If the Lie algebra generated by the matrices \(A_{p}\), \(p\in{\cal P}\) is not solvable, it contains a maximal solvable subalgebra \(\mathfrak{r}\), called the _radical_.6 Every matrix \(A_{p}\) can then be written as a sum \(A_{p}=R_{p}+S_{p}\), where \(R_{p}\in\mathfrak{r}\) and \(S_{p}\) lies in a complementary subalgebra, which we call \(\mathfrak{s}\). Suppose that all matrices in \(\mathfrak{s}\) have purely imaginary eigenvalues (i.e., they are essentially rotation matrices); the subalgebra \(\mathfrak{s}\) is then said to be _compact_. What we showed in [2] is that under this condition, the matrices \(S_{p}\in\mathfrak{s}\) do not affect stability of the switched system. We can paraphrase the resulting stability criterion as follows: Footnote 6: More precisely, \(\mathfrak{r}\) is a maximal solvable ideal. _If the Hurwitz matrices \(A_{p}\), \(p\in{\cal P}\) generate a "solvable plus compact" Lie algebra, then the switched linear system (8) is GUES._ A quadratic common Lyapunov function also exists in this case, although the proof of this fact given in [2] is not nearly as constructive as in the solvable case. Moreover, it turns out that the above sufficient condition for stability is the strongest one that can be obtained by working solely with the Lie algebra. Indeed, we proved in [2] that if the Lie algebra is not "solvable plus compact" then it can always be generated by a family of Hurwitz matrices (which might be different from \(A_{p}\), \(p\in{\cal P}\)) such that the corresponding switched linear system is not stable. Thus we have in some sense reached the end of the road in formulating stability conditions for switched linear systems in terms of commutation relations between their matrices. 
Of course, it is still possible to obtain stronger results by bringing in other tools (see Section 14 below for some further discussion on this).

## 10 Commuting nonlinear vector fields

Let us now go back to the switched nonlinear system (2) generated by the family of vector fields (1), which we assume to share a globally asymptotically stable equilibrium at the origin. In light of the previous developments for switched linear systems, it is natural for us to first examine the situation where these vector fields commute. This is the same as saying that the corresponding flows \(\phi_{p}(\cdot,x)\) commute, where \(\phi_{p}(t,x)\) denotes the solution at time \(t\) of the system \(\dot{x}=f_{p}(x)\) with initial condition \(x(0)=x\). For smooth vector fields, this property is captured by the fact that their _Lie brackets_ defined by \[[f_{p},f_{q}](x):=\frac{\partial f_{q}(x)}{\partial x}f_{p}(x)-\frac{\partial f_{p}(x)}{\partial x}f_{q}(x) \tag{23}\] equal \(0\) for all \(p,q\in{\cal P}\). For linear vector fields \(f_{p}(x)=A_{p}x\) the right-hand side becomes \((A_{q}A_{p}-A_{p}A_{q})x\), which is consistent with the definition of the Lie bracket of two matrices except for the difference in sign. The following is a generalization of the result from Section 6: _If the globally asymptotically stable vector fields \(f_{p}\), \(p\in{\cal P}\) commute pairwise, then the switched nonlinear system (2) is GUAS._ To see why this is true, we can try arguing as in the linear case. Take \({\cal P}=\{1,2\}\) for simplicity, and consider a switching signal that takes the value, say, \(1\) for \(t_{1}\) units of time, then the value \(2\) for \(s_{1}\) units of time, then \(1\) for \(t_{2}\) units of time, and so on. Since the two flows commute, the solution of (2) at time \(t\) can be written as \[x(t)=\phi_{2}(s_{k}+\cdots+s_{2}+s_{1},\phi_{1}(t_{k}+\cdots+t_{2}+t_{1},x(0))) \tag{24}\] where the two sums represent the total activation times of the two modes. As \(t\to\infty\), at least one of these sums must converge to \(\infty\). Since both vector fields are globally asymptotically stable, we conclude that \(x(t)\to 0\). Note that this argument falls short of establishing GUAS, because we no longer have the equivalences (9). We must show the existence of a class \({\cal KL}\) function \(\beta\) as in (GUAS). For this, we can use the fact that each globally asymptotically stable mode has its own class \({\cal KL}\) function \(\beta_{p}\) supplying the upper bound \(|\phi_{p}(t,x)|\leq\beta_{p}(|x|,t)\). We also know from (24) that if a state can be reached at time \(t\) from \(x(0)\) with an arbitrary number of switches, then it can be reached at the same time \(t\) with at most one switch. Combining these two properties, we can write \(|x(t)|\leq\beta_{p}(\beta_{q}(|x(0)|,t-\tau),\tau)\) where \(\tau\) is the time of the switch and \((p,q)\) is either \((1,2)\) or \((2,1)\). Of the two time arguments \(\tau\) and \(t-\tau\), one is at least \(t/2\); replacing it by \(t/2\) and replacing the other time argument by \(0\) can only increase the upper bound. Taking the maximum over the different cases mentioned, we can easily obtain a function \(\beta\) certifying GUAS. A general construction of such a class \({\cal KL}\) function (for \(m\geq 2\) modes) was given by Mancilla-Aguilar in the 2000 paper [33]; it proceeds a bit differently, by relying on some known properties of class \({\cal KL}\) functions and using induction on \(m\).
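The bracket (23) is also easy to evaluate symbolically, which is convenient when checking commutation conditions by hand becomes tedious. In the sketch below (the vector fields are made-up examples used only to exercise the formula), the bracket of two commuting linear fields reduces to \((A_{2}A_{1}-A_{1}A_{2})x=0\), while a nonlinear pair gives a nonzero bracket.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
x = sp.Matrix([x1, x2])

def lie_bracket(f, g):
    """Lie bracket (23): [f, g](x) = (dg/dx) f(x) - (df/dx) g(x)."""
    return sp.simplify(g.jacobian(x) * f - f.jacobian(x) * g)

# Linear fields f_p(x) = A_p x with commuting matrices: the bracket is
# (A2 A1 - A1 A2) x, which vanishes here.
A1 = sp.Matrix([[0, 1], [-2, -3]])
A2 = -sp.eye(2) + A1 / 2            # commutes with A1 by construction
print(lie_bracket(A1 * x, A2 * x))  # -> Matrix([[0], [0]])

# A made-up nonlinear pair that does not commute.
f1 = sp.Matrix([-x1, -x2])
f2 = sp.Matrix([-x1 + x1**2 * x2, -x2])
print(lie_bracket(f1, f2))          # -> nonzero first component
```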
In Section 6 we saw how a quadratic common Lyapunov function for a family of commuting exponentially stable linear systems can be constructed by the iterative procedure (18)-(19). We also noted that the standard quadratic Lyapunov function for a single exponentially stable linear system, obtained by solving the Lyapunov equation via (10)-(12), can be viewed as a special case of the Lyapunov function (5) for an exponentially stable nonlinear system. It is then natural to try to generalize the iterative construction of a common Lyapunov function to the case of commuting exponentially stable nonlinear vector fields. This was done by Hyungbo Shim and coworkers in [41]. With \(\mathcal{P}=\{1,\ldots,m\}\) again, one iteratively defines the functions \[\begin{split} V_{1}(x)&:=\int_{0}^{T}|\phi_{1}(t,x )|^{2}dt,\\ V_{i}(x)&:=\int_{0}^{T}V_{i-1}(\phi_{i}(t,x))dt, \qquad i=2,\ldots,m\end{split} \tag{25}\] for a sufficiently large \(T\leq\infty\). Then \(V_{m}\) is a common Lyapunov function, at least locally; it is a global common Lyapunov function when the functions \(f_{p}\), \(p\in\mathcal{P}\) are globally Lipschitz. For the case of linear systems \(f_{p}(x)=A_{p}x\) we recover the construction of Section 6 upon setting \(T=\infty\). I learned about the procedure (25) from a poster that Hyungbo Shim presented at the 1998 SIAM Conference on Control and its Applications, which incidentally was the first international conference for both of us. After that initial meeting in front of Hyungbo's poster, a full decade would pass until he and I started collaborating (on topics not directly related to the present discussion). Note that an alternative to Hyungbo's approach is to employ Lyapunov's indirect (first) method. Namely, consider the Jacobian matrices \[A_{p}:=\frac{\partial f_{p}}{\partial x}(0),\qquad p\in\mathcal{P}. \tag{26}\] Lyapunov's indirect method tells us that the matrices \(A_{p}\) are Hurwitz if (and only if) the vector fields \(f_{p}\) are (locally) exponentially stable. Moreover, it can be shown that if the vector fields \(f_{p}\) commute, then the matrices \(A_{p}\) also commute. (The converse does not necessarily hold, so the latter commutativity condition is actually weaker.) A quadratic common Lyapunov function for the linearized systems \(\dot{x}=A_{p}x\) can thus be constructed as explained in Section 6, and it serves as a common Lyapunov function for the original family of commuting nonlinear vector fields. It is, however, only a local common Lyapunov function, and it only guarantees local uniform exponential stability of the switched nonlinear system (2). In 2002 I was teaching a graduate course on switched systems that I had recently introduced at the University of Illinois. One of the students in the class was Linh Vu, who had just started his graduate studies with me. The course included final projects, which typically consisted in reading and presenting research articles, but Linh actually made an original research contribution. After I described Hyungbo Shim's Lyapunov function construction (25) in class, Linh decided to develop analogous constructions for asymptotically but not necessarily exponentially stable commuting nonlinear systems. He proposed two approaches based on the classical Lyapunov function constructions of Massera and Kurzweil discussed in Section 4. 
The first construction is based on (4) takes the form \[\begin{split} V_{1}(x)&:=\int_{0}^{\infty}G(|\phi_{ 1}(t,x)|)dt,\\ V_{i}(x)&:=\int_{0}^{\infty}V_{i-1}(\phi_{i}(t,x)) dt,\qquad i=2,\ldots,m\end{split}\] with the function \(G\) coming from a nontrivial multivariable extension of a lemma due to Massera.7 The second construction is based on (6) and takes the form Footnote 7: Interestingly, the Wikipedia entry for “Massera’s lemma” cites Linh’s result. \[V_{1}(x) :=\sup_{t\geq 0}g(\phi_{1}(t,x))k(t),\] \[V_{i}(x) :=\sup_{t\geq 0}V_{i-1}(\phi_{i}(t,x))k(t),\qquad i=2,\ldots,m\] where \(g(x)\) is the infimum norm of the backward-in-time solutions from \(x\) over all switching signals, and \(k(\cdot)\) is a suitable function. These results became Linh's MS thesis and are documented in the paper [46], which also contains a readable and self-contained account of the relevant background material. ## 11 Beyond commuting nonlinear vector fields: early attempts In view of the preceding developments, the next logical case to consider is when the globally asymptotically stable vector fields \(f_{p}\), \(p\in\mathcal{P}\) generate a nilpotent or solvable Lie algebra. Here, the notions of a Lie algebra and its nilpotency and solvability are defined along the same lines as in Sections 7 and 8, except that instead of \(n\times n\) matrices with the Lie bracket defined by (15) we work with vector fields on \(\mathbb{R}^{n}\) and the Lie bracket defined by (23). We could, alternatively, inspect the Lie algebra generated by the Jacobian matrices (26). If these matrices are Hurwitz and if the Lie algebra is solvable, then we know from Section 8 that a quadratic common Lyapunov function exists for the linearized systems \(\dot{x}=A_{p}x\), and then exactly as in Section 10 we can conclude local uniform exponential stability of the switched nonlinear system (2). However, for the matrices \(A_{p}\) to be Hurwitz, we need to assume that the vector fields \(f_{p}\) are exponentially stable (at least locally). And overall this approach is not very interesting, as it is just an application of Lyapunov's indirect method. It seems much more intriguing to ask how the structure of the Lie algebra generated by the original nonlinear vector fields \(f_{p}\), \(p\in P\) is related to (potentially global) stability properties of the switched nonlinear system (2). We saw in Section 8 that in the linear setting, solvability of the matrix Lie algebra implies simultaneous triangularizability (Lie's theorem), which in turn directly leads to uniform exponential stability of the switched linear system. It is tempting to try to carry out a similar program for the nonlinear vector fields \(f_{p}\), \(p\in\mathcal{P}\), where by the upper-triangular structure we now mean that they take the form \[f_{p}(x)=\begin{pmatrix}f_{p1}(x_{1},x_{2},\ldots,x_{n})\\ f_{p2}(x_{2},\ldots,x_{n})\\ \vdots\\ f_{pn}(x_{n})\end{pmatrix}.\] Initially I was encouraged by the fact that there do exist nonlinear versions of Lie's theorem, which provide Lie-algebraic conditions under which a family of nonlinear systems can be simultaneously triangularized [13, 20, 36]. These results unfortunately rely on some technical assumptions that do not hold in our context, but let us ignore such details here. A more serious difficulty arises, however, when we try to explore the triangular structure for the purpose of establishing stability. 
When proving stability of the switched linear system generated by the matrices (22), we used the fact that the state of an exponentially stable linear system perturbed by an input converging to \(0\) must converge to \(0\). For nonlinear systems, this "converging-input-converging-state" property--which is a consequence of the well-known _input-to-state stability (ISS)_ property introduced by Sontag in [43]--is known not to be true in general. As an example, consider the system \[\begin{split}\dot{x}_{1}&=-x_{1}+x_{1}^{2}x_{2},\\ \dot{x}_{2}&=-x_{2}.\end{split} \tag{27}\] Even though \(x_{2}\to 0\) exponentially fast, for sufficiently large initial conditions \(x_{1}\) escapes to infinity in finite time (see, e.g., [22, p. 8] or [27, p. 44] for details). As I already mentioned in Section 5, when I was doing a postdoc at Yale and working on this problem I met David Angeli who was interested in switched systems as well. During one of our discussions, I mentioned to David that I was trying to see if triangular structure might be helpful for showing GUAS of a switched nonlinear system. Any remaining hope I might have still had was promptly shattered by the following nice counterexample that David suggested. Let \(\mathcal{P}=\{1,2\}\), and consider the upper-triangular vector fields \[f_{1}(x)=\begin{pmatrix}-x_{1}+2\sin^{2}(x_{1})x_{1}^{2}x_{2}\\ -x_{2}\end{pmatrix},\qquad f_{2}(x)=\begin{pmatrix}-x_{1}+2\cos^{2}(x_{1})x_{1 }^{2}x_{2}\\ -x_{2}\end{pmatrix}.\] The systems \(\dot{x}=f_{1}(x)\) and \(\dot{x}=f_{2}(x)\) are globally asymptotically stable, as one can easily verify by examining the behavior of their solutions. Nevertheless, the switched system \(\dot{x}=f_{\sigma}(x)\) is not GUAS. Indeed, if it were GUAS, then we know from Section 4 that \(f_{1}\) and \(f_{2}\) would share a common Lyapunov function. This Lyapunov function would in turn certify asymptotic stability of every "convex combination" \(\dot{x}=\alpha f_{1}(x)+(1-\alpha)f_{2}(x)\) of the two systems, where \(\alpha\in[0,1]\). But for \(\alpha=1/2\) we recover the unstable system (27), reaching a contradiction. Our earlier discussion indicates that the problem here is that the \(x_{1}\)-dynamics are not ISS with respect to \(x_{2}\) (viewed as an input). By imposing such ISS assumptions either on the switched system or on the individual modes, sufficient conditions for stability of switched triangular nonlinear systems can indeed be obtained, as David and I showed in [5]. (The above counterexample appears in the same paper.) However, such additional assumptions take us quite far from our original goal of formulating stability conditions in terms of the Lie algebra generated by the vector fields \(f_{p}\), \(p\in\mathcal{P}\). We see that up to this point, all attempts to formulate global asymptotic stability criteria valid beyond the commuting nonlinear case were unsuccessful; the methods employed to obtain the corresponding results for switched linear systems do not apply, and an altogether different approach seems to be required. I presented this as an open problem at a special session of the 2002 Mathematical Theory of Networks and Systems (MTNS) conference (it was included in the conference proceedings, and later published more formally as a book chapter [28]). By that time, I had mostly given up on this particular research direction and turned my attention to other things. 
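Before moving on, a quick numerical experiment (with initial conditions chosen purely for illustration) makes the finite escape time of (27) concrete: a small initial state decays to the origin, while a larger one is driven to enormous values within a fraction of a time unit.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    # System (27): x1' = -x1 + x1^2 * x2,   x2' = -x2
    return [-x[0] + x[0] ** 2 * x[1], -x[1]]

def escape(t, x):                # stop the integration once |x1| is huge
    return abs(x[0]) - 1e6
escape.terminal = True

for x0 in ([0.5, 0.5], [3.0, 3.0]):
    sol = solve_ivp(rhs, (0.0, 10.0), x0, events=escape, rtol=1e-8)
    if sol.t_events[0].size > 0:
        print(f"x(0) = {x0}: |x1| reaches 1e6 near t = {sol.t_events[0][0]:.3f}")
    else:
        print(f"x(0) = {x0}: converges, |x(10)| = {np.linalg.norm(sol.y[:, -1]):.2e}")
```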
## 12 A new twist: worst-case switching and the maximum principle In September of 2003, I received an email from Michael Margaliot, a professor at Tel Aviv University. We had never met or communicated before, although I had seen some of his published work. In his email, Michael said that he had some ideas for solving the above open problem. The approach that he outlined was based on the concept of _worst-case switching_, an idea that we already encountered in Section 5 and that had appeared elsewhere in the literature, most notably in the work of Pyatnitskiy and Rapoport [39]. Inspired by that work, Michael proposed to consider an auxiliary control system whose trajectories contain those of the switched system, and to formulate, for this control system, an optimal control problem that consists in driving the state as far away from the origin as possible in a given amount of time. We can then study this optimal control problem with the help of the maximum principle and, under suitable conditions, hope to show that optimal controls are _bang-bang_, i.e., take values only at the extreme points of the control set. Furthermore, we can hope to derive an upper bound on the total number of switches of the optimal controls. We already know from Sections 6 and 7 that having an upper bound on the number of switches--combined with asymptotic stability of the individual modes--leads quite directly to asymptotic stability under switching. In the present case, since optimal controls correspond to the worst-case (most destabilizing) switching, we can conclude as in Section 5 that the switched system is GUAS. At the first glance, the approach just described makes no mention of commutation relations, so how is it relevant to our problem? The answer lies in the fact that the optimal control is determined by the sign of a certain function, and switches occur when this function changes sign. As it turns out, the derivatives of this function involve Lie brackets of the vector fields that define the switched system (and the auxiliary control system). When certain Lie brackets vanish, the above function can be shown to be polynomial, and we have a bound on the number of its sign changes. So, there is in fact a direct connection with the Lie brackets. Needless to say, Michael's email blew my mind. I had been so focused on solvable Lie algebras and triangular structure that I had completely abandoned an earlier approach, followed by Gurvits, which centered on reachability with a bounded number of switches. In the linear case, that earlier approach had led me to a dead end while the approach based on triangular structure had proved more fruitful. However, in the nonlinear case the latter approach had led me to a dead end as well. Michael's novel idea was, in essence, to return to the older approach, complemented by the bang-bang principle of optimal control. Michael himself had only worked out a particular case of a switched linear system and low-order nilpotency, thus reproving known results. He wrote to me to ask for my help with the nonlinear setting. In order to explain the results that we eventually obtained, it is easiest to begin with a familiar special case of two modes (\(m=2\)) and second-order nilpotency. This is the direct nonlinear counterpart of the linear condition (20) for which Gurvits proved his result in [17], and the simplest next case to consider after the commuting one treated in Section 10. 
In other words, let us first see how we can prove the following: _If two globally asymptotically stable vector fields \(f_{1}\) and \(f_{2}\) satisfy_ \[[f_{1},[f_{1},f_{2}]](x)=[f_{2},[f_{1},f_{2}]](x)=0\qquad\forall\,x\in\mathbb{ R}^{n} \tag{28}\] _then the switched nonlinear system (2) is GUAS._ The first step is to define the control system \[\dot{x}=f(x)+g(x)u \tag{29}\] with \(f:=f_{1}\), \(g:=f_{2}-f_{1}\), and the control set \(U:=\{0,1\}\). It is clear that for piecewise constant controls \(u(\cdot)\) taking values in \(U\), the trajectories of (29) coincide with those of the switched system (2). For technical reasons, it is preferable to enlarge the control set to \(\bar{U}:=[0,1]\) and to allow more general control functions \(u(\cdot)\) taking values in \(\bar{U}\). The solutions of the resulting control system coincide with those of the differential inclusion \[\dot{x}\in\mathrm{co}\{f_{1}(x),f_{2}(x)\} \tag{30}\] (here 'co' denotes the convex hull); they include all solutions of the original switched system. Now, we pose the following optimal control problem: for an arbitrary given initial condition \(x(0)\) and a given time horizon \(t_{f}>0\), find a control \(u(\cdot)\) that maximizes the functional \[J(u):=|x(t_{f})|^{2}\] where \(x(\cdot)\) is the state trajectory generated by \(u(\cdot)\). Intuitively, we are looking for the worst-case (the most destabilizing) control. If we can show that the resulting closed-loop system is asymptotically stable, then the same property should hold for all other controls, and global asymptotic stability of the differential inclusion (30)--hence in particular GUAS of the switched system (2)--will be established. This problem can be studied with the help of the _maximum principle_ of optimal control.8 To this end, we introduce the _Hamiltonian_ Footnote 8: The reader can find the statement of the maximum principle in almost any textbook on optimal control theory; [29] is my personal favorite. \[H(x,u,p):=p^{T}f(x)+p^{T}g(x)u\] where \(p:[0,t_{f}]\to\mathbb{R}^{n}\) is the _costate_ satisfying the _adjoint equation_\(\dot{p}=-\partial H/\partial x\). The maximum principle then says that an optimal control must maximize the Hamiltonian pointwise in time. More precisely, at each time \(t\), an optimal control should maximize the function \(u\mapsto H(x(t),u,p(t))\), where \(x(\cdot)\) is the corresponding optimal state trajectory and \(p(\cdot)\) is a costate trajectory. In view of the affine dependence of the right-hand side of the system (29), and hence of the Hamiltonian \(H\), on \(u\), it is easy to see how to choose \(u\) to maximize \(H\). If we define the function \[\varphi(t):=p^{T}(t)g(x(t))\] then an optimal control must satisfy \(u(t)=1\) if \(\varphi(t)>0\), and \(u(t)=0\) if \(\varphi(t)<0\). We see that the switches of an optimal control are governed by the sign of the function \(\varphi\). Here the announced link with Lie brackets finally appears. Using the definition of \(\varphi\) and the differential equations for \(x\) and \(p\), we can compute the time derivative of \(\varphi\) to be \[\dot{\varphi}(t)=p^{T}(t)[f,g](x(t))\] and its second derivative to be \[\ddot{\varphi}(t)=p^{T}(t)[f,[f,g]](x(t))+p^{T}(t)[g,[f,g]](x(t))u. \tag{31}\] Recalling (28), it is not hard to see that the second-order Lie brackets in the last expression are \(0\), forcing \(\ddot{\varphi}\) to be \(0\). This of course means that \(\varphi\) is a linear function of time, and so it has at most one sign change. 
Therefore, optimal controls are bang-bang with at most one switch! From this, we can conclude as explained earlier that optimal state trajectories go to \(0\), and GUAS of the switched system follows. In the above reasoning, we glossed over one important possibility: what if \(\varphi\) is _identically zero_? If this happens, the maximum principle no longer guarantees that an optimal control only takes the values \(0\) and \(1\). While we cannot rule out such behavior, fortunately we can use a lemma proved by Hector Sussmann in [45] to show that in this case, another optimal control exists which is bang-bang but may have one additional switch, i.e, it has at most two switches in total. As we know, with this extra switch the stability proof still goes through. To go beyond the second-order nilpotency condition (28), let us suppose, referring to (31), that we still have \([g,[f,g]](x)=0\) for all \(x\), but \([f,[f,g]]\) is nonzero. Then we can proceed to calculate the third time derivative of \(\varphi\): \[\dddot{\varphi}(t)=p^{T}(t)[f,[f,[f,g]]](x(t))+p^{T}(t)[g,[f,[f,g]]](x(t))u. \tag{32}\] If the third-order Lie brackets in this expression vanish, then \(\varphi\) is a quadratic function of time, and so it has at most two sign changes (unless it is identically \(0\), but we can handle that case in the same way as before). This still gives us a desired upper bound on the number of switches of optimal controls, implying GUAS of the switched system. In a similar fashion, we can formulate Lie-algebraic conditions guaranteeing that \(\varphi(\cdot)\) is a polynomial of degree 3, 4, and so on. Unfortunately, these conditions are not quite the \(k\)th-order nilpotency of the Lie algebra generated by \(f\) and \(g\), because we need to assume that the lower-order Lie brackets appearing in the \(u\)-dependent terms in (31), (32),... are 0 as well. So far we have only considered the case of \(m=2\) modes. For a general \(m\), the affine control system corresponding to the switched system (2) takes the form \[\dot{x}=f(x)+\sum_{i=1}^{m-1}g_{i}(x)u_{i} \tag{33}\] where \(f:=f_{1}\); \(g_{i}:=f_{i+1}-f_{1}\), \(i=1,\ldots,m\); \(U\in\mathbb{R}^{m-1}\) consists of the standard unit vectors (with one component equal to 1 and the other components equal to 0) and the origin; and \(\bar{U}=\mathrm{co}(U)\) is the standard simplex with these \(m\) vertices. Switches of each component \(u_{i}\) of an optimal control are now governed by sign changes of a corresponding function \(\varphi_{i}\), and we can still formulate Lie-algebraic conditions under which these functions are all polynomial. If \(d\) is an upper bound on the degrees of these polynomials, then it can be shown that optimal controls have at most \((d+2)^{m-1}-1\) switches. (For \(m=2\) and \(d=1\) we recover 2 switches as above.) Full details of this general formulation can be found in our paper with Michael [35]. This approach was also further explored and extended in a follow-up paper that Michael wrote with his MS student Yoav Sharon [40], who subsequently did his PhD with me (on unrelated topics). While these results are encouraging, I still consider the general nonlinear problem to be largely open. It seems appropriate here to go back to the linear case for a moment and mention another relevant result by Michael. 
As we discussed in Section 7, Gurvits claimed in [17] that if \(m\) matrices generate a Lie algebra nilpotent of order \(k\), then there exists a positive integer \(N=N(m,k)\) such that an arbitrary product of exponentials of these matrices can be written as a product of at most \(N\) exponential terms. Gurvits himself only proved the case \(m=k=2\), in which \(N=5\) as expressed by (21). In the paper [34], Michael actually disproved Gurvits' general claim by giving a counterexample where \(m=2\), \(k=3\), and an \(N\) with the above property does not exist. So, my earlier efforts to prove Gurvits' claim were futile. The approach proposed by Michael and developed (for nonlinear systems) in our paper [35] still draws quite heavily on that of Gurvits, since both invoke reachability with a bound on the number of switches. However, the worst-case switching--formalized via optimal control and analyzed using the maximum principle--was a crucial missing ingredient that Michael supplied and that allowed us to make progress. ## 13 A throwback In 2009 I met Yuliy Baryshnikov, who a couple of years later became my colleague at University of Illinois. I told him about the research I had been doing earlier on Lie algebras and stability of switched systems, and he surprised me by saying that, yes, he knew that a switched linear system is stable under arbitrary switching if the corresponding matrix Lie algebra is solvable, or "solvable plus compact" as in Section 9. This seemed odd because Yuliy had not really been working in control theory since his days at the Institute of Control Problems in Moscow (where both he and I grew up) in the 1980s, and it was unlikely that he would have read my papers. I asked him where he had seen these results, and he pointed me to two papers by Sergey Kutepov [25, 26] dating back to 1982 and 1984. It was understandable why I had not come across these references earlier. First, they appeared in a Russian-language journal and were never translated into English.9 Of course, I could read Russian perfectly well, but this explains why these papers have not been cited in the Western control-theoretic literature. Second, Kutepov's work did not mention switched systems at all. Instead, it was concerned with exponential stability, uniform over all controls, of bilinear control systems of the form \(\dot{x}=\sum_{i=1}^{m}u_{i}A_{i}x\). For suitable choices of the control set, such a system captures the behavior of the switched linear system generated by the matrices \(A_{1},\ldots,A_{m}\); this is a special case of the relationship that we already saw in Section 12 between the affine control system (33) and the switched nonlinear system (2). These differences notwithstanding, Kutepov's papers [25] and [26] derived the same basic Lie-algebraic stability criteria as the ones in our papers [30] and [2], respectively. They did not contain constructions of quadratic common Lyapunov functions or the more advanced results of [2]. On the other hand, Kutepov's proofs were in some places more elegant, revealing a firmer grasp of the theory of Lie algebras than the one I had. There is a joke that I heard from Andy Teel, and so I will attribute it to him. It goes like this: Russian technical literature is not causal; you prove a theorem, and some time later it appears in an old Russian paper. Kutepov's papers from the 1980s, which "appeared" about 10 years after our work, are a perfect example of this. Of course, in subsequent citations I give full credit to Kutepov. 
We also submitted a note to Systems and Control Letters explaining the relationship between [25] and [30]. For some reason they never published it, but it is posted on my website. Through my Moscow contacts I was eventually able to find Kutepov's email address and wrote to him about how I had been unwittingly following in his footsteps. He was very gracious and thanked me for helping to finally bring proper recognition to his work. Had I learned about Kutepov's work soon after writing my papers containing the same results, I would have certainly been devastated. In that sense, I am thankful that many years had passed, I had a faculty job, and my research reputation was less dependent on that early work. I was also mature enough to understand that our collective knowledge is more important than who did what first. ## 14 Robustness of Lie-algebraic stability conditions Lie-algebraic stability conditions, while mathematically appealing and often fairly easily checkable, suffer from one serious drawback: they are not robust with respect to small perturbations of the system data. For example, if we take two matrices that commute with each other and perturb one of them slightly, they will cease to commute. If we take a family of matrices generating a nilpotent or solvable Lie algebra and introduce arbitrarily small errors in their entries, the new Lie algebra will no longer possess any helpful structure (see [2, Section A.6] for a precise result along these lines). On the other hand, the stability properties that these Lie-algebraic conditions establish _are_ robust to small perturbations. For the switched linear system (8), this is especially easy to see with the help of a quadratic common Lyapunov function characterized by the inequalities (14), which exists in all situations discussed in Sections 6-9. Suppose that we are given a family of perturbed matrices of the form \(\bar{A}_{p}=A_{p}+\Delta_{p}\), \(p\in\mathcal{P}\). Then \(V(x)=x^{T}Px\) is still a common Lyapunov function for the perturbed systems \(\dot{x}=\bar{A}_{p}x\) if the perturbations matrices \(\Delta_{p}\) satisfy \[\|\Delta_{p}\|_{2}<\frac{\lambda_{\min}(Q)}{2\lambda_{\max}(P)} \tag{34}\] where \(\|\cdot\|_{2}\) is the matrix norm induced by the Euclidean norm and \(\lambda_{\min}(\cdot)\) and \(\lambda_{\max}(\cdot)\) denote the smallest and the largest eigenvalue of a symmetric matrix, respectively (see [21, p. 342] or [27, p. 42]). This suggests that, instead of requiring that suitable commutators vanish, robust Lie-algebraic stability conditions should ask that these commutators be sufficiently small, in order to guarantee that a given family of matrices is close to a family of commuting matrices or to a family of matrices generating a nilpotent, solvable, or "solvable plus compact" Lie algebra. I already explained one consequence of my conversation with Yuliy Baryshnikov in 2009. Another, perhaps more important outcome of our discussions was that Yuliy suggested several approaches for developing robust Lie-algebraic stability conditions for switched linear systems. One approach, carried out in discrete time, involves rearranging the order of matrices in a product (somewhat in the spirit of Gurvits' "elementary shufflings" mentioned in Section 7) and characterizing the effect of such a rearrangement in terms of the norms of the commutators of the matrices. The switched linear system is GUES if the commutators are small enough, which gives a robust version of the result from Section 6. 
Another approach is in continuous time and utilizes the structure of the Lie algebra. In the notation of Section 9, we showed GUES when the matrices in the subalgebra \(\mathfrak{s}\) have sufficiently small norms compared to the stability margin of the matrices in the solvable part \(\mathfrak{r}\), which is a robust version of the result from Section 8. We also showed that GUES is preserved when noncompact perturbations are introduced, as long as they are small compared to the real parts of the eigenvalues of the matrices in the solvable part \(\mathfrak{r}\); this is a robust version of the stability criterion from Section 9. All of these results are documented in our 2012 paper [1] with Yuliy and Andrei Agrachev. In the follow-up paper [8], Yuliy and I used an inequality due to Lojasiewicz to relate the size of commutators of given matrices to the distance from this family of matrices to a family of commuting matrices (or matrices generating a nilpotent or solvable Lie algebra). Coupled with the bound (34), this again leads to robust stability criteria involving bounds on the commutators. The discrete-time approach from [1] was later taken up by Atreyee Kundu, who did her PhD under Debasish Chatterjee, my former PhD student. In the paper [23] Atreyee adapted Yuliy's method to switching signals satisfying a dwell-time condition, while in [24] Atreyee and Debasish further extended the results by allowing the presence of unstable modes. Another relatively recent work [16] derives an upper bound on the commutator of two Hurwitz matrices under which the procedure (18)-(19), with \(m=2\), still yields a quadratic common Lyapunov function. At the end of Section 8 we mentioned the paper [19] which addressed stabilization under arbitrary switching via finding controller gains that achieve simultaneous triangularization of the closed-loop matrices. The same authors--Haimovich and Braslavsky--also developed a robust version of their procedure, which only requires approximate simultaneous triangularization [18]. ## 15 Confusions Reading the above story, one might get the impression of clear vision conceived, systematic efforts expended, and steady progress achieved. For me, this could not be farther from the truth. The part of the work described here in which I was personally involved was in reality a fairly random sequence of confused attempts, unexpected turns, and frustrating failures. I was also working on several other research topics, quite unrelated to the problem addressed in this article, and there were long periods when I was not really thinking about this problem until some spontaneous conversation would nudge me to return to it. It is only _after_ the work has been done that a coherent story has gradually emerged. I have been considering for some time the possibility of writing a technical survey on this subject, but it was only very recently--as I was teaching my course on switched systems and telling various stories to the students--that the idea occurred to me to write this article from an informal personal perspective. I hope that some readers will find this account useful, and the missing technical details can always be found in the cited papers. I should also stress that, since this article was conceived as a personal story and not as an exhaustive survey of relevant research, my overview of the literature is far from being complete.
2301.05926
Physics-Informed Neural Networks for Mesh Deformation with Exact Boundary Enforcement
In this work, we have applied physics-informed neural networks (PINN) for solving mesh deformation problems. We used the collocation PINN method to capture the new positions of the vertex nodes while preserving the connectivity information. We use linear elasticity equations for mesh deformation. To prevent vertex collisions or edge overlap, the mesh movement in this work is conducted in steps with relatively small movements. For moving boundary problems, the exact position of the boundary is essential for having an accurate solution. However, PINNs are frequently unable to satisfy Dirichlet boundary conditions exactly. To overcome this issue, we have used hard boundary condition enforcement to automatically satisfy Dirichlet boundary conditions. Specifically, we first trained a PINN with soft boundary conditions to obtain a particular solution. Then, this solution was tuned with exact boundary positions and a proper distance function by using a new PINN considering only the equation residual. To assess the accuracy of our approach, we used the classical translation and rotation tests and compared them with a proper mesh quality metric considering the change in the element area and shape. The results show the accuracy of this approach is comparable with that of finite element solutions. We also solved different moving boundary problems, resembling commonly used fluid-structure interaction problems. This work provides insight into using PINN for mesh-deformation problems without needing a discretization scheme with reasonable accuracy.
Atakan Aygun, Romit Maulik, Ali Karakus
2023-01-14T14:17:22Z
http://arxiv.org/abs/2301.05926v1
# Physics-Informed Neural Networks for Mesh Deformation ###### Abstract In this work, we have applied physics-informed neural networks (PINN) for solving mesh deformation problems. We used the collocation PINN method to capture the new positions of the vertex nodes while preserving the connectivity information. We use linear elasticity equations for mesh deformation. To prevent vertex collisions or edge overlap, the mesh movement in this work is conducted in steps with relatively small movements. For moving boundary problems, the exact position of the boundary is essential for having an accurate solution. However, PINNs are frequently unable to satisfy Dirichlet boundary conditions exactly. To overcome this issue, we have used hard boundary condition enforcement to automatically satisfy Dirichlet boundary conditions. Specifically, we first trained a PINN with soft boundary conditions to obtain a particular solution. Then, this solution was tuned with exact boundary positions and a proper distance function by using a new PINN considering only the equation residual. To assess the accuracy of our approach, we used the classical translation and rotation tests and compared them with a proper mesh quality metric considering the change in the element area and shape. The results show the accuracy of this approach is comparable with that of finite element solutions. We also solved different moving boundary problems, resembling commonly used fluid-structure interaction problems. This work provides insight into using PINN for mesh-deformation problems without needing a discretization scheme with reasonable accuracy. **Keywords:** physics-informed neural networks, mesh deformation, exact boundary enforcement, linear elasticity ## 1 Introduction Dynamic grids in numerical fluid flow simulations generally arise in many applications, such as airfoil movement [1, 2], blood flow [3], parachute mechanics [4, 5], and free surface flow problems [6]. These and other fluid-structure interaction (FSI) problems need to move the computational grid with moving boundaries. The naive choice is to regenerate the mesh every time the boundary moves. Regenerating the mesh for a complex geometry results in a need for an automatic mesh generator [7]. This approach alters the grid connectivity and, therefore, brings up a need to project the solution to the new mesh. This introduces new projection errors each time the mesh is updated. Moreover, the cost of calling a new mesh generation algorithm can be overwhelming, especially for 3D problems [8]. Specific mesh moving techniques can overcome the drawbacks of remeshing for moving boundary problems. These methods try to update the position of the nodes of the original mesh under some prescribed laws without changing the grid connectivity. Farhat et al. introduced a spring analogy, where they fictitiously attach a torsional spring to the nodes of the mesh [9]. The system has fictitious mass, damping, and stiffness matrices, and the forcing is the displacement of the moving boundaries. This approach prevents vertex collisions as well as penetrating grid edges. In [8], the authors used a linear elastic equation to represent the fluid domain as an elastically deformable body and introduced a parallel finite element strategy. Using the same elasticity formulation, Stein et al. [10] solved the equation using a Jacobian-based stiffening. They introduced an additional stiffening power as a function of transformation Jacobian in the finite element formulation. 
This addition allowed them to stiffen the smaller elements more than the larger ones, resulting in improved mesh quality near the moving surfaces. Takizawa et al. [11], introduced a method based on the fiber-reinforced hyperelasticity model. They introduced fibers in different directions according to the motion, which allows the model to reduce the distortion of a mesh element. The moving mesh problem can be solved using the Laplacian or biharmonic equations [12, 13, 14]. Although using the biharmonic operator introduces extra computational complexity compared to the Laplacian equation systems, it can give the extra ability to control the normal mesh spacing [14]. Apart from conventional numerical methods, machine learning methods are also used to solve partial differential equations. Deep neural networks were first used by Lee and Kang [15], and Lagaris et al [16] to predict the solution of a partial differential equation (PDE). Raissi et al. [17] introduced the concept of physics-informed neural networks (PINN) to solve PDEs without any given data. This approach gives information about the physical laws to the neural network. Using the information on the boundary and initial conditions, neural networks can predict the solution of a PDE. The PINN formulation has received great attention and has been studied in wide content. There are numerous extensions of PINN to improve the methodology. Several domain decomposition models are designed to improve the accuracy and allow parallelization [18, 19, 20]. Bayesian PINNs are proposed to tackle the problems involving solving PDEs where noisy data is available [21] and where uncertainty quantification is important. Applications of PINN cover the solutions of conservation laws [22], fractional and stochastic differential equations [23, 24], solution of Navier-Stokes equations [25, 26], Euler equations [27], heat transfer problems [28, 29], Boltzmann equation with Bhatnagar-Gross-Krook collision model [30], Allen-Cahn and Cahn-Hilliard equations [31, 32], free boundary and Stefan problems [33] and many more. Despite the success of PINN across a range of different problems, it can face difficulties when solving multiscale and multiphysics problems [34], especially for dynamical systems with chaotic or turbulent behavior [35]. The fully connected networks face difficulties in learning high-frequency functions. This phenomenon is named spectral bias [36, 37]. The high-frequency behavior in the objective function results in sharp gradients. Therefore, PINN models can have difficulties while penalizing the residual loss. Although there are several approaches to tackle these problems and improve the training capabilities of PINN, the classical PINN method shows better performance to accurately solve the PDEs that govern the mesh deformation. Therefore our research focuses on using PINNs in the application of these problems. The main objective of this paper is to show the applicability of physics-informed neural networks for moving mesh problems. The PINN approach can produce satisfactory solutions for the movement of boundaries without needing a discretization scheme. However, using the original PINN formulation for mesh moving problems can have difficulties with the static and moving boundaries. PINN minimizes the loss at the boundaries, without imposing boundary conditions exactly. To overcome this problem, we used exact boundary enforcement. 
After obtaining a particular solution that weakly satisfies the boundary conditions, the prediction is corrected by training another PINN. To the best of our knowledge, using PINNs on mesh movement problems with exact boundary enforcement has not been studied in detail in the literature. The remainder of this paper is organized as follows. First, basic information is given about physics-informed neural networks. This chapter is enhanced with the methodology of automatically satisfying boundary conditions using exact boundary enforcement. The most common mesh movement techniques are presented in the next chapter alongside the mesh quality metric used for comparing different methods. The results are presented with classical translation and rotation tests, followed by examples resembling commonly used moving boundary problems.

## 2 Physics-Informed Neural Networks

A basic, fully connected deep neural network architecture can be used to solve differential equations [38]. Given an input vector \(\mathbf{x}\in\mathbb{R}^{d}\), a single layer neural network gives an output \(\hat{\mathbf{u}}\) of the following form: \[\hat{\mathbf{u}}=\sigma(\mathbf{W}_{1}\mathbf{x}+\mathbf{b}_{1})\mathbf{W}_{2}+\mathbf{b}_{2}, \tag{1}\] where \(\mathbf{W}\) are the weight matrices and \(\mathbf{b}\) are the bias vectors. \(\sigma(\cdot)\) is a nonlinear function known as the activation function. In general, the sigmoid, hyperbolic tangent, and rectified linear unit (ReLU) are popular choices for the activation function. The hyperparameters \(\theta=[\mathbf{W},\mathbf{b}]\) are estimated by the following optimization problem: \[\theta^{*}=\operatorname*{arg\,min}_{\theta}J(\theta;\mathbf{x}). \tag{2}\] Here, \(J\) is the objective function to be minimized. In this work, this function is defined as the mean squared error of the prediction. This minimization problem in Equation 2 can be solved by using first-order stochastic gradient descent (SGD) algorithms [39]. In each iteration, the hyperparameters are updated as \[\theta^{i+1}=\theta^{i}-\eta^{i}\nabla_{\theta}J(\theta;\mathbf{x}), \tag{3}\] where \(i\) is the current iteration and \(\eta\) is the learning rate. The gradient of the loss function, \(\nabla_{\theta}J(\theta;\mathbf{x})\), is calculated by backpropagation [40]. For a physics-informed neural network, we consider the general form of a partial differential equation: \[\mathbf{u}_{t}+\mathcal{N}[\mathbf{u}]=0,\quad\mathbf{x}\in\Omega,\ t\in[0,T] \tag{4a}\] \[\mathbf{u}(\mathbf{x},0)=f(\mathbf{x}),\quad\mathbf{x}\in\Omega \tag{4b}\] \[\mathbf{u}(\mathbf{x},t)=g(\mathbf{x},t),\quad\mathbf{x}\in\partial\Omega,\ t\in[0,T] \tag{4c}\] where \(\mathcal{N}\) is a generalized differential operator that can be linear or nonlinear, and \(\mathbf{x}\in\mathbb{R}^{d}\) and \(t\in[0,T]\) are the spatial and temporal coordinates. \(\Omega\) and \(\partial\Omega\) represent the computational domain and the boundary, respectively. \(\mathbf{u}(\mathbf{x},t)\) is the general solution of the PDE, where \(f(\mathbf{x})\) is the initial condition and \(g(\mathbf{x},t)\) is the boundary condition. The hidden solution, \(\mathbf{u}(\mathbf{x},t)\), can be approximated under the PINN framework proposed by Raissi et al. [17] by a feedforward neural network \(\hat{\mathbf{u}}(\mathbf{x},t;\theta)\) with parameters \(\theta\). For the supervised training, the only labeled data comes from the boundary/initial points. Inside the domain, the loss is determined by the PDE residual.
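The training procedure just described can be made concrete with a short TensorFlow sketch of one update step under soft boundary conditions. This is an illustration rather than the authors' implementation: the network follows the fully connected form of Equation (1), the update is the gradient step of Equations (2)-(3), and the placeholder residual routine `pde_residual` and the loss weights `w_r` and `w_bc` are assumptions made only for this example.

```python
import tensorflow as tf

def make_model(width=50, depth=7, out_dim=2):
    # Fully connected tanh network mapping (x, y) -> (u_x, u_y), cf. Equation (1).
    layers = [tf.keras.layers.Dense(width, activation="tanh") for _ in range(depth)]
    layers.append(tf.keras.layers.Dense(out_dim))
    return tf.keras.Sequential(layers)

def pde_residual(model, x):
    # Placeholder residual N[u](x) used only so the sketch runs; for the mesh
    # problem this would evaluate the elasticity residual of Section 3 with
    # automatic differentiation of the network output.
    return tf.reduce_sum(model(x), axis=1, keepdims=True)

def train_step(model, optimizer, x_res, x_bc, u_bc, w_r=1.0, w_bc=25.0):
    # One gradient-descent update of a weighted residual-plus-boundary loss,
    # each term a mean squared error over its set of collocation points.
    with tf.GradientTape() as tape:
        loss_r = tf.reduce_mean(tf.square(pde_residual(model, x_res)))
        loss_bc = tf.reduce_mean(tf.square(model(x_bc) - u_bc))
        loss = w_r * loss_r + w_bc * loss_bc
    grads = tape.gradient(loss, model.trainable_variables)   # backpropagation
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```

A full run would simply iterate `train_step` with `tf.keras.optimizers.Adam(learning_rate=1e-3)`, which is consistent with the optimizer and learning rate reported in Section 4.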
By utilizing automatic differentiation (AD) [41], PINNs can differentiate the network output w.r.t. the input layer. AD applies the chain rule repeatedly to the elementary functions and arithmetic operations to obtain the derivative of the overall composition. AD is well implemented in popular deep learning frameworks such as TensorFlow [42] and PyTorch [43]. In classical PINN implementations, the loss term is a composite term including the supervised data loss on the boundary and initial points and the PDE loss. The total loss term can be written as \[\mathcal{L}=w_{R}\mathcal{L}_{R}+w_{BC}\mathcal{L}_{BC}+w_{IC}\mathcal{L}_{IC}. \tag{5}\] Here, the terms represent the boundary loss \(\mathcal{L}_{BC}\), the initial condition loss \(\mathcal{L}_{IC}\), and the PDE residual loss \(\mathcal{L}_{R}\). The \(w\) terms are specific weights of each loss term that can be user-specified or tuned manually or automatically [37, 44]. Each loss term can be written as \[\mathcal{L}_{R}=\frac{1}{N_{R}}\sum_{i=1}^{N_{R}}|\mathbf{u}_{t}+\mathcal{N}[\mathbf{u}(\mathbf{x}^{i},t^{i})]|^{2} \tag{6a}\] \[\mathcal{L}_{BC}=\frac{1}{N_{BC}}\sum_{i=1}^{N_{BC}}|\mathbf{u}(\mathbf{x}^{i},t^{i})-g(\mathbf{x}^{i},t^{i})|^{2} \tag{6b}\] \[\mathcal{L}_{IC}=\frac{1}{N_{IC}}\sum_{i=1}^{N_{IC}}|\mathbf{u}(\mathbf{x}^{i},0)-f(\mathbf{x}^{i})|^{2}. \tag{6c}\] Here, \(N_{R}\), \(N_{BC}\), and \(N_{IC}\) are the total numbers of points used for calculating the mean squared errors that form the loss terms. The schematic of a classical PINN can be seen in the left part of Figure 1. The hyperparameters \(\theta=[\mathbf{W},\mathbf{b}]\) can be optimized by a chosen optimization algorithm to find the minimum of the total loss defined in Equation 5. As mentioned above, stochastic gradient descent algorithms are commonly used in neural network implementations [39]. This method aims to find new parameters \(\theta\) in the opposite direction of the gradient of the objective function. The gradient of the loss function w.r.t. the hyperparameters is calculated by backpropagation. In this work, we used the Adam algorithm [45] as the SGD optimizer.

Figure 1: Schematic of the PINN approach with exact boundary enforcement. The first PINN on the left shows the original formulation with weakly enforced Dirichlet boundary conditions. The second network uses the particular solution with exact boundary enforcement to satisfy Dirichlet boundaries exactly.

### Exact Boundary Enforcement

The optimization algorithm used in PINN tries to minimize the physics-based loss, \(\mathcal{L}_{R}\). Using proper boundary and initial conditions can regularize the physics loss in deep neural networks. This classical PINN boundary condition implementation in Equation 6 is named soft boundary enforcement [46]. In this approach, the boundary error is only minimized as part of the composite loss function. Although the SGD algorithms can minimize these loss functions, they do not satisfy the boundary values exactly. However, some PDE applications, such as mesh movement, need exact boundary values. For this purpose, we apply exact boundary enforcement. Sun et al. [46] used this boundary condition enforcement to exactly satisfy the velocity and pressure values on the boundaries of internal flow cases with the Navier-Stokes equations. Sukumar and Srivastava [47] introduced geometry-aware trial functions. They multiply the neural network output with these functions and use its generalization to exactly satisfy boundary conditions on complex geometries.
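The construction in [46, 47], in which the network output is multiplied by a function that vanishes on the Dirichlet boundary and added to a field carrying the exact boundary values, can be sketched in a few lines; the same form is adopted in Equation (7) below. This is an illustration only: the unit-square distance function and the way the particular solution `u_particular` is supplied are assumptions of the example.

```python
import tensorflow as tf

def unit_square_distance(x):
    # Illustrative distance-to-boundary function D(x) for the unit square [0,1]^2:
    # zero on the boundary and positive inside. The shortest distance to the
    # boundary, R-functions, or a pre-trained network are alternatives for
    # more complex geometries.
    dx = tf.minimum(x[:, 0:1], 1.0 - x[:, 0:1])
    dy = tf.minimum(x[:, 1:2], 1.0 - x[:, 1:2])
    return tf.minimum(dx, dy)

def hard_bc_output(model, x, u_particular):
    # u_tilde(x) = u_par(x) + D(x) * u_hat(x).  On the boundary D(x) = 0, so the
    # output reproduces the particular solution, whose boundary values were
    # overwritten with the exact node positions beforehand.
    return u_particular + unit_square_distance(x) * model(x)
```

Only the residual loss is then evaluated on `hard_bc_output`, so the corrected network never has to trade boundary accuracy against the PDE loss.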
In this work, we use this idea with multiple physics-informed neural networks to exactly satisfy Dirichlet boundary conditions. First, we train a PINN with soft boundary conditions. For the mesh movement problem, the displacement vector \(\mathbf{u}=[X,Y]^{T}\) gives the new coordinates of the nodes from the first neural network prediction. This solution is then changed on the boundaries to the exact values. This new solution is the particular solution of our approach. Then, a new PINN is trained with an output \(\hat{\mathbf{u}}(\mathbf{x};\theta)\). This output is modified according to the following equation: \[\tilde{\mathbf{u}}(\mathbf{x};\theta)=\mathbf{u}_{par}(\mathbf{x})+D(\mathbf{x})\hat{\mathbf{u}}(\mathbf{x};\theta). \tag{7}\] Here, \(\mathbf{u}_{par}\) is a particular solution, a globally defined smooth function that only satisfies the boundary conditions. Any smooth function can be used for the particular solution, such as radial basis functions (RBF) or linear functions [46]. In this work, we use the classical PINN predictions with the soft boundary condition implementation as the particular solution. \(D\) is a specified distance function from the boundary. Equation 7 states that on the boundaries, where \(D(\mathbf{x})=0\), the particular solution satisfies the exact boundary values, \(\mathbf{u}=g\) on \(\partial\Omega\). For a general approach, we used the shortest distance between the residual points and the boundaries. Since the geometric domains used in this paper are not too complex, this approach is not very time consuming. For complex geometries, approximate distance functions using R-functions [47] or pre-trained deep neural networks [48] can be used. This modified output contributes to the physics loss of the new PINN. In this network, the objective function consists only of the PDE residual \(\mathcal{L}_{R}\), and the network is trained with the same PDE. This approach allows us to exactly satisfy the Dirichlet boundary conditions using PINN.

## 3 Mesh Movement

Mesh movement strategies that deform the mesh with a moving boundary can generally be performed by solving a PDE or using an interpolation scheme [49]. All of these techniques aim to prescribe the displacement of the moving boundary and propagate this movement into the domain. Methods based on a PDE solution generally model the mesh movement as a physical process which can be solved using numerical methods. One popular approach models the domain with torsional springs that prevent the vertices from colliding [9]. In a similar manner, this movement can be modeled with an elastic [8, 10] or hyperelastic [11] analogy, where the computational domain is simulated as an elastic body. Nonlinear elasticity equations with neo-Hookean models can be used in the same way as the elastic equations [50]. Other techniques treat mesh deformation as a diffusive system modeled with the Laplacian or biharmonic equations [14]. All these PDEs can be solved using traditional numerical methods such as FEM. Interpolation schemes consider the mesh movement as a problem of interpolation from the boundaries to the domain. These schemes use interpolation on scattered data and generally do not need connectivity information. Using radial basis functions (RBF) is one of the common methods. In [51], de Boer et al. use RBF interpolation on unstructured grids to estimate the movement. The equation system only involves the boundary nodes, and the displacement of the whole mesh is modeled.
Extending this method, in [52], the authors use data reduction algorithms based on a coarse subset of the surface mesh. With greedy algorithms, this approach is effective, especially for mesh motion problems with smooth surface deformations. In this work, we used one of the common PDEs for mesh movement. The mesh motion is calculated by using the linear elasticity equation from structural mechanics. The coordinates of the nodes will be defined as \(\mathbf{u}\), the computational domain is referred to as \(\Omega\), and the boundaries are \(\partial\Omega\). Boundaries also include the moving objects inside the meshes. The new coordinates of the moving and stationary boundaries are given as the Dirichlet boundary condition. The movement of an object inside the mesh deforms the computational domain, which is modeled as an elastic body. The new coordinates can be found by the following linear elasticity equation: \[\nabla\cdot\boldsymbol{\sigma}(\mathbf{u})=0\quad\text{in }\Omega \tag{8a}\] \[\mathbf{u}=\mathbf{u}_{b}\quad\text{on }\partial\Omega. \tag{8b}\] Here \(\boldsymbol{\sigma}\) is the Cauchy stress tensor. It is related to the strain tensor \(\boldsymbol{\epsilon}=(\nabla\mathbf{u}+\nabla\mathbf{u}^{T})/2\). The stress tensor is given by Hooke's law: \[\boldsymbol{\sigma}=\lambda\text{tr}(\boldsymbol{\epsilon})\mathbf{I}+2\mu\boldsymbol{\epsilon}. \tag{9}\] The Lamé parameters \(\lambda\) and \(\mu\) are structural parameters coming from the elastic modulus \(E\) and Poisson's ratio \(\nu\). Since the mesh domain is not a real elastic body, the exact values for these parameters are not known. A value between \(0.3\) and \(0.45\) is recommended for Poisson's ratio since a high value can lead to distorted elements, and a lower value can reduce the resistance [50]. To be able to compare the effectiveness of different mesh movement techniques after a deformation, we use a mesh quality metric based on [10]. In these metrics, the area and shape changes are considered by checking the element area and the aspect ratio. Both metrics use the initial mesh elements as reference elements and measure the change relative to them. The element area change \(f^{e}_{A}\) and shape change \(f^{e}_{AR}\) are defined as: \[f^{e}_{A} =\left|\log\left(\frac{A^{e}}{A^{e}_{o}}\right)/\log(2.0)\right|, \tag{10a}\] \[f^{e}_{AR} =\left|\log\left(\frac{AR^{e}}{AR^{e}_{o}}\right)/\log(2.0)\right|. \tag{10b}\] Here, the superscript \(e\) represents the specific element, and the subscript \(o\) is the initial mesh element before the deformation occurs. \(AR^{e}\) is the element aspect ratio defined in [10] as: \[AR^{e}=\frac{(l^{e}_{max})^{2}}{A^{e}}. \tag{11}\] Here, \(l^{e}_{max}\) is the maximum edge length for the specific element. For comparison of different techniques, we use the global area and shape changes by considering the maximum values of the element area and shape changes, respectively.

## 4 Results

The movement of dynamic meshes with PINN is presented with several different test cases. First, a deformed square is presented where we squeeze the domain from the top and bottom. Then, the basic translation and rotation tests are performed and the solutions of the PINN approach are compared with the finite element solutions. Lastly, the movement of a flexible beam is presented where one end of the beam is fixed. For all the problems, initial meshes are generated using the Gmsh mesh generator [53]. We used TensorFlow to construct our PINN framework with the Adam optimizer as the gradient descent algorithm.
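Since the comparisons in this section rely on the quality measures of Equations (10) and (11), the following NumPy sketch shows how they can be evaluated element by element. It is illustrative only; the helper names and the assumption that each element is stored as a (3, 2) array of triangle vertex coordinates are choices made for the example.

```python
import numpy as np

def tri_area(p):
    # Area of a triangle with vertex coordinates p of shape (3, 2).
    return 0.5 * abs((p[1, 0] - p[0, 0]) * (p[2, 1] - p[0, 1])
                     - (p[1, 1] - p[0, 1]) * (p[2, 0] - p[0, 0]))

def aspect_ratio(p):
    # Equation (11): AR^e = (maximum edge length)^2 / A^e.
    edges = [p[1] - p[0], p[2] - p[1], p[0] - p[2]]
    l_max = max(np.linalg.norm(e) for e in edges)
    return l_max ** 2 / tri_area(p)

def quality_changes(p_def, p_ref):
    # Equation (10): area change f_A^e and shape change f_AR^e of a deformed
    # element p_def relative to its undeformed reference element p_ref.
    f_a = abs(np.log(tri_area(p_def) / tri_area(p_ref)) / np.log(2.0))
    f_ar = abs(np.log(aspect_ratio(p_def) / aspect_ratio(p_ref)) / np.log(2.0))
    return f_a, f_ar
```

The global measures \(|f_{A}|_{\infty}\) and \(|f_{AR}|_{\infty}\) reported below are then simply the maxima of these element values over the mesh.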
We initialized all the neural networks using the Glorot scheme and used 7 hidden layers with 50 units. The classical neural networks are trained for 40000 iterations, and the networks with exact boundary enforcement are trained for 5000 iterations. The learning rate is \(10^{-3}\) with a decay rate of 0.9. The Lamé parameters are selected as \(\mu=0.35\) and \(\lambda=1\) as recommended in [50].

### Deformed Square

In this test case, a square domain is deformed from its boundaries. The square domain is \((x,y)\in[0,1]\times[0,1]\) and the unstructured mesh consists of 2744 triangular elements. The initial mesh can be seen in Figure 2. We want to find a deformed mesh where the position of the top boundary becomes \(\hat{y}=y-0.25\sin(\pi x)\). On the top surface, we implement this condition as a Dirichlet boundary condition, as well as \(\hat{x}=x\). All the other boundaries have the Dirichlet boundary conditions \(\hat{y}=y\) and \(\hat{x}=x\). The deformed mesh can be seen in Figure 2. The figure in the middle shows the results obtained by only using classical PINN. This shows that the boundaries, especially the corners, are not in the exact position and are deformed in an undesired way. The figure on the right shows the solution after exact boundary enforcement. The boundary values are corrected with the exact positions with the proposed approach. The \(L_{2}\) error on the boundary nodes is calculated as 0.031. For this test case, we increased the specific weight of the boundary loss in the composite loss function in Equation 5. Since the deformation of the boundary is higher than the deformation of the computational domain, the boundary weight is increased. The weight ratio of the boundary loss and the residual loss is set to 25 to capture the boundary values more precisely. The mesh quality measure of the deformed mesh based on the element area and shape changes can be seen in Figure 3. The top surface is deformed according to a sinusoidal function. The elements near the deformed boundary have the most change in size and shape, as expected. Especially in the middle, where the deformation is the largest, the elements are squeezed and get smaller. In the corners, where the element vertices have two boundary conditions in each direction, the element area change is not significantly large. However, the shape of the corner elements changes more than that of the other elements on the boundary. These elements are bounded by the two boundaries and therefore the aspect ratios get larger. The deformation of the inner elements is relatively low, especially near the bottom boundary. The mesh deformation metrics get lower as the elements' position moves away from the deformed boundary. The global area and shape change metrics are calculated as \(|f_{A}^{\infty}|=0.744\) and \(|f_{AR}^{\infty}|=1.264\), respectively. To see the capabilities of our approach, we further deform the bottom boundary so that its coordinates become \(\hat{y}=y+0.25\sin(\pi x)\). The Dirichlet boundary conditions on the stationary boundaries are the same as before, \(\hat{x}=x\), \(\hat{y}=y\). The same specific weight ratio for the loss function of the PINN formulation is used. The deformed configuration can be seen in Figure 4. The figure in the middle is the solution with the classical PINN approach. The vertices on the boundaries are not in exact positions.

Figure 2: Initial and deformed meshes of the deformed square case with its deformed top boundary.
The unstructured mesh consists of 2744 triangular elements. The first deformed figure shows the solution with classical PINN. The last figure represents the solution with exact boundary enforcement.

Especially on the corners, the classical PINN solution has difficulty satisfying the positions. The \(L_{2}\) error of the boundary positions is calculated as \(0.076\) for this case. The elementwise quality measures of this case can be seen in Figure 5. The elements on the top and the bottom boundaries are deformed the most, the same as in the previous case. The elements in the middle collapsed more than in the case before. The global area and shape change values are \(|f_{A}^{\infty}|=1.701\) and \(|f_{AR}^{\infty}|=1.845\), respectively. The element shape and size change significantly as the deformation is increased.

### Translation and Rotation tests

To test the accuracy of our approach, the translation and rotation tests in [10] are performed. The original mesh can be seen in Figures 6 and 7. There is a line object located in \((-L,0)\times(L,0)\) in a \((-2L,-2L)\times(2L,2L)\) domain. A total of 2182 triangles are generated for the mesh. For the translation tests, the object is moved \(0.5L\) upwards. The movement is performed in 10 steps with \(0.05L\) and in 5 steps with \(0.1L\) movement upwards in two different training settings. The last step of the movement can be seen in Figure 6. In Figure 8, the PINN method is compared with the approach in [10]. The area and shape change metrics of two PINN solutions are presented alongside the classical finite element solutions and the solutions with Jacobian-based stiffening. The authors applied a stiffening power to prevent the deformation of the smaller elements. The stiffened approach represents the best value obtained in [10] with different applied stiffening powers. The two PINN solutions represent the overall motion in 5 and 10 steps. The total number of steps is represented in parentheses in the figure. As seen in the first row of Figure 8, the PINN solutions are comparable with the FEM solutions with Jacobian-based stiffening. As mentioned before, the PINN approach does not have any criteria to prevent mesh overlapping, and sudden movements move the vertex nodes in an undesired way. Therefore, the quality of the deformed mesh improves as the number of steps increases.

Figure 3: Element quality metrics of the square with deformed top boundary. The figure on the left shows the element area change and the figure on the right shows the element shape change with respect to the initial mesh elements.

Figure 4: Initial and deformed meshes of the deformed square case. The square is squeezed from its top and bottom boundary. The first deformed figure shows the solution with classical PINN. The last figure represents the solution with exact boundary enforcement.

Figure 5: Element quality metrics of the square deformed from the top and bottom boundaries. The figure on the left shows the element area change and the figure on the right shows the element shape change with respect to the initial mesh elements.

For the rotation tests, the object is rotated \(0.25\pi\) counterclockwise. Again, to prevent overlapping of edges and collision of vertices, the movement is performed in steps with \(0.025\pi\) and \(0.05\pi\) counter-clockwise movement in each step in two different training settings. The last step of the rotation can be seen in Figure 7. The deformed mesh differs especially on the boundaries between different PINN solutions.
The small elements near the moving boundary start to collapse in the PINN solution with 5 steps. As the number of steps increases, the mesh quality increases. The comparison of the rotation tests with the same finite element solution of the translation tests is presented in Figure 8. The PINN approach again lies between the classical solution and the solution with Jacobian-based stiffening. In both tests, the global mesh quality metric presented in Section 3 is used. The \(|f_{A}|_{\infty}\) and \(|f_{AR}|_{\infty}\) are calculated as the maximum element area and shape changes of Equation 10 in every step. The area change and shape change values are presented in Tables 1 and 2, for the translation and rotation tests, respectively.

\begin{table} \begin{tabular}{c|c c c c c c c c c c} \hline \hline \(\Delta y\) & 0.05 & 0.1 & 0.15 & 0.2 & 0.25 & 0.3 & 0.35 & 0.4 & 0.45 & 0.5 \\ \hline \(|f_{A}|_{\infty}\) & 0.667 & 1.196 & 1.189 & 1.291 & 1.518 & 1.627 & 1.697 & 1.715 & 1.791 & 1.998 \\ \(|f_{AR}|_{\infty}\) & 0.596 & 1.125 & 1.118 & 1.220 & 1.447 & 1.556 & 1.626 & 1.663 & 1.921 & 2.258 \\ \hline \hline \end{tabular} \end{table} Table 1: Global area and shape changes of translation tests. The solution is performed in 10 steps. The values are given in every step.

Figure 6: Initial mesh and deformed mesh after a total translation of 5 units. The solution in the middle is performed in 10 steps while the solution on the right is performed in 5 steps.

Figure 7: Initial mesh and deformed mesh after a total rotation of \(0.25\pi\). The solution in the middle is performed in 10 steps while the solution on the right is performed in 5 steps.

### Flexible Beam

This test case consists of mesh movement due to the motion of a flexible beam, adapted from the problem in [50]. The beam is fixed on its left end and sits in the center of the domain. Domain dimensions are \((-10,10)\times(-10,10)\) and the structure's position is \((-5,5)\times(-0.5,0.5)\). The deformation is based on a sinusoidal function \(\sin(\frac{\pi}{2}\frac{\pi}{L})\) with varying amplitude. The initial mesh can be seen in Figure 9. This unstructured mesh consists of 2098 triangular elements. The right end of the structure first moves 4 units upwards, then 8 units downwards, followed by a 4-unit upward motion to return to its initial state. The movements are performed in steps with 2-unit motions, upwards or downwards. In Figure 10, the deformed mesh after two steps of movement is presented, with the mesh quality given by the global area and shape change metrics. Using exact boundary enforcement gives the true boundary position and therefore fixes the vertices on the boundaries. Therefore, on the outer boundaries, elements are stretched and squeezed more than the inner elements. Especially elements near the tip of the moving boundary have the most area and shape changes. In Figure 11, the mesh after one cycle of motion is presented. The structure returns to its original place after eight iterations. By looking at the area change, the sinusoidal motion of the structure can be observed. The most deformed elements are located at the top and bottom boundaries and near the moving tip of the structure.
These elements are squeezed first and cannot recover themselves after the relative stretching.

Figure 9: Initial mesh of the flexible beam test case with 2098 triangular elements. The elements are concentrated on the moving boundary to track the deformation in a precise way.

Figure 10: Element quality metrics when the structure tip moves to \(y=4\).

Figure 11: Element quality metrics when the structure returns to its original position.

## 5 Conclusion and Future Work

In this work, we solved mesh deformation problems with physics-informed neural networks. The selected method uses the linear elastic model since PINN can give accurate results for solving this type of PDE. We note that vertex nodes are moved according to the boundary movement. Moreover, exact boundary values are enforced to satisfy the Dirichlet boundary conditions exactly. We tested this approach with translation and rotation tests and compared it with finite element solutions. We showed that the PINN solution is comparable with the FEM solutions. The deformation is performed in numerous steps instead of a sudden movement. This prevents vertex collision and edge overlapping. We showed that as the number of steps is increased, the deformed mesh quality gets higher. For greater mesh quality, the number of steps can be increased. The mesh movement method in this paper only includes linear elastic equations, although it can be extended to other techniques. Other commonly used methods such as the Laplacian or biharmonic equations are also applicable to the PINN formulation. Our future work aims to use other methods that prevent mesh overlapping in the training of the PINN. The network parameters and formulation can be extended in a way that vertex collisions and edge overlapping are prevented.
2306.00621
Optimal execution and speculation with trade signals
We propose a price impact model where changes in prices are purely driven by the order flow in the market. The stochastic price impact of market orders and the arrival rates of limit and market orders are functions of the market liquidity process which reflects the balance of the demand and supply of liquidity. Limit and market orders mutually excite each other so that liquidity is mean reverting. We use the theory of Meyer-$\sigma$-fields to introduce a short-term signal process from which a trader learns about imminent changes in order flow. In this setting, we examine an optimal execution problem and derive the Hamilton--Jacobi--Bellman (HJB) equation for the value function. The HJB equation is solved numerically and we illustrate how the trader uses the signal to enhance the performance of execution problems and to execute speculative strategies.
Peter Bank, Álvaro Cartea, Laura Körber
2023-06-01T12:45:01Z
http://arxiv.org/abs/2306.00621v2
# Optimal execution and speculation ###### Abstract We propose a price impact model where changes in prices are purely driven by the order flow in the market. The stochastic price impact of market orders and the arrival rates of limit and market orders are functions of the market liquidity process which reflects the balance of the demand and supply of liquidity. Limit and market orders mutually excite each other so that liquidity is mean reverting. We use the theory of Meyer-\(\sigma\)-fields to introduce a short-term signal process from which a trader learns about imminent changes in order flow. In this setting, we examine an optimal execution problem and derive the Hamilton-Jacobi-Bellman (HJB) equation for the value function. The HJB equation is solved numerically and we illustrate how the trader uses the signal to enhance the performance of execution problems and to execute speculative strategies. + Footnote †: P. Bank and L. Köber are supported by Deutsche Forschungsgemeinschaft through IRTG 2544. L. Köber is grateful to the Oxford-Man-Institute of Quantitative Finance for their generous hospitality. We are also grateful to participants at the Mathematical Finance Seminar at Imperial College, the German Probability and Statistics Days in Essen, the SIAM Conference on Financial Mathematics and Engineering in Philadelphia and the Women in Mathematical Finance Conference at Rutgers University. **Mathematical Subject Classification (2022)**: 91B70, 93E20 **Keywords**: Optimal execution, speculation, trade signal, Meyer \(\sigma\)-field, stochastic optimal control ## 1 Introduction In this paper, we propose a reduced-form model where the evolution of prices is determined by the order flow in the market. In our model, changes in the asset's price are caused by market order flow, which is modelled as a pure jump process. Thus, we do not require exogenous dynamics for the evolution of the unaffected fundamental price as is generally proposed in standard models including Almgren and Chriss (2000) and Obizhaeva and Wang (2013). Here, the price impact of a market order depends on the market's liquidity which is given by the difference between the volume of posted limit orders and the volume of market orders and cancellations. When this difference is high, i.e., when the market is liquid, the price impact of the next market order is low; and when this difference is low, the price impact is high. Our model may be viewed as a permanent price impact model with finite _market depth_ similar to that in Huberman and Stanzl (2004). However, our impact specification depends on liquidity and we do not require an exogenous fundamental price. For simplicity, we do not distinguish between the volumes posted on the bid or ask side of the book, a model extension we leave for future research. This allows us to specify the _market's tightness_ with a constant spread. The _market's resilience_ is captured by letting the arrival rates of market orders, cancellations and limit orders depend on the liquidity in the market: when the level of liquidity is low, the arrival rate of further market orders is low and the arrival rate of limit orders is high, and vice-versa when liquidity is high, see Cartea et al. (2020) who provide empirical evidence of this effect based on data from the Nasdaq stock exchange. As a consequence, resilience is an endogenous feature in our model as opposed to models such as the one by Obizhaeva and Wang (2013) where resilience is an exponential relaxation of price impact towards zero. 
We introduce a trader who uses market orders to complete an execution programme; her objective is to maximize the expected utility of terminal wealth, and she does not provide liquidity to the book. She receives a private signal about the arrival of other limit and market orders and uses it to execute informed trades before other orders arrive in the book. In our model, the trader's market orders affect the price of the asset and the liquidity in the market in the same way as the _external_ orders sent by other market participants. Due to the resilience of the book in our model, the trader's orders will trigger liquidity provision, which may incentivise pump-and-dump schemes. In such cases, to ensure a well-functioning market, we introduce a circuit breaker that imposes a minimum liquidity requirement for the market to continue operating so that the profitability of pump-and-dump schemes is limited. When a liquidity taking order exhausts liquidity below the minimum liquidity requirement, the circuit breaker is triggered, trading is halted, and positions are executed in an auction where the price is randomly drawn from a Gaussian distribution. In a market with a circuit breaker, the trader's optimal execution problem is well-posed and the value function is non-degenerate. The optimization problem is a singular stochastic control problem when trading is continuous, and an impulse control problem when trading at a minimum lot size. We give a direct proof of continuity for the value function, which is non-trivial here because of the unbounded state space and controls in a setting with Hawkes-like jump dynamics. We apply the dynamic programming approach to derive the Hamilton-Jacobi-Bellman Quasi-Variational Inequality (HJBQVI) for both continuous and discrete trading. As a novelty, our HJBQVI contains a double integral straddling a sup-operator that yields the optimal signal-based trade. Mathematically, the trader's information flow with the signal is modelled as a Meyer-\(\sigma\)-field, see Lenglart (1980). The technique of Meyer-\(\sigma\)-fields was first applied in stochastic optimization by El Karoui (1981) in the context of optimal stopping. Bank and Besslich (2020) apply the theory of Meyer-\(\sigma\)-fields in a singular stochastic control problem which is solved with convex analysis tools. In Merton's optimal investment problem, Bank and Körber (2022) use a Meyer-\(\sigma\)-field to incorporate a short-term signal about jumps in the price of the asset and use dynamic programming methods to solve for the optimal investment strategy. We use simulations to study the trader's execution strategies. The trader uses signals about liquidity taking orders to optimize the times of execution and the trading volume. Specifically, upon receiving the signal of an imminent arrival of a liquidity taking order, the trader may submit a market order before the external order arrives and impacts the price. As the bid-ask spread widens, a signal about liquidity provision becomes irrelevant for the execution programme of a trader because there is no incentive to execute a market order before liquidity increases in the book and the price impact of market orders decreases. For a narrow bid-ask spread, we find that the trader uses the signal about liquidity provision to start speculative roundtrip trades when liquidity is low and completes the roundtrip after liquidity has recovered upon receiving a signal about a liquidity taking order.
Trading on information from signals increases the average terminal wealth, and it also increases the variance of the distribution of terminal wealth. In other words, to extract value from the signal, the trader uses strategies that increase the expected gain and that increase the risk of the financial performance of the execution programme. There exists a broad range of literature on informed trading. Important seminal works are those by Kyle (1985) and Back (1992). More recent work on optimal trading with market signals is in Cartea and Jaimungal (2016), who examine optimal execution with general Markovian signals and derive closed-form optimal strategies; see also Casgrain and Jaimungal (2019), who incorporate latent factors. Similarly, Lehalle and Neuman (2019) and Neuman and Voss (2022) study optimal trading with signals when there is transient price impact, and Belak et al. (2018) use non-Markovian finite variation signals. For a market maker, Cartea and Wang (2020) show how to use a signal of the trend of the price of an asset in optimal liquidity provision strategies; similarly, Lehalle and Mounjid (2017) study optimal market maker strategies with a signal on liquidity imbalance. More recently, Cartea et al. (2022) use signatures of the market to generate signals and Cartea and Sanchez-Betancourt (2022) study how a broker provides liquidity to informed and uninformed traders. The feature that the trader's market orders influence the arrival rate of other limit and market orders in the same way as external orders is a novelty compared with the existing literature on stochastic optimal control with Hawkes processes. For instance, Alfonsi and Blanc (2016) solve an optimal execution problem in a modified version of the model in Obizhaeva and Wang (2013) where the external order flow is modelled as a Hawkes process that is not influenced by the trader. Similarly, Cartea et al. (2018) study a framework where the fill rates of the trader's controlled limit order flow are driven by an external, uncontrolled Hawkes process. The work of Caye and Muhle-Karbe (2014) proposes a modification of the Almgren and Chriss (2000) model where the past orders of the trader have a self-exciting effect on the price impact. In Horst et al. (2020), the trader influences the base intensity of the mutually exciting external market order dynamics, so that the order flow of the trader influences the intensity process in a different way from the external order flow. The remainder of this paper is organized as follows: Section 2 presents the market model and introduces a trader who receives signals on the order flow. Section 3 examines the problem of optimal investment and execution in a market with a circuit breaker and illustrates the performance of the optimal strategy with trading signals. Results and proofs are collected in the Appendix.

## 2 The model

In this section, we present a model of stock prices where innovations in prices are driven by the flow of market orders. First, we introduce a market model where the arrival of market orders and limit orders is driven by a marked Poisson point process. The arrival rate of orders is a function of the liquidity in the market. At times of high liquidity, market orders arrive more frequently, while at times of low liquidity, the arrival rate of limit orders increases. This feature of the model introduces resilience in the supply and demand of liquidity. Next, we introduce a strategic trader who receives signals about order flow and executes market orders.
In our model, all orders arriving in the market, whether external or sent by the trader, have the same effect on the dynamics of liquidity and prices.

### The uncontrolled model

In the following, we present a model for the price dynamics \((P_{t})_{t\geq 0}\) of a single asset where price changes are driven by the market order flow \((M_{t})_{t\geq 0}\). Market liquidity \(\lambda_{t}\) measures the capacity of the market to fill an incoming market order at every point in time \(t\geq 0\). The change in liquidity is the difference between the net flow of posted and cancelled limit orders \((L_{t})_{t\geq 0}\) and the volume of incoming market orders. For simplicity, we assume that the liquidity of the buy and sell sides is the same. Specifically, the liquidity process \((\lambda_{t})_{t\geq 0}\) satisfies the dynamics \[d\lambda_{t}=dL_{t}-|dM_{t}|, \tag{1}\] where the limit order flow satisfies \[dL_{t}=\int_{E\times\mathbb{R}_{+}}\mathbbm{1}_{\{y\leq g(\lambda_{t-})\}}\rho(e)N(dt,de,dy), \tag{2}\] and the market order flow satisfies \[dM_{t}=\int_{E\times\mathbb{R}_{+}}\mathbbm{1}_{\{y\leq f(\lambda_{t-})\}}\eta(e)N(dt,de,dy). \tag{3}\] Here, \(N\) is a marked Poisson point process on \([0,\infty)\times E\times\mathbb{R}_{+}\) with compensator \(dt\otimes\nu(de)\otimes dy\), where we assume \(\nu(E)=1\). The mappings \(\eta,\rho\in L^{1}(\nu)\) determine the volume of the order associated with a mark \(e\) that the point process \(N\) sets in the Polish mark space \(E\), and \(\nu(de)\) is the probability distribution for this mark.1 When \(\rho(e)>0\), a new limit order of size \(\rho(e)\) is posted, and \(\rho(e)<0\) corresponds to a cancellation of limit orders of size \(|\rho(e)|\). Similarly, \(\eta(e)>0\) corresponds to a buy market order of size \(\eta(e)\) and \(\eta(e)<0\) is a sell market order of size \(|\eta(e)|\). Modelling the dynamics of limit and market orders with the same marked Poisson point process allows us to specify the dependence structure between the arrival of orders in a tractable way. To exclude the simultaneous arrival of limit orders and market orders (which would be cumbersome for bookkeeping), we assume \(\eta(e)\rho(e)=0\) for all \(e\in E\). Footnote 1: For a measurable function \(f:E\to\mathbb{R}\) and a measure \(\mu\) on \(E\), the notation \(f\in L^{p}(\mu)\) means that \(f\) is measurable with \(\int_{E}|f(e)|^{p}\mu(de)<\infty\). The functions \(f\) and \(g\) are of at most linear growth, i.e., \[|f(\lambda)|+|g(\lambda)|\leq c_{1}+c_{2}|\lambda|,\quad\lambda\in\mathbb{R}, \tag{4}\] for finite constants \(c_{1},c_{2}\geq 0\), so the market dynamics \((\lambda,L,M)\) admit a unique solution. **Lemma 2.1**.: _Suppose the functions \(f\) and \(g\) are of at most linear growth, i.e., they satisfy (4). Then, the market dynamics (1), (2), (3) admit a unique solution \((\lambda,L,M)\)._ For a proof, see the Appendix. The continuous, strictly positive functions \(f\) and \(g\) specify how the arrival rates of the orders depend on the liquidity \(\lambda\) of the market. We assume that \(f\) is increasing and \(g\) is decreasing, and assume \[\int_{E}\rho(e)\nu(de)>0 \tag{5}\] to introduce resilience of liquidity provision in a market with the Hawkes-like liquidity dynamics in (1). More precisely, as liquidity decreases, fewer market orders will arrive and limit orders will arrive more frequently -- and vice versa when liquidity increases, where (5) guarantees that liquidity provision dominates cancellations of limit orders.
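To illustrate the mean-reverting behaviour implied by (1)-(3) and (5), the following is a crude Euler-type simulation of the liquidity process on a small time grid; it is not the exact point-process construction. The intensity functions, the order-size distributions, and the time step are assumptions chosen only for this sketch, with \(f\) increasing and \(g\) decreasing in \(\lambda\) and limit-order volumes positive on average.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed specifications for the sketch only:
f = lambda lam: 0.5 + 1.0 * max(lam, 0.0)      # market-order intensity, increasing in liquidity
g = lambda lam: 2.0 / (1.0 + max(lam, 0.0))    # limit-order intensity, decreasing in liquidity

def simulate_liquidity(lam0=1.0, T=10.0, dt=1e-3):
    """Euler-type discretization of d(lambda_t) = dL_t - |dM_t|, Eq. (1)."""
    lam, path = lam0, [lam0]
    for _ in range(int(T / dt)):
        # Thinning by f: a market order of size |eta(e)| arrives with probability ~ f(lam) dt.
        if rng.random() < f(lam) * dt:
            lam -= rng.exponential(0.1)
        # Thinning by g: a limit order or cancellation of size rho(e) arrives with
        # probability ~ g(lam) dt; rho may be negative but is positive on average, cf. (5).
        if rng.random() < g(lam) * dt:
            lam += rng.normal(0.12, 0.05)
        path.append(lam)
    return np.array(path)
```

With these illustrative choices, the simulated path of \(\lambda\) drifts back towards a moderate level after large market orders, which is the resilience effect described above.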
Next, we specify the price dynamics of the asset as \[dP_{t}=I(\Delta_{t}M,\lambda_{t-}), \tag{6}\] with the price impact function \(I(\cdot\,,\cdot)\) defined by \[I(\Delta,\lambda):=\operatorname{sgn}(\Delta)\int_{0}^{|\Delta|}\iota( \lambda-z)dz,\quad\lambda\in\mathbb{R}, \tag{7}\] where \(\operatorname{sgn}(\,\cdot\,)\) denotes the sign function. Here, the function \(I(\Delta,\lambda)\) describes the price impact of a market order of size \(\Delta\in\mathbb{R}\) that arrives when the liquidity of the market is \(\lambda\). The sign of \(I(\Delta,\lambda)\) is determined by that of \(\Delta\): buy orders increase the price and sell orders decrease it. The function \(\iota:\mathbb{R}\to\mathbb{R}_{+}\) is non-increasing to ensure that market orders have less impact when liquidity is high. Similarly, the absolute value of the function \(I\) is increasing in the size \(|\Delta|\) because, everything else being equal, as the volume of market orders increases, so does the impact of the order on the price of the asset. Finally, definition (7) makes the price dynamics (6) consistent when orders are split, i.e., \(I(\Delta,\lambda)=I(\Delta_{1},\lambda)+I(\Delta_{2},\lambda-|\Delta_{1}|)\) for \(\Delta_{1}+\Delta_{2}=\Delta\) with \(\operatorname{sgn}(\Delta_{1})=\operatorname{sgn}(\Delta_{2})\). Our model exhibits a direct link between price volatility and market liquidity because price volatility is determined by the arrival rate and the size of price changes which both depend on the liquidity in the market. More precisely, market liquidity affects the arrival rate of market orders, i.e., the arrival rate of price changes, through the dynamics as in (3) and the size of price changes in (6) through the price impact function \(I\) as in (7). For instance, when liquidity in the market decreases, the arrival rate of market orders decreases and the size of the price impact of market orders increases. The next lemma makes this link between volatility and liquidity explicit and states an elasticity condition under which the volatility of prices decreases with market liquidity. **Lemma 2.2**.: _The predictable quadratic variation of the price \(P\) is given by_ \[d\langle P\rangle_{t}=\sigma^{2}(\lambda_{t-})dt, \tag{8}\] _where_ \[\sigma(\lambda):=\left(f(\lambda)\int_{E}I(\eta(e),\lambda)^{2}\nu(de) \right)^{1/2}. \tag{9}\] _The function in (9) is strictly decreasing in \(\lambda\) under the elasticity condition_ \[0<\left.\frac{\partial_{\lambda}f(\lambda)}{f(\lambda)}\right/ \left(-\frac{\partial_{\lambda}\mathbb{I}^{2}(\lambda)}{\mathbb{I}^{2}(\lambda )}\right)<1, \tag{10}\] _where \(\mathbb{I}(\lambda):=\big{(}\int_{E}I(\eta(e),\lambda)^{2}\nu(de)\big{)}^{1/2}\) is the \(L^{2}\)-norm of the price impact size from market orders at liquidity level \(\lambda\)._ For a proof, see the Appendix. The elasticity condition compares the relative change in the arrival rate \(f\) of market orders with the absolute value of the relative change in the impact size of market orders under the \(L^{2}\)-norm. Thus, condition (10) says that when liquidity decreases, the price impact increases at a higher rate than the arrival rate of market orders decreases. Hence, price volatility rises with low liquidity because the increase of the price impact of market orders at low liquidity compensates for the decrease in the arrival rate of market orders. ### The controlled model with trading signal Next, we introduce a trader who follows an optimal execution programme. 
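The price impact map in (7) is straightforward to evaluate numerically. The sketch below uses an assumed impact density \(\iota(\lambda)=e^{-\lambda}\) (any positive, non-increasing choice would do) and checks the order-splitting consistency stated above; it is purely illustrative.

```python
import numpy as np

iota = lambda lam: np.exp(-lam)               # assumed non-increasing impact density

def price_impact(delta, lam, n=20_000):
    # Eq. (7): I(Delta, lambda) = sgn(Delta) * int_0^{|Delta|} iota(lambda - z) dz,
    # evaluated with the trapezoid rule.
    if delta == 0.0:
        return 0.0
    z = np.linspace(0.0, abs(delta), n + 1)
    v = iota(lam - z)
    return np.sign(delta) * (abs(delta) / n) * 0.5 * (v[:-1] + v[1:]).sum()

# Splitting consistency: I(D1 + D2, lam) = I(D1, lam) + I(D2, lam - |D1|).
lam, d1, d2 = 2.0, 0.3, 0.4
lhs = price_impact(d1 + d2, lam)
rhs = price_impact(d1, lam) + price_impact(d2, lam - abs(d1))
print(lhs, rhs)                               # agree up to quadrature error
```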
For simplicity, we assume that the trader sends market orders and does not provide liquidity to the market. To describe the trader's information structure, we introduce the (right-continuous) filtration \[\mathcal{F}_{t}=\sigma(N([0,s]\times E\times[0,y]),s\in[0,t],E\in\mathcal{E}, y\in\mathbb{R}_{+}),\quad t\geq 0\] generated by the point process \(N\). The usual information framework of the predictable information flow \(\mathcal{P}(\mathcal{F})\) contains only past information about the order flow and a trader with information \(\mathcal{P}(\mathcal{F})\) can only trade after the arrival of external limit and market orders are revealed. Thus, she can only react to market events after they have happened. Here, however, the trader receives a short-term signal about imminent changes in the order flow. She uses this information to anticipate jumps in liquidity and in prices, and executes signal-based trades before the external order arrives in the market. The signal is given by the process \[Z_{t}=\int_{[0,t]\times E\times\mathbb{R}_{+}}z(e,y)N(ds,de,dy),\quad t\geq 0, \tag{11}\] where \(N\) is the same marked Poisson point process that also drives the external order flows (2) and (3). The function \(z\in L^{1}(\nu(de)\otimes dy)\) determines what the trader learns about any mark \((e,y)\) set by \(N\). We introduce the Meyer-\(\sigma\)-field (cf.(Lenglart, 1980, Def. 2)) \[\Lambda:=\mathcal{P}(\mathcal{F})\vee\sigma(Z),\] which adds the information \(\sigma(Z)\) of the signal \(Z\) to the predictable information \(\mathcal{P}(\mathcal{F})\). The trader's strategy is described by a process \((C_{t})_{t\geq 0}\) of locally bounded variation starting at \(C_{0-}=0\) and whose changes represent changes in the trader's inventory so that at any time \(t\) the net number of shares sold or bought up to this moment is given by \(C_{t}\). Trades can be of any size, i.e., \(D=\mathbb{R}\), or -- in line with market practice -- a multiple of a minimal lot size \(\delta>0\), so \(D=\{...,-2\delta,-\delta,0,\delta,2\delta,...\}\). The set of admissible controls is therefore \[\mathcal{C}=\Big{\{}C\ \Lambda\text{-measurable of bounded total variation, with }C_{0-}=0,\ \text{and}\ \Delta^{l}C,\Delta^{r}C\ \text{D-valued}\Big{\}}. \tag{12}\] Here, \(\Delta^{l}_{t}C=C_{t}-C_{t-}\) and \(\Delta^{r}_{t}C=C_{t+}-C_{t}\) denote, respectively, the 'left' and the 'right' jumps of \(C\) at time \(t\geq 0\). They correspond to the proactive signal-based trades and reactive state-based trades we introduce shortly. The following lemma provides a representation of the trader's strategy and clarifies how to use the signal \(\Delta_{t}Z\) in her strategy. **Lemma 2.3**.: _A process \(C\) is a \(\Lambda\)-measurable process of locally bounded variation if and only if it admits the decomposition_ \[C_{t}=C_{t}^{c}+\sum_{0\leq s\leq t}\Delta_{s}^{l}C+\sum_{0\leq s<t}\Delta_{s}^{ r}C, \tag{13}\] _where \(C^{c}\) is a continuous, adapted process of bounded variation, \(\Delta^{r}C\) is an adapted process of bounded variation, and where_ \[\Delta_{t}^{l}C=\int_{\{t\}\times E\times\mathbb{R}_{+}}\Gamma_{s}(z(e,y))N(ds, de,dy) \tag{14}\] _for some \(\mathcal{P}(\mathcal{F})\otimes\mathcal{B}(\mathbb{R})\)- measurable field \(\Gamma\) satisfying both \(\Gamma_{s}(0)=0\) and the integrability condition_ \[\int_{[0,t]\times E\times\mathbb{R}_{+}}|\Gamma_{s}(z(e,y))|N(ds,de,dy)<\infty \quad a.s.\qquad\text{for all }t\geq 0. 
\tag{15}\] Thus, upon observing a signal \(\Delta_{t}Z\), the trader sends the order \(\Delta_{t}^{l}C=\Gamma_{t}^{l}(\Delta_{t}Z)\) before the full information about external orders becomes known to all market participants and its effect materialises, i.e., liquidity and prices change as a result. Hence, we call \(\Delta_{t}^{l}C\) the _signal-based trade_. In contrast, the right jump \(\Delta_{t}^{r}C\) is the trader's action after the arrival of an external order in the market and incorporates the full information about the post-shock market state, i.e., the market state after the arrival of external orders. In addition, \(\Delta_{t}^{r}C\) can also represent an order sent out of the trader's own volition, typically motivated by the state of her execution programme. Therefore, we call \(\Delta_{t}^{r}C\) the _state-based trade_. **Remark 2.4**.: _In our model, the controlled orders \(\Delta_{t}^{l}C\), \(\Delta_{t}^{r}C\), \(dC_{t}^{c}\) affect market liquidity \(\lambda\), and thus affect the arrival rate of external shocks, in the same way as the external limit orders, market orders and cancellations. This is a key difference between our model and those proposed in the existing literature on stochastic control for Hawkes processes, see e.g. Alfonsi and Blanc (2016), Cartea et al. (2018), and Horst et al. (2020)._ Next, we introduce the controlled dynamics of the state process \[S_{t}^{C}:=(\lambda_{t}^{C},Q_{t}^{C},P_{t}^{C},X_{t}^{C}),\quad t\geq 0,\] starting at \(S_{0-}^{C}:=(\lambda,q,p,x)\). Here, \(\lambda_{t}^{C}\) is the current level of market liquidity, \(Q_{t}^{C}\) is the trader's current inventory, \(P_{t}^{C}\) is the current asset price, and \(X_{t}^{C}\) is the trader's cash process resulting from her strategy \(C\). When no external market and limits orders arrive in the market and when there are no jumps in the trader's strategy, the state process satisfies the continuous dynamics \[dS_{t}^{C}=(-|dC_{t}^{c}|,dC_{t}^{c},\iota(\Lambda_{t}^{C})dC_{t},-P_{t}^{C}dC _{t}^{c}-\zeta|dC_{t}^{c}|)\text{ when }\Delta_{t}^{l}C=\Delta_{t}^{r}C=0,N(\{t\}\times E \times\mathbb{R}_{+})=0,\] where \(\zeta\geq 0\) is the half-spread quoted in the book. Next, we describe how the state changes when there are jumps in the order flow. A jump can result, for instance, from the trader sending an order of size \(\Delta>0\). Thus, the asset price changes by \(I(\Delta,\lambda)\) where \(\lambda\) denotes the pre-trade liquidity level, see (6). When the trader sends an order of size \(\Delta>0\), the price impact from her trade affects the trader's post-trade cash position in a nonlinear way. To describe this, it is convenient to introduce the function \[\Xi(\Delta,\lambda):=\int\limits_{0}^{|\Delta|}I\left(z,\lambda\right)dz, \tag{16}\] which reflects the impact costs of the trade \(\Delta\) when liquidity is \(\lambda\). So the total cost of buying \(\Delta\geq 0\) shares at price \(p\) is \((p+\zeta)\Delta+\Xi(\Delta,\lambda)\); similarly, the revenue from selling \(\Delta\geq 0\) shares is \((p-\zeta)\Delta-\Xi(\Delta,\lambda)\). As in the definition of \(I\) in (7), the function \(\Xi\) is consistent with order splitting, i.e., \(\Xi(\Delta,\lambda)=\Xi(\Delta_{1},\lambda)+\Xi(\Delta_{2},\lambda-|\Delta_{1 }|)\) for \(\Delta_{1}+\Delta_{2}=\Delta\) with \(\operatorname{sgn}(\Delta_{1})=\operatorname{sgn}(\Delta_{2})\). Finally, the precise timing when market orders, limit orders, and cancellations arrive is important and not interchangeable. 
This is because the different orders arrive when the level of liquidity is \(\lambda_{t-}^{C}\), \(\lambda_{t-}^{C}-|\Gamma_{t}(\Delta_{t}Z)|\) and \(\lambda_{t}^{C}\), respectively, and so the impact on the price of the asset varies and the effect on the trader's cash process is different, too. We explain this step-by-step and refer to Figure 1 for an illustration.

Figure 1: Evolution of state processes.

**From pre-trade state \(S_{t-}^{C}\) to post-shock state \(S_{t}^{C}\).** We assume the pre-trade state \(\mathbf{s}_{-}:=S_{t-}^{C}\) is \[\mathbf{s}_{-}=(\boldsymbol{\lambda}_{-},\mathbf{q}_{-},\mathbf{p}_{-},\mathbf{x}_{-}):=(\lambda_{t-}^{C},Q_{t-}^{C},P_{t-}^{C},X_{t-}^{C}).\] The impending external limit and market orders at time \(t\) are \[\rho:=\Delta_{t}L^{C}=\int_{\{t\}\times E\times\mathbb{R}_{+}}\mathbbm{1}_{\{y\leq g(\lambda_{-}^{C})\}}\rho(e)N(ds,de,dy),\] \[\eta:=\Delta_{t}M^{C}=\int_{\{t\}\times E\times\mathbb{R}_{+}}\mathbbm{1}_{\{y\leq f(\lambda_{-}^{C})\}}\eta(e)N(ds,de,dy).\] The trader receives private information about \(\Delta_{t}M^{C}\) or \(\Delta_{t}L^{C}\) from the signal \(z:=\Delta_{t}Z\) and thus trades the quantity \(\gamma:=\Gamma_{t}(z)\). Market liquidity updates to \[\boldsymbol{\lambda}(\gamma,\eta,\rho;\mathbf{s}_{-}):=\boldsymbol{\lambda}_{-}-|\gamma|-|\eta|+\rho,\] and the trader's inventory becomes \[\mathbf{q}(\gamma,\eta,\rho;\mathbf{s}_{-}):=\mathbf{q}_{-}+\gamma.\] Due to price impact, the price changes to \[\mathbf{p}(\gamma,\eta,\rho;\mathbf{s}_{-}):=\mathbf{p}_{-}+I\left(\gamma,\boldsymbol{\lambda}_{-}\right)+I\left(\eta,\boldsymbol{\lambda}_{-}-|\gamma|\right),\] where the market order of size \(\eta\) arrives when liquidity is \(\boldsymbol{\lambda}_{-}-|\gamma|\). The trader's cash position becomes \[\mathbf{x}(\gamma,\eta,\rho;\mathbf{s}_{-}):=\mathbf{x}_{-}-\mathbf{p}_{-}\gamma-\zeta|\gamma|-\Xi(\gamma,\boldsymbol{\lambda}_{-}).\] Thus, we define the state-update function \(\mathfrak{s}\) as \[\mathfrak{s}(\gamma,\eta,\rho;\mathbf{s}_{-}):=(\boldsymbol{\lambda},\mathbf{q},\mathbf{p},\mathbf{x})(\gamma,\eta,\rho;\mathbf{s}_{-}), \tag{17}\] to write the post-shock state as \[S_{t}^{C}:=\mathfrak{s}(\Gamma_{t}(\Delta_{t}Z),\Delta_{t}M^{C},\Delta_{t}L^{C};S_{t-}^{C}).\]

**From post-shock state \(S_{t}^{C}\) to post-trade state \(S_{t+}^{C}\).** After the realised external shock and the post-shock state \(S_{t}^{C}\) become fully known to the trader, she executes the state-based trade \(\Delta_{t}^{r}C\) and the post-trade state is defined as \[S_{t+}^{C}:=\mathfrak{s}(\Delta_{t}^{r}C,0,0;S_{t}^{C}),\] where \(\mathfrak{s}\) is as in (17). Immediate roundtrip trades are not profitable in our model because the price impact of every order includes the effect of previous orders on liquidity. For instance, consider buying \(\Delta\) shares and immediately selling \(\Delta\) shares at some time \(t\in[0,T]\). Assuming no other orders arrive in between, the price impact \(I(-\Delta,\lambda-\Delta)\) of the sell trade includes the liquidity depleting effect of the first leg of the roundtrip trade. Due to the monotonicity of \(I\) in the liquidity component \(\lambda\), the change in price as a result of selling \(\Delta\) shares is higher than the change in price when first buying the same amount of shares. Consequently, the revenue received from selling the shares is less than the initial purchase cost; hence, instantaneous roundtrips result in a loss.
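The state-update map (17) and the impact cost (16) translate directly into code. The following sketch illustrates only the bookkeeping of a single update, not the paper's numerical scheme; the impact density and the half-spread value are assumptions, and the quadrature helpers mirror the earlier price-impact example.

```python
import numpy as np

iota = lambda lam: np.exp(-lam)     # assumed impact density
zeta = 0.005                        # assumed half-spread

def impact(delta, lam, n=2_000):
    # I(Delta, lambda), Eq. (7), by the trapezoid rule.
    if delta == 0.0:
        return 0.0
    z = np.linspace(0.0, abs(delta), n + 1)
    v = iota(lam - z)
    return np.sign(delta) * (abs(delta) / n) * 0.5 * (v[:-1] + v[1:]).sum()

def impact_cost(delta, lam, n=200):
    # Xi(Delta, lambda) = int_0^{|Delta|} I(z, lambda) dz, Eq. (16).
    if delta == 0.0:
        return 0.0
    z = np.linspace(0.0, abs(delta), n + 1)
    v = np.array([impact(zi, lam) for zi in z])
    return (abs(delta) / n) * 0.5 * (v[:-1] + v[1:]).sum()

def state_update(gamma, eta, rho, state):
    # Eq. (17): post-shock state after the signal-based trade gamma, an external
    # market order eta and limit-order flow rho, starting from (lambda, q, p, x).
    lam, q, p, x = state
    lam_new = lam - abs(gamma) - abs(eta) + rho
    q_new = q + gamma
    p_new = p + impact(gamma, lam) + impact(eta, lam - abs(gamma))
    x_new = x - p * gamma - zeta * abs(gamma) - impact_cost(gamma, lam)
    return (lam_new, q_new, p_new, x_new)
```

Calling `state_update` once more with arguments `(delta_r, 0, 0, post_shock_state)` reproduces the transition from the post-shock state to the post-trade state for a state-based trade.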
Moreover, the resilience in market liquidity encourages market participants to wait for a recovery in liquidity before sending another market order because low liquidity in the market leads to higher execution costs and increases the arrival frequency of orders that provide liquidity. Therefore, there is an incentive to split large orders into child orders to minimize execution costs, i.e., minimize slippage. ## 3 The optimal investment and execution problem In this section, we show how a trader who receives the private signal \(Z\) executes a position with market orders over some finite time horizon \([0,T]\). The trader's performance criterion is the expected utility of terminal wealth. The self-exciting nature of our system requires special care to avoid blow-ups. Thus, we introduce a lower bound on liquidity so that liquidity taking orders (i.e., market orders and limit order cancellations) are limited by the supply of liquidity in the market. The lower bound on liquidity can be interpreted as a circuit breaker that is imposed by the exchange to ensure a well-functioning market. As we show, the circuit breaker ensures that the value function of the optimization problem is non-degenerate. Finally, we illustrate the market model with signals and showcase the performance of the optimal strategy. ### A circuit breaker for liquidity In practice, we observe that markets are temporarily shut down if prices become too volatile or prices undergo an abrupt change exceeding a predefined level; see, e.g., Chen et al. (2023). We use the connection between high volatility in prices and low market liquidity from Lemma 2.2 to impose a circuit breaker for price volatility by introducing a lower bound for liquidity: \[\lambda_{t}^{C}\geq\underline{\lambda},\quad t\geq 0. \tag{18}\] We refer to \(\underline{\lambda}\) as the _liquidity trigger_ that activates the circuit breaker when a market participant executes an order that depletes liquidity beyond \(\underline{\lambda}\). In this case, the market order or the cancellation of a limit order that reduces liquidity to the level \(\underline{\lambda}\) is partially executed with the available liquidity in the market \(\lambda_{t}-\underline{\lambda}\) and the market is shut down. Orders that are sent thereafter cannot be executed. In the following, we denote by \(\tau^{C}\in[0,\infty]\) the stopping time when the circuit breaker is triggered, where \(\tau^{C}=\infty\) corresponds to the case when liquidity remains above \(\underline{\lambda}\) for the entire time horizon. When the trader's inventory at market shutdown is \(Q_{\tau^{C}+}^{C}\neq 0\), she executes the outstanding inventory at a price determined in an auction. For simplicity, we assume that the auction price is \(P_{\tau^{C}+}^{C}+\sigma Y\), where \(Y\sim N(0,1)\), \(\sigma>0\) and \(P_{\tau^{C}+}^{C}\) is the price when trading was halted. Clearly, as the trader's level of risk aversion increases, the incentives to avoid a market shutdown are stronger. For notational simplicity, we denote \(\tilde{S}_{t}^{C}:=(\lambda_{t}^{C},Q_{t}^{C},P_{t}^{C},X_{t}^{C})\) the state process in the market with circuit breaker and \(\tilde{\mathcal{C}}\) the class of admissible strategies \(C\) with circuit breaker, see Section 4.1 below for a detailed description. 
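The incentive to avoid the auction can be quantified with a quick back-of-the-envelope calculation: for the exponential utility specified in (20) below and the Gaussian auction price above, the certainty equivalent of settling a residual position in the auction carries a penalty that grows quadratically in the position and linearly in the risk aversion. The sketch below makes this explicit; it assumes, for simplicity, that the entire residual position clears at the auction price, it ignores the spread and impact costs, and the function name is ours.

```python
def auction_certainty_equivalent(q, p, alpha, sigma=0.3):
    """Certainty equivalent of settling a residual position q at the auction price p + sigma*Y,
    Y ~ N(0,1), for an exponential-utility trader with risk aversion alpha (spread and impact
    costs ignored): the proceeds are Gaussian with mean q*p and standard deviation sigma*|q|,
    so CE = q*p - 0.5 * alpha * sigma**2 * q**2."""
    return q * p - 0.5 * alpha * sigma ** 2 * q ** 2

# Penalty relative to a certain settlement at price p, for alpha = 0.1 and sigma = 0.3:
for q in (1, 4, 8):
    penalty = q * 100.0 - auction_certainty_equivalent(q, 100.0, alpha=0.1)
    print(f"residual inventory {q} lots -> auction penalty {penalty:.3f}")
```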
Finally, for a trader with trading horizon \([0,T]\), the terminal wealth \(\tilde{X}_{T}^{C}\) is the cash position \(X_{T}^{C}\) plus the cash from executing any remaining position \(Q_{T}^{C}\) at time \(T\), i.e., \[\tilde{X}_{T}^{C}:=X_{T}^{C}+P_{T}^{C}Q_{T}^{C}+\sigma Y\operatorname{sgn}(Q_ {T}^{C})(|Q_{T}^{C}|-(\lambda_{T}^{C}-\underline{\lambda})^{+})^{+}-\zeta|Q_ {T}^{C}|-\Xi(-Q_{T}^{C},\lambda_{T}^{C}), \tag{19}\] where \(\sigma Y\operatorname{sgn}(Q_{T}^{C})(|Q_{T}^{C}|-(\lambda_{T}^{C}-\underline {\lambda})^{+})^{+}\) is the additional cash from executing at the auction price when the circuit breaker has been triggered before \(T\) or when \(|Q_{T}^{C}|\) exceeds the available liquidity at time \(T\). The lower bound (18) requires a sufficient supply of liquidity in the market and thus ensures that market orders and cancellations cannot deplete liquidity in an excessive way. This leads to an upper bound for the expected trade volume of liquidity taking orders, including the trader's market orders. Here, we denote by \(\tilde{M}^{C}\) the external market order flow in the market with circuit breaker, by \(\tilde{L}^{C,+}\) the posted limit order flow and by \(\tilde{L}^{C,-}\) the cancellations of limit orders, see Section 4.1 below. **Lemma 3.1**.: _Consider a market with circuit breaker and liquidity trigger (18) and suppose the integrability condition \(\int_{E}|\rho(e)|^{n}\nu(de)<\infty\), for some \(n\in\mathbb{N}\). Then, for admissible trading strategies \(C\in\tilde{\mathcal{C}}\), the \(n\)-th moment of the total variation of the liquidity taking order flows is uniformly bounded from above:_ \[\mathbb{E}\left[\left(V_{[0,T]}(Q^{C})+V_{[0,T]}(\tilde{M}^{C})+V _{[0,T]}(\tilde{L}^{C,-})\right)^{n}\right]\] \[\leq(n+1)(\lambda_{0}-\underline{\lambda})^{n}+(n+1)(Tg(\underline {\lambda}))^{n}\nu(E)^{n-1}\int_{E}\rho^{+}(e)^{n}\nu(de),\] _where \(\rho^{+}(e)=\max(\rho(e),0)\)._ Hence, the expected total variation of the trader's inventory is uniformly bounded over a finite trading horizon \([0,T]\). ### Well-posedness of the optimization problem The trader maximises the expected utility from terminal wealth over the trading horizon \([0,T]\), so her performance criterion is \[J(C):=\mathbb{E}[U_{\alpha}(\tilde{X}_{T}^{C})],\] for any admissible control \(C\in\tilde{\mathcal{C}}\), and the utility function is \[U_{\alpha}(x)=\begin{cases}-\exp(-\alpha x),&\alpha>0,\\ \qquad\qquad\qquad x,&\alpha=0,\end{cases} \tag{20}\] where \(\alpha\geq 0\) is the risk aversion parameter. The terminal cash position is denoted by \(\tilde{X}_{T}^{C}\), including any trades that are required to complete the execution programme at the terminal time \(T\). The value function is \[v(T,s):=\max_{C\in\tilde{\mathcal{C}}}\mathbb{E}\left[U_{\alpha}( \tilde{X}_{T}^{C})\right], \tag{21}\] with \(\tilde{X}_{T}^{C}\) as in (19) and for \(S_{0-}^{C}=s=(\lambda,q,p,x)\in\mathbb{S}\), where \(\mathbb{S}:=\mathbb{R}^{4}\). Next, we assume that all moments of \(\eta\) are finite \[\int_{E}|\eta(e)|^{n}\nu(de)<\infty\quad\text{ for all }n\in \mathbb{N}, \tag{22}\] and that \(\rho\) has the following finite exponential moments \[\int_{E}\exp(n|\rho(e)|)\nu(de)<\infty,\quad\int_{E}\exp(n|\rho(e )|^{2})\nu(de)<\infty\quad\text{ for all }n\in\mathbb{N}. \tag{23}\] **Proposition 3.2**.: _Assume conditions (18), (22), and (23) hold. 
Then, the value function \(v\) in (21) is non-degenerate, i.e., \(v<U_{\alpha}(+\infty)\), where \(U_{\alpha}(+\infty)=0\) for \(\alpha>0\) and \(U_{\alpha}(+\infty)=+\infty\) for \(\alpha=0\)._

Now, for \(f\), \(g\), \(\iota\) Lipschitz continuous, the following theorem establishes continuity of the value function \(v\) as in (21).

**Theorem 3.3** (Continuity of the value function).: _Assume conditions (18), (22), and (23) hold, and assume \(f\), \(g\), \(\iota\) are Lipschitz continuous. Then, the value function \(v\) in (21) is continuous in \((T,s)=(T,\lambda,q,p,x)\in[0,\infty)\times[\underline{\lambda},\infty)\times\mathbb{R}\times\mathbb{R}\times\mathbb{R}\)._

For the proofs of Proposition 3.2 and Theorem 3.3, see the Appendix. Finally, in the Appendix, we show that one can use standard techniques from dynamic programming and the theory of Hamilton-Jacobi-Bellman equations to derive numerical approximations of solutions to the trader's problem.

### Performance of signal-based trading strategies

In the following, we implement a numerical scheme to solve the HJB (37) from Subsection 4.2 below and illustrate the trader's signal-based strategy in an optimal acquisition problem -- Subsection 4.3 provides details of the numerical scheme. In our benchmark, the trader receives a private signal about limit orders, market orders, and cancellations of limit orders with probability \(\hat{p}\). Compared to a trader without the signal, the trader with the signal optimizes her execution times and trading volumes by splitting the parent order into fewer and larger child orders. She executes her orders when a signal about a liquidity taking order arrives. Signals about liquidity provision are not relevant for the execution programme of a risk averse trader because there is no incentive to execute before the liquidity providing order increases the liquidity in the book and the price impact of market orders decreases. Speculative trades are not profitable in the benchmark due to the size of the bid-ask spread. In contrast, when the bid-ask spread is narrow, speculation based on signals is more profitable, and the trader generally initiates speculative trades upon receiving a signal about liquidity provision when the current level of liquidity is low. After waiting for liquidity to improve and the cost of price impact to decline, she unwinds the speculative position upon receiving a signal about the imminent arrival of a liquidity taking order.

**Parameter specification -- Benchmark case.** In the simulations, we consider six types of external market orders: buy orders of one, two, and three lots; and sell orders of one, two, and three lots. Similarly, limit orders and their cancellations are of size one, two, and three lots. We choose the arrival rates \(f\) and \(g\) as
\[f(\lambda)=\theta_{f}\exp(\kappa_{f}\,\lambda)\quad\text{and}\quad g(\lambda)=\theta_{g}\exp(-\kappa_{g}\,\lambda),\]
where \(\theta_{f}=20\), \(\theta_{g}=40\) and \(\kappa_{f}=\kappa_{g}=0.01\), i.e., if liquidity \(\lambda\) is at level \(0\), the market expects to receive \(20\) external market orders, \(30\) limit orders, and \(10\) cancellations of limit orders over the trading horizon of length \(T=1\). The price impact function is given by
\[\iota(\lambda)=\theta_{\iota}+\kappa_{\iota}\lambda,\]
where \(\theta_{\iota}=0.01\) and \(\kappa_{\iota}=-0.0002\). With these parameter values, the elasticity condition (10) holds.
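Since the intensities depend on the market state only through \(\lambda\), which is constant between events, the uncontrolled liquidity dynamics under these benchmark parameters can be simulated exactly by drawing exponential waiting times at the total rate \(f(\lambda)+g(\lambda)\) and selecting the event type proportionally to the two intensities. The following minimal Python sketch illustrates this; it uses the illustrative order-size distributions from the model specification in the Appendix, ignores cancellations (every \(g\)-event is treated as liquidity provision), and all names are ours rather than part of the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

theta_f, kappa_f = 20.0, 0.01   # market-order intensity       f(lam) = theta_f * exp(+kappa_f * lam)
theta_g, kappa_g = 40.0, 0.01   # liquidity-provision intensity g(lam) = theta_g * exp(-kappa_g * lam)

def f(lam):
    return theta_f * np.exp(kappa_f * lam)

def g(lam):
    return theta_g * np.exp(-kappa_g * lam)

# Benchmark order sizes: market orders of +/-1, 2, 3 lots; liquidity provision of 1, 2, 3 lots.
mo_sizes, mo_probs = np.array([-3, -2, -1, 1, 2, 3]), np.array([0.1, 0.2, 0.2, 0.2, 0.2, 0.1])
lo_sizes, lo_probs = np.array([1, 2, 3]), np.array([0.4, 0.4, 0.2])

def simulate_liquidity(lam0=0.0, T=1.0):
    """Exact event-driven simulation of the uncontrolled liquidity process on [0, T]."""
    t, lam, path = 0.0, lam0, [(0.0, lam0)]
    while True:
        total_rate = f(lam) + g(lam)
        t += rng.exponential(1.0 / total_rate)      # intensities are constant between events
        if t > T:
            break
        if rng.random() < f(lam) / total_rate:      # market order consumes liquidity
            lam -= abs(rng.choice(mo_sizes, p=mo_probs))
        else:                                       # limit order provides liquidity
            lam += rng.choice(lo_sizes, p=lo_probs)
        path.append((t, float(lam)))
    return path

path = simulate_liquidity()
print(f"{len(path) - 1} external events, terminal liquidity {path[-1][1]:.0f}")
```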
For numerical reasons, we introduce an upper bound \(\overline{\lambda}\) for the liquidity parameter \(\lambda\) so that we obtain a bounded domain \([\underline{\lambda},\overline{\lambda}]\) for \(\lambda\), where \(\underline{\lambda}\) is the lower bound from (18). We set the lower bound at \(\underline{\lambda}=-40\) and the upper bound at \(\overline{\lambda}=40\), such that \(\iota(\lambda)\geq 0\) for every \(\lambda\in[\underline{\lambda},\overline{\lambda}]\). The volatility of the auction price in (19) is \(\sigma=0.3\). The bid-ask spread is set to one cent, i.e., \(2\zeta=0.01\). Finally, the trader's risk aversion is \(\alpha=0.1\), see (20).

**Signal design.** The signal is given by the variable \(\Delta_{t}Z\). A value of \(\Delta_{t}Z=1\) signals an incoming limit order and a value of \(\Delta_{t}Z=-1\) signals liquidity taking, i.e., an incoming market order or the cancellation of a limit order. With fixed probability \(\hat{p}\in[0,1]\), the signal alerts the trader to an imminent order, so an external order has a \(1-\hat{p}\) chance of taking the trader by surprise. Thus, the signal informs the trader about the sign of the imminent change in liquidity, but does not provide any information about the size of an order, i.e., the trader anticipates the direction, but not the magnitude, of changes in liquidity. In particular, in the case of a signal \(\Delta_{t}Z=-1\) about liquidity taking, the trader does not know whether a buy market order, a sell market order, or the cancellation of a limit order will be posted. Note that definition (11) allows for a huge variety of conceivable signals and the above specification is chosen to simplify the implementation.

**Value function and certainty equivalent.** We implement the numerical scheme from Subsection 4.3 to solve the HJB (37). Figure 3 shows the value function \(v\) as in (21) for \(p=x=0\) and when the probability of receiving a signal is \(\hat{p}=0.2\). The value function is symmetric about \(q=0\) due to the symmetry of (7) and (16) with respect to buy and sell orders. Hence, the expected utility of terminal wealth for an acquisition or liquidation programme is the same -- everything else being equal. Moreover, the value function is increasing in the liquidity \(\lambda\) because execution costs decrease as the liquidity in the market increases. When liquidity approaches the liquidity trigger \(\underline{\lambda}\), the value function is steepest because of the additional risk of entering an auction through a market shutdown. Similarly, the value function is particularly steep for large values of \(|q|\) because a large execution programme is linked to higher risk in price and liquidity changes.

Figure 3 also illustrates the signal-specific certainty equivalent of the value function when the probability of receiving a signal is \(\hat{p}=0.2\). It corresponds to the additional amount of initial wealth that a trader without the signal needs in order to achieve the same expected utility as in the case with signal, i.e., the certainty equivalent \(CE\) is such that
\[v_{\hat{p}=0}(t,q,\lambda,p,x+CE)=v_{\hat{p}=0.2}(t,q,\lambda,p,x). \tag{24}\]
We rearrange (24) to write
\[CE=-\frac{1}{\alpha}\log\left(\frac{v_{\hat{p}=0.2}(t,q,\lambda,p,x)}{v_{\hat{p}=0}(t,q,\lambda,p,x)}\right). \tag{25}\]
The certainty equivalent is symmetric in \(q\) and does not depend on \(p\) or \(x\). Moreover, we see that the signal is worth the most for large \(|q|\), i.e., the signal is more valuable for larger values of inventory.
The trader can make up to four cents from the information in the signal, which corresponds to four times the bid-ask spread, see Figure 3. The certainty equivalent is decreasing in the liquidity component \(\lambda\) because, as liquidity decreases, the timing of executions becomes more relevant since price impact is more costly. Moreover, the certainty equivalent slightly increases towards the liquidity trigger \(\underline{\lambda}=-40\). Here, the signal warns the trader of potentially reaching the lower bound \(\underline{\lambda}\) and reduces the risk of executing a final trade in an auction. For liquidity greater than twenty, the certainty equivalent is zero, i.e., the signal does not add any value because, with and without a signal, a trader will immediately complete the execution programme.

**Benchmark -- The signal optimises times of execution and trading volume.** In the benchmark with spread \(0.01\), we look at the optimal acquisition problem where the trader acquires eight lots of the asset over the time horizon \([0,T]\), so the initial inventory is \(q=-8\). The trader can buy and sell any multiple of lots and can execute speculative trades by trading away from the target of buying only eight lots over the trading window. When there is no private signal, the trader sends child orders of at most two lots when liquidity is sufficiently high, see Figure 4. However, when the trader receives private signals, she aligns her executions with the signals about liquidity taking orders and optimizes the volume of her trades, see Figure 5 (simulated with the same seed as the results in Figure 4). Here, she executes her acquisition programme by sending two child orders of three and five lots upon receiving a signal about liquidity taking when the level of liquidity is high. When liquidity is low during \(t\in[0.15,0.7]\), the trader does not act upon the signal and prefers to wait for liquidity to increase. Note that only the signals about liquidity taking orders are relevant in her execution programme.

Figure 4: Optimal acquisition of eight lots. Pathwise plot when the trader does not receive a signal and with spread \(0.01\).

**More signals lead to lower execution costs and riskier strategies.** Figure 6 illustrates the performance of signal-based strategies depending on the probability of receiving a signal. Figure 6 (left) shows the distribution of terminal wealth for different probabilities of receiving a signal compared to the distribution without signal. When the probability of receiving a signal increases, the densities shift to the right, i.e., the signal increases the expected terminal wealth by optimizing the execution. On the other hand, as the probability of receiving a signal increases, the variance of terminal wealth also increases because the trader takes more risks due to the additional information from the private signal. More precisely, knowing that she will be informed about liquidity shocks through the signal, the trader waits longer for liquidity to recover from shocks before trading in order to minimize price impact costs. However, this is linked to more risk because unsignaled external orders can still arrive, which increases the variance of the trader's terminal wealth.

Figure 5: Optimal acquisition of eight lots. Pathwise plot when the trader receives a signal with probability \(\hat{p}=0.2\) and with spread \(0.01\).

Figure 6: Optimal acquisition of eight lots.
Performance of a trader without signal compared to a trader who receives a signal with probability \(\hat{p}\in\{0.1,0.2,0.3,0.4\}\) and with spread \(0.01\). Left: Distribution of terminal wealth. Right: Signal-Sharpe-ratio.

To quantify the value of the private signal \(Z\), we introduce the Signal-Sharpe-ratio (SSR)
\[SSR(Z):=\frac{\bar{X}(Z)-\bar{X}(0)}{\sigma(Z)},\]
where
\[\bar{X}(Z):=\frac{1}{n_{sim}}\sum_{j=1}^{n_{sim}}\tilde{X}_{T}^{j}(Z),\ \bar{X}(0):=\frac{1}{n_{sim}}\sum_{j=1}^{n_{sim}}\tilde{X}_{T}^{j}(0)\ \text{and}\ \sigma(Z):=\sqrt{\frac{1}{n_{sim}}\sum_{j=1}^{n_{sim}}(\tilde{X}_{T}^{j}(Z)-\bar{X}(Z))^{2}}.\]
Here, \(\tilde{X}_{T}^{j}(Z)\) and \(\tilde{X}_{T}^{j}(0)\) denote the terminal wealth for scenario \(j\in\{1,...,n_{sim}\}\) with signal \(Z\) and without signal, respectively. Precisely, the SSR is the excess return \(\bar{X}(Z)-\bar{X}(0)\) of a trader with signal \(Z\) over a trader without signal, weighted by the risk \(\sigma(Z)\) of the trader with signal \(Z\). Figure 6 (right) shows the SSR as a function of the probability \(\hat{p}\) of receiving a signal. The SSR increases up to \(\hat{p}=0.2\) and slowly decreases thereafter because the increase in variance dominates the increase in expected terminal wealth, which explains the non-monotonicity of the SSR in Figure 6 (right). Note that mean-variance optimization is not the objective of the trader, who maximizes expected utility of terminal wealth, see (21).

**Signal is more valuable for speculation as bid-ask spread narrows.** Next, consider a bid-ask spread of size \(0.002\), i.e., the bid-ask spread is one fifth of that in the benchmark. In this case, the signal incentivises speculative trades because, everything else being equal, the costs to execute roundtrip trades are lower. The speculative roundtrip trades start after receiving a signal on liquidity provision, i.e., when the trader knows through the signal that the speculation can be unwound at better liquidity, see Figure 7. After triggering liquidity provision through her own market order, the trader waits for liquidity to arrive in the market and unwinds the speculative position at better liquidity upon receiving a signal about a liquidity taking order.

Next, we study the number of paths with speculative trades among \(n_{sim}=100,000\) simulated paths with bid-ask spread \(0.01\) and \(0.002\) in the acquisition (\(q=-8\)) and pure speculation (\(q=0\)) scenarios, where the paths for wide and narrow spreads are simulated with the same seed.2

Footnote 2: We say that a path contains a speculative trade if the trader trades away from the target of buying eight lots over the time horizon \([0,1]\) by trading both buy and sell market orders.

When the spread is wide, i.e., when the spread is \(0.01\), there is no speculation because roundtrip trades are too costly. On the other hand, when the spread is narrow, i.e., when the spread is \(0.002\), there is speculation in about \(21\%\) of paths in the acquisition problem (\(q=-8\)) and in about \(11\%\) of paths in the pure speculation scenario (\(q=0\)). There are more speculative paths in the execution example because the trader's own trades trigger liquidity provision, which makes speculative trades profitable. In particular, when the trader trades towards a position close to zero very early in the trading window, i.e., completes the acquisition problem early on, the remaining time horizon is long enough for a speculative roundtrip starting close to zero to be profitable; and vice versa if the trader completes the acquisition at a later point in the trading window. Finally, without a signal, there is no speculation for either the wide or the narrow bid-ask spread.
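As an aside, the Signal-Sharpe-ratio introduced above is straightforward to estimate from two sets of simulated terminal-wealth paths. The short sketch below shows the computation; the inputs are synthetic placeholder numbers, not the output of the paper's simulations.

```python
import numpy as np

def signal_sharpe_ratio(wealth_with_signal, wealth_without_signal):
    """SSR: excess mean terminal wealth of the signal-based strategy over the no-signal
    strategy, scaled by the (population) standard deviation of the signal-based wealth."""
    x_z = np.asarray(wealth_with_signal, dtype=float)
    x_0 = np.asarray(wealth_without_signal, dtype=float)
    return (x_z.mean() - x_0.mean()) / x_z.std()

# Synthetic illustration only: the signal shifts the mean up but also widens the
# distribution, which is what can make the SSR non-monotone in the signal quality.
rng = np.random.default_rng(1)
x_no_signal = rng.normal(loc=-801.0, scale=0.30, size=100_000)
x_signal = rng.normal(loc=-800.9, scale=0.35, size=100_000)
print(f"SSR = {signal_sharpe_ratio(x_signal, x_no_signal):.3f}")
```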
Figure 8 shows the distribution of terminal wealth in the optimal acquisition problem with \(q=-8\) when the trader receives a private signal with probability \(\hat{p}=0.2\) and the bid-ask spread is 0.01 and 0.002. With a narrow spread, the terminal wealth is, on average, higher because the trader pays less spread and profits from speculative trades. The terminal wealth increases the most in the interval \([-801,-800.5]\), see Figure 8, because low levels of liquidity, which cause smaller values of terminal wealth for a spread of 0.01 due to price impact costs, can lead to profitable roundtrips when the spread is 0.002. Moreover, the variance of terminal wealth decreases because, in general, the trader executes more orders to bring her inventory to zero and deviates from this target in only about 21% of cases. Similarly, Figure 9 illustrates the certainty equivalent as in (25) for spread 0.002 minus the certainty equivalent as in (25) for spread 0.01. The certainty equivalent improves when liquidity is low and when the trader's inventory is close to zero, which is when speculative trades are executed.

Figure 8: Optimal acquisition of eight lots. Distribution of terminal wealth when the trader receives a signal with probability \(\hat{p}=0.2\) with a spread of 0.01 and 0.002.

Figure 9: Certainty equivalent for a small spread of 0.002 minus the certainty equivalent for a spread of 0.01.

**Signal on liquidity taking orders vs signal on liquidity providing orders.** Finally, we compare the performance of the strategy of a trader who receives a signal about the arrival of liquidity taking orders against that of a trader who receives a signal about the arrival of liquidity provision. The trader with a signal on liquidity taking uses the signal to optimize her execution times and trading volumes, which increases the terminal wealth and its variance, see Figure 10. In contrast, similarly to Figure 5, the trader does not use the signal about liquidity provision for her execution, so her terminal wealth coincides with that of a trader who receives no signal, see Figure 10.

## 4 Appendix

In this Appendix, we present the definition of the state process in the market with circuit breaker, the derivation of the HJB, the numerical scheme, and other more detailed model specifications. Moreover, we also give the proofs that were skipped in the main part of the paper.

### State process in the market with circuit breaker

First, recall the specification of the state process \(S_{t}^{C}=(\lambda^{C},Q_{t}^{C},P_{t}^{C},X_{t}^{C})\) and the set of admissible strategies \(\mathcal{C}\) as defined in Section 2.2 for the market without circuit breaker.
When introducing a circuit breaker, we denote the new state process by \(\tilde{S}_{t}^{C}=(\tilde{\lambda}^{C},\tilde{Q}_{t}^{C},\tilde{P}_{t}^{C}, \tilde{X}_{t}^{C})\) and the new set of admissible strategies by \(\tilde{\mathcal{C}}\) which are defined as follows: First, the set of admissible strategies in the market with circuit breaker is \[\tilde{\mathcal{C}}=\Big{\{}C_{t}\in\mathcal{C}:\;\Delta_{t}^{l}C\text{ is }D(\lambda_{t-}^{C})\text{-valued, }\Delta_{t}^{r}C\text{ is }D(\lambda_{t}^{C})\text{-valued, and }C_{t}=C_{\tau^{C}+}\text{ for }t>\tau^{C} \Big{\}}, \tag{26}\] where \(\mathcal{C}\) is as in (12) and where the stopping time \(\tau^{C}\) describes the point in time when the circuit breaker is triggered, i.e., \[\tau^{C}:=\inf\Big{\{}t\geq 0:\min\{\lambda_{t-}^{C}-|\Delta_{t}^{l}C|, \lambda_{t}^{C}\}<\underline{\lambda}\Big{\}},\] with \(\inf\emptyset=+\infty\). Moreover, we denote the set of admissible actions for liquidity level \(\lambda\) as \[D(\lambda):=D\cup\{\lambda-\underline{\lambda},\underline{\lambda}-\lambda\}.\] Figure 10: Optimal acquisition of eight lots. Distribution of terminal wealth for trader who receives a signal on liquidity taking (LT) with probability \(\hat{p}=0.2\) vs a trader who receives a signal on liquidity provision (LP) with probability \(\hat{p}=0.2\); spread is \(0.01\). Here, the enlargement by \(\{\lambda-\underline{\lambda},\underline{\lambda}-\lambda\}\) ensures that the trader always has the possibility to deplete the available liquidity in the market without activating the circuit breaker, even if this deviates from trading in multiples of a lot size \(\delta>0\). Note that in (26) the condition \(C_{t}=C_{\tau^{C}+}\) for \(t>\tau^{C}\) ensures that the circuit breaker cannot be triggered by the trader's continuous trades \(dC^{c}\), but only by a signal-based trade \(\Delta^{I}_{\tau^{C}}C\), a state-based trade \(\Delta^{r}_{\tau^{C}}C\) or by an external market order or a cancellation of limit orders. Next, we define the operator \(\Upsilon\) for market liquidity \(\lambda\) and trade size \(\Delta\in\mathbb{R}\) as \[\Upsilon(\Delta,\lambda):=\begin{cases}\Delta,&\text{if }\lambda-|\Delta|\geq \underline{\lambda},\\ \text{sgn}(\Delta)(\lambda-\underline{\lambda})^{+},&\text{if }\lambda-| \Delta|<\underline{\lambda}.\end{cases} \tag{27}\] It returns \(\Delta\) if the liquidity is sufficient to fill the order \(\Delta\), but it returns \(\lambda-\underline{\lambda}\) if \(\Delta\) depletes liquidity below the level \(\underline{\lambda}\) and zero if the circuit breaker has already been triggered. Here, we denote \((x)^{+}:=\max(x,0)\) for \(x\in\mathbb{R}\). 
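In code, (27) is a simple clipping rule. A minimal sketch follows (the function name is ours; the examples use the benchmark trigger \(\underline{\lambda}=-40\)):

```python
def upsilon(delta: float, lam: float, lam_trigger: float) -> float:
    """Executable part of an order of signed size delta at liquidity lam, cf. (27):
    full execution if it leaves liquidity above lam_trigger, otherwise only the
    available liquidity (lam - lam_trigger)^+ is filled (zero once the circuit
    breaker has already been triggered)."""
    if lam - abs(delta) >= lam_trigger:
        return delta
    sign = (delta > 0) - (delta < 0)
    return sign * max(lam - lam_trigger, 0.0)

print(upsilon(+5.0, -37.0, -40.0))   # 3.0  -> only three lots can be filled, the breaker trips
print(upsilon(-2.0, -30.0, -40.0))   # -2.0 -> full fill
print(upsilon(+1.0, -41.0, -40.0))   # 0.0  -> trading already halted
```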
Thus, we denote \(\tilde{S}^{C}_{t}:=(\tilde{\lambda}^{C}_{t},\tilde{Q}^{C}_{t},\tilde{P}^{C}_ {t},\tilde{X}^{C}_{t})\) the state process in the market with with circuit breaker which starts in \(\tilde{S}^{C}_{0-}:=(\tilde{\lambda}^{C}_{0-},\tilde{Q}^{C}_{0-},\tilde{P}^{C }_{0-},\tilde{X}^{C}_{0-})\) and which updates according to \[\tilde{S}^{C}_{t}:=\tilde{\mathfrak{s}}(\tilde{\lambda}^{C}_{t-}, \tilde{Q}^{C}_{t-},\tilde{P}^{C}_{t-},\tilde{X}^{C}_{t-}),\quad\tilde{S}^{C}_{ t+}:=\tilde{\mathfrak{s}}(\tilde{\lambda}^{C}_{t},\tilde{Q}^{C}_{t},\tilde{P}^{C }_{t},\tilde{X}^{C}_{t}),\qquad\text{ for }0\leq t\leq\tau^{C}\] \[\text{ and }\tilde{S}^{C}_{t}=\tilde{S}^{C}_{\tau^{C}+},\qquad \text{ for }t>\tau^{C},\] where, similarly to \(\mathfrak{s}\) as in Section 2.2, the state update function \(\tilde{\mathfrak{s}}\) is defined as \[\tilde{\mathfrak{s}}(\Delta,\eta,\rho;\mathbf{s}_{-}):=(\mathbf{ \lambda},\mathbf{q},\mathbf{p},\mathbf{x}),\] with \[\mathbf{\lambda}=\begin{cases}\mathbf{\lambda}_{-}-|\Delta|+\mathbbm{1}_ {\{\mathbf{\lambda}_{-}-|\Delta|\geq\underline{\lambda}\}}(-|\eta|+\rho),&\text{ if }\mathbf{\lambda}_{-}\geq\underline{\lambda},\\ \mathbf{\lambda}_{-},&\text{ if }\mathbf{\lambda}_{-}<\underline{\lambda},\end{cases}\] \[\mathbf{q}=\mathbf{q}_{-}+\Upsilon(\Delta,\mathbf{\lambda}_{-})\] \[\mathbf{p}=\mathbf{p}_{-}-I\left(\Upsilon(\Delta,\mathbf{\lambda}_{-} ),\mathbf{\lambda}_{-}\right)+\mathbbm{1}_{\{\mathbf{\lambda}_{-}-|\Delta|\geq \underline{\lambda}\}}I\left(\Upsilon(\eta,\mathbf{\lambda}_{-}-|\Delta|),\mathbf{ \lambda}_{-}-|\Delta|\right)\] \[\mathbf{x}=\mathbf{x}_{-}-\mathbf{p}_{-}\Upsilon(\Delta,\mathbf{ \lambda}_{-})-\zeta|\Upsilon(\Delta,\mathbf{\lambda}_{-})|-\Xi(\Upsilon(\Delta, \mathbf{\lambda}_{-}),\mathbf{\lambda}_{-}).\] Here, \(\tilde{\mathfrak{s}}(\Delta,\eta,\rho;\mathbf{s}_{-})\) coincides with \(\mathfrak{s}(\Delta,\eta,\rho;\mathbf{s}_{-})\) when \(\mathbf{\lambda}_{-},\mathbf{\lambda}\geq\underline{\lambda}\). Else, the order that pushes \(\mathbf{\lambda}\) below \(\underline{\lambda}\) is executed only partially via the function \(\Upsilon\) as in (27) and trading is halted for \(\mathbf{\lambda}\) below \(\underline{\lambda}\). Hence, as long as the circuit breaker has not been triggered yet, the state process in the market with and without circuit breaker coincide so that \(\tilde{S}^{C}_{t}=S^{C}_{t}\) for \(0\leq t<\tau^{C}\). Note that for simplicity of notation, we will in the following write \(\tilde{S}^{C}_{t}=(\lambda^{C}_{t},Q^{C}_{t},P^{C}_{t},X^{C}_{t})\) for the components of the state process in the market with circuit breaker. Finally, we define the external market order flow \(\tilde{M}^{C}_{t}\) in the market with circuit breaker as \[\tilde{M}^{C}_{t}=\int_{[0,t]\times E\times\mathbb{R}_{+}}\Upsilon(\eta(e), \lambda^{C}_{s-}-|\Gamma_{s}(z(e,y))|)\,\mathbbm{1}_{\{y\leq f(\lambda^{C}_{s -})\}}N(ds,de,dy). \tag{28}\] Similarly, the limit orders in the market with circuit breaker are \[\tilde{L}^{C,+}_{t}=\int_{[0,t]\times E\times\mathbb{R}_{+}}\rho^{+}(e)\mathbbm{1 }_{\{\lambda^{C}_{s-}-|\Gamma_{s}(z(e,y))|\geq\underline{\lambda}\}\cap\{y \leq g(\lambda^{C}_{s-})\}}N(ds,de,dy), \tag{29}\] and the cancellations of limit orders in the market with circuit breaker are \[d\tilde{L}_{t}^{C,-}=\int_{[0,t]\times E\times\mathbb{R}_{+}}\Upsilon(\rho^{-}(e), \lambda_{s-}^{C}-|\Gamma_{s}(z(e,y))|)\mathbb{1}_{\{y\leq g(\lambda_{s-}^{C})\} }N(ds,de,dy). 
\tag{30}\] Here, the function \(\Upsilon\) in (28) and (30) as well as the indicator in (29) ensure that trading is halted after circuit breaker activation and that the order flow remains constant after \(\tau^{C}\). ### The Hamilton-Jacobi-Bellman equation In the following, we derive the HJB equation that is satisfied by the value function \(v\) as in (21). Particularly, we investigate for which conditions the value process \(V_{t}^{C}:=v(T-t,\tilde{S}_{t+}^{C})\), \(t\in[0,T]\) with arbitrary, but fixed time horizon \(T\), satisfies super-martingale dynamics for every admissible \(C\in\tilde{\mathcal{C}}\) and martingale dynamics for some optimal strategy \(C^{*}\in\tilde{\mathcal{C}}\). Recall the decomposition of admissible strategies from Lemma 2.3 and suppose that the value function \(v(T,\lambda,q,p,x)\) is sufficiently smooth to apply Ito's formula so that we write at least formally for \(0\leq t\leq\tau^{C}\wedge T\) \[\begin{split}& dV_{t}^{C}=-\frac{\partial v}{\partial T}(T-t,S_{t- }^{C})dt+\left(\frac{\partial v}{\partial p}(T-t,S_{t-}^{C})\iota(\lambda)- \frac{\partial v}{\partial x}(T-t,S_{t-}^{C})P_{t-}^{C}+\frac{\partial v}{ \partial q}(T-t,S_{t-}^{C})\right)d\tilde{C}_{t}^{c}\\ &+\left(\frac{\partial v}{\partial x}(T-t,S_{t-}^{C})\zeta- \frac{\partial v}{\partial\lambda}(T-t,S_{t-}^{C})\right)|d\tilde{C}_{t}^{c} |\\ &+\!\!\!\!\!\!\!\!\!\!\!\!\!\!\int\limits_{E\times\mathbb{R}_{+}} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! with \(\mu=(\nu\otimes Leb)\circ(z)^{-1}\) to estimate the expression in (34) as follows: \[\int\limits_{(E\times\mathbb{R}_{+})\cap\{z(e,y)\neq 0\}}\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! For simplicity, we introduce the notations \[\gamma^{\lambda} :=\Upsilon(\gamma,\lambda),\] \[\eta^{\lambda,\gamma}(e,y) :=\Upsilon(\eta(e),\lambda-|\gamma^{\lambda}|)\,\mathbb{1}_{\{y\leq f (\lambda)\}},\] \[\rho^{\lambda,\gamma}(e,y) :=\Big{(}\rho^{+}(e)\mathbb{1}_{\{\lambda-|\gamma^{\lambda}|\geq \underline{\lambda}\}}+\Upsilon(\rho^{-}(e),\lambda-|\gamma^{\lambda}|)\Big{)} \mathbb{1}_{\{y\leq g(\lambda)\}},\] \[\lambda^{\lambda,\gamma}(e,y) :=\lambda-|\gamma|-|\eta(e)\mathbb{1}_{\{y\leq f(\lambda)\}}+\rho( e)\mathbb{1}_{\{y\leq g(\lambda)\}},\] \[\Delta^{\lambda} :=\Upsilon(\Delta,\lambda).\] This allows us to write the reduced HJB for \(w(T,\lambda,q)\) with \((T,\lambda,q)\in[0,\infty)\times[\underline{\lambda},\infty)\times\mathbb{R}\) when \(\alpha>0\) as \[\max \bigg{\{}-\frac{\partial w}{\partial T}(T,\lambda,q)+\int\limits_ {(E\times\mathbb{R}_{+})\cap\{z(e,y)=0\}}\hskip-10.0pt\left[\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! For inventory \(q\in\mathbb{R}\), we introduce the bounded grid \(\mathbb{R}^{\overline{Q},\overline{Q}}_{\overline{\delta q}}\) on \([\underline{Q},\overline{Q}]\) with step size \(\delta q>0\) where \(\underline{Q}=N_{3}\delta q\) and \(\overline{Q}=N_{4}\delta_{q}\) for some constants \(N_{3},N_{4}\in\mathbb{Z}\), \(N_{3}<N_{4}\). For the inventory \(q\) to remain within the grid \(\mathbb{R}^{\underline{Q},\overline{Q}}_{\overline{\delta q}}\) and for \(\lambda\) to remain above \(\underline{\lambda}-\delta\lambda\), we define the set of actions \[D^{\underline{Q},\overline{Q}}_{\overline{\delta}}(q,\lambda):=\Big{\{}n\!\cdot \!\delta q\ \text{with}\ n\in\mathbb{Z}:q\!+\!n\,\delta q\in\mathbb{R}^{\underline{Q}, \overline{Q}}_{\overline{\delta q}}\ \text{and}\ \lambda\!-\!|n\delta|\geq\underline{\lambda}-\delta \lambda\Big{\}},\qquad q\in\mathbb{R}^{\underline{Q},\overline{Q}}_{\overline{ \delta q}},\lambda\in\mathbb{R}^{\overline{\lambda}}_{\delta\lambda}.\] For any function \(h^{T^{\prime}}:\mathbb{R}^{\overline{\lambda}}_{\delta\lambda}\times\mathbb{R }^{\underline{Q},\overline{Q}}_{\overline{\delta q}}\to\mathbb{R}\) with \(T^{\prime}\in\mathbb{T}_{\delta T}\), we introduce the operator \[\mathcal{L}^{\delta T,\delta\lambda}(q,\lambda,h^{T^{\prime}}):=h^{T^{\prime}} (\lambda,q)+\delta t\bigg{(}\Delta_{z=0}h^{T^{\prime}}(\lambda,q)+\int\limits _{z(E\times\mathbb{R}_{+})\setminus\{0\}}\sup_{\gamma\in D^{\underline{Q}, \overline{Q}}(q,\lambda)}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! 3.3 are similar. First, the space \(E\) is given by \(E=\{-3,-2,-1,1,2,3\}\times\{1,2,3\}\times\{0,1\}\). The mappings \(\eta\) and \(\rho\) are \[\eta(e)=e_{1}(1-e_{3})\quad\text{and}\quad\rho(e)=e_{2}e_{3}.\] The model is driven by the homogenous Poisson point process \(N(dt,de,dy)\) with compensator \(dt\otimes\nu(de)\otimes dy\), where \(\nu(de)=P^{1}(de_{1})\otimes P^{2}(de_{2})\otimes P^{3}(de_{3})\) with \[P^{1}(\{-3\})=P^{1}(\{3\})=0.1,\quad P^{1}(\{-2\})=P^{1}(\{2\})= P^{1}(\{-1\})=P^{1}(\{1\})=0.2,\] \[P^{2}(\{1\})=P^{2}(\{1\})=0.4,\quad P^{2}(\{3\})=0.2,\] \[P^{3}(\{0\})=P^{3}(\{1\})=0.5.\] The signal \(Z_{t}\in(\{-1,1\}\times\mathbb{R}_{+})^{\mathbb{N}}\) is given as the vector \(Z_{t}=(Z_{t}^{n})_{n\in\mathbb{N}}\), with \(Z^{n}=(Z_{t}^{n,1},Z_{t}^{n,2})\in\{-1,1\}\times\mathbb{R}_{+}\), defined as \[Z_{t}^{n}:=\int_{[0,T]\times E\times\mathbb{R}_{+}}z^{n}(e,y)N(ds,de,dy),\] \[\text{where }z^{n}(e,y)=(-\mathbbm{1}_{\{e_{3}=0\}}+\mathbbm{1}_{\{e_{ 3}=1\}},y\mathbbm{1}_{[n-1,n]}(y)).\] We disintegrate for \(\bar{z}=(\bar{z}^{n})_{n\in\mathbb{N}}\in(\{-1,1\}\times\mathbb{R}_{+})^{ \mathbb{N}}\), with \(\bar{z}^{n}=(\bar{z}^{n,1},\bar{z}^{n,2})\), \[\nu(de)\otimes dy=\int_{z(E\times\mathbb{R}_{+})}K(\bar{z};de,dy)\mu(d\bar{z}),\] where \[\mu(d\bar{z}) =\sum_{n\in\mathbb{N}}\mu^{n}(d\bar{z}^{n}),\] \[\mu^{n}(d\bar{z}) =(\nu\otimes Leb_{\left\lfloor[n-1,n]\right\rfloor})\circ z^{-1}( \bar{z}^{n}),\] \[K(\bar{z};de,dy) =\mathbbm{1}_{\{\bar{z}_{1}=-1\}}(P^{1}\otimes P^{2}\otimes \text{Dirac}_{\{0\}})(de)\otimes\text{Dirac}_{\sum_{n\in\mathbb{N}}\bar{z}^{n,2}}(dy)\] \[\quad+\mathbbm{1}_{\{\bar{z}_{1}=1\}}(P^{1}\otimes P^{2}\otimes \text{Dirac}_{\{1\}})(de)\otimes\text{Dirac}_{\sum_{n\in\mathbb{N}}\bar{z}^{n,2}}(dy).\] Here, \(\bar{z}_{1}\) and \(\bar{z}_{2}\) denote the first and second component of the signal. ### Proofs **Lemma 2.1 - The uncontrolled model dynamics admit a unique solution** Proof of Lemma 2.1.: It suffices to rule out that the mutually-exciting dynamics lead to blow-ups. To this end, we prove that the expectation of the total variation \(V_{[0,t]}(\lambda)\) of the liquidity process \(\lambda\) on \([0,t]\) is bounded for each \(t\in[0,T]\). Let \(c_{1},c_{2}\) be the constants as in (4). 
We use that \(\eta,\rho\in L^{1}(\nu)\) to bound the expected variation of \(\lambda\) for every \(t\in[0,T]\) from above \[\mathbb{E}[V_{[0,t]}(\lambda)]\] \[\leq\mathbb{E}\left[\int_{[0,t]\times E\times\mathbb{R}_{+}} \mathbbm{1}_{\{y\leq g(\lambda_{s-})\}}|\rho(e)|N(ds,de,dy)+\int_{[0,t]\times E \times\mathbb{R}_{+}}\mathbbm{1}_{\{y\leq f(\lambda_{s-})\}}|\eta(e)|N(ds,de, dy)\right]\] \[=\int_{E}|\rho(e)|\nu(de)\mathbb{E}\left[\int_{0}^{t}g(\lambda_{s} )ds\right]+\int_{E}|\eta(e)|\nu(de)\mathbb{E}\left[\int_{0}^{t}f(\lambda_{s}) ds\right]\] \[\leq\left(\int_{E}|\rho(e)|\nu(de)+\int_{E}|\eta(e)|\nu(de) \right)\left(Tc_{1}+c_{2}T|\lambda_{0}|+\mathbb{E}\left[\int_{0}^{t}V_{[0,s]} (\lambda)ds\right]\right).\] Apply Gronwall's inequality to obtain the uniform upper bound for all \(t\in[0,T]\) \[\mathbb{E}[V_{[0,t]}(\lambda)]\] \[\leq \left(\int_{E}|\rho(e)|\nu(de)+\int_{E}|\eta(e)|\nu(de)\right)(Tc _{1}+c_{2}T|\lambda_{0}|)\exp\left(T\int_{E}|\rho(e)|\nu(de)+T\int_{E}|\eta(e) |\nu(de)\right).\] **Lemma 2.2** -- **Link between price volatility and liquidity of the market** Proof.: The quadratic variation of the price process has dynamics \[d[P]_{t}= \int_{E}\mathbbm{1}_{\{y\leq f(\lambda_{t-})\}}I(\eta(e),\lambda_{t -})^{2}N(dt,de,dy),\quad[P]_{0}=0.\] Hence, the dynamics of its predictable compensator coincide with (8). The derivative of the expression in (9) with respect to \(\lambda\) is negative if condition (10) holds; this proves the monotonicity claim. **Lemma 2.3** -- **Decomposition of \(C\)** Proof of Lemma 2.3.: It is easy to see that \(C\) of the form (13) is \(\Lambda\)-measurable and has bounded expected total variation. To prove the reverse, we have to show that for a \(\Lambda\)-measurable process \(C\) of bounded expected variation, there exists a \(\mathcal{P}(\mathcal{F})\otimes\mathcal{B}(\mathbb{R})\)-measurable field \(\Gamma\) which satisfies the integrability condition (15) and vanishes at zero such that (14) holds. From a monotone class argument, see (Bank and Korber, 2022, Lemma 2.2), we know that there exists a \(\mathcal{P}(\mathcal{F})\otimes\mathcal{B}(\mathbb{R})\)-measurable field \(\Gamma\) such that \(\Delta_{t}^{l}C=\Gamma_{t}(\Delta_{t}Z)\), \(t\in[0,T]\). Next, because \(C\) is of bounded variation, we conclude that \(\Gamma_{t}(0)=0\) for all \(t\in[0,T]\). Moreover, \(\Gamma\) satisfies (15) because \(C\) is of bounded total variation and \(\Delta Z_{s}=0\) for all but countably many times. We finish the proof by \[\Delta_{t}^{l}C=\Gamma_{t}(\Delta_{t}Z)=\int_{E\times\mathbb{R}_{+}}\Gamma_{t }(z(e,y))N(dt,de,dy),\quad t\in[0,T].\] **An upper bound for the available liquidity in the market with circuit breaker** Let us note that \[\bar{V}(T):=\lambda_{0}-\underline{\lambda}+\int_{[0,T]\times E\times\mathbb{R}_ {+}}\mathds{1}_{\{y\leq g(\underline{\lambda})\}}|\rho(e)|N(dt,de,dy) \tag{38}\] yields an upper bound for the liquidity to arrive at the market over the time period \([0,T]\). Here, \(g(\underline{\lambda})\) is an upper bound for \(g(\lambda)\) because we have \(\lambda\geq\underline{\lambda}\) and because the function \(g\) is decreasing. **Lemma 3.1** - Finite expected \(n\)-th moment of the total variation of liquidity taking orders in the market with circuit breaker Proof of Lemma 3.1.: Let \(\rho\in L^{n}(\nu)\) for some \(n\in\mathbb{N}\). 
First, use that market orders and cancellations are executed as long as liquidity remains greater than \(\underline{\lambda}\) to write \[\underline{\lambda}\leq\lambda_{T}=\lambda_{0}-V_{[0,T]}(Q^{C})-V_{[0,T]}( \tilde{M}^{C})-V_{[0,T]}(\tilde{L}^{C,-})+V_{[0,T]}(\tilde{L}^{C,+}), \tag{39}\] with \(\tilde{M}^{C}\), \(\tilde{L}^{C,+}\), \(\tilde{L}^{C,-}\) as in (28), (29) and (30). By rearranging (39), we find the uniform upper bound \[V_{[0,T]}(Q^{C})+V_{[0,T]}(\tilde{M}^{C})+V_{[0,T]}(\tilde{L}^{C,-})\] \[\leq\lambda_{0}-\underline{\lambda}+V_{[0,T]}(\tilde{L}^{C,+})\] \[\leq\lambda_{0}-\underline{\lambda}+\int_{[0,T]\times E\times \mathbb{R}_{+}}\mathds{1}_{\{y\leq g(\underline{\lambda})\}}|\rho(e)|N(dt,de, dy)=\overline{V}(T),\] with \(\bar{V}(T)\) as in (38). We apply the Cauchy Schwarz inequality and finish the proof: \[\mathbb{E}\left[\left(V_{[0,T]}(Q^{C})+V_{[0,T]}(\tilde{M}^{C})+ V_{[0,T]}(\tilde{L}^{C,-})\right)^{n}\right]\leq\mathbb{E}\left[\overline{V}(T)^{n}\right]\] \[\leq(n+1)(\lambda_{0}-\underline{\lambda})^{n}+(n+1)(T\nu(E)g( \underline{\lambda}))^{n-1}\mathbb{E}\left[\int_{[0,T]\times E\times[0,g( \underline{\lambda})]}|\rho(e)|^{n}N(dt,de,dy)\right]\] \[=(n+1)(\lambda_{0}-\underline{\lambda})^{n}+(n+1)(Tg(\underline{ \lambda}))^{n}\nu(E)^{n-1}\int_{E}|\rho(e)|^{n}\nu(de).\] **Preliminaries for the proofs of Proposition 3.2 and Theorem 3.3** Here, we provide an overview of estimates that are used to prove Lemma 3.1, Proposition 3.2 and Theorem 3.3. First, with \(\bar{V}\) as in (38) and by Lemma 3.1, we conclude that for every admissible strategy \(Q\in\mathcal{Q}\) \[V_{[0,T]}(Q^{C})\leq\overline{V}(T)\quad\text{ and }\quad\sup_{t\in[0,T]}|Q^{C}_{ t}|\leq|q|+\overline{V}(T). \tag{40}\] Because \(\iota\) is decreasing, we have for every \(\lambda\in\mathbb{R}\) and \(\Delta\in\mathbb{R}\) \[|I(\Delta,\lambda)|\leq\iota(\underline{\lambda})|\Delta|,\quad|\Xi(\Delta, \lambda)|\leq\iota(\underline{\lambda})|\Delta|^{2}. \tag{41}\] Thus, we conclude \[V_{[0,T]}(P^{C})\leq\iota(\underline{\lambda})\overline{V}(T)\quad\text{ and }\quad\sup_{t\in[0,T]}|P^{C}_{t}|\leq|p|+\iota(\underline{\lambda})\overline{V}(T). \tag{42}\] Next, for \(\lambda_{1},\lambda_{2}>\underline{\lambda}\) and \(\Delta,\Delta_{1},\Delta_{2}\in\mathbb{R}\), we suppose Lipschitz continuity of \(\iota\) with Lipschitz constant \(c_{t}\) to bound the difference of \(I\) as in (7) from above \[\begin{split}&|I(|\Delta|,\lambda_{1})-I(|\Delta|,\lambda_{2})| \leq c_{\iota}|\Delta||\lambda_{1}-\lambda_{2}|,\\ &|I(|\Delta_{1}|,\lambda_{1})-I(|\Delta_{2}|,\lambda_{2})|\leq c _{\iota}\min(|\Delta_{1}|,|\Delta_{2}|)|\lambda_{1}-\lambda_{2}|+\iota( \underline{\lambda})|\Delta_{1}-\Delta_{2}|.\end{split} \tag{43}\] Similarly, we have for \(\Xi\) as in (16) \[\begin{split}&|\Xi(\Delta,\lambda_{1})-\Xi(\Delta,\lambda_{2})| \leq c_{\iota}|\Delta|^{2}|\lambda_{1}-\lambda_{2}|,\\ &|\Xi(\Delta_{1},\lambda_{1})-\Xi(\Delta_{2},\lambda_{2})|\leq c _{\iota}\min(|\Delta_{1}|,|\Delta_{2}|)|\lambda_{1}-\lambda_{2}|+\iota( \underline{\lambda})\max(|\Delta_{1}|,|\Delta_{2}|)|\Delta_{1}-\Delta_{2}|. \end{split} \tag{44}\] **Lemma 4.1**.: _Let \(\rho\in L^{2}(\nu)\) and assume starting values \((\lambda,q,p,x)\) for \((\lambda^{C},Q^{C},P^{C},X^{C})\). 
There exists a uniform constant \(c=c(T,\lambda,q,p,x)>-\infty\) that depends continuously on its arguments such that_ \[\inf_{C\in\mathcal{C}}\mathbb{E}\left[U_{\alpha}(\tilde{X}^{C}_{T})\right]\geq c.\] Proof.: Let \((\lambda,q,p,x)\) be a set of starting values for \((\lambda^{C},Q^{C},P^{C},X^{C})\). For simplicity, we denote by \(c=c(T,q,\lambda,p,x)>0\) a generic constant that depends continuously on \(T,q,\lambda,p,x\) and that may change from line to line. First, with (40) and (42), we write for the cash position \(X^{C}_{T}\) from trading over the time horizon \([0,T]\): \[X^{C}_{T}\geq x-\sup_{t\in[0,T]}|P^{C}_{t}|V_{[0,T]}(Q^{C})-\zeta V_{[0,T]}(Q^ {C})-\iota(\underline{\lambda})V_{[0,T]}(Q^{C})^{2}. \tag{45}\] Similarly, we have for the cash from completing the execution programme at time \(T\): \[\begin{split}&-P^{C}_{T}Q^{C}_{T}-\sigma Y\operatorname{sgn}(Q^{C}_ {T})(|Q^{C}_{T}|-(\lambda^{C}_{T}-\underline{\lambda})^{+})^{+}-\zeta|Q^{C}_{ T}|-\Xi(|Q^{C}_{T}|,\lambda_{T})\\ \geq-(\sup_{t\in[0,T]}|P^{C}_{t}|+\sigma|Y|+\zeta)(|q|+V_{[0,T]}( Q^{C}))-\iota(\underline{\lambda})(|q|+V_{[0,T]}(Q^{C}))^{2}.\end{split} \tag{46}\] We aggregate (45) and (46) and apply the estimates from (40) and (42) to bound \(\tilde{X}^{C}_{T}\) from below \[\tilde{X}^{C}_{T}\geq-c(|Y|+\overline{V}(T)+|Y|\overline{V}(T)+\overline{V}( T)^{2}). \tag{47}\] For \(\alpha=0\), the claim follows using Lemma 3.1 for \(n=2\). For \(\alpha>0\), we use the monotonocity of \(U_{\alpha}\) together with (47) to estimate \[\begin{split}\mathbb{E}[U_{\alpha}(\tilde{X}^{C}_{T})]& \geq-\mathbb{E}\bigg{[}\exp\Big{(}\alpha c(|Y|+\overline{V}(T)+| Y|\overline{V}(T)+\overline{V}(T)^{2})\Big{)}\bigg{]}\\ &=-\mathbb{E}\bigg{[}\mathbb{E}\Big{[}\exp\Big{(}\alpha c(|Y|+| Y|\overline{V}(T))\Big{)}\Big{|}\overline{V}(T)\Big{]}\exp\Big{(}\alpha c( \overline{V}(T)+\overline{V}(T)^{2})\Big{)}\bigg{]}\\ &=-\mathbb{E}\bigg{[}2\Phi(\alpha c(1+\overline{V}(T)))\exp \Big{(}\frac{\alpha^{2}c^{2}}{2}(1+\overline{V}(T))^{2}\Big{)}\exp\Big{(} \alpha c(\overline{V}(T)+\overline{V}(T)^{2})\Big{)}\bigg{]},\end{split}\] where we use the moment generating function of a folded normal distribution and where \(\Phi\) is the cumulative distribution function of a Gaussian distribution. 
Next, use that \(\Phi\leq 1\) to write for some constant \(c\) \[\mathbb{E}[U_{\alpha}(\tilde{X}^{C}_{T})]\geq-c\,\mathbb{E}\bigg{[}\exp\Big{(}c (1+\overline{V}(T)^{2})\Big{)}\bigg{]}\] To estimate this expresssion, we write \[\mathbb{E}\Big{[}\exp\Big{(}c\overline{V}(T)^{2}\Big{)}\Big{]} =\mathbb{E}\left[\exp\Big{(}c\Big{(}(\lambda-\underline{\lambda})+ \int_{[0,T]\times E\times[0,g(\underline{\lambda})]}|\rho(e)|N(dt,de,dy)\Big{)} ^{2}\Big{)}\right]\] \[\leq\mathbb{E}\left[\exp\Big{(}3c\Big{(}(\lambda-\underline{ \lambda})^{2}+\Big{(}\int_{[0,T]\times E\times[0,g(\underline{\lambda})]}| \rho(e)|N(dt,de,dy)\Big{)}^{2}\Big{)}\Big{)}\right].\] Next, we apply the Cauchy Schwarz inequality and have with the Levy-Khintchine formula \[\mathbb{E}\left[\exp\Big{(}3c\Big{(}(\lambda-\underline{\lambda} )^{2}+\Big{(}\int_{[0,T]\times E\times[0,g(\underline{\lambda})]}|\rho(e)|N( dt,de,dy)\Big{)}^{2}\Big{)}\Big{)}\right]\] \[\leq\exp\big{(}3c(\lambda-\underline{\lambda})^{2}\big{)}\, \mathbb{E}\left[\exp\Big{(}3cTg(\underline{\lambda})\nu(E)\int_{[0,T]\times E \times[0,g(\underline{\lambda})]}|\rho(e)|^{2}N(dt,de,dy)\Big{)}\right]\] \[=\exp\big{(}3c(\lambda-\underline{\lambda})^{2}\big{)}\exp\Big{(} Tg(\underline{\lambda})\int_{E}\Big{[}\exp(3cTg(\underline{\lambda})\nu(E)|\rho(e)|^{2} )-1\Big{]}\nu(de)\right)<\infty,\] where we use (23). This finishes the proof. **Proposition 3.2** - **The value function is non-degenerate** We use Lemma 3.1 to prove that the value function as in (21) is non-degenerate, i.e., \(v<U_{\gamma}(+\infty)\). Proof of Proposition 3.2.: Let \((\lambda,q,p,x)\) be a set of starting values for \((\lambda^{C},Q^{C},P^{C},X^{C})\). By the concavity of \(U_{\alpha}\) and with Jensen's inequality, the claim follows if there exists a uniform, finite constant \(c=c(T,q,\lambda,p,x)\) such that \[\sup_{C\in\tilde{\mathcal{C}}}\mathbb{E}\Big{[}|\tilde{X}_{T}^{C}|\Big{]}\leq c.\] By the triangle inequality, it is hence sufficient to prove \[\sup_{C\in\tilde{\mathcal{C}}}\Big{(}\mathbb{E}\left[|X_{T}^{C}|\right]+ \mathbb{E}\left[|P_{T}^{C}||Q_{T}^{C}|\right]+\mathbb{E}\left[|\sigma Y||Q_{T} ^{C}|\right]+\mathbb{E}\left[\zeta|Q_{T}^{C}|\right]+\mathbb{E}\left[\Xi(|Q_{ T}^{C}|,\lambda_{T})\right]\Big{)}\leq c. \tag{48}\] For simplicity, we denote by \(c=c(T,q,\lambda,p,x)>0\) a generic constant that depends continuously on \(T,q,\lambda,p,x\) and that my change from line to line. We apply Lemma 3.1 with \(n=1,2\) to write \[\mathbb{E}\left[V_{[0,T]}(Q^{C})+V_{[0,T]}(M^{C})\right]+\mathbb{E}\left[ \big{(}V_{[0,T]}(Q^{C})+V_{[0,T]}(M^{C})\big{)}^{2}\right]\leq c. \tag{49}\] Next, by (40) and (49), we know for every \(C\in\tilde{\mathcal{C}}\) \[\mathbb{E}[|Q_{T}^{C}|]+\mathbb{E}[|Q_{T}^{C}|^{2}]\leq c. \tag{50}\] Similarly, by (41), (40) and by independence of \(Y\), we write \[\mathbb{E}[\sigma|Y||Q_{T}^{C}|]+\mathbb{E}[\zeta|Q_{T}^{C}|]+\mathbb{E}[| \Xi(|Q_{T}^{C}|,\lambda_{T})|]\leq c.\] With (42) and (49), we have \[\mathbb{E}\left[\sup_{t\in[0,T]}|P_{t}^{C}|\right]+\mathbb{E}\left[\sup_{t\in[ 0,T]}|P_{t}^{C}|^{2}\right]\leq c. 
\tag{51}\] Next, we use the Cauchy Schwarz inequality together with (50) and (51) to write \[\mathbb{E}[|P_{T}^{C}||Q_{T}^{C}|]\leq\mathbb{E}[|P_{T}^{C}|^{2}]^{\frac{1}{2}} \mathbb{E}[|Q_{T}^{C}|^{2}]^{\frac{1}{2}}\leq c.\] Finally, by (40), (41), (49), (51), and the Cauchy Schwarz inequality, we have \[\mathbb{E}[|X_{T}^{C}|]\leq x+\mathbb{E}\left[\sup_{t\in[0,T]}|P_{t}^{C}|V_{[0,T]}(Q^{C})+\zeta V_{[0,T]}(Q^{C})+\iota(\underline{\lambda})V_{[0,T]}(Q^{C}) ^{2}\right]\leq c.\] Aggregating the above estimates, we conclude (48). **Theorem 3.3** - The value function is continuous Proving continuity of the value function is rather involved in the present problem because neither the state nor the control space is bounded at any point in time, rendering standard arguments inapplicable. Also the circuit breaker and the Hawkes-like jump structures pose a challenge for the proof continuity. First, consider two sets of starting values \((\lambda^{\prime},q^{\prime},p,x)\) and \((\lambda^{\prime\prime},q^{\prime\prime},p,x)\) for \((\lambda_{t}^{C},Q_{t}^{C},P_{t}^{C},X_{t}^{C})\) and let \(T^{\prime},T^{\prime\prime}\geq 0\). For simplicity and without loss of generality, we treat the case where p=x=0, a generalization is straight forward using (36). Let \(\varepsilon>0\) and let \(C^{\prime}\) be an \(\varepsilon\)-optimal strategy for the values \((T^{\prime},\lambda^{\prime},q^{\prime},p,x)\), and \(C^{\prime\prime}\) some strategy for \((T^{\prime\prime},\lambda^{\prime\prime},q^{\prime\prime},p,x)\). By concavity of the utility function \(U_{\alpha}\), the first Taylor approximation is an upper bound for the difference of the value functions for \(\alpha>0\): \[\left(w(T^{\prime},\lambda^{\prime},q^{\prime})-w(T^{\prime\prime },\lambda^{\prime\prime},q^{\prime\prime})\right)=v(T^{\prime},\lambda^{\prime },q^{\prime},p,x)-v(T^{\prime\prime},\lambda^{\prime\prime},q^{\prime\prime},p,x)\] \[\leq\mathbb{E}\left[U_{\alpha}(\tilde{X}_{T^{\prime}}^{C^{\prime} })\right]+\varepsilon-\mathbb{E}\left[U_{\alpha}(\tilde{X}_{T^{\prime\prime}} ^{C^{\prime\prime}})\right]\leq\mathbb{E}\left[\alpha\exp(-\alpha\tilde{X}_{ T^{\prime\prime}}^{C^{\prime\prime}})(\tilde{X}_{T^{\prime}}^{C^{\prime}}- \tilde{X}_{T^{\prime\prime}}^{C^{\prime\prime}})\right]+\varepsilon.\] We rearrange the terms and apply the Cauchy Schwarz inequality to write \[w(T^{\prime},\lambda^{\prime},q^{\prime})-w(T^{\prime\prime},\lambda^{\prime \prime},q^{\prime\prime})\leq\exp(\alpha(x+pq))\alpha\mathbb{E}\left[\exp(-2 \alpha\tilde{X}_{T^{\prime\prime}}^{C^{\prime\prime}})\right]^{1/2}\mathbb{E} \left[(\tilde{X}_{T^{\prime}}^{C^{\prime}}-\tilde{X}_{T^{\prime\prime}}^{C^{ \prime\prime}})^{2}\right]^{1/2}+\varepsilon,\] From Lemma 4.1, we know that there exists a finite constant \(c>0\) depending continuously on \((T^{\prime\prime},\lambda^{\prime\prime},q^{\prime\prime})\) such that \[\mathbb{E}\left[\exp(-2\alpha\tilde{X}_{T^{\prime\prime}}^{C^{\prime\prime}}) \right]\leq c.\] Consequently, we obtain the estimate \[w(T^{\prime},\lambda^{\prime},q^{\prime})-w(T^{\prime\prime},\lambda^{\prime \prime},q^{\prime\prime})\leq c\mathbb{E}\left[(\tilde{X}_{T^{\prime}}^{C^{ \prime}}-\tilde{X}_{T^{\prime\prime}}^{C^{\prime\prime}})^{2}\right]^{1/2}+\varepsilon, \tag{52}\] where \(c>0\) is a finite constant depending continuously on \((T^{\prime\prime},\lambda^{\prime\prime},q^{\prime\prime})\). 
For \(\alpha=0\), we have the estimate \[w(T^{\prime},\lambda^{\prime},q^{\prime})-w(T^{\prime\prime},\lambda^{\prime \prime},q^{\prime\prime})\leq\mathbb{E}\left[\tilde{X}_{T^{\prime}}^{C^{\prime }}-\tilde{X}_{T^{\prime\prime}}^{C^{\prime\prime}}\right]+\varepsilon.\] **Lemma 4.2** (Continuity in \(q\)).: _Let the conditions of Theorem 3.3 hold. Then, for \(T,\lambda\) in a compact set, the value function \(w(T,\lambda,q)\) as in (36) is locally Lipschitz continuous in \(q\), i.e., for \(q^{\prime},q^{\prime\prime}\) in a compact set, we have \(|w(T,\lambda,q^{\prime})-w(T,\lambda,q^{\prime\prime})|\leq c|q^{\prime}-q^{ \prime\prime}|\) where the Lipschitz constant \(c\) does not depend of \(T,\lambda,q^{\prime},q^{\prime\prime}\)._ Proof.: For simplicity, we treat the case \(\alpha>0\); the case \(\alpha=0\) is proven analogously. Let \(T,\lambda\) be from some compact set and without loss of generality we consider two different starting values \(q^{\prime}\) and \(q^{\prime\prime}\) such that \(\max(|q^{\prime}|,|q^{\prime\prime}|)\leq c_{max}\) and assume as above that \(p=x=0\). Let \(\varepsilon>0\) and let \(C^{\prime}\) be an \(\varepsilon\)-optimal strategy for starting value \(q^{\prime}\). We define strategy \(C^{\prime\prime}\) for starting value \(q^{\prime\prime}\) to copy the trades of \(C^{\prime}\), i.e., \(C^{\prime\prime}:=C^{\prime}\) so that the trader with strategy \(C^{\prime\prime}\) executes the additional amount of \(q^{\prime\prime}-q^{\prime}\) at terminal time \(T\). Consequently, the liquidity processes \(\lambda^{C^{\prime}}\),\(\lambda^{C^{\prime\prime}}\), the hitting times \(\tau^{C^{\prime}}\),\(\tau^{C^{\prime\prime}}\), the price processes \(P^{C^{\prime}}\), \(P^{C^{\prime\prime}}\) and the cash processes \(X^{C^{\prime}}\), \(X^{C^{\prime\prime}}\) coincide and we have \(Q^{C^{\prime\prime}}=Q^{C^{\prime}}+(q^{\prime\prime}-q^{\prime})\). For simplicity, we denote by \(c=c(T,\lambda,c_{max})>0\) a generic constant that depends continuously on \(T,\lambda,c_{max}\) and that may change from line to line. With the Taylor estimate in (52), we have \[w(T,\lambda,q^{\prime})-w(T,\lambda,q^{\prime\prime})\] \[\leq c\,\mathbb{E}\Big{[}\Big{(}X^{C^{\prime}}_{T}+P^{C^{\prime}}_ {T}Q^{C^{\prime}}_{T}+\sigma Y\operatorname{sgn}(Q^{C^{\prime}}_{T})(|Q^{C^{ \prime}}_{T}|-(\lambda^{C^{\prime}}_{T}-\underline{\lambda})^{+})^{+}-\zeta|Q ^{C^{\prime}}_{T}|-\Xi(Q^{C^{\prime}}_{T},\lambda^{C^{\prime}}_{T})\] \[-\Big{(}X^{C^{\prime\prime}}_{T}+P^{C^{\prime\prime}}_{T}Q^{C^{ \prime\prime}}_{T}+\sigma Y\operatorname{sgn}(Q^{C^{\prime\prime}}_{T})(|Q^{C ^{\prime\prime}}_{T}|-(\lambda^{C^{\prime\prime}}_{T}-\underline{\lambda})^{+ })^{+}-\zeta|Q^{C^{\prime\prime}}_{T}|-\Xi(Q^{C^{\prime\prime}}_{T},\lambda^{C ^{\prime\prime}}_{T})\Big{)}\,\Big{)}^{2}\Big{]}^{1/2}\!\!+\varepsilon\] \[\leq 4c\,\mathbb{E}\Big{[}(P^{C^{\prime}}_{T})^{2}(q^{\prime \prime}-q^{\prime})^{2}+\sigma^{2}Y^{2}(q^{\prime}-q^{\prime\prime})^{2}\] \[\qquad\quad+\zeta^{2}(q^{\prime\prime}-q^{\prime})^{2}+\Big{(} \Xi(Q^{C^{\prime}}_{T}+(q^{\prime\prime}-q^{\prime}),\lambda^{C^{\prime}}_{T}) -\Xi(Q^{C^{\prime}}_{T},\lambda^{C^{\prime}}_{T})\Big{)}^{2}\,\Big{]}^{1/2}\! 
\!+\varepsilon.\] By (42) and Lemma 3.1 with \(n=2\), we have \[\mathbb{E}\Big{[}(P^{C^{\prime}}_{T})^{2}(q^{\prime\prime}-q^{\prime})^{2} \Big{]}\leq(q^{\prime\prime}-q^{\prime})^{2}\,\big{(}p^{2}+\iota(\underline{ \lambda})^{2}\mathbb{E}\,\big{[}\overline{V}(T)^{2}\big{]}\big{)}\leq c(q^{ \prime}-q^{\prime\prime})^{2}.\] Similarly, we know \(\mathbb{E}\Big{[}Y^{2}(q^{\prime\prime}-q^{\prime})^{2}\Big{]}\leq c(q^{ \prime\prime}-q^{\prime})^{2}\). Finally, by (44) and with Lemma (3.1) for \(n=2\), we write \[\mathbb{E}\Big{[}\Big{(}\Xi(Q^{C^{\prime}}_{T}+(q^{\prime\prime}-q^{\prime}), \lambda^{C^{\prime}}_{T})-\Xi(Q^{C^{\prime}}_{T},\lambda^{C^{\prime}}_{T}) \Big{)}^{2}\Big{]}\leq c(q^{\prime}-q^{\prime\prime})^{2}.\] Aggregating the above estimates, we obtain \[w(t,\lambda,q^{\prime})-w(t,\lambda,q^{\prime\prime})\leq c|q^{\prime}-q^{ \prime\prime}|+\varepsilon,\] for some finite constant \(c=c(T,\lambda,c_{max})>0\) that is continuous in the variables, i.e., does not depend on \(T,\lambda\) as long as these are from some compact set. Because \(\varepsilon\) was chosen arbitrarily and because the local Lipschitz constant \(c\) does not depend on \(\varepsilon\), we finish the proof by exchanging the roles of \(q^{\prime}\) and \(q^{\prime\prime}\). **Lemma 4.3**.: _Let the conditions of Theorem 3.3 hold. For \(T,q\) in a compact set, the function \(w(T,\lambda,q)\) as in (36) is locally \(\frac{1}{2}\)-Holder continuous in \(\lambda\in[\underline{\lambda},\infty)\), i.e., for \(\lambda^{\prime},\lambda^{\prime\prime}\) in a compact set with \(|\lambda^{\prime}-\lambda^{\prime\prime}|<1\), we have \(|w(T,\lambda^{\prime},q)-w(T,\lambda^{\prime\prime},q)|\leq c|\lambda^{ \prime}-\lambda^{\prime\prime}|^{1/2}\), where \(c\) does not depend on \(T,q,\lambda^{\prime},\lambda^{\prime\prime}\)._ Proof.: The argument for \(\alpha=0\) being similar, we treat the case \(\alpha>0\). Let \(T,q\) be fixed and let \(\lambda^{\prime},\lambda^{\prime\prime}>\underline{\lambda}\) be two starting values, where we can assume without loss of generality that \(\max(|\lambda^{\prime}|,|\lambda^{\prime\prime}|)\leq c_{max}\) and as before that \(p=x=0\). For simplicity, we denote by \(c=c(T,c_{max},q)>0\) a generic constant that depends continuously on \(T,c_{max},q\) and that may change from line to line. For the starting value \(\lambda^{\prime}\), we fix \(\varepsilon>0\) and choose \(C^{\prime}\) an \(\varepsilon\)-optimal strategy with predictable field \(\Gamma^{\prime}_{s}\), and impulses \(\Delta^{r}_{s}\tilde{C}^{\prime}\), see Lemma 2.3. We denote \((\lambda^{C^{\prime}}_{s})_{s\in[0,T]}\) the respective liquidity processes for starting value \(\lambda^{\prime}\) and strategies \(C^{\prime}\). 
Next, we denote by \(C^{\prime\prime}\) a strategy for starting value \(\lambda^{\prime\prime}\) with the corresponding liquidity process \((\lambda_{s}^{C^{\prime\prime}})_{s\in[0,T]}\), that is defined by the predictable field \(\Gamma^{\prime\prime}_{s}\) and impulses \(\Delta_{s}^{r}C^{\prime\prime}\) that are as follows: Strategy \(C^{\prime\prime}\) copies the trades of \(C^{\prime}\) as long as \(\lambda^{C^{\prime}}\) and \(\lambda^{C^{\prime\prime}}\) remain above the lower bound \(\underline{\lambda}\), i.e., \[\Gamma^{\prime\prime}_{s}:=\Gamma^{\prime}_{s},\hskip 28.452756pt\text{if } \lambda_{s-}^{C^{\prime}}-|\Gamma^{\prime}_{s}(\Delta_{s}Z)|\geq\underline{ \lambda}\text{ and }\lambda_{s-}^{C^{\prime\prime}}-|\Gamma^{\prime}_{s}(\Delta_{s}Z)| \geq\underline{\lambda}\] and similarly for the impulse \(\Delta_{s}^{r}C^{\prime\prime}\). If the trade of \(C^{\prime}\) would trigger the circuit breaker for the trader with strategy \(C^{\prime\prime}\), i.e., if \(\lambda^{C^{\prime\prime}}-|\Gamma^{\prime}_{s}(\Delta_{s}Z)|<\underline{\lambda}\), but \(C^{\prime}\) does not trigger it, \(C^{\prime\prime}\) depletes the available liquidity \(\lambda_{s-}^{C^{\prime\prime}}-\underline{\lambda}\) without triggering the circuit breaker: \[\Gamma^{\prime\prime}_{s}:=\text{sgn}(\Gamma^{\prime}_{s})(\lambda_{s-}^{C^{ \prime\prime}}-\underline{\lambda}),\hskip 28.452756pt\text{if }\lambda_{s-}^{C^{\prime}}-|\Gamma^{ \prime}_{s}|\geq\underline{\lambda},\text{ and }\lambda_{s-}^{C^{\prime\prime}}-| \Gamma^{\prime}_{s}|<\underline{\lambda},\] and similarly for the impulse \(\Delta_{s}^{r}C^{\prime\prime}\). Note that this is an admissible action by (26). Finally, in the case where the trader with strategy \(C^{\prime}\) triggers the circuit breaker, the trader with \(C^{\prime\prime}\) triggers the circuit breaker as well, i.e., \[\Gamma^{\prime\prime}_{s}:=\text{sgn}(\Gamma^{\prime}_{s})(\lambda_{s-}^{C^{ \prime\prime}}-\underline{\lambda}+\delta^{*}),\hskip 28.452756pt\text{if }\lambda_{s-}^{C^{ \prime}}-|\Gamma^{\prime}_{s}|<\underline{\lambda},\] where we w.l.o.g. set \(\delta^{*}=\delta\) when the trader trades in multiples of some lot size \(\delta\) and \(\delta^{*}=1\) for continuous trading. The impulse \(\Delta_{s}^{r}C^{\prime\prime}\) is defined analogously. The respective hitting times are \(\tau^{C^{\prime}}\) and \(\tau^{C^{\prime\prime}}\), the inventory processes are \(Q^{C^{\prime}}\), \(Q^{C^{\prime\prime}}\), the price processes are \(P^{C^{\prime}}\), \(P^{C^{\prime\prime}}\), and the cash processes are \(X^{C^{\prime}}\), \(X^{C^{\prime\prime}}\). Next, we consider the set \(A\) where the external shocks in \(\lambda_{s}^{C^{\prime}}\) and \(\lambda_{s}^{C^{\prime\prime}}\) are different: \[\begin{split} A:=\Bigg{\{}\int_{[0,\tau^{C^{\prime}}]\times E \times\mathbb{R}_{+}}\Big{[}\mathbbm{1}_{\{f(\lambda_{s-}^{C^{\prime}})\wedge f (\lambda_{s-}^{C^{\prime\prime}})<y\leq f(\lambda_{s-}^{C^{\prime}})\lor f( \lambda_{s-}^{C^{\prime\prime}})\}}\\ +\mathbbm{1}_{\{g(\lambda_{s-}^{C^{\prime}})\wedge g(\lambda_{s- }^{C^{\prime\prime}})<y\leq g(\lambda_{s-}^{C^{\prime}})\lor g(\lambda_{s-}^{C^ {\prime\prime}})\}}\Big{]}N(ds,de,dy)\geq 1\Bigg{\}}.\end{split} \tag{53}\] By definition of \(C^{\prime\prime}\), we conclude that on its complement \(A^{c}\), both liquidity processes \(\lambda_{s}^{C^{\prime}}\) and \(\lambda_{s}^{C^{\prime}}\) reach the lower bound at the same time, i.e., we have \(\mathbbm{1}_{A^{c}}\tau^{C^{\prime}}=\mathbbm{1}_{A^{c}}\tau^{C^{\prime\prime}}\). 
Similarly, we have \[\mathbbm{1}_{A^{c}}|\lambda_{s+}^{C^{\prime}}-\lambda_{s+}^{C^{\prime\prime}}| \leq\mathbbm{1}_{A^{c}}|\lambda_{s-}^{C^{\prime}}-\lambda_{s-}^{C^{\prime \prime}}|\leq|\lambda^{\prime}-\lambda^{\prime\prime}|,\hskip 28.452756pts\in[0, \tau^{C^{\prime}}). \tag{54}\] For the general case, using (54) and the Lipschitz continuity of \(f\) and \(g\), we have for \(t\leq T\) \[\mathbb{E}\left[\sup_{s\in[0,t\wedge\tau C^{\prime}\wedge\tau C^{ \prime\prime}]}|\lambda^{C^{\prime}}_{s-}-\lambda^{C^{\prime\prime}}_{s-}|\right]\] \[\leq|\lambda^{\prime}-\lambda^{\prime\prime}|+\mathbb{E}\bigg{[} \int_{[0,t\wedge\tau C^{\prime}\wedge\tau C^{\prime\prime}]\times E\times \mathbb{R}_{+}}\Big{[}\mathbbm{1}_{\{f(\lambda^{C^{\prime}}_{s-})\wedge f( \lambda^{C^{\prime\prime}}_{s-})<y\leq f(\lambda^{C^{\prime}}_{s-})\lor f( \lambda^{C^{\prime\prime}}_{s-})\}}|\eta(e)|\] \[\qquad\qquad\qquad\qquad+\mathbbm{1}_{\{g(\lambda^{C^{\prime}}_{s -})\wedge g(\lambda^{C^{\prime}}_{s-})<y\leq g(\lambda^{C^{\prime}}_{s-})\lor g (\lambda^{C^{\prime\prime}}_{s-})\}}|\rho(e)|\Big{]}N(ds,de,dy)\bigg{]}\] \[\leq|\lambda^{\prime}-\lambda^{\prime\prime}|+\mathbb{E}\bigg{[} \int_{[0,t\wedge\tau C^{\prime}\wedge\tau C^{\prime\prime}]}\Big{(}|f(\lambda^ {C^{\prime}}_{s-})-f(\lambda^{C^{\prime\prime}}_{s-})|\int_{E}|\eta(e)|\nu( de)\] \[\qquad\qquad\qquad\qquad+|g(\lambda^{C^{\prime}}_{s-})-g(\lambda ^{C^{\prime\prime}}_{s-})|\int_{E}|\rho(e)|\nu(de)\Big{)}ds\bigg{]}\] \[\leq|\lambda^{\prime}-\lambda^{\prime\prime}|+c\Big{(}\int_{E}| \eta(e)|\nu(de)+\int_{E}|\rho(e)|\nu(de)\Big{)}\mathbb{E}\bigg{[}\int_{[0,t \wedge\tau C^{\prime}\wedge\tau C^{\prime\prime}]}|\lambda^{C^{\prime}}_{s-}- \lambda^{C^{\prime\prime}}_{s-}|ds\bigg{]}\] \[\leq|\lambda^{\prime}-\lambda^{\prime\prime}|+c\Big{(}\int_{E}| \eta(e)|\nu(de)+\int_{E}|\rho(e)|\nu(de)\Big{)}\int_{[0,t]}\mathbb{E}\bigg{[} \sup_{r\in[0,s\wedge\tau C^{\prime}\wedge\tau C^{\prime\prime}]}|\lambda^{C^{ \prime}}_{r-}-\lambda^{C^{\prime\prime}}_{r-}|\bigg{]}ds.\] Applying Gronwall's inequality, we obtain \[\mathbb{E}\left[\sup_{s\in[0,T\wedge\tau C^{\prime}\wedge\tau C^{\prime\prime }]}|\lambda^{C^{\prime}}_{s-}-\lambda^{C^{\prime\prime}}_{s-1}\right]\leq c^{ *}|\lambda^{\prime}-\lambda^{\prime\prime}|, \tag{55}\] for some finite constant \(c^{*}>0\) that is independent of \(T\), \(q\), \(\lambda^{\prime}\), and \(\lambda^{\prime\prime}\) for \(T,q,\lambda^{\prime},\lambda^{\prime\prime}\) from a compactum. 
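For reference, the last step uses Gronwall's inequality in its standard integral form: if a nonnegative function \(u\) satisfies \(u(t)\leq a+b\int_{0}^{t}u(s)\,ds\) for all \(t\in[0,T]\) with constants \(a,b\geq 0\), then \(u(t)\leq a\,e^{bt}\) for all \(t\in[0,T]\). Here it is applied with \(u(t)=\mathbb{E}\big[\sup_{r\in[0,t\wedge\tau^{C^{\prime}}\wedge\tau^{C^{\prime\prime}}]}|\lambda^{C^{\prime}}_{r-}-\lambda^{C^{\prime\prime}}_{r-}|\big]\), \(a=|\lambda^{\prime}-\lambda^{\prime\prime}|\), and \(b\) equal to the constant in front of the time integral, so that \(c^{*}\) can be taken of the form \(e^{bT}\), which is bounded for \(T\) in a compact set.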
With the Taylor estimate (52), we write with \(A\) as in (53) \[w(T,\lambda^{\prime},q)-w(T,\lambda^{\prime\prime},q)\] \[\leq c\mathbb{E}\bigg{[}\Big{(}X^{C^{\prime}}_{T}-X^{C^{\prime \prime}}_{T}+P^{C^{\prime}}_{T}Q^{C^{\prime}}_{T}-P^{C^{\prime\prime}}_{T}Q^{C ^{\prime\prime}}_{T}+\zeta(|Q^{C^{\prime}}_{T}-Q^{C^{\prime\prime}}_{T}|)+( \Xi(Q^{C^{\prime}}_{T},\lambda^{C^{\prime}}_{T})-\Xi(Q^{C^{\prime\prime}}_{T}, \lambda^{C^{\prime\prime}}_{T}))\] \[+\sigma Y\big{(}\operatorname{sgn}(Q^{C^{\prime}}_{T})(|Q^{C^{ \prime}}_{T}|-(\lambda^{C^{\prime}}_{T}-\underline{\lambda})^{+})^{+}- \operatorname{sgn}(Q^{C^{\prime\prime}}_{T})(|Q^{C^{\prime\prime}}_{T}|-( \lambda^{C^{\prime\prime}}_{T}-\underline{\lambda})^{+})^{+}\big{)}\Big{)}^{2} \Big{(}\mathbbm{1}_{A}+\mathbbm{1}_{A^{c}}\Big{)}\bigg{]}^{1/2}+\varepsilon.\] For the expectation over \(A\), we use the Cauchy Schwarz inequality to obtain the upper bound \[\mathbb{E}\bigg{[}\Big{(}X^{C^{\prime}}_{T}-X^{C^{\prime\prime}}_ {T}+P^{C^{\prime}}_{T}Q^{C^{\prime}}_{T}-P^{C^{\prime\prime}}_{T}Q^{C^{\prime \prime}}_{T}+\zeta(|Q^{C^{\prime}}_{T}-Q^{C^{\prime\prime}}_{T}|)+(\Xi(Q^{C^{ \prime}}_{T},\lambda^{C^{\prime}}_{T})-\Xi(Q^{C^{\prime\prime}}_{T},\lambda^{C^{ \prime\prime}}_{T})) \tag{56}\] We apply Lemma 3.1 for \(n=8\) to conclude \[\mathbb{E}\bigg{[}\Big{(}X^{C^{\prime}}_{T}-X^{C^{\prime\prime}}_{ T}+P^{C^{\prime}}_{T}Q^{C^{\prime}}_{T}-P^{C^{\prime\prime}}_{T}Q^{C^{\prime \prime}}_{T}+\zeta(|Q^{C^{\prime}}_{T}-Q^{C^{\prime\prime}}_{T}|)+(\Xi(Q^{C^{ \prime}}_{T},\lambda^{C^{\prime}}_{T})-\Xi(Q^{C^{\prime\prime}}_{T},\lambda^{C^{ \prime\prime}}_{T}))\] \[\qquad+\sigma Y\big{(}\operatorname{sgn}(Q^{C^{\prime}}_{T})(|Q^{ C^{\prime}}_{T}|-(\lambda^{C^{\prime}}_{T}-\underline{\lambda})^{+})^{+}- \operatorname{sgn}(Q^{C^{\prime\prime}}_{T})(|Q^{C^{\prime\prime}}_{T}|-( \lambda^{C^{\prime\prime}}_{T}-\underline{\lambda})^{+})^{+}\big{)}\Big{)}^{4} \bigg{]}\leq c.\] Next, we use Markov's inequality, the Lipschitz continuity of \(f\) and \(g\) and (55) to write \[\mathbb{E}\left[\mathbbm{1}_{A}\right]\] \[\leq\mathbb{E}\Bigg{[}\int_{[0,T]\times E\times\mathbbm{R}_{+}^{ \left[\mathbbm{1}_{+}\{f(\lambda_{s-}^{C^{\prime}})\wedge f(\lambda_{s-}^{C^{ \prime\prime}})<y\leq f(\lambda_{s-}^{C^{\prime}})\lor f(\lambda_{s-}^{C^{ \prime\prime}})\right\}}+\mathbbm{1}_{\{g(\lambda_{s-}^{C^{\prime}})\wedge g( \lambda_{s-}^{C^{\prime\prime}})<y\leq g(\lambda_{s-}^{C^{\prime}})\lor g( \lambda_{s-}^{C^{\prime\prime}})\}}]N(ds,de,dy)\Bigg{]}\] \[=\nu(E)\mathbb{E}\left[\int_{[0,T]}\Big{[}|f(\lambda_{s-}^{C^{ \prime}})-f(\lambda_{s-}^{C^{\prime\prime}})|+|g(\lambda_{s-}^{C^{\prime}})-g (\lambda_{s-}^{C^{\prime\prime}})|\Big{]}ds\right]\] \[\leq c\mathbb{E}\left[\sup_{s\in[0,T]}|\lambda_{s-}^{C^{\prime}}- \lambda_{s-}^{C^{\prime\prime}}|\right]\leq c|\lambda^{\prime}-\lambda^{ \prime\prime}|.\] Consequently, (56) is bounded from above by \(c|\lambda^{\prime}-\lambda^{\prime\prime}|^{1/2}\). Next, on the set \(A^{c}\), recall that \(\tau^{C^{\prime}}=\tau^{C^{\prime\prime}}\) and that we know with (54) \[\mathbbm{1}_{A^{c}}|\lambda_{s}^{C^{\prime}}-\lambda_{s}^{C^{\prime\prime}}| \leq|\lambda^{\prime}-\lambda^{\prime\prime}|\quad\text{ for }s\in[0,T\wedge\tau^{C^{\prime}}]. 
\tag{57}\] Moreover, by definition of \(C^{\prime\prime}\), the differences of the trades in \(\tilde{C}^{\prime}\) and \(\tilde{C}^{\prime\prime}\) sum up to at most \(|\lambda_{s}^{C^{\prime}}-\lambda_{s}^{C^{\prime\prime}}|\) so that with (57) we have \[\mathbbm{1}_{A^{c}}\bigg{(}\sum_{0\leq s\leq T}|\Upsilon(\Delta_{s}^{l}\tilde{ C}^{\prime},\lambda_{s-}^{C^{\prime}})-\Upsilon(\Delta_{s}^{l}\tilde{C}^{ \prime\prime},\lambda_{s-}^{C^{\prime\prime}})|+\sum_{0\leq s<T}|\Upsilon( \Delta_{s}^{r}\tilde{C}^{\prime},\lambda_{s}^{C^{\prime}})-\Upsilon(\Delta_{s} ^{r}\tilde{C}^{\prime\prime},\lambda_{s}^{C^{\prime\prime}})|\bigg{)}\leq| \lambda^{\prime}-\lambda^{\prime\prime}|. \tag{58}\] With (40), (57), (58) and (43), we conclude \[\mathbbm{1}_{A^{c}}\sup_{s\in[0,T]}|P_{s}^{C^{\prime}}-P_{s}^{C^{\prime\prime} }|^{2}\leq c|\lambda^{\prime}-\lambda^{\prime\prime}|^{2}\left(1+\overline{V} ^{2}\right).\] Now, use the triangle inequality to estimate \[\mathbb{E}[\mathbbm{1}_{A^{c}}|X_{T}^{C^{\prime}}-X_{T}^{C^{ \prime\prime}}|^{2}]\] \[\leq\mathbb{E}\bigg{[}\mathbbm{1}_{A^{c}}\Big{(}\sup_{s\in[0,T]}| P_{s}^{C^{\prime}}-P_{s}^{C^{\prime\prime}}|\overline{V}\] \[\quad+(\sup_{s\in[0,T]}|P_{s}^{C^{\prime}}|+\zeta)\!\!\sum_{0\leq s \leq T}\!\!\!\Big{(}|\Upsilon(\Delta_{s}^{l}\tilde{C}^{\prime},\lambda_{s-}^{ C^{\prime}})-\Upsilon(\Delta_{s}^{l}\tilde{C}^{\prime\prime},\lambda_{s-}^{C^{ \prime\prime}})|+|\Upsilon(\Delta_{s}^{r}\tilde{C}^{\prime},\lambda_{s}^{C^{ \prime}})-\Upsilon(\Delta_{s}^{r}\tilde{C}^{\prime\prime},\lambda_{s}^{C^{ \prime\prime}})|\Big{)}\] \[\quad+\sum_{0\leq s\leq T}|\Xi(\Upsilon(\Delta_{s}^{l}\tilde{C}^{ \prime},\lambda_{s-}^{C^{\prime}}),\lambda_{s-}^{C^{\prime}})-\Xi(\Upsilon( \Delta_{s}^{l}\tilde{C}^{\prime\prime},\lambda_{s-}^{C^{\prime\prime}}), \lambda_{s-}^{C^{\prime\prime}})|\] \[\quad+\sum_{0\leq s<T}|\Xi(\Upsilon(\Delta_{s}^{r}\tilde{C}^{ \prime},\lambda_{s}^{C^{\prime}}),\lambda_{s}^{C^{\prime}})-\Xi(\Upsilon( \Delta_{s}^{r}\tilde{C}^{\prime\prime},\lambda_{s}^{C^{\prime\prime}}), \lambda_{s}^{C^{\prime\prime}})|\Big{)}^{2}\bigg{]}.\] With (44), (55), (57),(58), and Lemma 3.1 for \(n=2,4\), we thus have \[\mathbb{E}[\mathbbm{1}_{A^{c}}|X_{T}^{C^{\prime}}-X_{T}^{C^{\prime\prime}}|^{ 2}]\leq c|\lambda^{\prime}-\lambda^{\prime\prime}|^{2}\mathbb{E}\Big{[} \overline{V}(T)^{2}+\overline{V}(T)^{4}\Big{]}\leq c|\lambda^{\prime}-\lambda^{ \prime\prime}|^{2},\] With Lemma 3.1 for \(n=4\) and (58), we write \[\mathbbm{1}_{A^{c}}|Q_{T}^{C^{\prime}}-Q_{T}^{C^{\prime\prime}}|^{ 4}\leq c|\lambda^{\prime}-\lambda^{\prime\prime}|^{4},\] \[\mathbbm{1}_{A^{c}}(|Q_{T}^{C^{\prime}}|^{4}+|Q_{T}^{C^{\prime \prime}}|^{4})\leq c(q^{4}+\overline{V}(T)^{4}),\] and by (41) and (43), we know \[\mathbbm{1}_{A^{c}}|P_{T}^{C^{\prime}}-P_{T}^{C^{\prime\prime}}|^{4} \leq c|\lambda^{\prime}-\lambda^{\prime\prime}|^{4}(1+\overline{V}(T)^{4}),\] \[\mathbbm{1}_{A^{c}}(|P_{t}^{C^{\prime}}|^{4}+|P_{t}^{C^{\prime \prime}}|^{4})\leq c(p^{4}+\overline{V}(T)^{4}).\] Hence, we have with the Cauchy Schwarz inequality \[\mathbb{E}\Big{[}\mathbbm{1}_{A^{c}}(P_{T}^{C^{\prime}}Q_{T}^{C^ {\prime}}-P_{T}^{C^{\prime\prime}}Q_{T}^{C^{\prime\prime}})^{2}\] \[\qquad+\mathbbm{1}_{A^{c}}\mathbbm{1}_{\{{}_{T^{C^{\prime}}\leq T }\}}\sigma^{2}Y^{2}\big{(}(|Q_{T}^{C^{\prime}}|-(\lambda_{T}^{C^{\prime}}- \underline{\lambda})^{+})^{+}-(|Q_{T}^{C^{\prime\prime}}|-(\lambda_{T}^{C^{ \prime\prime}}-\underline{\lambda})^{+})^{2}\big{)}\] \[\leq\mathbb{E}\bigg{[}\mathbbm{1}_{A^{c}}\Big{[}\max(Q_{T}^{C^{ 
\prime}},Q_{T}^{C^{\prime\prime}})^{2}(P_{T}^{C^{\prime}}-P_{T}^{C^{\prime\prime}})^{2}+\max(P_{T}^{C^{\prime}},P_{T}^{C^{\prime\prime}})^{2}(Q_{T}^{C^{\prime}}-Q_{T}^{C^{\prime\prime}})^{2}\] \[\qquad\qquad+\mathbbm{1}_{\{\tau^{C^{\prime}}\leq T\}}\sigma^{2}Y^{2}(Q_{T}^{C^{\prime}}-Q_{T}^{C^{\prime\prime}})^{2}+\mathbbm{1}_{\{\tau^{C^{\prime}}=T\}\cap\{\lambda_{T}^{C^{\prime}}\geq\underline{\lambda},\lambda_{T}^{C^{\prime\prime}}\geq\underline{\lambda}\}}\sigma^{2}Y^{2}(\lambda_{T}^{C^{\prime}}-\lambda_{T}^{C^{\prime\prime}})^{2}\Big{]}\bigg{]}\] \[\leq c(|\lambda^{\prime}-\lambda^{\prime\prime}|^{2}+|\lambda^{\prime}-\lambda^{\prime\prime}|^{4}),\] where we use Lemma 3.1 for \(n=2,4\) and where \(\{\tau^{C^{\prime}}=T\}\cap\{\lambda_{T}^{C^{\prime}}\geq\underline{\lambda},\lambda_{T}^{C^{\prime\prime}}\geq\underline{\lambda}\}\) accounts for the case when the circuit breaker is triggered by the final execution at terminal time \(T\) and not through a possible signal-based trade \(\Delta_{T}^{l}C\). Similarly, by (44), Lemma 3.1 for \(n=2\) and (58), we have \[\mathbb{E}\left[\mathbbm{1}_{A^{c}}\Big{(}\Xi(Q_{T}^{C^{\prime}},\lambda_{T}^{C^{\prime}})-\Xi(Q_{T}^{C^{\prime\prime}},\lambda_{T}^{C^{\prime\prime}})\Big{)}^{2}\right]\leq c|\lambda^{\prime}-\lambda^{\prime\prime}|^{2}.\] Finally, we aggregate the above estimates to obtain the upper bound \[\mathbb{E}\bigg{[}\Big{(}X_{T}^{C^{\prime}}-X_{T}^{C^{\prime\prime}}+P_{T}^{C^{\prime}}Q_{T}^{C^{\prime}}-P_{T}^{C^{\prime\prime}}Q_{T}^{C^{\prime\prime}}+\zeta(|Q_{T}^{C^{\prime}}-Q_{T}^{C^{\prime\prime}}|)+(\Xi(Q_{T}^{C^{\prime}},\lambda_{T}^{C^{\prime}})-\Xi(Q_{T}^{C^{\prime\prime}},\lambda_{T}^{C^{\prime\prime}}))\] \[\quad+\sigma Y\big{(}\operatorname{sgn}(Q_{T}^{C^{\prime}})(|Q_{T}^{C^{\prime}}|-(\lambda_{T}^{C^{\prime}}-\underline{\lambda})^{+})^{+}-\operatorname{sgn}(Q_{T}^{C^{\prime\prime}})(|Q_{T}^{C^{\prime\prime}}|-(\lambda_{T}^{C^{\prime\prime}}-\underline{\lambda})^{+})^{+}\big{)}\Big{)}^{2}\mathbbm{1}_{A^{c}}\bigg{]}^{1/2}\] \[\leq c(|\lambda^{\prime}-\lambda^{\prime\prime}|^{2}+|\lambda^{\prime}-\lambda^{\prime\prime}|^{4})^{1/2}.\] Together with the estimate for (56), we have \[w(T,\lambda^{\prime},q)-w(T,\lambda^{\prime\prime},q)\leq c(|\lambda^{\prime}-\lambda^{\prime\prime}|+|\lambda^{\prime}-\lambda^{\prime\prime}|^{4})^{1/2}+\varepsilon.\] We use that \(\varepsilon\) was chosen arbitrarily and that \(c\) does not depend on \(\varepsilon\), and exchange the roles of \(\lambda^{\prime}\) and \(\lambda^{\prime\prime}\) to conclude \[|w(T,\lambda^{\prime},q)-w(T,\lambda^{\prime\prime},q)|\leq c(|\lambda^{\prime}-\lambda^{\prime\prime}|+|\lambda^{\prime}-\lambda^{\prime\prime}|^{4})^{1/2}.\] Here, for \(T,q\) in some compactum, the constant \(c=c(T,c_{max},q)\) can be chosen independently of the values of \(T,q\) because it is continuous in its variables. This finishes the proof. Next, we observe monotonicity of the value function with respect to the remaining time horizon \(T\). **Lemma 4.4**.: _Let the conditions of Theorem 3.3 hold. The function \(w(T,\lambda,q)\) as in (36) is monotonically increasing in the remaining time horizon \(T\)._ Proof.: Let \(0\leq T^{\prime}\leq T^{\prime\prime}\). Let \(C^{\prime}\) be some strategy for time horizon \(T^{\prime}\). We define the strategy \(C^{\prime\prime}\) for time horizon \(T^{\prime\prime}\) by \(\Gamma_{s}^{\prime\prime}:=\Gamma_{s}^{\prime}\) for \(s\in[0,T^{\prime}]\) and \(\Delta_{s}^{r}C^{\prime\prime}:=\Delta_{s}^{r}C^{\prime}\) for \(s\in[0,T^{\prime})\).
At time \(T^{\prime}\), the trader with strategy \(C^{\prime\prime}\) executes her remaining position through an impulse trade of \(\Delta^{r}_{T^{\prime}}C^{\prime\prime}:=-Q^{C^{\prime\prime}}_{T^{\prime}}\). Both strategies result in the same utility from terminal wealth so that we conclude \[w(T^{\prime},\lambda,q)\leq w(T^{\prime\prime},\lambda,q).\] **Lemma 4.5**.: _Let the conditions of Theorem 3.3 hold. Then, for \(\lambda,q\) in a compact set, the value function \(w(T,\lambda,q)\) as in (36) is locally Lipschitz continuous in \(T\), i.e., for \(T^{\prime}\), \(T^{\prime\prime}\) in a compact set, we have \(|w(T^{\prime},\lambda,q)-w(T^{\prime\prime},\lambda,q)|\leq c|T^{\prime}-T^{\prime\prime}|\), where the Lipschitz constant \(c\) does not depend on \(T^{\prime},T^{\prime\prime},\lambda,q\)._ Proof.: For simplicity, we prove the claim for \(\alpha>0\); the case \(\alpha=0\) is treated analogously. Let \(\lambda,q\) be starting values from some compact set, and let \(0<T^{\prime\prime},T^{\prime}\leq\hat{T}\). Without loss of generality, we consider the case \(T^{\prime\prime}\leq T^{\prime}\) and assume as before that \(p=x=0\). For simplicity, we denote by \(c=c(\hat{T},\lambda,q)>0\) a generic constant that depends continuously on \(\hat{T},\lambda,q\) and that may change from line to line. For some arbitrary \(\varepsilon>0\) and for the remaining time horizon \(T^{\prime}\), let \(C^{\prime}\) be an \(\varepsilon\)-optimal strategy. By a density argument, it suffices to prove the claim for \(C^{\prime}\) that only changes in jumps, i.e., for which \((C^{\prime})^{c}=0\). For the time horizon \(T^{\prime\prime}\), we define the strategy \(C^{\prime\prime}\) by \(C^{\prime\prime}_{s}:=C^{\prime}_{s}\) for \(s\in[0,T^{\prime\prime}]\), so that the trader with strategy \(C^{\prime\prime}\) copies the trades of strategy \(C^{\prime}\). At time \(T^{\prime\prime}\), the trader with strategy \(C^{\prime\prime}\) executes the remaining position \(Q^{C^{\prime\prime}}_{T^{\prime\prime}}=Q^{C^{\prime}}_{T^{\prime\prime}}\) all at once, while the trader with strategy \(C^{\prime}\) liquidates the same position over the time horizon \([T^{\prime\prime},T^{\prime}]\). With the monotonicity from Lemma 4.4, we have \[0\leq|w(T^{\prime},\lambda,q)-w(T^{\prime\prime},\lambda,q)|=w(T^{\prime},\lambda,q)-w(T^{\prime\prime},\lambda,q)\leq\mathbb{E}\left[U_{\alpha}(\tilde{X}^{C^{\prime}}_{T^{\prime}})\right]+\varepsilon-\mathbb{E}\left[U_{\alpha}(\tilde{X}^{C^{\prime\prime}}_{T^{\prime\prime}})\right].
\tag{59}\] We start by recalling the definition from (19) \[\begin{split}\tilde{X}^{C^{\prime}}_{T^{\prime}}&=X^{ C^{\prime}}_{T^{\prime\prime}}+\sum_{T^{\prime\prime}<s\leq T^{\prime}} \left(-P^{C^{\prime}}_{s-}\Delta^{l}_{s}Q^{C^{\prime}}-\zeta|\Delta^{l}_{s}Q^{ C^{\prime}}|-\Xi(\Delta^{l}_{s}Q^{C^{\prime}},\lambda^{C^{\prime}}_{s-}) \right)\\ &+\sum_{T^{\prime\prime}\leq s<T^{\prime}}\left(-P^{C^{\prime}}_{ s}\Delta^{r}_{s}Q^{C^{\prime}}-\zeta|\Delta^{r}_{s}Q^{C^{\prime}}|-\Xi(\Delta^{r}_{s} Q^{C^{\prime}},\lambda^{C^{\prime}}_{s})\right)\\ &+P^{C^{\prime}}_{T^{\prime}}Q^{C^{\prime}}_{T^{\prime}}-\zeta\left| Q^{C^{\prime}}_{T^{\prime}}\right|-\Xi\left(Q^{C^{\prime}}_{T^{\prime}}, \lambda^{C^{\prime}}_{T^{\prime}}\right)+\sigma Y\operatorname{sgn}(Q^{C^{ \prime}}_{T^{\prime}})\left(\left|Q^{C^{\prime}}_{T^{\prime}}\right|-(\lambda^ {C^{\prime}}_{T^{\prime}}-\underline{\lambda})^{+}\right)^{+}.\end{split} \tag{60}\] Next, as illustrated in Figure 11, we note that we can decompose the strategy \(C^{\prime}\) that executes the position \(Q^{C^{\prime}}_{T^{\prime\prime}}\) over the time interval \([T^{\prime\prime},T^{\prime}]\) into a strategy that executes the position \(Q^{C^{\prime}}_{T^{\prime\prime}}\) in a monotone way (red dashed line) and into roundtrip trades (gray areas). Here, we consider a roundtrip trade to be a trade which is reversed by (parts of) the next one. Particularly, we know that the trades that are part of the monotone strategy are of the opposite sign as \(Q^{C^{\prime}}_{T^{\prime\prime}}\) and sum up to \(-Q^{C^{\prime}}_{T^{\prime\prime}}\). Moreover, for a roundtrip trade that is started at some time \(s\in[T^{\prime\prime},T^{\prime}]\), we know that its trade is of the same sign as the current inventory \(Q^{C^{\prime}}_{s}\) respectively any sign if \(Q^{C^{\prime}}_{s}=0\). To bound \(\tilde{X}^{C^{\prime}}_{T^{\prime}}\) pathwise from above, we start by considering the best possible price development over the time horizon \([T^{\prime\prime},T^{\prime}]\) with respect to a trader's market order of sign \(\pm\). On the one hand, the favorable price impacts can come from external market orders that trade in the opposite direction, i.e., that are of sign \(\mp\). The absolute value of the price impact of such market orders is bounded by \[|I(\mp V_{[T^{\prime\prime},T^{\prime}]}(\tilde{M}^{C^{\prime}}), \underline{\lambda})|\leq\iota(\underline{\lambda})V_{[T^{\prime\prime},T^{ \prime}]}(\tilde{M}^{C^{\prime}}). \tag{61}\] On the other hand, favorable price impacts can come from the trader's own roundtrip trades. Particularly, this is the case if the price impact from unwinding a roundtrip trade is in absolute value less than the price impact of the trade starting the roundtrip. 
For a trader's market order of sign \(\pm\) that is sent at some point in time \(t\in[T^{\prime\prime},T^{\prime}]\), this means that her own roundtrip from before time \(t\) have a favorable price impact if \[\Big{|}I\Big{(}\mp\sum_{\Delta_{s}^{l/r}Q^{C^{\prime}}\in \mathcal{R}_{[T^{\prime\prime},t]}}|\Delta_{s}^{l/r}Q^{C^{\prime}}|,\ \lambda-\sum_{\Delta_{s}^{l/r}Q^{C^{\prime}}\in \mathcal{R}_{[T^{\prime\prime},t]}}|\Delta_{s}^{l/r}Q^{C^{\prime}}|+V_{[T^{ \prime\prime},t]}(L^{C^{\prime},+})\Big{)}\Big{|} \tag{62}\] \[\leq\Big{|}I\Big{(}\pm\sum_{\Delta_{s}^{l/r}Q^{C^{\prime}}\in \mathcal{R}_{[T^{\prime\prime},t]}}|\Delta_{s}^{l/r}Q^{C^{\prime}}|,\lambda \Big{)}\Big{|},\] where we denote by \(\mathcal{R}_{[T^{\prime\prime},t]}\) the set of trades \(\Delta_{s}^{l/r}Q^{C^{\prime}}\), \(s\in[T^{\prime\prime},t]\), that start a roundtrip which is unwound before time \(t\), i.e., the size of such a roundtrip. Because by definition (7), the price impact function \(|I(\Delta,\lambda)|\) is decreasing in \(\lambda\) and symmetric in \(\Delta\), inequality (62) is equivalent to \[\sum_{\Delta_{s}^{l/r}Q^{C^{\prime}}\in\mathcal{R}_{[T^{\prime \prime},t]}}|\Delta_{s}^{l/r}Q^{C^{\prime}}|\leq V_{[T^{\prime\prime},t]}(L^{ C^{\prime},+}).\] Consequently, the favorable price changes due to roundtrip trades are linearly bounded by the liquidity provision over the time interval \([T^{\prime\prime},T^{\prime}]\) \[|I(\mp V_{[T^{\prime\prime},t]}(\tilde{L}^{C^{\prime},+}), \underline{\lambda})|\leq\iota(\underline{\lambda})V_{[T^{\prime\prime},T^{ \prime}]}(\tilde{L}^{C^{\prime},+}). \tag{63}\] Hence, when considering the monotone part of \(C^{\prime}\), we know that the corresponding \(P^{C^{\prime}}\) terms in (60) are pathwise bounded from above by \[\Big{(}P^{C^{\prime}}_{T^{\prime\prime}}+I(\operatorname{sgn}(Q^ {C^{\prime}}_{T^{\prime\prime}})V_{[T^{\prime\prime},T^{\prime}]}(\tilde{M}^{C ^{\prime}}),\underline{\lambda})+I(\operatorname{sgn}(Q^{C^{\prime}}_{T^{ \prime\prime}})V_{[T^{\prime\prime},T^{\prime}]}(\tilde{L}^{C^{\prime},+}), \underline{\lambda})\Big{)}Q^{C^{\prime}}_{T^{\prime\prime}} \tag{64}\] \[\leq\Big{(}P^{C^{\prime}}_{T^{\prime\prime}}+\iota(\underline{ \lambda})\operatorname{sgn}(Q^{C^{\prime}}_{T^{\prime\prime}})V_{[T^{\prime \prime},T^{\prime}]}(\tilde{M}^{C^{\prime}})+\iota(\underline{\lambda}) \operatorname{sgn}(Q^{C^{\prime}}_{T^{\prime\prime}})V_{[T^{\prime\prime},T^{ \prime}]}(\tilde{L}^{C^{\prime},+})\Big{)}Q^{C^{\prime}}_{T^{\prime\prime}},\] where with (61) and (63) we assume favorable price impacts for trades of sign \(-\operatorname{sgn}(Q^{C^{\prime}}_{T^{\prime\prime}})\). Next, we consider the terms in (60) that correspond to roundtrip trades in \(C^{\prime}\). For this we consider a toy example of a roundtrip over some time interval \([\tau_{1},\tau_{2}]\subset[T^{\prime\prime},T^{\prime}]\) that starts with Figure 11: Example trajectory of the trader’s inventory; stepwise separation of roundtrips from monotone strategy. a trade of size \(\Delta\) at time \(\tau_{1}\) at price \(p_{1}\) and at liquidity level \(\lambda_{1}\). 
The profit \(x(\Delta)\) from the roundtrip is bounded from above by \[x(\Delta) \leq-p_{1}\Delta-\zeta|\Delta|-\Xi(\Delta,\lambda_{1})-\Xi(\Delta, \lambda_{1}-|\Delta|)+\Xi(\Delta,\lambda_{1}-|\Delta|)\] \[\quad+\Big{(}p_{1}+I(\Delta,\lambda_{1})+I(-\operatorname{sgn}( \Delta)V_{[\tau_{1},\tau_{2}]}(\tilde{M}),\underline{\lambda})+I(- \operatorname{sgn}(\Delta)V_{[\tau_{1},\tau_{2}]}(\tilde{L}^{C^{\prime},+}), \underline{\lambda})\Big{)}\,\Delta\] \[-\zeta|\Delta|-\Xi(\Delta,\lambda_{1}-\Delta+V_{[\tau_{1},\tau_{2 }]}(\tilde{L}^{C^{\prime},+}))-\Xi(\Delta,\lambda-|\Delta|)+\Xi(\Delta, \lambda-|\Delta|).\] Here, we apply (61) and (63) to bound the best possible price for the completion of the roundtrip. Moreover, in \(\Xi(\Delta,\lambda-\Delta+V_{[\tau_{1},\tau_{2}]}(\tilde{L}^{C^{\prime},+}))\), we assume the best possible liquidity developement over \([\tau_{1},\tau_{2}]\) and we artificially added the terms \(-\Xi(\Delta,\lambda-|\Delta|)+\Xi(\Delta,\lambda-|\Delta|)=0\). Next, we note that the terms \[-p_{1}\Delta-2\zeta|\Delta|-\Xi(\Delta,\lambda)+p_{!}\Delta+I(\Delta,\lambda) \Delta-\Xi(\Delta,\lambda-\Delta)\] correspond to the profit from an immediate roundtrip and are thus smaller than or equal to zero. Hence, we obtain the estimate \[x(\Delta) \leq\Big{(}I(V_{[\tau_{1},\tau_{2}]}(\tilde{M}^{C^{\prime}}), \underline{\lambda})+I(V_{[\tau_{1},\tau_{2}]}(\tilde{L}^{C^{\prime},+}), \underline{\lambda})\Big{)}|\Delta|\] \[+|\Xi(\Delta,\lambda-|\Delta|)-\Xi(\Delta,\lambda-|\Delta|+V_{[ \tau_{1},\tau_{2}]}(\tilde{L}^{C^{\prime},+}))|\] \[\leq\iota(\underline{\lambda})\Big{(}V_{[\tau_{1},\tau_{2}]}( \tilde{M}^{C^{\prime}})+V_{[\tau_{1},\tau_{2}]}(\tilde{L}^{C^{\prime},+}) \Big{)}|\Delta|+c_{\iota}V_{[T^{\prime\prime},T^{\prime}]}(\tilde{L}^{C^{ \prime},+})|\Delta|^{2}\] \[\leq\iota(\underline{\lambda})\Big{(}V_{[T^{\prime\prime},T^{ \prime}]}(\tilde{M}^{C^{\prime}})+V_{[T^{\prime\prime},T^{\prime}]}(\tilde{L}^ {C^{\prime},+})\Big{)}|\Delta|+c_{\iota}V_{[T^{\prime\prime},T^{\prime}]}( \tilde{L}^{C^{\prime},+})|\Delta|^{2}.\] where we apply (44) to bound the \(\Xi\) terms, with \(c_{\iota}\) the Lipschitz constant of \(\iota\), and where \(\bar{V}(\hat{T})\) is as in (38). Consequently, the total contributions from roundtrips in (60) are pathwise bounded from above by \[\iota(\underline{\lambda})\Big{(}V_{[T^{\prime\prime},T^{\prime}]}( \tilde{M}^{C^{\prime}})+V_{[T^{\prime\prime},T^{\prime}]}(\tilde{L}^{C^{ \prime},+})\sum_{\Delta_{s}^{l/r}QC^{\prime}\in\mathcal{R}_{[T^{\prime\prime},T ^{\prime}]}}|\Delta_{s}^{l/r}QC^{\prime}|\,+c_{\iota}V_{[T^{\prime\prime},T^{ \prime}]}(\tilde{L}^{C^{\prime},+})\sum_{\Delta_{s}^{l/r}QC^{\prime}\in\mathcal{ R}_{[T^{\prime\prime},T^{\prime}]}}|\Delta_{s}^{l/r}QC^{\prime}|^{2}\] \[\leq\iota(\underline{\lambda})\Big{(}V_{[T^{\prime\prime},T^{ \prime}]}(\tilde{M}^{C^{\prime}})+V_{[T^{\prime\prime},T^{\prime}]}(\tilde{L}^ {C^{\prime},+})\bar{V}(\hat{T})+c_{\iota}V_{[T^{\prime\prime},T^{\prime}]}( \tilde{L}^{C^{\prime},+})\bar{V}(\hat{T})^{2}, \tag{65}\] where \(\bar{V}(\hat{T})\) is as in (38). 
Consequently, with (64) and (65), we obtain the following pathwise estimate \[\tilde{X}_{T^{\prime}}^{C^{\prime}} \leq\Big{(}P_{T^{\prime\prime}}^{C^{\prime}}+\iota(\underline{ \lambda})\operatorname{sgn}(Q_{T^{\prime\prime}}^{C^{\prime}})\Big{(}V_{[T^{ \prime\prime},T^{\prime}]}(\tilde{M}^{C^{\prime}})+V_{[T^{\prime\prime},T^{ \prime}]}(\tilde{L}^{C^{\prime},+})\Big{)}\Big{)}\,Q_{T^{\prime\prime}}^{C^{ \prime}}\] \[\quad-\zeta|Q_{T^{\prime\prime}}^{C^{\prime}}|-\Xi\left(Q_{T^{ \prime\prime}}^{C^{\prime}},\lambda_{T^{\prime\prime}}^{C^{\prime}}+V_{[T^{ \prime\prime},T^{\prime}]}(\tilde{L}^{C^{\prime},+})\right) \tag{66}\] \[\quad+\iota(\underline{\lambda})\Big{(}V_{[T^{\prime\prime},T^{ \prime}]}(\tilde{M}^{C^{\prime}})+V_{[T^{\prime\prime},T^{\prime}]}(\tilde{L}^ {C^{\prime},+})\bar{V}(\hat{T})+c_{\iota}V_{[T^{\prime\prime},T^{\prime}]}( \tilde{L}^{C^{\prime},+})\bar{V}(\hat{T})^{2}\] \[\quad+\sigma Y\operatorname{sgn}(Q_{T^{\prime}}^{C^{\prime}}) \left(\left|Q_{T^{\prime}}^{C^{\prime}}\right|-(\lambda_{T^{\prime}}^{C^{ \prime}}-\underline{\lambda})^{+}\right)^{+},\] where we neglect the remaining \(\zeta\) terms and where for the \(\Xi\) terms (which are just transaction costs from crossing the spread) in (60) that correspond to the monotone part of \(C^{\prime}\), we assume the best possible liquidity developement to obtain the pathwise bound \[-\Xi(Q_{T^{\prime\prime}}^{C^{\prime}},\lambda_{T^{\prime\prime}}^{C^{\prime}}+V_{[ T^{\prime\prime},T^{\prime}]}(\tilde{L}^{C^{\prime},+})).\] Finally, we need to consider the \(\sigma Y\) terms in \(\tilde{X}^{C^{\prime}}_{T^{\prime}}\) and \(\tilde{X}^{C^{\prime}}_{T^{\prime\prime}}\), i.e., the additional price term when the circuit breaker is activated. In the case when we have \(\tau^{C^{\prime}},\tau^{C^{\prime\prime}}>T^{\prime\prime}\), there is no circuit breaker term in \(\mathbb{E}[U_{\alpha}(\tilde{X}^{C^{\prime\prime}}_{T^{\prime\prime}})]\), but potentially one in \(\mathbb{E}[U_{\alpha}(\tilde{X}^{C^{\prime}}_{T^{\prime}})]\). We note that in general, for any \(\mathcal{F}_{T^{\prime}}\) measurable random variable \(X\), we have \[\mathbb{E}\left[U_{\alpha}\Big{(}X+\sigma Y\left(\left|Q^{C^{ \prime}}_{T^{\prime}}\right|-(\lambda^{C^{\prime}}_{T^{\prime}}-\underline{ \lambda})^{+}\right)\Big{)}\Big{|}\mathcal{F}_{T^{\prime\prime}}\right]\] \[=\mathbb{E}\Big{[}-\mathbb{E}\left[\exp\Big{(}-\alpha X-\alpha \sigma Y\left(\left|Q^{C^{\prime}}_{T^{\prime}}\right|-(\lambda^{C^{\prime}}_ {T^{\prime}}-\underline{\lambda})^{+}\right)\Big{)}\Big{|}\mathcal{F}_{T^{ \prime}}\right]\Big{|}\mathcal{F}_{T^{\prime\prime}}\Big{]}\] \[=\mathbb{E}\Big{[}-\exp\Big{(}-\alpha X)\exp\Big{(}\frac{\alpha^ {2}}{2}\sigma^{2}\Big{(}\Big{(}\left|Q^{C^{\prime}}_{T^{\prime}}\right|-( \lambda^{C^{\prime}}_{T^{\prime}}-\underline{\lambda})^{+}\Big{)}^{+}\Big{)}^ {2}\Big{)}\Big{|}\mathcal{F}_{T^{\prime\prime}}\Big{]}\] \[\leq\mathbb{E}\Big{[}-\exp\Big{(}-\alpha X\Big{)}\Big{|}\mathcal{ F}_{T^{\prime\prime}}\Big{]}=\mathbb{E}\left[U_{\alpha}(X)\Big{|}\mathcal{F}_{T^{ \prime\prime}}\right]\!,\] so that due to \(\mathcal{F}_{T^{\prime\prime}}\)-measurability of \(\mathbbm{1}_{\{\tau^{C^{\prime}},\tau^{C^{\prime\prime}}>T^{\prime}\}}\), we can omit the circuit breaker term in the case \(\mathbb{E}[\mathbbm{1}_{\{\tau^{C^{\prime}},\tau^{C^{\prime\prime}}>T^{\prime}\}} U_{\alpha}(\tilde{X}^{C^{\prime}}_{T^{\prime}})]\) to obtain a bound from above. 
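The conditional expectation in the display above is evaluated via the moment generating function of the Gaussian noise: assuming, as in the model, that \(Y\) is standard normally distributed and independent of \(\mathcal{F}_{T^{\prime}}\), we have for every \(\mathcal{F}_{T^{\prime}}\)-measurable random variable \(W\) \[\mathbb{E}\big[\exp(-\alpha\sigma YW)\,\big|\,\mathcal{F}_{T^{\prime}}\big]=\exp\Big(\tfrac{\alpha^{2}\sigma^{2}}{2}W^{2}\Big)\geq 1,\] applied here with \(W=\big(|Q^{C^{\prime}}_{T^{\prime}}|-(\lambda^{C^{\prime}}_{T^{\prime}}-\underline{\lambda})^{+}\big)^{+}\); since the resulting factor is at least one and multiplies a nonpositive quantity, the final inequality follows.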
In the case when \(\tau^{C^{\prime}},\tau^{C^{\prime\prime}}\leq T^{\prime\prime}\), the terminal cash positions and the circuit breaker terms in \(\tilde{X}^{C^{\prime}}_{T^{\prime}}\) and \(\tilde{X}^{C^{\prime}}_{T^{\prime\prime}}\) coincide and we have \[\mathbb{E}[\mathbbm{1}_{\{\tau^{C^{\prime}},\tau^{C^{\prime\prime}}\leq T^{ \prime\prime}\}}U_{\alpha}(\tilde{X}^{C^{\prime}}_{T^{\prime}})]-\mathbb{E}[ \mathbbm{1}_{\{\tau^{C^{\prime}},\tau^{C^{\prime\prime}}\leq T^{\prime\prime}\} }U_{\alpha}(\tilde{X}^{C^{\prime\prime}}_{T^{\prime\prime}})]=0.\] Hence, it remains to consider the case when for the trader with strategy \(C^{\prime\prime}\), the circuit breaker is triggered at time \(T^{\prime\prime}\) when she has to execute her remaining position. Meanwhile, for the trader with time horizon \(T^{\prime}\), it is not triggered at time \(T^{\prime\prime}\) and she can continue trading. We denote this event as \(A:=\{\tau^{C^{\prime\prime}}=T^{\prime\prime},\tau^{C^{\prime}}>T^{\prime}\}\). More precisely, the trader with strategy \(C^{\prime\prime}\) who finishes at time \(T^{\prime\prime}\) executes \[\lambda^{C^{\prime}}_{T^{\prime\prime}}-\underline{\lambda}\] shares regularly in the market and \[\left|Q^{C^{\prime}}_{T^{\prime\prime}}\right|-(\lambda^{C^{\prime}}_{T^{ \prime\prime}}-\underline{\lambda})\] shares in the auction after the circuit breaker activation. Because liquidity taking relies on liquidity provision, the trader with time horizon \(T^{\prime}\) can buy at most \[\lambda^{C^{\prime}}_{T^{\prime}}+V_{[T^{\prime\prime},T^{\prime}]}(\tilde{L}^ {C^{\prime},+})-\underline{\lambda}\] shares regularly in the market. If this is not sufficient to execute the position of \(Q^{C^{\prime}}_{T^{\prime\prime}}\), she will have to execute at least \[\left|Q^{C^{\prime}}_{T^{\prime\prime}}\right|-V_{[T^{\prime\prime},T^{\prime}] }(\tilde{L}^{C^{\prime},+})-(\lambda^{C^{\prime}}_{T^{\prime\prime}}- \underline{\lambda}) \tag{67}\] shares in an auction after circuit breaker activation. Thus, in the event of \(A\), the amount of shares to be executed in an auction for traders with strategies \(C^{\prime}\) and \(C^{\prime\prime}\) differ by at most \(V_{[T^{\prime\prime},T^{\prime}]}(\tilde{L}^{C^{\prime},+})\). 
With (66) and with the above argumentation for the circuit breaker term, we bound (59) from above by \[|w(T^{\prime},\lambda,q)-w(T^{\prime\prime},\lambda,q)|\] \[\quad\leq\mathbb{E}\bigg{[}U_{\alpha}\Big{(}X_{T^{\prime\prime}}^{ C^{\prime}}+P_{T^{\prime}}^{C^{\prime}}Q_{T^{\prime\prime}}^{C^{\prime}}+ \Big{(}\iota(\underline{\lambda})V_{[T^{\prime\prime},T^{\prime}]}(\tilde{M}^ {C^{\prime}})+\iota(\underline{\lambda})V_{[T^{\prime\prime},T^{\prime}]}( \tilde{L}^{C^{\prime},+})\Big{)}\left|Q_{T^{\prime\prime}}^{C^{\prime}}\right|- \zeta\left|Q_{T^{\prime\prime}}^{C^{\prime}}\right|\] \[\quad\quad+\iota(\underline{\lambda})(V_{[T^{\prime\prime},T^{ \prime}]}(\tilde{M}^{C^{\prime}})+V_{[T^{\prime\prime},T^{\prime}]}(\tilde{L} ^{C^{\prime},+}))\bar{V}(\hat{T})+c_{t}V_{[T^{\prime\prime},T^{\prime}]}( \tilde{L}^{C^{\prime},+})\bar{V}(\hat{T})^{2}\Big{)}\bigg{]}+\varepsilon- \mathbb{E}\left[U_{\alpha}(\tilde{X}_{T^{\prime\prime}}^{C^{\prime\prime}}) \right].\] We use (67), estimate \(|Q_{T^{\prime\prime}}^{C^{\prime}}|\leq\bar{V}(\hat{T})\) and use an analogous estimate as in (52) to write \[|w(T^{\prime},\lambda,q)-w(T^{\prime\prime},\lambda,q)|\] \[\quad\leq c\mathbb{E}\Big{[}\iota(\underline{\lambda})^{2}V_{[T^{ \prime\prime},T^{\prime}]}(\tilde{M}^{C^{\prime}})^{2}\bar{V}(\hat{T})^{2}+ \iota(\underline{\lambda})^{2}V_{[T^{\prime\prime},T^{\prime}]}(\tilde{L}^{C ^{\prime},+})^{2}\bar{V}(\hat{T})^{2}\] \[\quad+\Big{(}\Xi\left(Q_{T^{\prime\prime}}^{C^{\prime}},\lambda_ {T^{\prime\prime}}^{C^{\prime}}+V_{[T^{\prime\prime},T^{\prime}]}(\tilde{L}^{C ^{\prime},+})\right)-\Xi\left(Q_{T^{\prime\prime}}^{C^{\prime}},\lambda_{T^{ \prime\prime}}^{C^{\prime}}\right)\Big{)}^{2}+\mathbb{1}_{A}\sigma^{2}Y^{2}V_{ [T^{\prime\prime},T^{\prime}]}(\tilde{L}^{C^{\prime},+})^{2} \tag{68}\] \[\quad+\iota(\underline{\lambda})^{2}V_{[T^{\prime\prime},T^{ \prime}]}(\tilde{M}^{C^{\prime}})^{2}\bar{V}(\hat{T})^{2}+\iota(\underline{ \lambda})^{2}V_{[T^{\prime\prime},T^{\prime}]}(\tilde{L}^{C^{\prime},+})^{2} \bar{V}(\hat{T})^{2}+c_{t}V_{[T^{\prime\prime},T^{\prime}]}(\tilde{L}^{C^{ \prime},+})^{2}\bar{V}(\hat{T})^{4}\Big{]}^{1/2}+\varepsilon.\] To bound the \(\Xi\) terms in (68) from above, we apply (44) to write \[\mathbb{E}\left[\left(\Xi\left(Q_{T^{\prime\prime}}^{C^{\prime}}, \lambda_{T^{\prime\prime}}^{C^{\prime}}+V_{[T^{\prime\prime},T^{\prime}]}( \tilde{L}^{C^{\prime},+})\right)-\Xi\left(Q_{T^{\prime\prime}}^{C^{\prime}}, \lambda_{T^{\prime\prime}}^{C^{\prime}}\right)\right)^{2}\right] \tag{69}\] \[\quad\leq\mathbb{E}\left[c_{t}^{2}V_{[T^{\prime\prime},T^{\prime }]}(\tilde{L}^{C^{\prime},+})^{2}\bar{V}(\hat{T})^{4}\right].\] We plug the estimate (69) into (68) and have \[|w(T^{\prime},\lambda,q)-w(T^{\prime\prime},\lambda,q)|\] \[\quad\leq c\mathbb{E}\Big{[}V_{[T^{\prime\prime},T^{\prime}]}( \tilde{M}^{C^{\prime}})^{2}\bar{V}(\hat{T})^{2}+V_{[T^{\prime\prime},T^{\prime }]}(\tilde{L}^{C^{\prime},+})^{2}\bar{V}(\hat{T})^{2}\] \[\qquad\quad+V_{[T^{\prime\prime},T^{\prime}]}(\tilde{L}^{C^{ \prime},+})^{2}\bar{V}(\hat{T})^{4}+Y^{2}V_{[T^{\prime\prime},T^{\prime}]}( \tilde{L}^{C^{\prime},+})^{2}\Big{]}^{1/2}+\varepsilon\] \[\quad\leq\left(\mathbb{E}\Big{[}V_{[T^{\prime\prime},T^{\prime}]} (\tilde{M}^{C^{\prime}})^{4}\Big{]}^{1/4}\mathbb{E}\left[\bar{V}(\hat{T})^{4} \right]^{1/4}+\mathbb{E}\left[V_{[T^{\prime\prime},T^{\prime}]}(\tilde{L}^{C^{ \prime},+})^{4}\right]^{1/4}\mathbb{E}\left[\bar{V}(\hat{T})^{4}\right]^{1/4}\] \[\quad\quad+\mathbb{E}\left[V_{[T^{\prime\prime},T^{\prime}]}( 
\tilde{L}^{C^{\prime},+})^{4}\right]^{1/4}\mathbb{E}\left[\bar{V}(\hat{T})^{8} \right]^{1/4}+\mathbb{E}[Y^{2}]^{1/2}\,\mathbb{E}\left[V_{[T^{\prime\prime},T^{ \prime}]}(\tilde{L}^{C^{\prime},+})^{2}\right]^{1/2}\right)+\varepsilon, \tag{70}\] where we use the independence of \(Y\) and apply the Cauchy Schwarz inequality to bound the expectation of the repective products from above. In the next step, to estimate the moments of \(V_{[T^{\prime\prime},T^{\prime}]}(\tilde{M}^{C^{\prime}})\) and \(V_{[T^{\prime\prime},T^{\prime}]}(\tilde{L}^{C^{\prime},+})\), we introduce the maximum liquidity for the time horizon \([0,\hat{T}]\) \[\overline{\lambda}(\hat{T}):=\lambda+\int_{[0,\hat{T}]\times E\times\mathbb{R}_{+ }}\mathbb{1}_{\{y\leq g(\underline{\lambda})\}}|\rho(e)|N(ds,de,dy),\] for which we know by the Cauchy Schwarz inequality \[\mathbb{E}[\overline{\lambda}(\hat{T})^{4}]\leq 3\lambda^{4}+3\hat{T}^{4}g( \underline{\lambda})^{4}\nu(E)^{3}\int_{E}|\rho(e)|^{4}\nu(de). \tag{71}\] Again by the Cauchy Schwarz inequality, we bound the fourth moment of the variation of external market orders over \([T^{\prime\prime},T^{\prime}]\) from above by \[\mathbb{E}\left[V_{[T^{\prime\prime},T^{\prime}]}(\tilde{M}^{C^{ \prime}})^{4}\right]\leq\mathbb{E}\left[\Big{(}\int_{[T^{\prime\prime},T^{ \prime}]\times E\times\mathbb{R}_{+}}\mathbbm{1}_{\{y\leq f(\overline{\lambda}( \hat{T}))\}}|\eta(e)|N(ds,de,dy)\Big{)}^{4}\right]\] \[\leq\mathbb{E}\left[|T^{\prime}-T^{\prime\prime}|^{3}f(\overline {\lambda}(\hat{T}))^{3}\nu(E)^{3}\int_{[T^{\prime\prime},T^{\prime}]\times E \times\mathbb{R}_{+}}\mathbbm{1}_{\{y\leq f(\overline{\lambda}(\hat{T}))\}}| \eta(e)|^{4}N(ds,de,dy)\right]\] \[\leq|T^{\prime}-T^{\prime\prime}|^{4}\underbrace{\mathbb{E}[f( \overline{\lambda}(\hat{T}))^{4}]}_{\leq c_{f}\cdot(\ref{eq:T1})}\nu(E)^{3} \left(\int_{E}|\eta(e)|^{4}\nu(de)\right)\] \[\leq c|T^{\prime}-T^{\prime\prime}|^{4}, \tag{72}\] where we use Lipschitz continuity of \(f\) with Lipschitz constant \(c_{f}\). Again with the Cauchy Schwarz inequality, we write for the \(n\)-th moment, \(n\in\{2,4\}\), of the variation of limit orders over \([T^{\prime\prime},T^{\prime}]\), \[\mathbb{E}\left[V_{[T^{\prime\prime},T^{\prime}]}(L^{C^{\prime},+ })^{n}\right] \leq\mathbb{E}\left[\left(\int_{[T^{\prime\prime},T^{\prime}] \times E\times\mathbb{R}_{+}}\mathbbm{1}_{\{y\leq g(\underline{\lambda})\}}| \rho(e)|N(ds,de,dy)\right)^{n}\right]\] \[\leq|T^{\prime}-T^{\prime\prime}|^{n}g(\underline{\lambda})^{n} \nu(E)^{n-1}\int_{E}|\rho(e)|^{n}\nu(de). \tag{73}\] Finally, we combine the estimates (72) and (73) with Lemma 3.1 for \(n=4,8\) for the finiteness of the \(n\)-th moments of \(\bar{V}(\hat{T})\) to bound (70) from above by \[|w(T^{\prime},\lambda,q)-w(T^{\prime\prime},\lambda,q)|\] \[\leq c\Big{(}\mathbb{E}\Big{[}V_{[T^{\prime\prime},T^{\prime}]}( \tilde{M}^{C^{\prime}})^{4}\Big{]}^{1/4}\,\mathbb{E}\left[\bar{V}(\hat{T})^{4 }\right]^{1/4}+\mathbb{E}\left[V_{[T^{\prime\prime},T^{\prime}]}(\tilde{L}^{C ^{\prime},+})^{4}\right]^{1/4}\mathbb{E}\left[\bar{V}(\hat{T})^{4}\right]^{1/4}\] \[+\mathbb{E}\left[V_{[T^{\prime\prime},T^{\prime}]}(\tilde{L}^{C ^{\prime},+})^{4}\right]^{1/4}\mathbb{E}\left[\bar{V}(\hat{T})^{8}\right]^{1/4 }+\mathbb{E}[Y^{2}]^{1/2}\,\mathbb{E}\left[V_{[T^{\prime\prime},T^{\prime}]}( \tilde{L}^{C^{\prime},+})^{2}\right]^{1/2}\Big{)}+\varepsilon\] \[\leq c|T^{\prime}-T^{\prime\prime}|,\] which finishes the proof. 
Finally, we conclude continuity of the value function \(v\) as stated in Theorem 3.3: Proof of Theorem 3.3.: By (36), it is sufficient to prove continuity of the dimension-reduced value function \(w(T,\lambda,q)\). Let \((T^{\prime},\lambda^{\prime},q^{\prime})\) and \((T^{\prime\prime},\lambda^{\prime\prime},q^{\prime\prime})\) be from a compact set so that \(\lambda^{\prime},\lambda^{\prime\prime}\geq\underline{\lambda}\) and \(|\lambda^{\prime}-\lambda^{\prime\prime}|<1\). We use the triangle inequality to write \[|w(T^{\prime},\lambda^{\prime},q^{\prime})-w(T^{\prime\prime},\lambda^{\prime\prime},q^{\prime\prime})|\] \[\leq|w(T^{\prime},\lambda^{\prime},q^{\prime})-w(T^{\prime},\lambda^{\prime\prime},q^{\prime})|+|w(T^{\prime},\lambda^{\prime\prime},q^{\prime})-w(T^{\prime\prime},\lambda^{\prime\prime},q^{\prime})|+|w(T^{\prime\prime},\lambda^{\prime\prime},q^{\prime})-w(T^{\prime\prime},\lambda^{\prime\prime},q^{\prime\prime})|.\] By Lemmata 4.2, 4.3 and 4.5, there exists a fixed constant \(c\) such that \[|w(T^{\prime},\lambda^{\prime},q^{\prime})-w(T^{\prime\prime},\lambda^{\prime\prime},q^{\prime\prime})|\leq c(|\lambda^{\prime}-\lambda^{\prime\prime}|^{1/2}+|T^{\prime}-T^{\prime\prime}|+|q^{\prime}-q^{\prime\prime}|),\] which finishes the proof.
2302.03807
A Prototype-Oriented Clustering for Domain Shift with Source Privacy
Unsupervised clustering under domain shift (UCDS) studies how to transfer the knowledge from abundant unlabeled data from multiple source domains to learn the representation of the unlabeled data in a target domain. In this paper, we introduce Prototype-oriented Clustering with Distillation (PCD) to not only improve the performance and applicability of existing methods for UCDS, but also address the concerns on protecting the privacy of both the data and model of the source domains. PCD first constructs a source clustering model by aligning the distributions of prototypes and data. It then distills the knowledge to the target model through cluster labels provided by the source model while simultaneously clustering the target data. Finally, it refines the target model on the target domain data without guidance from the source model. Experiments across multiple benchmarks show the effectiveness and generalizability of our source-private clustering method.
Korawat Tanwisuth, Shujian Zhang, Pengcheng He, Mingyuan Zhou
2023-02-08T00:15:35Z
http://arxiv.org/abs/2302.03807v2
# A prototype-oriented clustering for domain shift with source privacy ###### Abstract Unsupervised clustering under domain shift (UCDS) studies how to transfer the knowledge from abundant unlabeled data from multiple source domains to learn the representation of the unlabeled data in a target domain. In this paper, we introduce Prototype-oriented Clustering with Distillation (PCD) to not only improve the performance and applicability of existing methods for UCDS, but also address the concerns on protecting the privacy of both the data and model of the source domains. PCD first constructs a source clustering model by aligning the distributions of prototypes and data. It then distills the knowledge to the target model through cluster labels provided by the source model while simultaneously clustering the target data. Finally, it refines the target model on the target domain data without guidance from the source model. Experiments across multiple benchmarks show the effectiveness and generalizability of our source-private clustering method. ## 1 Introduction Supervised learning methods require a tremendous amount of labeled data, limiting their use cases in many situations (Adadi, 2021). By contrast, unsupervised clustering seeks to group similar data points into clusters without labels (Hartigan, 1972). Clustering has become one of the most popular methods in various applications, such as computer vision (Coleman and Andrews, 1979; Lei et al., 2018; Liu et al., 2021; Mittal et al., 2021), natural language processing (Biemann, 2006; Yoon et al., 2019), reinforcement learning (Mannor et al., 2004; Xu et al., 2014; Ahmadi et al., 2021), and multi-modal learning (Hu et al., 2019; Chen et al., 2021). In many of these applications, data naturally come from multiple sources and may not contain labels since they are expensive to acquire (Girshick et al., 2014; Lin et al., 2014). As an example, medical institutions collaborate to achieve a large and diverse dataset (Mojab et al., 2020). However, this partnership faces privacy and ownership challenges (Shelter et al., 2020). Across different domains, users may also have varying amounts of resources and data (Salehi et al., 2019). Another example is the inference-as-a-service paradigm, a business scheme where providers serve models trained on multiple sources of data as APIs (_e.g._, Google AI platforms, Amazon Web Services, GPT-3 (Brown et al., 2020)) without giving clients direct access to them. To exploit the rich data from multiple domains for limited-data-and-resource users while also taking into account privacy challenges, one may consider applying methods from Unsupervised Domain Adaptation (UDA) (Shimodaira, 2000; Farhadi and Tabrizi, 2008; Saenko et al., 2010). These methods nonetheless require labeled data in the source domains, making them not applicable in many scenarios. To overcome the assumption of UDA, Menapace et al. (2020) have recently introduced Unsupervised Clustering under Domain Shift (UCDS), a learning scenario where both the source and target domains have no labels. The goal of this problem setting is to transfer the knowledge from the abundant unlabeled data from multiple source domains to a target domain with limited data. To solve this problem, Menapace et al. (2020) propose Adaptive Clustering of Images under Domain Shift (ACIDS), a method that uses an information-theoretic loss (Ji et al., 2019) for clustering and batch normalization alignment (Li et al., 2016) for target adaptation. However, it has two major drawbacks. 
First, it assumes that we have full access to the source model parameters to initialize the target model before clustering, limiting its use in privacy-sensitive situations where access to the source model is restricted. Second, it requires batch normalization, a specific architectural design of the source model that may not be applicable in some recently proposed state-of-the-art models such as Vision Transformer (Dosovitskiy et al., 2020). In this paper, we consider a more practical problem that is a variant of UCDS (see Table 1): in addition to the data privacy, we also consider model privacy. Target data owners have no direct access to the source model but can query it to obtain cluster labels during target adaptation. This requirement is important because, given full access to the model, target users or other adversaries may exploit it to recover the source data, jeopardizing source data privacy Chen et al. (2019); Luo et al. (2020). To address this important and challenging problem, we propose Prototype-oriented Clustering with Distillation (PCD), a holistic method that consists of three stages. First, we construct a source clustering model from multiple-domain data. To achieve this, we use optimal transport (Kantorovich, 2006; Peyre and Cuturi, 2019) to align the distributions of data and prototypes, as well as a mutual-information maximization to assist the learning of the feature encoder and prototypes (Krause et al., 2010; Shi and Sha, 2012; Liang et al., 2020). Second, we use the target cluster assignments provided by the source model to distill the knowledge to the target model while simultaneously clustering the target data. Finally, we perform clustering on the target data alone to further refine the target model. Figure 1 illustrates the schematic diagram of our approach. PCD achieves the following benefits. Our approach can be directly applied to the inference-as-a-service paradigm, which is becoming increasingly popular (Soifer et al., 2019). Many providers currently serve users with API services without sharing direct access to their models. Our method also protects the privacy of both the data and model in the source domains, which is especially critical in practical applications such as healthcare. Moreover, we no longer require the source and target models to share the same architecture, allowing for more flexibility in the training process. Unlike source data owners, target users may have limited resources and cannot afford to train large models. Our main contributions include: **1)** We propose a generalized approach for tackling the problem of data-and-model private unsupervised clustering under domain shift. PCD integrates a prototype-oriented clustering algorithm and knowledge distillation into a unified method. Our clustering algorithm synergistically combines optimal transport with the mutual-information objective for prototype and data alignment. **2)** We verify the effectiveness and general applicability of the proposed method in practical settings: model transfer as well as limited-data and cluster-imbalanced scenarios. **3)** We provide comprehensive study and experiments on multiple datasets and demonstrate consistent gains over the baselines. 
## 2 Method To address the clustering problem under domain shift and privacy concerns, we provide a general recipe that consists of three main parts: **1)** source model learning: learn a transferable model that can guide the target model; **2)** target model clustering: train a target model with the knowledge from the source model as well as the target data; and **3)** target model refinement: refine the target model on the target data alone. The resulting strategy, referred to as PCD, can effectively solve the clustering problem under domain shift while fully preserving the privacy of the source data and model. We include the pseudocode in Algorithm 1 in Appendix F. ### Background In unsupervised clustering under domain shift, we are given \(D\) unlabeled datasets from the source domains, denoted as \(\mathcal{X}^{s}=\{\mathcal{X}^{s}_{d}\}_{d=1}^{D}\) where \(\mathcal{X}^{s}_{d}=\{\mathbf{x}^{s}_{dj}\}_{j=1}^{n^{s}_{d}}\) represents a dataset from a source domain \(d\) with \(n^{s}_{d}\) samples. We are also given an unlabeled dataset from the target domain, denoted as \(\mathcal{X}^{t}=\{\mathbf{x}^{t}_{j}\}_{j=1}^{n_{t}}\) with \(n_{t}\) target samples. There are \(K\) underlying clusters in both the source and target domains with similar semantic content, but there is a shift between the source and target data distributions. The clustering model consists of a feature encoder, \(F_{\mathbf{\theta}}:\mathcal{X}\rightarrow\mathbb{R}^{d_{f}}\), parameterized \begin{table} \begin{tabular}{l c c c} \hline \hline & Source labels & Target labels & Source data access & Source model’s parameters access \\ \hline Unsupervised Domain adaptation & ✓ & ✗ & ✓ \\ Source-free Unsupervised Domain Adaptation & ✓ & ✗ & ✗ & ✓ \\ Unsupervised Clustering under Domain Shift & ✗ & ✗ & ✗ & ✓ \\ Ours & ✗ & ✗ & ✗ & ✗ \\ \hline \hline \end{tabular} \end{table} Table 1: Overview of different domain transfer settings. by \(\mathbf{\theta}\), and a linear clustering head \(C_{\mathbf{\mu}}:\mathbb{R}^{d_{f}}\rightarrow\mathbb{R}^{K}\), parameterized by \(\mathbf{\mu}\). To simplify the notation, \(G=C_{\mathbf{\mu}}(F_{\mathbf{\theta}}(\cdot))\) will denote the composition of the feature encoder and linear clustering head. We denote \(G^{s}\) and \(G^{t}\) as the source and target models, respectively. The goal is to learn a model that can discover the underlying clusters of target samples under domain shift. Although the existing approach by Menapace et al. (2020) can achieve this objective, it directly uses \(G_{s}\) to initialize \(G_{t}\), compromising the privacy of the source domain and requiring \(G_{s}\) and \(G_{t}\) to have the same architecture. We now discuss how the different components of our method address these issues. ### Source model learning To effectively capture the feature distribution of the source data and avoid clustering based on domain information, we propose a clustering algorithm that consists of three components: prototype-oriented clustering, mutual-information maximization, and regularization via CutMix. The first two components help capture the feature distribution, while the last one curtails clustering based on domain information. #### 2.2.1 Prototype-oriented clustering Our goal is to learn global representations of prototypes that capture the source data distributions and a feature encoder that maps the data from different domains to the prototypes. 
In our model, we have a linear clustering head, \(C_{\mathbf{\mu}}=[\mathbf{\mu}_{1},\mathbf{\mu}_{2},\ldots,\mathbf{\mu}_{K}]\in\mathbb{R}^{d_ {f}\times K}\), where \(d_{f}\) denotes the dimension of both the prototype and the output of the feature encoder. The vector \(\mathbf{\mu}_{k}\) represents a prototype of the \(k\)th cluster in the latent space. To discover the underlying clusters, we want to align the distribution of the global prototypes with the distribution of the feature representations in each domain. We represent the distribution of the feature in each domain using the empirical distribution which is expressed as: \(P_{d}=\sum_{j=1}^{n_{d}^{*}}\frac{1}{n_{d}^{*}}\delta\mathbf{f}_{d_{ij}}\) where \(\mathbf{f}_{d_{ij}}^{*}=F_{\mathbf{\theta}}^{*}(\mathbf{x}_{dj}^{*})\) denotes the output of the feature encoder. While we use a set of global prototypes to learn domain-invariant representations, we carefully construct the distribution of prototypes in each domain such that the prototypes can align well with the data. Since the proportion of clusters in each domain may vary, we consider the domain-specific distribution of prototypes, \(Q_{d}\), which is defined as: \(Q_{d}=\sum_{k=1}^{K}\mathbf{B}_{dk}\delta_{\mu_{k}}\), where \(\mathbf{B}_{dk}\) denotes the proportion of cluster \(k\) in domain \(d\) (\(\mathbf{B}_{dk}\geq 0\) and \(\sum_{k=1}^{K}\mathbf{B}_{dk}=1\ \forall d\)). We emphasize here that the prototypes are shared across different domains, but the proportion of the prototypes is domain-specific. To align the distributions of prototypes and data, we want to quantify their difference. A principled way to compare two discrete distributions is to consider the optimal transport problem (Kantorovich, Figure 1: The illustration of the proposed clustering framework under domain shift and privacy concerns. The semantic content of the source (Art and Cartoon) and target (Photo) data stays the same. However, the bias of the data in each domain leads to a distribution shift. During the adaptation phase, target users are only allowed to query from the source model, protecting the privacy of the source domain information. 2006; Peyre and Cuturi, 2019; Zhang et al., 2021a). Thus, we consider the entropic regularized optimal transport formulation (Cuturi, 2013) that is defined as: \[OT(P_{d},Q_{d})=\min_{\mathbf{T}_{a}\in\Pi(\mathbf{u},\mathbf{v})}\mathrm{Tr}((\mathbf{T }_{d})^{T}\mathbf{C}_{d})+\epsilon h(\mathbf{T}_{d}), \tag{1}\] where \(\mathbf{C}_{d}\in\mathbb{R}_{>0}^{n_{d}^{2}\times K}\) stands for the transport cost matrix in domain \(d\), \(\mathrm{Tr}\) denotes the trace operation, \(h(\mathbf{T}_{d})=-\sum_{j,k}(\mathbf{T}_{d})_{jk}\log(\mathbf{T}_{d})_{jk}\) is the entropy of the transport plan, \(\epsilon\) controls the strength of the regularization term, and \(\mathbf{T}_{d}\in\mathbb{R}_{>0}^{n_{d}^{2}\times K}\) is a doubly stochastic matrix in domain \(d\) such that \(\Pi(\mathbf{u},\mathbf{v})=\{\mathbf{T}_{d}|\mathbf{T}_{d}\mathbf{1}=\mathbf{u},\mathbf{1 }^{T}\mathbf{T}_{d}=\mathbf{v}\}\). The probability vectors \(\mathbf{u}=\frac{\mathbf{1}}{n_{d}^{2}}\in\mathbf{\Sigma}^{n_{d}^{2}}\) and \(\mathbf{v}=\mathbf{B}_{d}\in\mathbf{\Sigma}^{K}\), where \(\mathbf{\Sigma}^{M}\) stands for the probability simplex of \(\mathbb{R}^{M}\), denote the respective probabilities for \(P_{d}\) and \(Q_{d}\). 
We define the point-wise transport cost \((\mathbf{C}_{d})_{jk}\) as the cosine dissimilarity: \((\mathbf{C}_{d})_{jk}=1-\frac{\mathbf{\mu}_{k}^{T}\mathbf{f}_{dj}^{s}}{\|\mathbf{\mu}_{k}\|\,\|\mathbf{f}_{dj}^{s}\|}\), where \(\mathbf{f}_{dj}^{s}=F_{\mathbf{\theta}}^{s}(\mathbf{x}_{dj}^{s})\) denotes the output of the feature encoder. The intuition here is that if \((\mathbf{C}_{d})_{jk}\) is high, it is less likely for sample \(j\) to be transported to cluster \(k\). To summarize, for a fixed \(\mathbf{\theta}\) and \(\mathbf{\mu}\), we can solve Eq. (1) to obtain \(\mathbf{T}_{d}\), the probabilities of moving prototypes to data points in each domain. After obtaining the transport plans, we update the parameters of the encoder \(\mathbf{\theta}\) and prototypes \(\mathbf{\mu}\) to minimize the total transport cost for the given transport plan using mini-batch stochastic gradient descent. The final transport loss is expressed as: \(\mathcal{L}_{transport}\left(G^{s};\mathcal{X}^{s}\right)=\frac{1}{D}\sum_{d=1}^{D}OT(P_{d},Q_{d})\). The connections of our method with other deep clustering algorithms (Caron et al., 2018; Asano et al., 2019) are provided in Appendix D.

#### 2.2.2 Learning domain-specific cluster proportions

In the previous section, we utilize cluster proportions, \(\mathbf{B}_{d}\), as the marginal constraint when solving the optimal transport problems. Assuming that each cluster contains roughly the same number of samples, we can use a uniform distribution for \(\mathbf{B}_{d}\). However, this assumption is not valid in practice. Since each domain may have different distributions over the clusters, we propose a way to estimate domain-specific cluster proportions, \(\mathbf{B}_{d}\). To infer these quantities, we first initialize them with a uniform prior over clusters \(\mathbf{B}_{dk}=\frac{1}{K}\) and iteratively refine them using an EM-like update (Saerens et al., 2002; Kang et al., 2018; Alexandari et al., 2020): \[\tilde{\mathbf{B}}_{dk}^{l+1}=\frac{1}{M_{d}}\sum_{j=1}^{M_{d}}\pi_{\mathbf{\theta}}^{l}(\mathbf{\mu}_{k}\,|\,\mathbf{f}_{dj}^{s}),\ \ \text{where}\ \ \pi_{\mathbf{\theta}}^{l}(\mathbf{\mu}_{k}\,|\,\mathbf{f}_{dj}^{s})=\frac{\exp(\mathbf{\mu}_{k}^{T}\mathbf{f}_{dj}^{s})\mathbf{B}_{dk}^{l}}{\sum_{k^{\prime}=1}^{K}\exp(\mathbf{\mu}_{k^{\prime}}^{T}\mathbf{f}_{dj}^{s})\mathbf{B}_{dk^{\prime}}^{l}}, \tag{2}\] where \(M_{d}\) stands for the number of samples in domain \(d\) in a mini-batch, \(\tilde{\mathbf{B}}_{dk}^{l+1}\) refers to the proportion of cluster \(k\) in domain \(d\) at the \(l+1\) th iteration, \(\pi_{\mathbf{\theta}}^{l}(\mathbf{\mu}_{k}\,|\,\mathbf{f}_{dj}^{s})\) denotes the predicted cluster probabilities at the \(l\) th iteration, and \(\mathbf{f}_{dj}^{s}\) indicates the \(j\) th feature sample in domain \(d\). To obtain a reliable estimate for the full dataset, we iteratively update the proportions with \(\mathbf{B}_{dk}^{l+1}\leftarrow\beta^{l}\mathbf{B}_{dk}^{l}+(1-\beta^{l})\tilde{\mathbf{B}}_{dk}^{l+1},\) where \(\beta^{l}\) follows a cosine learning rate schedule.

#### 2.2.3 Global alignment with mutual-information maximization

The transport loss introduced in the previous section aligns the local distributions of data and prototypes. To assist the learning of the feature encoder and prototypes on a global level, we utilize the widely-adopted mutual-information objective (Krause et al., 2010; Shi and Sha, 2012). This objective ensures that the feature representations are tightly clustered around each prototype.
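Before detailing this global term, the proportion update in Eq. (2) above can be sketched as follows. This is our own illustration, with hypothetical tensor names, assuming features and prototypes live in the same \(d_{f}\)-dimensional space as in the cosine cost above.

```python
import torch

def update_proportions(feats, prototypes, props, beta=0.99):
    """One EM-like refinement of the domain-specific proportions B_d (Eq. 2),
    followed by the moving-average step B <- beta * B + (1 - beta) * B_tilde."""
    logits = feats @ prototypes                       # (M_d, K): mu_k^T f_dj
    weighted = torch.softmax(logits, dim=1) * props   # proportional to exp(.) * B_dk
    posterior = weighted / weighted.sum(dim=1, keepdim=True)   # pi(mu_k | f_dj)
    new_props = posterior.mean(dim=0)                 # mini-batch estimate B_tilde
    return beta * props + (1.0 - beta) * new_props

# toy usage: M_d = 8 features of dimension 16, K = 4 clusters
feats = torch.randn(8, 16)
prototypes = torch.randn(16, 4)
props = torch.full((4,), 0.25)                        # uniform prior B_dk = 1/K
props = update_proportions(feats, prototypes, props)
```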
If the data are close to the prototypes, we expect the posterior probabilities to be close to one-hot vectors. To make this more likely, we minimize the entropy of the conditional distribution of cluster labels given the data. However, minimizing this loss alone could lead to a degenerate solution since the model can assign all the samples to one cluster (Morerio et al., 2017; Wu et al., 2020). To prevent such a solution, we maximize the marginal entropy of the cluster label distribution to encourage the average predictions to be close to a uniform distribution. The mutual-information objective is thus expressed as: \[\mathcal{L}_{mi}\left(G^{s};\mathcal{X}^{s}\right) =-[H\left(\mathcal{Y}^{s}\right)-H\left(\mathcal{Y}^{s}\mid \mathcal{X}^{s}\right)]\] \[=-[h\left(\mathbb{E}_{\mathbf{x}^{s}\in\mathcal{X}^{s}}G^{s}\left( \mathbf{x}^{s}\right)\right)-\mathbb{E}_{\mathbf{x}^{s}\in\mathcal{X}^{s}}h\left(G^{s }\left(\mathbf{x}^{s}\right)\right)], \tag{3}\] where \(H\left(\mathcal{Y}^{s}\right)\) and \(H\left(\mathcal{Y}^{s}\mid\mathcal{X}^{s}\right)\) denote the marginal entropy and conditional entropy of the cluster labels \(\mathcal{Y}^{s}\), which are latent variables, respectively and \(h(p)=-\sum_{i}p_{i}\log p_{i}\). To avoid clustering based on domain information, we add the CutMix (Yun et al., 2019) regularization, which mixes two samples by interpolating images and labels. Since the data have no labels, the predicted cluster probabilities are utilized as the pseudo-labels. The CutMix regularization is defined as: \(\mathcal{L}_{cutmix}=\mathbb{E}_{\mathbf{x}_{i}^{s},\mathbf{x}_{j}^{s}\in\mathcal{X}^{s} }L(G^{s}(\tilde{x}),\tilde{y})\), where \(L(\cdot,\cdot)\) is the cross-entropy loss and (\(\tilde{\mathbf{x}}\), \(\tilde{y}\)) are the interpolated samples from the pair \((\mathbf{x}_{i}^{s},G_{a}^{s}(\mathbf{x}_{i}^{s}))\) and \((\mathbf{x}_{j}^{s},G_{a}^{s}(\mathbf{x}_{j}^{s}))\), with \(G_{a}^{s}\) indicating no gradient optimization. We construct the final objective function to update the prototypes and feature encoder. \[\mathcal{L}_{clustering}(G^{s};\mathcal{X}^{s})=\mathcal{L}_{ transport}(G^{s};\mathcal{X}^{s})+\mathcal{L}_{mi}(G^{s};\mathcal{X}^{s})+\mathcal{L}_{cutmix}(G^{s}; \mathcal{X}^{s}). \tag{4}\] ### Target model learning Because of the domain shift, we divide our target model learning into two stages--target model clustering and target model refinement--to ensure that the knowledge transferred from the source domain does not interfere with the learning in the target domain (Shu et al., 2018). The first phase aims to transfer the knowledge from the source model to the target model while protecting the privacy of the source domain. The second phase focuses on refining the target model so that target samples are tightly clustered around each prototype. #### 2.3.1 Target model clustering In many practical applications, it is crucial to preserve the privacy of both the source model and data (Ziller et al., 2020, 2021). Thus, directly using the source model to initialize the target model is not ideal. Instead, we consider the practical problem where the source model can only provide a cluster label for each target example. The source model is simply an API, and we have access to neither its architecture nor model parameters. With the predicted cluster assignments given by the source model, we want to learn a well-trained clustering model on the target data. 
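Before describing the distillation step, we note that the mutual-information term of Eq. (3), which reappears in both target-side stages, can be sketched as follows; this is our own minimal version rather than the released code.

```python
import torch
import torch.nn.functional as F

def mutual_information_loss(logits, eps=1e-8):
    """Eq. (3): -[H(Y) - H(Y|X)]. Confident per-sample predictions (low
    conditional entropy) together with a near-uniform average prediction
    (high marginal entropy) minimize this loss."""
    probs = F.softmax(logits, dim=1)                                # G(x) per sample
    cond_entropy = -(probs * torch.log(probs + eps)).sum(1).mean()  # H(Y|X)
    marginal = probs.mean(dim=0)                                    # average prediction
    marg_entropy = -(marginal * torch.log(marginal + eps)).sum()    # H(Y)
    return -(marg_entropy - cond_entropy)

loss = mutual_information_loss(torch.randn(32, 10))                 # batch 32, K = 10
```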
**Source knowledge transfer with knowledge distillation.** Given unlabeled target samples, \(\{\mathbf{x}_{i}^{t}\}_{i=1}^{n_{t}}\), we can obtain cluster assignments, \(G^{s}(\mathbf{x}_{i}^{t})\), through the source model. Our algorithm can work for both hard and soft labels; however, it is more practical to consider hard labels from the source domain since soft labels may not be available for all models (Sanyal et al., 2022). Thus, we consider hard label assignments from the source domain in our experiments. To transfer the knowledge from the source to target models, we utilize a knowledge distillation loss (Hinton et al., 2015) to train the target model to mimic the predicted output from the source. The loss can be formulated as follows: \(\mathcal{L}_{kd}\left(G^{t};\mathcal{X}^{t},G^{s}\right)=\mathbb{E}_{\mathbf{x}^{t}\in\mathcal{X}^{t}}\mathcal{D}_{kl}\left(G^{s}(\mathbf{x}^{t})\|G^{t}(\mathbf{x}^{t})\right)\), where \(\mathcal{D}_{kl}(G^{s}(\mathbf{x}^{t})\|G^{t}(\mathbf{x}^{t}))=\sum_{k=1}^{K}G^{s}(\mathbf{x}^{t})_{k}\log\frac{G^{s}(\mathbf{x}^{t})_{k}}{G^{t}(\mathbf{x}^{t})_{k}}\) stands for the Kullback-Leibler divergence between two distributions and \(G^{t}\) is initialized with a pre-trained feature encoder. Because of the domain shift, the source model may not always cluster target samples based on their semantic content. Thus, we propose to refine the predicted target assignments using two simple strategies: label smoothing (Pereyra et al., 2017; Zhang et al., 2021) and a temporal ensemble (Laine and Aila, 2016; Fan et al., 2021; Kim et al., 2021). Muller et al. (2019) discover that label smoothing can help the penultimate layer representation form tight clusters, allowing the model to discover underlying clusters more easily. To utilize label smoothing, we interpolate the hard assignments with a uniform distribution to obtain soft labels: \(\hat{y}_{k}^{LS}=(1-\gamma)G^{s}(\mathbf{x}^{t})_{k}+\frac{\gamma}{K}\), where \(\gamma\) is the weight of the uniform distribution. As the target model improves, we can leverage its predicted cluster probabilities across different iterations to form a temporal ensemble: \((\hat{y}^{t})^{l}\leftarrow\tau(\hat{y}^{t})^{l-1}+(1-\tau)G^{t}(\mathbf{x}^{t})^{l}\), where \(\tau\) determines how much weight we give to past assignments, \((\hat{y}^{t})^{l-1}\) is the assignment at the \(l-1\) th iteration, and \(G^{t}(\mathbf{x}^{t})^{l}\) is the current assignment. We initialize \((\hat{y}^{t})^{0}\) with the smoothed assignments from the source model. The refined cluster assignments from the source model, \(\hat{y}^{t}\), then replace \(G^{s}(\mathbf{x}^{t})\) in the distillation loss. Thus, for target model clustering, the training includes the following losses: \(\mathcal{L}_{target\_clustering}(G^{t};\mathcal{X}^{t},G^{s})=\mathbb{E}_{\mathbf{x}^{t}\in\mathcal{X}^{t}}\mathcal{D}_{kl}\left(\hat{y}^{t}\|\,G^{t}\left(\mathbf{x}^{t}\right)\right)+\mathcal{L}_{clustering}(G^{t};\mathcal{X}^{t})\).

#### 2.3.2 Target model refinement

In the previous section, we use both source and target domain knowledge to learn our clustering model. While the source domain knowledge can assist target domain learning, the bias in distribution due to domain shift could lead the target model to learn noisy domain information from the source model. Similar to the observation by Shu et al. (2018), we find that the target model could benefit from further clustering on the target data alone. We utilize the clustering objective in Eq. (4) with target data and model as arguments and without the CutMix loss.
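A minimal sketch of the refined distillation targets and the distillation term described in Section 2.3.1 above (label smoothing of the source model's hard assignments, followed by the temporal ensemble) is given below; the interface and names are our own assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def refine_targets(source_labels, prev_targets, target_logits, K,
                   gamma=0.1, tau=0.9):
    """Label-smooth the source model's hard assignments, then blend with the
    current target predictions via the temporal ensemble of Section 2.3.1."""
    one_hot = F.one_hot(source_labels, K).float()
    smoothed = (1.0 - gamma) * one_hot + gamma / K        # y^LS
    if prev_targets is None:                              # initialize (y-hat)^0
        prev_targets = smoothed
    current = F.softmax(target_logits, dim=1)             # G^t(x) at this iteration
    return tau * prev_targets + (1.0 - tau) * current     # refined targets y-hat

def distillation_loss(targets, target_logits, eps=1e-8):
    """KL(y-hat || G^t(x)) averaged over the mini-batch."""
    log_q = F.log_softmax(target_logits, dim=1)
    return (targets * (torch.log(targets + eps) - log_q)).sum(dim=1).mean()
```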
The CutMix regularization term is not included since there is no source knowledge transfer and the target data come from a single domain. Also, the regularizer makes the predicted probabilities unconfident. During this stage, we want the target feature representations to be clustered tightly around the target prototypes (confident network outputs). The target refinement loss is thus formulated as: \(\mathcal{L}_{target\_refinement}(G^{t};\mathcal{X}^{t})=\mathcal{L}_{transport}( G^{t};\mathcal{X}^{t})+\mathcal{L}_{mi}(G^{t};\mathcal{X}^{t})\). ## 3 Related work **Clustering.** For a complete picture of the field, readers may refer to the survey by Min et al. (2018). We emphasize deep-clustering-based approaches, which attempt to learn the feature representation of the data while simultaneously discovering the underlying clusters: K-means (Yang et al., 2017; Caron et al., 2018), information maximization (Menapace et al., 2020; Ji et al., 2019; Kim and Ha, 2021; Do et al., 2021), transport alignment(Asano et al., 2019; Caron et al., 2020; Wang et al., 2022), neighborhood-clustering (Xie et al., 2016; Huang et al., 2019; Dang et al., 2021), contrastive learning (Pan and Kang, 2021; Shen et al., 2021), probabilistic approaches (Yang et al., 2020; Monnier et al., 2020; Falck et al., 2021; Manduchi et al., 2021), and kernel density (Yang and Li, 2021). These works primarily focus on clustering data for downstream tasks for a single domain, whereas our clustering algorithm is designed to cluster the data from multiple domains. Moreover, our method solves the problem of transferring the knowledge from the data-rich source domain to the target domain. Distinct from ACIDS (Menapace et al., 2020) which maximizes the mutual information between different views of the same image, our method maximizes the mutual information between cluster labels and images. In addition to data privacy, we also consider model privacy. **Source-free knowledge transfer.** Early domain adaptation methods (Ben-David et al., 2006; Blitzer et al., 2006; Tzeng et al., 2014; Ganin and Lempitsky, 2015; Long et al., 2017, 2018, 2015; Tzeng et al., 2017; Courty et al., 2017) focus on reducing the distributional discrepancies between the source and target domain data. These methods, however, require access to the source and target data simultaneously during the adaptation process, compromising the privacy of the source domain. To overcome this issue, several methods (Kuzborskij and Orabona, 2013; Du et al., 2017; Liang et al., 2020; Li et al., 2020; Kundu et al., 2020; Kurmi et al., 2021; Yeh et al., 2021; Tanwisuth et al., 2021) have been developed for source data-free domain adaptation. For a more thorough literature review of this field, we refer the reader to the survey paper by Yang et al. (2021). In contrast to those methods, we consider a more challenging adaptation setting, as used in previous works (Lipton et al., 2018; Deng et al., 2021; Liang et al., 2021; Zhang et al., 2021), where the privacy of both data and models is the main concern. Different from these lines of work, our approach relies on labeled data in neither the source nor target domain. ## 4 Experiments In this section, we evaluate our method on Office-31, Office-Home, and PACS datasets under three different transfer learning scenarios. The first setting (standard setting) includes only input distribution shift. The second setup (model transfer setting) contains both input and model shifts. 
The last scenario (limited-data and cluster-imbalanced setting) involves both input and cluster-proportion shifts. ### Experimental setup **Comparable methods.** We benchmark against existing clustering approaches--_DeepCluster of Caron et al. (2018), Invariant Information Clustering (IIC) of Ji et al. (2019)_, and _Adaptive Clustering of Images under Domain Shift (ACIDS) of Menapace et al. (2020)_--in the UCDS setting when the results are available. Unless specified otherwise, the reported baseline results are directly taken from Menapace et al. (2020). IIC and DeepCluster train on target data only while ACIDS trains a source model and then adapts on the target data. We also compare our approach to the following alternative methods, which are different components of our framework: _Pre-trained Only (PO)_, which uses a pre-trained network to cluster target data directly; _Source training Only (SO)_, which trains a model on all the source data using Eq. (4) and directly tests on the target data; _Target Training Only (TO)_, which trains a model on the target data using the loss in Section 2.3.2 without source knowledge transfer; _Adaptation Only (AO)_, which performs the first two stages of our framework, source model training and target model clustering, without further refining on the target data; _PCD (Ours)_ refers to using all three stages of our approach: source model learning, target model clustering, and target model refinement. SO allows us to see the significance of the source model training. Compared with PCD, TO enables us to evaluate the importance of the source knowledge transfer, while AO helps us see the improvement from target refinement. **Pre-trained networks.** To verify the compatibility of our approach with different models, we consider multiple types of pre-trained network architectures and pre-training schemes in our experiments. For pre-training schemes, we explore supervised and self-supervised pre-trainings on ImageNet (Russakovsky et al., 2015). For network architectures, we experiment with supervised ResNet-18 as well as self-supervised ResNet-50 (He et al., 2016) and Vision Transformer (ViT) (Dosovitskiy et al., 2020). In particular, we adopt the network trained by SWAV (Caron et al., 2020) for ResNet-50 and that trained by DINO (Caron et al., 2021) for Vision Transformer for our self-supervised pre-training. **Datasets and evaluation metric.** We use the following datasets in our experiments: Office-31 (Saenko et al., 2010), Office-Home (Venkateswara et al., 2017), and PACS (Li et al., 2017). The Office-31 dataset has three domains (Amazon, Webcam, DSRL) with 4,652 images. The Office-Home dataset consists of 15,500 images with four domains (Art, Clipart, Product, and Real-world). The PACS dataset contains four domains (Art, Painting, Cartoon, and Sketch) with 9,991 images. Following prior works (Ji et al., 2019; Menapace et al., 2020), we evaluate all methods using clustering accuracy on the target dataset. The metric is calculated by first solving a linear assignment problem to match the clusters to ground-truth classes. We set \(K\), the number of clusters, equal to the number of classes in each dataset for evaluation purposes. **Implementation details.** We follow the standard protocols for source-free domain adaptation (Liang et al., 2020). Specifically, we use mini-batch SGD with a momentum of \(0.9\) and weight decay of \(0.001\). 
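The evaluation metric described above can be computed by solving the cluster-to-class matching with the Hungarian algorithm; the helper below is a standard sketch of this step (our own code, using SciPy's solver), assuming the number of clusters equals the number of classes.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred, num_clusters):
    """Match predicted clusters to ground-truth classes with a linear
    assignment, then report the accuracy of the best matching."""
    confusion = np.zeros((num_clusters, num_clusters), dtype=np.int64)
    for truth, pred in zip(y_true, y_pred):
        confusion[pred, truth] += 1
    rows, cols = linear_sum_assignment(-confusion)     # maximize matched counts
    return confusion[rows, cols].sum() / len(y_true)
```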
Both source and target encoders are initialized with ImageNet pre-trained networks (Russakovsky et al., 2015), but the prototypes and the projection layer of the encoder are initialized with a random linear layer. The initial learning rates are set to \(0.001\) for the pre-trained encoders and \(0.01\) for the randomly initialized layer. The learning rates, \(\eta\), follows the following schedule: \(\eta=\eta_{0}(1+10p)^{-0.75}\) where \(\eta_{0}\) is the initial learning rate. We use the batch size of \(64\) in both source and target learning. All three loss terms are equally weighted, while other choices are possible. We report the sensitivity of the coefficients in front of the loss terms in Appendix B. The initial value of \(\beta_{0}\) to learn domain-specific proportions is set to \(0.9999\) for source clustering and \(0.99\) for target clustering in all settings. We run our method with three different random seeds to calculate the standard deviation. Full implementation details are included in Appendix E ### Main results **Standard setting.** In real-world applications, the source and target data distributions often differ. To test our method under input distribution shift, we evaluate our method on Office-31, Office-Home, and PACS datasets. For each experiment, we select one domain as the target and all the other, denoted as \(\mathcal{R}\), as the source domains. We use the same model architecture in both the source and target domains. We report the results for ResNet-18 (supervised pre-training) in Table 2. The full results with standard error are shown in Appendix A. Compared with the results reported by Menapace et al. (2020), our algorithm outperforms ACIDS consistently in all three datasets (see Table 2): \(19.2\%\) on Office-31, \(12.3\%\) on Office-Home, and \(12.9\%\) on PACS. Though ACIDS does not address the problem of our setting with the same pre-training scheme and backbones as our method, we report the results for comparison. The results of ACIDS with this pre-training scheme are included in Appendix A in Table 7. We observe that our approach still outperforms ACIDS on three out of four tasks with 4% higher in the average accuracy, emphasizing the general applicability and strong performance of PCD. With no adaptation, TO achieves higher clustering accuracy than both IIC and DeepCluster, demonstrating the effectiveness of our clustering method. Compared with our own alternative methods (\(i.e.,\) PO, SO, TO, and AO), PCD achieves steady gains in performance except for one task. Notably, on the task \(\mathcal{R}\rightarrow\) A of the PACS dataset, we notice a negative transfer (Wang et al., 2019) as TO performs the best (\(56.5\%\) vs. \(49.7\%\)). 
We hypothesize \begin{table} \begin{tabular}{c|c c c c|c c c c|c c c|c c c} \hline \hline \multirow{2}{*}{Senties} & \multicolumn{4}{c|}{Office-31} & \multicolumn{4}{c|}{Office-Home} & \multicolumn{4}{c}{PACS} \\ \cline{2-13} & \(\mathbb{R}+\mathbb{A}\) & \(\mathbb{R}-\mathbb{W}\) & \(\mathbb{R}+\mathbb{D}\) & Avg & \(\mathbb{R}-\mathbb{A}\) & \(\mathbb{R}-\mathbb{A}\) & \(\mathbb{R}-\mathbb{A}\) & \(\mathbb{R}-\mathbb{A}\) & \(\mathbb{R}-\mathbb{A}\) & \(\mathbb{R}-\mathbb{A}\) & \(\mathbb{R}-\mathbb{A}\) & \(\mathbb{R}-\mathbb{A}\) & \(\mathbb{R}-\mathbb{A}\) & \(\mathbb{R}-\mathbb{A}\) & \(\mathbb{R}-\mathbb{A}\) & \(\mathbb{R}-\mathbb{A}\) & \(\mathbb{R}-\mathbb{A}\) & \(\mathbb{R}-\mathbb{A}\) \\ \hline DeepCluster (Caros et al., 2018) & 13.6 & 18.9 & 18.7 & 19.1 & 8.9 & 11.1 & 16.9 & 13.3 & 12.6 & 27.9 & 22.2 & 24.4 & 27.1 & 25.4 \\ ACIDS (Dosovitskiy et al., 2019) & 31.9 & 37.0 & 34.0 & 34.4 & 12.0 & 15.2 & 22.5 & 15.9 & 16.4 & 70.6 & 39.8 & 39.6 & 46.6 & 49.2 \\ ACIDS (Dosovitskiy et al., 2020) & 34.4 & 37.6 & 36.1 & 35.7 & 12.0 & 16.2 & 23.9 & 15.7 & 17.0 & 6.4 & 62.4 & 44.5 & 51.1 & 39.3 \\ \hline PO & 14.1 & 17.9 & 15.3 & 16.8 & 11.4 & 9.0 & 12.9 & 10.8 & 11.0 & 30.5 & 24.1 & 19.3 & 20.8 & 23.8 \\ SO & 34.5 & 46.7 & 43.0 & 14.4 & 26.5 & 15.6 & 23.1 & 21.8 & 21.0 & 30.8 & 35.7 & 27.6 & 25.0 & 30.0 \\ TO & 38.0 & 46.6 & 45.3 & 43.1 & 21.3 & 12.2 & 30.6 & 24.2 & 22.1 & 88.4 & **56.5** & 40.1 & 62.6 \\ AO & 42.8 & 54.4 & 50.8 & 52.3 & 30.0 & 27.2 & 27.3 & 23.4 & 24.6 & 8.6 & 51.3 & 47.7 & 52.3 & 49.1 & 0.02 \\ \hline **PCD** & **46.8** & **60.0** & **57.8** & **54.9** & **33.3** & **24.4** & **31.4** & **28.1** & **29.3** & **59.6** & **49.7** & **56.7** & **53.4** & **63.4** \\ \hline \hline \end{tabular} \end{table} Table 2: Clustering accuracy \((\%)\) on different datasets for ResNet-18-based methods. \(\mathcal{R}\) denotes the rest of the domains. that the Art domain looks quite distinct from the source domain data, and the supervised-pretraining backbone is strong enough to yield good performance using target training only. SO improves upon PO on all the tasks, showing that the knowledge from the source domain can benefit the target domain learning. Likewise, we see consistent improvements around \(2-3\%\) over AO. This result illustrates the importance of target model refinement. We observe similar patterns using self-supervised ResNet-50 as the backbone (see Appendix A). **Model transfer setting.** In many applications, source and target data owners may have different resource requirements. As an example, unlike source providers such as Google, target clients may have limited resources. Thus, they may not be able to use the same model architecture as the source provider. To illustrate the flexibility and demonstrate the generalizability of our framework under model shift, we experiment with different model architectures and pre-training schemes in the source and target domains. We explore three different combinations of source and target model architectures and pre-training schemes: ViT-B/16 (self-supervised) \(\rightarrow\) ResNet-50 (self-supervised), ViT-B/16 (self-supervised) \(\rightarrow\) ResNet-50 (supervised), and ViT-B/16 (self-supervised) \(\rightarrow\) ResNet-18 (supervised). The results are reported in Table 3. In both settings, we continually see improvements in average performance. 
This finding shows that our method still performs well even though the source and target domain architectures differ, providing strong evidence for the generalizability and compatibility of different components of our framework. **Limited-data and cluster-imbalanced setting.** In real-world scenarios, target domain data are often scarce and imbalanced. To further show the benefit of our clustering loss under this setting, we follow the experimental procedures in Tachet des Combes et al. (2020) and Zhang et al. (2022). Specifically, we drop 70% of the target data in the first \(\lfloor K/2\rfloor\) clusters to create this scenario. The experiments are done on the Office-31 dataset. To illustrate the use of our method in a label-free pipeline, we utilize self-supervised ResNet-50 as the feature encoder for both source and target domains. This scenario is extremely challenging for transfer learning methods since there are shifts in both image and cluster-label distributions. However, as we see in Table 4, PCD still outperforms TO by \(4\%\). We note that TO also adaptively learns the target proportions but does not have to deal with distribution shifts. We also observe consistent improvements over other alternative methods. This result highlights the use of our method in practical settings with limited and imbalanced data. ## 5 Analysis **Ablation study.** To see the contribution of each component, we remove one part at a time from the whole framework and present the results in Table 5. Overall, PCD achieves higher clustering accuracy than all other alternative versions with privacy constraints. We observe that the clustering accuracy drops dramatically (\(10.3\%\)) without the prototype clustering, illustrating the importance of this element. The mutual-information objective is also significant since omitting it leads to a drop in clustering accuracy of \(7.3\%\). This observation shows that the two losses are complementary to each other. The temporal ensemble of the cluster labels produced by the source model still improves the model but does not significantly hurt the performance if removed. We also report the result of directly initializing the target model with the source model (w/o model privacy). We notice around \(3\%\) improvement. When pooling all the source domains together into a single source domain for \begin{table} \begin{tabular}{c|c c c c} \hline \hline Settings & \(\mathcal{R}\)\(\rightarrow\) sub-A & \(\mathcal{R}\)\(\rightarrow\) sub-W & \(\mathcal{R}\)\(\rightarrow\) sub-D & Avg \\ \hline PO & \(14.6\) & \(16.7\) & \(21.5\) & \(17.6\) \\ SO & \(21.1\) & \(32.5\) & \(36.0\) & \(29.9\) \\ TO & \(31.4\) & \(41.9\) & \(45.1\) & \(39.5\) \\ AO & \(34.7\) & \(40.8\) & \(43.9\) & \(39.8\) \\ \hline **PCD** & \(\mathbf{37.8}\) & \(\mathbf{46.4}\) & \(\mathbf{47.0}\) & \(\mathbf{43.7}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Clustering accuracy \((\%)\) on sub-sampled version of Office-31 for ResNet-50-based methods. 
\begin{table} \begin{tabular}{c|c c c|c c c|c c c|c c c} \hline \hline Settings & \multicolumn{4}{c|}{ViT-B/16 (sml) \(\rightarrow\) ResNet-50 (sml)} & \multicolumn{4}{c|}{ViT-B/16 (sml) \(\rightarrow\) ResNet-50 (sml)} & \multicolumn{4}{c}{ViT-B/16 (sml) \(\rightarrow\) ResNet-18 (sml)} \\ \hline & \(\mathcal{R}\)\(\rightarrow\) A & \(\mathcal{R}\)\(\rightarrow\) W & \(\mathcal{R}\)\(\rightarrow\) D & Avg & \(\mathcal{R}\)\(\rightarrow\) A & \(\mathcal{R}\)\(\rightarrow\) W & \(\mathcal{R}\)\(\rightarrow\) D & Avg & \(\mathcal{R}\)\(\rightarrow\) A & \(\mathcal{R}\)\(\rightarrow\) W & \(\mathcal{R}\)\(\rightarrow\) D & Avg \\ \hline PO & \(20.1/13.8\) & \(26.7/16.7\) & \(27.2/19.2\) & \(24.7/16.6\) & \(20.1/15.7\) & \(26.7/27.2/18.2\) & \(27.3/16.9\) & \(28.4/17.9\) & \(20.1/14.8\) & \(28.7/17.9\) & \(27.2/13.8\) & \(24.7/16.8\) \\ SO & \(32.2\) & \(46.4\) & \(37.2\) & \(2.3\) & \(4.2\) & \(4.2\) & \(37.2\) & \(2.3\) & \(43.2\) & \(43.2\) & \(43.4\) & \(37.2\) & \(42.3\) \\ TO & \(32.6\) & \(34.2\) & \(33.7\) & \(33.5\) & \(43.7\) & \(55.8\) & \(\mathbf{52.0}\) & \(50.5\) & \(38.0\) & \(45.3\) & \(36.6\) & \(43.3\) \\ AO & \(50.6\) & \(49.7\) & \(36.4\) & \(45.6\) & \(52.5\) & \(37.4\) & \(44.2\) & \(50.1\) & \(53.0\) & \(47.2\) & \(43.9\) & \(48.0\) \\ \hline PCD & \(\mathbf{31.7}\) & \(\mathbf{51.7}\) & \(\mathbf{41.9}\) & \(\mathbf{48.4}\) & \(\mathbf{54.4}\) & \(\mathbf{60.8}\) & \(\mathbf{-0.2}\) & \(\mathbf{54.8}\) & \(\mathbf{54.6}\) & \(\mathbf{53.6}\) & \(\mathbf{46.7}\) & \(\mathbf{51.6}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Clustering accuracy \((\%)\) on Office-31 for different model transfer settings. _ssl_ and _sup_ denote self-supervised and supervised pre-trainings, respectively. (source) \(\rightarrow\) (target). clustering (pooled source), we see a drop of \(2.2\%\). This result indicates that we should respect the local structures of data in each domain. **Results analysis.**_Visualization._ In Figure 2, we visualize the estimated target proportions versus the true proportions, which are calculated from the ground-truth labels. The learned cluster distribution achieves lower L1 loss than the uniform distribution, meaning that the estimated values reflect the data distribution better than the uniform proportions. We plot the t-SNE visualization of the outputs of the feature encoder for the model trained with target training only (Section 2.3.2) in Figure 2(a) and the one trained with the whole framework (Algorithm 1) in Figure 2(b). Using the whole framework, we can see that the samples are more tightly clustered around the prototypes, illustrating that the knowledge from the source domain benefits the target model learning. _Running time and parameter size._ We report the number of parameters and running time per step for comparison in Appendix C, where we see that our method is more efficient in both time and memory than ACIDS. Figure 3: t-SNE visualizations of the encoder’s outputs on the task \(\mathcal{R}\rightarrow\) W. Different colors represent semantic classes from the ground-truth labels. Figure (a) shows the outputs trained with Target training Only (TO), while Figure (b) depicts those trained with the whole framework. Samples with similar semantic content are more tightly clustered around the prototypes (\(\star\)) in Figure (b). 
\begin{table} \begin{tabular}{l l l l l l l} \hline Full & w/o prototype clustering & w/o MI & w/o CutMix & w/o Temporal Ensemble & w/o model privacy & pooled source \\ \hline 60.0 & 49.7 & 52.7 & 54.5 & 58.3 & 63.1 & 57.8 \\ \hline \end{tabular} \end{table} Table 5: Clustering accuracy (%) on the task \(\mathcal{R}\rightarrow\) W (Office-31) under different variants (ResNet-18).

Figure 2: Visualization of the cluster proportions for the sub-sampled version of the task \(\mathcal{R}\rightarrow\) sub-W on the Office-31 dataset. To create this plot, we first match the predicted clusters with the true labels using optimal assignment. The blue bars exhibit the true cluster proportions, whereas the orange bars depict the estimated cluster proportions. The L1 loss of the estimated cluster distribution is lower than that of the uniform proportion (0.26 vs. 0.6), demonstrating the success of our estimation.

## 6 Conclusion

We study a practical transfer learning setting that does not rely on labels in the source and target domains and considers the privacy of both the source data and model. To solve this problem, we provide a novel solution that utilizes prototype clustering, mutual-information maximization, data augmentation, and knowledge distillation. Experiments show that our clustering approach consistently outperforms the baselines and works well across different datasets and model architectures.
2307.00508
Rational Cuntz states peak on the free disk algebra
We apply realization theory of non-commutative rational multipliers of the Fock space, or free Hardy space of square--summable power series in several non-commuting variables to the convex analysis of states on the Cuntz algebra. We show, in particular, that a large class of Cuntz states which arise as the `non-commutative Clark measures' of isometric NC rational multipliers are peak states for Popescu's free disk algebra in the sense of Clou\^atre and Thompson.
Robert T. W. Martin, Eli Shamovich
2023-07-02T08:08:39Z
http://arxiv.org/abs/2307.00508v1
# Rational Cuntz states peak on the free disk algebra ###### Abstract We apply realization theory of non-commutative rational multipliers of the Fock space, or _free Hardy space_ of square-summable power series in several non-commuting variables to the convex analysis of states on the Cuntz algebra. We show, in particular, that a large class of Cuntz states which arise as the 'non-commutative Clark measures' of isometric NC rational multipliers are peak states for Popescu's free disk algebra in the sense of Clouatre and Thompson. ## 1 Introduction This paper applies non-commutative (NC) analysis to a question in non-commutative convexity. Our main result constructs states on the Cuntz algebra, \(\mathcal{O}_{d}\), which peak at non-commutative rational inner functions in Popescu's _free disk algebra_, \(\mathbb{A}_{d}\). We also provide a novel characterization of unital quantum channels in terms of NC rational inner functions. Non-commutative convexity first appeared in the seminal work of Arveson [5]. In [5], Arveson extended classical Choquet theory to a non-commutative, operator-algebraic setting. Classical Choquet theory studies the extreme boundary of compact convex sets and representing measures. Namely, let \(K\) be a compact convex set; a probability measure \(\mu\) on \(K\) represents a point \(x\in K\) if the restriction of the corresponding state from \(C(K)\), the continuous functions on \(K\), to the space of continuous affine functions on \(K\), is the functional of evaluation at \(x\). Bauer characterized the extreme points of \(K\) as those that admit only one representing measure (necessarily \(\delta_{x}\)). This characterization lends itself to an extension to a more general setting of unital subspaces \(1\in M\subset C(X)\), where \(X\) is a compact Hausdorff space. The Choquet boundary of such a subspace is the collection of all points \(x\in X\), such that \(\delta_{x}|_{M}\) admits a unique extension to \(C(X)\). One way to obtain points in the Choquet boundary is to find peak points. A point \(x\in X\) is an \(M-\)peak point, if there exists an \(f\in M\), such that \(|f(x)|=\|f\|\) and \(|f(y)|<\|f\|\) for any \(y\neq x\). In particular, if \(1\in A\subset C(X)\) is a uniform algebra, and \(X\) is metrizable, then an important result of Bishop states that the Choquet boundary of \(A\) is precisely the set of peak points of \(A\). The field of non-commutative convexity has expanded quickly with contributions from Wittstock, Effros and his collaborators, and many others. The reader is referred to the monograph of Davidson and Kennedy [16] and the references therein for further details on non-commutative convexity theory. Of particular interest is the non-commutative Choquet theory first introduced and developed by Arveson [2, 3, 4, 5]. Suppose \(B\) is a unital \(C^{*}\)-algebra and \(1\in A\subset B\) is an operator algebra. In this case, we say that an irreducible representation \(\pi\colon B\to B(\mathcal{O})\) is a boundary representation if \(\pi|_{A}\) has a unique extension to \(B\). The image of the direct sum of all boundary representations gives the \(C^{*}-\)_envelope_, \(C^{*}_{min}(A)\) of \(A\). This is a \(C^{*}-\)algebra in which \(A\) embeds, completely isometrically, and it is universal and minimal in the sense that if \(A\) embeds completely isometrically into any \(C^{*}-\)algebra, \(B\), then there is a \(*-\)homomorphism from \(B\) onto \(C^{*}_{min}(A)\) which intertwines the embeddings. 
Existence and construction of the \(C^{*}-\)envelope via boundary representations was a long-unsolved problem in operator algebra theory until it was resolved in full generality by Davidson and Kennedy [2, 5, 15, 22, 26]. Arveson also introduced the concept of a peaking representation in [3]. His ideas were further extended by Clouatre [11], Clouatre and Thompson [12, 13], and Davidson and Passer [18]. In particular, Clouatre and Thompson propose to study 'peaking states'. Precise definitions and details on peaking states can be found in Subsection 2.3. Clouatre and Thompson [13] suggest that studying peaking phenomena on the Cuntz algebra will be interesting. We will provide a family of examples of peaking states and corresponding representations in this setting. In more detail, we will study states on the Cuntz algebra \(\mathcal{O}_{d}\), which peak at elements of Popescu's free or non-commutative disk algebra, \(\mathbb{A}_{d}\). Here, \(\mathcal{E}_{d}:=C^{*}\{I,L_{1},\cdots,L_{d}\}\), is the _Cuntz-Toeplitz algebra_, the unital \(C^{*}-\)algebra generated by the left creation operators, \(L_{k}\), on the full Fock space, \(\mathbb{H}_{d}^{2}\). Here, \(\mathbb{H}_{d}^{2}\) can be defined as the Hilbert space of complex square-summable power series in several NC variables, equipped with the \(\ell^{2}-\)inner product of the complex power series coefficients. In this viewpoint the \(L_{k}=M_{\mathbb{A}^{k}}^{L}\) act as isometric left multiplications by the \(d\) independent formal NC variables, \(\mathfrak{z}=(\mathfrak{z}_{1},\cdots,\mathfrak{z}_{d})\). This algebra contains the compact operators, \(\mathcal{K}(\mathbb{H}_{d}^{2})\), on \(\mathbb{H}_{d}^{2}\) and the Cuntz algebra is then defined as the quotient \(C^{*}-\)algebra, \(\mathcal{O}_{d}:=\mathcal{E}_{d}/\mathcal{K}(\mathbb{H}_{d}^{2})\). Popescu's free disk algebra \(\mathbb{A}_{d}:=\mathrm{Alg}\{I,L_{1},\cdots L_{d}\}\) is the unital norm-closed algebra generated by the left creation operators. Moreover, \(\mathbb{A}_{d}\) can be identified, completely isometrically with \(\mathrm{Alg}\{I,S_{1},\cdots,S_{d}\}\), where the \(S_{j}\) denote the generators of the Cuntz algebra. Our primary tools are free analysis and non-commutative function theory. These are rapidly growing fields in modern analysis. Free or non-commutative analysis was initially motivated by the study of analytic functional calculus of several commuting and non-commuting operators, as pioneered by Taylor [46, 47], Voiculescu's free probability theory [48, 49], and Takesaki's extension of Gelfand duality to arbitrary, non-commutative \(C^{*}-\)algebras in [45]. Popescu [37, 39, 41, 42] studied non-commutative functions to extend the classical Sz. Nagy-Foias theory of dilations and von Neumann's inequality to the multivariable setting. He introduced the full Fock space as an analog of the classical Hardy Hilbert space of analytic function in the complex unit disk, \(\mathbb{D}\). We provide the necessary definitions and background theory in Subsection 2.1. A particularly well-studied class of NC functions is the set of all non-commutative rational functions. These functions arise naturally in many different branches of pure and applied mathematics, such as the theory of localizations and quasideterminants in non-commutative rings [1, 14, 25], the theory of formal languages [9, 27], and systems theory [6, 7]. 
This paper will focus on non-commutative rational functions that are elements of the weak operator topology (WOT)-closed algebra generated by the left creation operators on the full Fock space. Such functions were extensively studied in [31, 32]. In this paper, there is a critical interplay between non-commutative rational inner functions, _i.e._ NC rational functions which define isometric (inner) multipliers on the Fock space, states on the Cuntz algebra which peak on the free disk algebra, and finite-dimensional row coisometries. A row contraction is a contractive linear map from several copies of a Hilbert space into one copy. The necessary background and details are in Subsection 2.2. Given an irreducible and finite-dimensional row coisometry, \(T=(T_{1},\cdots,T_{d})\), and a unit vector \(x\), we can define a linear functional on \(\mathbb{A}_{d}\) via \(\mu(f)=\langle x,f(T)x\rangle\). One can apply a Gelfand-Naimark-Segal construction to \((\mu,\mathbb{A}_{d})\), and this yields a GNS-Hilbert space, \(\mathbb{H}_{d}^{2}(\mu)\) and a \(*-\)representation of the Cuntz-Toeplitz \(C^{*}-\)algebra, \(\pi_{\mu}\). Since \(T\) is a row-coisometry, one can show that \(\Pi_{\mu}:=\pi_{\mu}(L)\) is a Cuntz, _i.e._ surjective row isometry, so that \(\mu\) admits a unique extension to a state on \(\mathcal{O}_{d}\), \(\hat{\mu}\) by [33, Proposition 5.11]. This state, \(\hat{\mu}\), is a finitely-correlated state, as introduced by Bratteli and Jorgensen [10]. Our main result is Theorem 3.5, which states that every such finitely-correlated state is an \(\mathbb{A}_{d}\)-peak state. In particular, by results of Clouatre, this implies that every such state is an exposed extreme point of the state space of the non-commutative or free disk operator system. The proof of the theorem constructs a non-commutative rational inner, such that the state peaks at it. However, this inner is constructed from \((T_{1}^{t},\cdots,T_{d}^{t})\), the coordinate-wise transpose of \(T\). This duality leads us to Theorem 3.8, which shows that a non-commutative rational inner \(\mathfrak{b}\) arises from a quantum channel if and only if the 'transpose' \(\mathfrak{b}^{t}\), obtained by reversing the order of all products in monomials in the power series of \(\mathfrak{b}\), is an inner as well. En route to these results, we obtain some results on spectra of non-commutative rational functions regular at the origin, which refine and extend our previous results in [32]. ## 2 Background ### Non-commutative Hardy space and its multipliers Given \(d\in\mathbb{N}\), the _full Fock space_ is the Hilbert space direct sum \(\mathbb{H}_{d}^{2}=\oplus_{n=0}^{\infty}\left(\mathbb{C}^{d}\right)^{\otimes n}\), with the usual convention of \(\left(\mathbb{C}^{d}\right)^{\otimes 0}\cong\mathbb{C}\). We will call \(\mathbb{H}_{d}^{2}\) the _free_ or _non-commutative_ (NC) _Hardy space_ as it has many properties in common with the classical Hardy space, \(H^{2}\), of square-summable power series in the complex unit disk. In particular, the elements of \(\mathbb{H}_{d}^{2}\) can be viewed as NC functions on the NC unit ball \[\mathbb{B}_{N}^{d}=\bigsqcup_{n=1}^{\infty}\mathbb{B}_{n}^{d},\text{ where }\mathbb{B}_{n}^{d}=\left\{X\in\mathscr{B}(\mathbb{C}^{n}\otimes\mathbb{C}^{d}, \mathbb{C}^{n})\mid XX^{*}<I\right\}.\] The interpretation of \(\mathbb{H}_{d}^{2}\) as a space of non-commutative functions was first considered by Popescu in [41]. 
A natural way to see \(\mathbb{H}_{d}^{2}\) as a space of functions is by noting that \(\mathbb{H}_{d}^{2}\) is the completion of the free algebra, \(\mathbb{C}\langle\mathfrak{z}_{1},\ldots,\mathfrak{z}_{d}\rangle=\mathbb{C} \langle\mathfrak{z}\rangle\), of NC or free polynomials, with respect to the inner product that makes the monomials orthonormal. The NC monomials correspond to words in the alphabet \(\{1,\ldots,d\}\). Given a word \(\alpha=i_{1}\cdots i_{n}\), \(i_{j}\in\{1,\cdots,d\}\), we write \(\mathfrak{z}^{\alpha}=\mathfrak{z}_{i_{1}}\cdots\mathfrak{z}_{i_{n}}\). If the word is empty, we define \(\mathfrak{z}^{\emptyset}:=1\). We will also denote the length of \(\alpha=i_{1}\cdots i_{n}\) by \(|\alpha|=n\). For a \(d\)-tuple of operators \(T=(T_{1},\ldots,T_{d})\), we set \(T^{\alpha}=T_{i_{1}}\cdots T_{i_{n}}\) and \(T^{\emptyset}=I\). Note that for any free polynomial, \(p\in\mathbb{C}\langle\mathfrak{z}\rangle\), we can evaluate \(p\) on any \(d\)-tuple of matrices \(Z=(Z_{1},\cdots,Z_{d})\in\mathbb{C}^{n\times n}\otimes\mathbb{C}^{1\times d}= :\mathbb{C}_{n}^{d}\) and obtain a function with the following properties: \(p(Z)=p(Z_{1},\cdots,Z_{d})\) 1. is **graded**: for every \(X\in\mathbb{C}_{n}^{d}\), \(p(X)\in\mathbb{C}^{n\times n}\), 2. **respects direct sums**: for every \(X\in\mathbb{C}_{n}^{d}\) and \(Y\in\mathbb{C}_{m}^{d}\), we have \[p(X\oplus Y)=p\left(\begin{pmatrix}X&0\\ 0&Y\end{pmatrix}\right)=\begin{pmatrix}p(X)&0\\ 0&p(Y)\end{pmatrix}=p(X)\oplus p(Y),\] 3. **respects similarities**: for every \(X\in\mathbb{C}_{n}^{d}\) and \(S\in\mathrm{GL}_{n}\), we have \[p(S^{-1}XS)=p(S^{-1}X_{1}S,\ldots,S^{-1}X_{d}S)=S^{-1}p(X)S.\] The elements of \(\mathbb{H}_{d}^{2}\), thus, can be viewed as power series in non-commuting variables. The power series converge uniformly and absolutely on all tuples of matrices of norm \(\leq r\), for every \(0<r<1\). The properties above hold, except that the third property needs to be modified to hold only if \(S^{-1}XS\in\mathbb{B}_{N}^{d}\). Therefore, we obtain a space of NC functions on \(\mathbb{B}_{N}^{d}\). In direct analogy with classical Hardy space theory, (left) multiplication by any of the \(d\) independent variables define isometries, \(L_{j}=M_{\mathfrak{z}_{j}}^{L}\) on \(\mathbb{H}_{d}^{2}\) with pairwise orthogonal ranges. The 'row' operator, \[L=(L_{1},\ldots,L_{d})\colon\mathbb{H}_{d}^{2}\otimes\mathbb{C}^{d}\to \mathbb{H}_{d}^{2},\] is then a _row isometry_, _i.e._ an isometry from several copies of a Hilbert space into one copy. The operators \(L_{j}\) are called the left creation operators or the left free shifts. The WOT-closed algebra \(\mathbb{H}_{d}^{\infty}\), the _free Hardy algebra_, generated by the \(L_{j}\) was extensively studied by Popescu [41, 42, 39] and Davidson-Pitts [19, 20, 21]. The free Hardy algebra is completely isometrically isomorphic to the algebra of all uniformly bounded NC functions on \(\mathbb{B}_{N}^{d}\) with the uniform norm. \[\|f\|=\sup_{X\in\mathbb{B}_{N}^{d}}\|f(X)\|.\] The space \(\mathbb{H}_{d}^{2}\) is an NC reproducing kernel Hilbert space (RKHS) in the sense of Ball, Marx, and Vinnikov [8]. The free Hardy algebra can then be identified as the algebra of left multipliers of this NC-RKHS. Popescu and Davidson-Pitts have generalized the classical inner-outer factorization of bounded analytic functions on the disk to \(\mathbb{H}_{d}^{\infty}\). 
Here, as in classical \(H^{2}\) theory, an inner NC function in \(\mathbb{H}_{d}^{\infty}\) defines an isometric left multiplier on \(\mathbb{H}_{d}^{2}\) and an outer NC function is one that defines a left multiplier with a dense range. Recall that the free disk algebra, \(\mathbb{A}_{d}\), is the unital norm-closed algebra generated by the left creation operators on the full Fock space. We assume throughout that \(d\geq 2\). This algebra is a very close NC analogue of the classical disk algebra, \(\mathbb{A}_{1}=A(\mathbb{D})\). The Cuntz algebra is the universal \(C^{*}\)-algebra of a surjective row isometry. That is, if \(S=(S_{1},\ldots,S_{d})\) denotes the row isometry of generators of the Cuntz algebra, then \(\sum_{j=1}^{d}S_{j}S_{j}^{*}=I\). The quotient map from the Cuntz-Toeplitz algebra, \(\mathscr{E}_{d}\) onto \(\mathbb{O}_{d}\) restricts to a complete isometry on \(\mathbb{A}_{d}\), and since the Cuntz algebra is simple, it is the \(C^{*}\)-envelope of \(\mathbb{A}_{d}\). Classically, \(C(\mathbb{T})\) is the \(C^{*}\)-envelope of \(A(\mathbb{D})\). Moreover, \(C(\mathbb{T})=\overline{A(\mathbb{D})+A(\mathbb{D})^{*}}\). In the NC setting, \(\mathscr{A}_{d}:=(\mathbb{A}_{d}+\mathbb{A}_{d}^{*})^{-\|\cdot\|}\) is an operator system that embeds, completely isometrically, into \(\mathscr{O}_{d}\), but does not coincide with the Cuntz algebra. In particular, \(\mathscr{A}_{d}\) is not an algebra or a \(C^{*}-\)algebra. Namely, by [40, Theorem 3.1], if \(S_{1},\cdots,S_{d}\) denote the generators of \(\mathbb{O}_{d}\), then the free disk algebra \(\mathbb{A}_{d}:=\mathrm{Alg}\{I,L_{1},\cdots,L_{d}\}^{-\|\cdot\|}\) and \(\mathbb{A}_{d}(S):=\mathrm{Alg}\{I,S_{1},\cdots,S_{d}\}^{-\|\cdot\|}\) are completely isometrically isomorphic. Moreover, by [36, Proposition 3.5], the _free disk system_, \(\mathscr{A}_{d}:=(\mathbb{A}_{d}+\mathbb{A}_{d}^{*})^{-\|\cdot\|}\) and the operator system \(\mathscr{A}_{d}(S)=(\mathbb{A}_{d}(S)+\mathbb{A}_{d}(S)^{*})^{-\|\cdot\|}\) are then completely isometrically isomorphic. For the remainder of the paper, we identify \(\mathbb{A}_{d}\) with \(\mathbb{A}_{d}(S)\) and \(\mathscr{A}_{d}\) with \(\mathscr{A}_{d}(S)\) so that the free disk algebra and the free disk system are viewed as subspaces of the Cuntz algebra, \(\mathscr{O}_{d}\). An important property of \(\mathbb{A}_{d}\) is that it is semi-Dirichlet, namely that \(\mathbb{A}_{d}^{*}\mathbb{A}_{d}\subset\mathscr{A}_{d}\). This property enables one to perform a Gelfand-Naimark-Segal (GNS)-type construction directly from positive linear functionals on \(\mathscr{A}_{d}\). Jury and the first author [28] have developed an NC extension of the classical Alexandrov-Clark measure theory. In this NC theory, the positive linear functionals on the free disk system, denoted by \((\mathscr{A}_{d}^{\dagger})_{+}\), play the role of positive measures on the unit circle. There is, in particular, a one-to-one correspondence between states on \(\mathscr{A}_{d}\) and contractive functions in \(\mathbb{H}_{d}^{\infty}\) that vanish at \(0\). We will be primarily interested in the positive _finitely-correlated states_ or 'NC measures' which arise as the 'NC Clark measures' of NC rational multipliers of the Fock space, and we will introduce these in more detail in the following section. ### Non-commutative rational functions The theory of non-commutative rational functions has been developed independently in pure and applied disciplines ranging from pure algebra to computational and systems theory. 
In particular, the algebra of all NC rational functions is the _free skew field_ as constructed by Amitsur [1] and Cohn [14] and is denoted by \(\mathbb{C}\!\not<\!\!_{\mathfrak{s}}\!\not>\) (In NC algebra, Amitsur and Cohn proved that this 'free skew field' is the universal 'field of fractions' of the free algebra, \(\mathbb{C}\!\langle\mathfrak{s}\rangle\), of free or non-commutative complex polynomials in the \(d\) NC variables, \(\mathfrak{s}=(\mathfrak{s}_{1},\cdots,\mathfrak{s}_{d})\).) The domain of an NC rational function is roughly the largest collection of \(d\)-tuples of matrices to which our NC rational function can be continued. We will denote the domain of \(\mathfrak{r}\) by \(\mathrm{Dom}\,\mathfrak{r}\). This paper will focus on NC rational functions that are defined and bounded on \(\mathbb{H}_{\mathbb{N}}^{d}\). This assumption simplifies much of the theory. The interested reader should consult [34, 35, 50] and the references therein for more detail on the theory of NC rational functions. Since \(0\in\mathbb{H}_{\mathbb{N}}^{d}\), we will assume that our NC rational functions always have \(0\) in their domains, and we will denote the algebra of all such NC rational functions by \(\mathbb{C}_{0}\!\not<\!\!_{\mathfrak{s}}\!\not>\). This assumption simplifies the definition of an NC rational function somewhat. We will say that an NC rational function in \(\mathbb{C}_{0}\!\not<\!_{\mathfrak{s}}\!\not>\) is any expression of the form \[\mathfrak{r}(\mathfrak{s})=c^{*}\left(I-\sum_{j=1}^{d}\mathfrak{s}_{j}A_{j} \right)^{-1}b.\] Here \(A_{1},\ldots,A_{d}\in\mathbb{C}_{n}^{d}\) and \(b,c\in\mathbb{C}^{n}\). The evaluation of such a function on a tuple \(Z_{1},\ldots,Z_{d}\in\mathbb{C}_{m}^{d}\) is performed via tensor products, i.e., \[\mathfrak{r}(Z)=(I_{m}\otimes c)^{*}\left(I_{m}\otimes I_{n}-\sum_{j=1}^{d}Z_{j }\otimes A_{j}\right)^{-1}(I_{m}\otimes b). \tag{2.1}\] One should note that this is not the original definition of an NC rational function but rather a result in the sense that an NC rational function is usually defined as a certain equivalence class of valid 'NC rational expressions' obtained by applying the arithmetic operations '\(+,\cdot\), and, \(-^{1}\)', to the free algebra, \(\mathbb{C}\langle\mathfrak{z}\rangle\). One can then prove that any such NC rational function in \(\mathbb{C}_{0}\triangleleft\mathfrak{z}\,\mathfrak{z}\,\mathfrak{z}\,\mathfrak{z}\,\mathfrak{z}\,\mathfrak{z}\) obeys a'realization formula' as in Equation (2.1) above. Such a triple, \((A,b,c)\in\mathbb{C}_{n}^{d}\times\mathbb{C}^{n}\times\mathbb{C}^{n}\) is called a descriptor realization of the NC rational function \(\mathfrak{r}\). For every NC rational function, there exist many such descriptor realizations. However, there exists one with \(n\) minimal, the _minimal realization_. The article "the" is justified because two realizations of \(r\) with minimal \(n\) are jointly similar. Namely, given two minimal realizations \((A,b,c)\) and \((\widetilde{A},\widetilde{b},\widetilde{c})\), if \[\widetilde{c}^{*}\left(I-\sum_{j=1}^{d}\mathfrak{z}_{j}\tilde{A}_{j}\right)^{ -1}\tilde{b}=\mathfrak{r}(\mathfrak{z})=c^{*}\left(I-\sum_{j=1}^{d}\mathfrak{z }_{j}A_{j}\right)^{-1}b,\] then there exists \(S\in\mathrm{GL}_{n}\), such that \(\tilde{A}_{j}=S^{-1}AS\), for \(1\leq j\leq d\), \(\tilde{b}=Sb\), and \(\tilde{c}=S^{-1*}c\)[9, Theorem 2.4]. 
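As a simple illustration (a standard example, not taken from this paper): for \(d=2\), the NC rational function \(\mathfrak{r}(\mathfrak{z})=(1-\mathfrak{z}_{1}\mathfrak{z}_{2})^{-1}\in\mathbb{C}_{0}\!\not<\!\mathfrak{z}\!\not>\) admits the size-two descriptor realization \[A_{1}=\begin{pmatrix}0&1\\ 0&0\end{pmatrix},\quad A_{2}=\begin{pmatrix}0&0\\ 1&0\end{pmatrix},\quad b=c=e_{1},\] since \(L_{A}(\mathfrak{z})=\begin{pmatrix}1&-\mathfrak{z}_{1}\\ -\mathfrak{z}_{2}&1\end{pmatrix}\) and, by the Schur complement, the \((1,1)\) entry of \(L_{A}(\mathfrak{z})^{-1}\) is \((1-\mathfrak{z}_{1}\mathfrak{z}_{2})^{-1}\). No realization of size one can produce a power series supported only on the words \((\mathfrak{z}_{1}\mathfrak{z}_{2})^{k}\), so this realization is minimal.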
Moreover, a realization \((A,b,c)\) of size \(n\) is minimal if and only if \(c\) is \(A-\)cyclic and \(b\) is \(A^{*}-\)cyclic in the sense that \[\bigvee_{\omega\in\mathbb{F}^{d}}A^{\omega}c=\mathbb{C}^{n}=\bigvee A^{*\omega }b,\] where \(\mathbb{F}^{d}\) denotes the free monoid of all words in the \(d\) letters \(\{1,\cdots,d\}\). We will write \(L_{A}(\mathfrak{z}):=I-\sum_{j=1}^{d}\mathfrak{z}_{j}\tilde{A}_{j}\) whose inverse appears in the above expression. This object, \(L_{A}(Z)\), is called a _linear pencil_ (it is affine linear). It is a result of Vinnikov-Kaliuzhnyi-Verbovetskyi [34] and Volcic [50] that the domain of our function \(\mathfrak{r}\) can be described as the collection of all \(Z\), such that \(\det L_{A}(Z)\neq 0\), where \(A\) comes from the minimal realization. NC rational functions in \(\mathbb{H}_{d}^{2}\) and \(\mathbb{H}_{d}^{\infty}\) were studied by Jury, and the authors in [31]. In fact, \(\mathfrak{r}\in\mathbb{H}_{d}^{2}\) if and only if the tuple \(A=(A_{1},\ldots,A_{d})\) appearing in its minimal realization has joint spectral radius strictly less than \(1\). Here, the joint spectral radius of \(A\) was defined by Popescu [43] as a natural multivariate analogue of Beurling's spectral radius formula, \[\rho(A)=\lim_{n\to\infty}\left\|\sum_{|\alpha|=n}A^{\alpha}A^{\alpha*}\right\| ^{\frac{1}{2n}}.\] It further follows, by Popescu's multi-variable Rota-Strang theorem, that \(\mathfrak{r}\in\mathbb{H}_{d}^{2}\) if and only if it has a row ball of radius strictly greater than \(1\) in its domain [43]. In particular, if this is the case, then \(\mathfrak{r}\in\mathbb{A}_{d}\subset\mathbb{H}_{d}^{\infty}\). See [31, Theorem A] for several characterizations equivalent to membership of an NC rational function in the full Fock space. Every NC rational contractive function can be associated (essentially) uniquely to an NC Clark measure, i.e., a positive functional on \(\mathscr{A}_{d}\). Jury and the authors in [32] characterized such linear functionals or 'NC rational Clark measures'. It turns out that the NC Clark measures that arise from inner NC rational functions in \(\mathbb{H}_{d}^{\infty}\) with \(\mathfrak{r}(0)=0\) are precisely the finitely-correlated states studied by Bratteli and Jorgensen [10] and later by Davidson, Kribs, and Shpigel [17]. Here, note that an NC Clark measure, \(\mu_{b}\), corresponding to any \(b\in[\mathbb{H}_{d}^{\infty}]_{1}\) can be any positive linear functional on the free disk system, and this functional will be a state, _i.e._\(\mu_{b}(I)=1\), if and only if \(b(0)=0\). Moreover, [32, Theorem 4.1] provides a complete description of all NC rational miners in terms of finite-dimensional row coisometries. Since we will apply this description, we will recall it: Let \(\mathfrak{b}\) be a contractive NC rational function in the unit row-ball with \(\mathfrak{b}(0)=0\). (Again, \(\mathfrak{b}(0)=0\) ensures that its NC Clark measure, \(\mu_{b}\), is a state.) Then, such an \(\mathfrak{b}\) is inner if and only if there exists a row coisometry \(T=(T_{1},\cdots,T_{d})\colon\mathbb{C}^{n}\otimes\mathbb{C}^{d}\to\mathbb{C}^ {n}\) and a unit vector \(x\in\mathbb{C}^{n}\), such that \(x\) is \(T\) and \(T^{*}-\)cyclic and \(\mathfrak{b}\) is given by the realization formula, \[\mathfrak{b}(\mathfrak{z})=(P_{0}x)^{*}\left(I-\sum_{j=1}^{d}\mathfrak{z}_{j}T _{0,j}^{*}\right)^{-1}\left(\sum_{j=1}^{d}z_{j}T_{j}^{*}x\right). \tag{2.2}\] Moreover, in this case, \(\mu_{\mathfrak{b}}(L^{\omega})=\mu_{T,x}(L^{\omega}):=x^{*}T^{*\omega}x\). 
Here we set \(\mathcal{H}_{0}=\bigvee_{\alpha\neq\emptyset}T^{*\alpha}x\), \(P_{0}\) is the orthogonal projection on \(\mathcal{H}_{0}\), and for \(1\leq j\leq d\), \(T^{*}_{0,j}=T^{*}_{j}(I-xx^{*})|_{\mathcal{H}_{0}}\). In particular, if \(T\) is irreducible (the co-ordinate matrices of \(T\) generate \(\mathbb{C}^{n\times n}\) as an algebra), then every unit vector will give rise to such a realization and \(\mathcal{H}_{0}=\mathbb{C}^{n}\). This form of realization is called a Fornasini-Marchesini (FM) realization. More generally, an FM realization is one of the form \[\mathfrak{r}(\mathfrak{z})=D+C^{*}\left(I-\sum_{j=1}^{d}\mathfrak{z}_{j}A_{j} \right)^{-1}\left(\sum_{j=1}^{d}\mathfrak{z}_{j}B_{j}\right).\] Note that \(D=\mathfrak{r}(0)\), so that if we assume that \(\mathfrak{r}(0)=0\), we obtain the preceding form of Equation (2.2). We will denote the FM realization as \((A,B,C,D)\). One can pass from a descriptor realization to an FM one quite easily. The following lemma is well-known to experts, but since we do not have a reference, we chose to include it. **Lemma 2.1**.: _Let \(\mathfrak{r}\in\mathbb{C}_{0}\text{-}\mathfrak{z}\nmid\mathfrak{z}\nmid \mathfrak{z}\nmid\) If \((A,B,C,D)\) is a minimal FM realization of \(\mathfrak{r}\), then_ \[\operatorname{Dom}\mathfrak{r}=\left\{Z\in\mathbb{C}_{\mathbb{N}}^{d}| \det L_{A}(Z)\neq 0\right\}.\] Proof.: By [50, Theorem 3.5], the domain of \(\mathfrak{r}\) is the complement of the singularity locus of the linear pencil \(L_{\hat{A}}(Z)\), where \((\hat{A},b,c)\) is a minimal descriptor realization of \(\mathfrak{r}\). A minimal FM realization, \((A^{\prime},B^{\prime},C^{\prime},D^{\prime})\) of \(\mathfrak{r}\) can then be constructed by setting \(\mathcal{H}^{\prime}:=\bigvee_{\omega\neq\emptyset}\hat{A}^{\omega}c\), \(A^{\prime}:=\hat{A}|_{\mathcal{H}^{\prime}}\), \(B^{\prime}:=\hat{A}c\), \(C^{\prime}=(P_{\mathcal{H}^{\prime}}b)^{*}\) and \(D^{\prime}:=\mathfrak{r}(0)\). By uniqueness of minimal realizations (uniqueness also holds for minimal FM realizations), we can assume that \((A^{\prime},B^{\prime},C^{\prime},D^{\prime})=(A,B,C,D)\). Since \(\mathcal{H}^{\prime}\) has codimension at most \(1\), if \(\mathcal{H}^{\prime}\subsetneq\mathcal{H}\), then \(\hat{A}\) decomposes as \[\hat{A}=\begin{pmatrix}A&*\\ 0&a\end{pmatrix},\] where \(a\in\mathbb{C}^{d}\). Again, by [50, Theorem 3.5], if \(Z\in\operatorname{Dom}\mathfrak{r}\) then \[0\neq\det L_{\hat{A}}(Z)=\det(L_{A}(Z))\det L_{a}(Z),\] so that \(\det L_{A}(Z)\neq 0\). Conversely if \(\det L_{A}(Z)\) is not \(0\) for some \(Z\in\mathbb{C}_{n}^{d}\), then \(\mathfrak{r}(Z)\) is well-defined as the transfer function, \[DI_{n}+I_{n}\otimes CL_{A}(Z)^{-1}Z\otimes B,\] so that \(Z\in\operatorname{Dom}\mathfrak{r}\). If \(\mathfrak{r}\in\mathbb{C}_{0}\text{-}\mathfrak{z}\nmid\mathfrak{z}\nmid\) with minimal descriptor realization \((A,b,c)\), we will allow the domain of \(\mathfrak{r}\) to include \(d\)-tuples of operators in an infinite dimensional Hilbert space. We will denote such operator \(d-\)tuples by \(\mathbb{C}_{\infty}^{d}\) or \(\mathscr{B}(\mathcal{H})^{d}\), where \(\mathcal{H}\) is a separable Hilbert space. Namely, given \(Z=(Z_{1},\cdots,Z_{d})\in\mathbb{C}_{\infty}^{d}\), we will say that \(Z\in\operatorname{Dom}\mathfrak{r}\) if \(L_{A}(Z)\) is invertible. It will be convenient to introduce some basic notations; we view any \(Z\in\mathbb{C}_{n}^{d}\), \(n\in\mathbb{N}\) as a row \(d-\)tuple of operators, \(Z=(Z_{1},\cdots,Z_{d})\), \(Z_{j}\in\mathbb{C}^{n\times n}\). 
It will be convenient to introduce some basic notation; we view any \(Z\in\mathbb{C}_{n}^{d}\), \(n\in\mathbb{N}\), as a row \(d\)-tuple of operators, \(Z=(Z_{1},\cdots,Z_{d})\), \(Z_{j}\in\mathbb{C}^{n\times n}\). Any such row defines a bounded linear map from \(\mathbb{C}^{n}\otimes\mathbb{C}^{d}\) into \(\mathbb{C}^{n}\). Hence, by \(Z^{*}\), we mean the 'column operator', \(Z^{*}:=\left(\begin{smallmatrix}Z_{1}^{*}\\ \vdots\\ Z_{d}^{*}\end{smallmatrix}\right):\mathbb{C}^{n}\rightarrow\mathbb{C}^{n}\otimes\mathbb{C}^{d}\), obtained as the Hilbert space adjoint of the linear map \(Z\). Similarly, \(Z^{\mathfrak{t}}:=\left(\begin{smallmatrix}Z_{1}^{\mathfrak{t}}\\ \vdots\\ Z_{d}^{\mathfrak{t}}\end{smallmatrix}\right)\) denotes the transpose of the row operator, \(Z\), with respect to the standard bases of \(\mathbb{C}^{n}\) and \(\mathbb{C}^{n}\otimes\mathbb{C}^{d}\). We will, however, also have occasion to consider the row operator \(\operatorname{row}(Z^{*}):=(Z_{1}^{*},\cdots,Z_{d}^{*}):\mathbb{C}^{n}\otimes\mathbb{C}^{d}\rightarrow\mathbb{C}^{n}\) obtained as the component-wise adjoint of \(Z\). The row operator \(\operatorname{row}(Z^{\mathfrak{t}})\) is defined similarly. We will also consider the operation \(\overline{(\cdot)}:\mathbb{C}^{n}\rightarrow\mathbb{C}^{n}\), defined by \(x\mapsto\overline{x}\), where \(\overline{x}\) denotes entry-wise complex conjugation with respect to the standard basis of \(\mathbb{C}^{n}\). If \(Z=(Z_{1},\cdots,Z_{d})\in\mathbb{C}_{n}^{d}\), we define \(\overline{Z}:=(\overline{Z}_{1},\cdots,\overline{Z}_{d})\), where \(\overline{Z}_{j}:=\overline{(\cdot)}\circ Z_{j}\circ\overline{(\cdot)}\), so that \(\overline{Z}_{j}\) is obtained by entry-wise complex conjugation of the matrix \(Z_{j}\), i.e., \(\overline{Z}_{j}=(Z_{j}^{*})^{\mathfrak{t}}=(Z_{j}^{\mathfrak{t}})^{*}\), and \(\overline{Z}\) is a row \(d\)-tuple of matrices. The following lemmas give us a useful condition on the spectra of NC rational functions with the origin in their domains. **Lemma 2.2**.: _Let \(\mathcal{H}\) and \(\mathcal{K}\) be Hilbert spaces. Let \(A\in B(\mathcal{H})\) and \(D\in B(\mathcal{K})\) be invertible and \(B\in B(\mathcal{K},\mathcal{H})\) and \(C\in B(\mathcal{H},\mathcal{K})\). Then, the Schur complement, \(A-BD^{-1}C\), has a non-trivial kernel if and only if \(D-CA^{-1}B\) has a non-trivial kernel. Moreover, the map, \(D^{-1}C|_{\ker(A-BD^{-1}C)}\colon\ker(A-BD^{-1}C)\to\ker(D-CA^{-1}B)\), is an isomorphism._ Proof.: Since the claim is symmetric, we will prove only the forward implication. Let us assume that \(0\neq v\in\ker(A-BD^{-1}C)\). Then, \(0\neq Av=BD^{-1}Cv\). In particular, \(D^{-1}Cv\neq 0\). Therefore, \[(D-CA^{-1}B)D^{-1}Cv=Cv-CA^{-1}BD^{-1}Cv=CA^{-1}(Av-BD^{-1}Cv)=0.\] To prove the last part of the claim, we let \(\Phi=D^{-1}C|_{\ker(A-BD^{-1}C)}\). As we saw above, \(\Phi\) is well-defined and injective. Similarly, let \(\Psi=A^{-1}B|_{\ker(D-CA^{-1}B)}\). Then, for every \(v\in\ker(A-BD^{-1}C)\), we have that \[\Psi\Phi v=A^{-1}BD^{-1}Cv=A^{-1}Av=v.\] Similarly, in the other direction. Hence, we obtain our isomorphism.
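Lemma 2.2 is easy to check numerically. The sketch below is illustrative only: the matrices are random, and a singular Schur complement with a known kernel vector is constructed by hand. It verifies that the complementary Schur complement is singular as well, and that \(D^{-1}C\) maps the kernel of the first into the kernel of the second.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
D = rng.standard_normal((n, n)) + 5 * np.eye(n)      # comfortably invertible

# Choose S = A - B D^{-1} C to be singular with a known kernel vector v
v = rng.standard_normal(n)
S = rng.standard_normal((n, n))
S = S - np.outer(S @ v, v) / (v @ v)                 # now S v = 0
A = S + B @ np.linalg.solve(D, C)
assert abs(np.linalg.det(A)) > 1e-10                 # A invertible (generically true)

S2 = D - C @ np.linalg.solve(A, B)                   # the complementary Schur complement
w = np.linalg.solve(D, C @ v)                        # candidate kernel vector w = D^{-1} C v

print(np.linalg.matrix_rank(S2, tol=1e-8))           # n - 1: non-trivial kernel
print(np.linalg.norm(S2 @ w))                        # ~ 0: w lies in ker(D - C A^{-1} B)
```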
**Proposition 2.3**.: _Let \(Z\in\mathbb{C}_{n}^{d}\) and let \(\zeta\in\partial\mathbb{D}\). Then \(\zeta\in\sigma(\mathfrak{b}(Z))\) if and only if \(1\in\sigma_{p}\left(\sum_{j=1}^{d}Z_{j}\otimes T(\zeta)_{j}^{*}\right)\). In particular, every \(\zeta\in\partial\mathbb{D}\) belongs to \(\sigma(\mathfrak{b}(Z))\) for some finite-dimensional point \(Z\)._ Proof.: ([32, Proposition 5.6]) We have that \(\zeta\in\partial\mathbb{D}\cap\sigma(\mathfrak{b}(Z))\) if and only if \[\det\left(I\otimes I-\sum_{j=1}^{d}Z_{j}\otimes T(\zeta)_{j}^{*}\right)=0.\] Taking adjoints, this happens if and only if \[\det\left(I\otimes I-\sum_{j=1}^{d}Z_{j}^{*}\otimes T(\zeta)_{j}\right)=0.\] By vectorization, this happens if and only if there is a matrix, \(X\), so that \[X-\sum_{j=1}^{d}T(\zeta)_{j}X\overline{Z}_{j}=0.\] If \[T(\zeta)^{*}=:\begin{pmatrix}A^{*}&B^{*}\\ 0&C^{*}\end{pmatrix},\] then take \[Z=\begin{pmatrix}A^{\mathrm{t}}&0\\ 0&0\end{pmatrix},\] and if \(A\in\mathbb{C}_{k}^{d}\), let \(X=\begin{pmatrix}I_{k}&0\\ 0&0\end{pmatrix}\), so that \[X-T(\zeta)X\overline{Z}=\begin{pmatrix}I_{k}&0\\ 0&0\end{pmatrix}-\begin{pmatrix}A&0\\ B&C\end{pmatrix}\begin{pmatrix}I_{k}&0\\ 0&0\end{pmatrix}\begin{pmatrix}A^{*}&0\\ 0&0\end{pmatrix}=\begin{pmatrix}I_{k}&0\\ 0&0\end{pmatrix}-\begin{pmatrix}AA^{*}&0\\ BA^{*}&0\end{pmatrix}=\begin{pmatrix}I_{k}&0\\ 0&0\end{pmatrix}-\begin{pmatrix}I_{k}&0\\ 0&0\end{pmatrix}=0.\] Here we have used the fact that \(T(\zeta)\) is a row coisometry.

### Peaking states and representations

In this section we follow the work of Clouatre and Thompson on non-commutative convexity in operator algebra and operator system theory [13]. Let \(A\) be a unital operator algebra and let \(B\) be a \(C^{*}\)-cover of \(A\). Namely, we have a unital, completely isometric embedding \(\iota\colon A\to B\), such that \(B=C^{*}(\iota(A))\). Let us denote by \(K(B)\) the state space of \(B\). Clouatre and Thompson say that \(\mu\in K(B)\) is \(A\)_-peaking_, or an \(A\)_-peak state_, if there exists a contraction \(a\in A\), such that \(\mu(a^{*}a)=1\) and \(\nu(a^{*}a)<1\), for all \(\nu\in K(B)\setminus\{\mu\}\). If \(\mu\) is \(A\)-peaking, then it is quite immediate that \(\mu\) is pure and \(\mu\) has the unique extension property (UEP), i.e., the functional \(\mu|_{A}\) has a unique Hahn-Banach extension to \(B\). In fact, any \(\mu\in(\mathscr{A}_{d}^{\dagger})_{+}\) with the property that its GNS row isometry is a Cuntz row isometry has the UEP by [33, Proposition 5.11]. If \(\pi_{\mu}\colon B\to B(\mathscr{H}_{\mu})\) is a GNS representation of \(\mu\) with cyclic vector \(\xi_{\mu}\), then, by Cauchy-Schwarz, \(\pi_{\mu}(a^{*}a)\xi_{\mu}=\xi_{\mu}\). This implies that there is a finite-dimensional subspace \(F\subset\mathscr{H}_{\mu}\), such that \(\|P_{F}\pi_{\mu}(a)|_{F}\|=1\), where \(P_{F}\) is the projection onto \(F\). Similarly, in [11], if \(B\) is a \(C^{*}\)-algebra and \(\mathscr{S}\subseteq B\) is an operator system, a state, \(\mu\in K(B)\) is said to be \(\mathscr{S}-\)peaking if there is a self-adjoint element \(s\in\mathscr{S}\) so that \(\|s\|=1\) and \(|\lambda(s)|<\mu(s)=1\) for every \(\lambda\in K(B)\setminus\{\mu\}\).
An irreducible representation \(\pi\colon B\to B(\mathscr{H}_{\pi})\) is called local \(A\)-peak by Clouatre and Thompson, if there exists \(n\in\mathbb{N}\), \(T\in M_{n}(A):=A\otimes M_{n}\) with \(\|T\|=1\), such that for every irreducible representation \(\sigma\colon B\to B(\mathscr{H}_{\sigma})\) unitarily inequivalent to \(\pi\) and every finite-dimensional subspace \(G\subset\mathscr{H}_{\sigma}^{\oplus n}\), \(\|P_{G}\sigma^{(n)}(T)|_{G}\|<1\). This implies that there is a finite-dimensional subspace \(F\subset\mathscr{H}_{\pi}^{\oplus n}\), such that \(\|P_{F}\pi^{(n)}(T)|_{F}\|=1\) [13, Lemma 2.2]. By [13, Theorem 2.7], if \(\mu\in K(B)\) is \(A\)-peaking, then \(\pi_{\mu}\) is local \(A\)-peak. The converse, however, is false [13, Example 2]. One can say slightly more about the connection between the two notions. This is the goal of the following two lemmas. **Lemma 2.4**.: _Let \(B\) be a unital \(C^{*}\)-algebra and \(A\subset B=C^{*}(A)\) be a unital operator algebra. Let \(\pi\colon B\to B(\mathcal{H})\) be a representation, such that there exists a contraction \(a\in A\) and a finite-dimensional subspace \(G\subset\mathcal{H}\) with \(\|P_{G}\pi(a)|_{G}\|=1\). Then, there exists a vector \(\xi\in G\), such that \(\pi(a^{*}a)\xi=\xi\)._ Proof.: Let \(\varphi\) be the ucp \(\varphi(b)=P_{G}\pi(b)|_{G}\in B(G)\). By Kadison-Schwarz, \(\varphi(a)^{*}\varphi(a)\leq\varphi(a^{*}a)\). Hence, \[1=\|\varphi(a)\|^{2}\leq\|\varphi(a^{*}a)\|\leq 1.\] In particular, \(\|\varphi(a^{*}a)\|=1\) and since it is a selfadjoint operator on a finite-dimensional space, there exists \(\xi\in G\), such that \(\varphi(a^{*}a)\xi=\xi\). Since \(\pi(a^{*}a)\) is a contraction, we have that \(\pi(a^{*}a)\xi=\xi\). **Lemma 2.5**.: _Let \(B\) be a unital \(C^{*}\)-algebra and \(1\in A\subset B\) an operator algebra, such that \(B=C^{*}(A)\). Let \(a\in A\) be a contraction and set_ \[F_{a}=\left\{\mu\in K(B)\mid\mu(a^{*}a)=1\right\}.\] _Then, \(F_{a}\) is a closed face of \(K(B)\). Moreover, if \(\pi\colon B\to B(\mathcal{H})\) is local \(A\)-peak with witness \(a\), then \(F_{a}\neq\emptyset\) and all the extreme points of \(F_{a}\) are states arising from \(\pi\). Lastly, \(\partial_{e}F_{a}\) is in one-to-one correspondence with vectors \(\eta\in\mathcal{H}\), such that \(\pi(a^{*}a)\eta=\eta\)._ Proof.: Since \(a^{*}a\) defines a weak*-continuous contractive real functional on \((B_{sa})^{*}\), then \(F_{a}\) is the closed face where this functional attains its maximum on \(K(B)\). Let \(\pi\) be a local \(A\)-peak representation with witness \(a\). Then, by the previous lemma, there exists a vector \(\xi\in\mathcal{H}\), such that \(\pi(a^{*}a)\xi=\xi\). Hence, the state \(\mu(b)=\langle\xi,\pi(b)\xi\rangle\) is in \(F_{a}\). Since \(F_{a}\) is a closed face, its extreme points are pure states. Let \(\varphi\in F_{a}\) be a pure state and \((\sigma,\mathcal{K},\eta)\) be its GNS representation. Then, it is immediate that \(\sigma(a^{*}a)\eta=\eta\). Thus, if \(G^{\prime}\subset\mathcal{K}\) is the space spanned by \(\eta\) and \(\sigma(a)\eta\), then \(\|P_{G^{\prime}}\sigma(a)|_{G^{\prime}}\|=1\), and thus, since \(\pi\) is local \(A\)-peak with witness \(a\), \(\sigma\) is unitarily equivalent to \(\pi\). In this paper, we are interested in the case when \(A=\mathbb{A}_{d}\) and \(B=\mathcal{O}_{d}\). Since \(\mathbb{A}_{d}\) is semi-Dirichlet, we can say more about \(\mathbb{A}_{d}\)-peak states and connect them to peaking states for operator systems.
**Lemma 2.6**.: _If \(\mu\in K(\mathcal{O}_{d})\) is an \(\mathbb{A}_{d}\)-peak state, then there exists \(b\in\mathscr{A}_{d}\) positive, such that \(\mu(b)=1\) and \(\nu(b)<1\), for all states \(\nu\neq\mu\). In particular, \(\mu\) is an \(\mathscr{A}_{d}\)-peak state, \(\mu|_{\mathscr{A}_{d}}\) is a weak* exposed extreme point of the unit ball of \((\mathscr{S}_{d}^{*})_{sa}\) and it has the unique extension property._ Proof.: Let \(\mu\in K(\mathcal{O}_{d})\) be an \(\mathbb{A}_{d}\)-peak state. Then, there is a contraction \(a\in\mathscr{A}_{d}\), such that \(\mu(a^{*}a)=1\) and for every state \(\nu\neq\mu\), \(\nu(a^{*}a)<1\). However, since \(\mathbb{A}_{d}\) is semi-Dirichlet, \(a^{*}a\in\mathscr{A}_{d}\). Hence, we are done. The second part follows from [11, Theorem 3.2]. One could try and argue the converse claim. Let \(b\in\mathscr{A}_{d}\) be a positive element, such that \(\mu(b)=1\) and for every \(\nu\in K(\mathcal{O}_{d})\setminus\{\mu\}\), \(\nu(b)<1\). By replacing \(b\) by \(\frac{1}{2}(1+b)\), we may assume that \(b\) is invertible. Thus, \(b\) is factorizable in the sense of Popescu [38]. Namely, there exists \(c\in\mathbb{H}_{d}^{\infty}\), such that \(b=c^{*}c\). However, we do not know that \(c\in\mathbb{A}_{d}\). This is true if we assume that \(b\) is the real part of an NC rational function. In this case, \(c\) is NC rational by the NC rational Fejer-Riesz theorem of [30, Theorem 6.5]. However, we do not need this result for our purposes and only require the following observation to construct a large class of examples of \(\mathbb{A}_{d}\)-peaking states. **Lemma 2.7**.: _Let \(b\in\mathbb{A}_{d}\) be inner. Let \(\mu\in K(\mathcal{O}_{d})\) be such that \(\mu(b)=1\) and for all \(\nu\in K(\mathcal{O}_{d})\setminus\{\mu\}\), \(|\nu(b)|<1\). Then, \(\mu\) is \(\mathbb{A}_{d}\)-peaking._ Proof.: Since \(b\) is inner, we have that: \[\frac{1}{4}(1+b^{*})(1+b)=\frac{1}{4}(2+2\mathrm{Re}(b))=\frac{1}{2}(1+\mathrm{Re}(b)).\] Let \(a=\frac{1}{2}(1+\mathrm{Re}(b))\), then \(a\geq 0\) and \(\mu(a)=1\). Moreover, if \(\nu\in K(\mathcal{O}_{d})\setminus\{\mu\}\) is such that \(\nu(a)=1\), then \(\mathrm{Re}\nu(b)=1\). However, this contradicts the assumption that \(|\nu(b)|<1\). The above observation suggests the following strategy: To construct examples of states on the Cuntz algebra which peak on the free disk algebra, we will show that if \(\mathfrak{b}\in\mathbb{H}_{d}^{\infty}\) is NC rational and inner, then there are finite points, \(A\), on the boundary of the unit row-ball so that \(\mathfrak{b}(A)\) has \(1\) as an eigenvalue of multiplicity one.

## Main result

A relationship between spectra and intertwiners is encoded in the following lemmas. Let \(Z\in\mathbb{C}_{n}^{d}\) be an irreducible row contraction of row norm \(1\). Let \(Y\in B(\mathcal{H})^{d}=\mathbb{C}_{\infty}^{d}\) be another row contraction on a separable Hilbert space. Let \(\psi_{Y,Z}\colon B(\mathbb{C}^{n},\mathcal{H})\to B(\mathbb{C}^{n},\mathcal{H})\) be the map \(\psi_{Y,Z}(T)=\sum_{j=1}^{d}Y_{j}TZ_{j}^{*}\). Assume that \(1\in\sigma_{p}(\psi_{Y,Z})\). Then, there exists \(0\neq T\in B(\mathbb{C}^{n},\mathcal{H})\), such that \(\psi_{Y,Z}(T)=T\). We may assume that \(\|T\|=1\).
Then, since \(n<\infty\), there is a unit vector \(v\in\mathbb{C}^{n}\), such that \(\|Tv\|=1\) and hence \[1=\|Tv\|^{2}=\langle\psi_{Y,Z}(T)v,Tv\rangle=\langle(I\otimes T)Z^{*}v,Y^{*}Tv\rangle.\] Since \(T\), \(Z\), and \(Y\) are contractions, we conclude from the equality clause of Cauchy-Schwarz that \(Y^{*}Tv=(I\otimes T)Z^{*}v\). Therefore, for every \(1\leq j\leq d\), \(TZ_{j}^{*}v=Y_{j}^{*}Tv\). The same argument can be applied to the self-compositions \(\psi_{Y,Z}^{\circ n}\) to conclude that for every \(p\in\mathbb{C}\langle\mathfrak{z}\rangle\), \(Tp(Z^{*})v=p(Y^{*})Tv\). Now since \(Z\) is irreducible, so is \(Z^{*}\). This implies that \(v\) is cyclic. Therefore, for every \(w\in\mathbb{C}^{n}\), there exists an NC polynomial \(p\), such that \(p(Z^{*})v=w\). Hence, for every \(1\leq j\leq d\), \[TZ_{j}^{*}w=TZ_{j}^{*}p(Z^{*})v=Y_{j}^{*}p(Y^{*})Tv=Y_{j}^{*}Tp(Z^{*})v=Y_{j}^{*}Tw.\] We conclude that \(T\) is a homomorphism of \(\mathbb{C}\langle\mathfrak{z}\rangle\)-modules from \(\mathbb{C}^{n}\) to \(\mathcal{H}\). In particular, since the kernel of a homomorphism is a submodule and \(\mathbb{C}^{n}\) has no proper non-trivial submodules by irreducibility, \(T\) must be injective. Thus, we have obtained the following lemma: **Lemma 3.1**.: _Let \(Z\in\mathbb{C}_{n}^{d}\) be an irreducible row contraction of row norm \(1\) and \(Y\in B(\mathcal{H})^{d}\) a row contraction, such that \(1\in\sigma_{p}(\psi_{Y,Z})\). Then, there exists a \(Y\)-co-invariant \(n\)-dimensional subspace \(\mathcal{K}\subset\mathcal{H}\), such that \(Y^{*}|_{\mathcal{K}}\) is similar to \(Z^{*}\)._ We can do slightly better, assuming that both \(Z\) and \(Y\) are coisometries. Under the assumptions of the lemma, let \(T\) be the intertwiner obtained above. Then, \[Z(I\otimes T^{*}T)Z^{*}=T^{*}YY^{*}T=T^{*}T.\] However, since \(Z\) is irreducible, a result of Farenick [24, Theorem 2] implies that the map \(A\mapsto Z(I\otimes A)Z^{*}\) is irreducible and thus, by the Perron-Frobenius theorem for positive maps of Evans and Hoegh-Krohn [23, Theorem 2.3], we know that there is a unique (up to scalar multiplication) eigenvector of this map that corresponds to eigenvalue \(1\). However, since \(Z\) is a coisometry, the corresponding map is a ucp. Hence, \(T^{*}T\) is a scalar multiple of the identity of norm \(1\). Therefore, \(T^{*}T=I\) and \(T\) is an isometry. We summarize: **Lemma 3.2**.: _Let \(Z\in\mathbb{C}_{n}^{d}\) be an irreducible row coisometry and let \(Y\in B(\mathcal{H})^{d}\) be a row coisometry. If \(1\in\sigma_{p}(\psi_{Y,Z})\), then there exists a unique isometry \(V\colon\mathbb{C}^{n}\to\mathcal{H}\), such that \(Y^{*}V=(I\otimes V)Z^{*}\)._ Note that we can canonically identify \(B(\mathbb{C}^{n},\mathcal{H})\) with \((\mathbb{C}^{n})^{*}\otimes\mathcal{H}\). The identification sends \(\varphi\otimes\xi\) to the linear map \(v\mapsto\varphi(v)\xi\). Now let \(Y\in B(\mathcal{H})^{d}\) and \(X\in\mathbb{C}_{n}^{d}\). Then, we can define an operator \(\sum_{j=1}^{d}X_{j}^{t}\otimes Y_{j}\in B((\mathbb{C}^{n})^{*}\otimes\mathcal{H})\) acting via \[\left(\sum_{j=1}^{d}X_{j}^{t}\otimes Y_{j}\right)(\varphi\otimes\xi)=\sum_{j=1}^{d}(\varphi\circ X_{j})\otimes Y_{j}\xi.\] Now via identification, the right-hand map is \[\left(\sum_{j=1}^{d}(\varphi\circ X_{j})\otimes Y_{j}\xi\right)v=\sum_{j=1}^{d}\varphi(X_{j}v)Y_{j}\xi=\left(\sum_{j=1}^{d}Y_{j}(\varphi\otimes\xi)X_{j}\right)v.\] Therefore, we get that the map \(\psi_{Y,X}\) corresponds to the product \(\sum_{j=1}^{d}\overline{X_{j}}\otimes Y_{j}\).
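This correspondence is easy to confirm numerically. The sketch below is illustrative only: the tuples \(X\), \(Y\) and the matrix \(T\) are random, and a finite matrix tuple \(Y\) stands in for the operator tuple. Under column-stacking vectorization, the map \(\psi_{Y,X}(T)=\sum_{j}Y_{j}TX_{j}^{*}\) is implemented by the matrix \(\sum_{j}\overline{X_{j}}\otimes Y_{j}\).

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, m = 2, 3, 4   # number of variables; X_j are n x n, Y_j are m x m

X = [rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)) for _ in range(d)]
Y = [rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m)) for _ in range(d)]
T = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

# psi_{Y,X}(T) = sum_j Y_j T X_j^*
psi_T = sum(Y[j] @ T @ X[j].conj().T for j in range(d))

# Column-stacking vectorization: vec(A T B) = (B^t kron A) vec(T),
# so psi_{Y,X} corresponds to sum_j conj(X_j) kron Y_j.
M = sum(np.kron(X[j].conj(), Y[j]) for j in range(d))
vec = lambda W: W.flatten(order="F")

print(np.allclose(M @ vec(T), vec(psi_T)))   # True: the two maps agree
# In particular, fixed points of psi_{Y,X} correspond to eigenvectors of M to eigenvalue 1.
```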
Hence, \(1\in\sigma_{p}(\psi_{Y,X})\) if and only if \(1\in\sigma_{p}\left(\sum_{j=1}^{d}\overline{X_{j}}\otimes Y_{j}\right)\), Assume that \(T\in\mathbb{C}_{n}^{d}\) is an irreducible row co-isometry. By [32, Lemma 2.1], \(\operatorname{row}(T^{t})\) has joint spectral radius \(1\) and by [44, Lemma 4.10], \(\operatorname{row}(T^{t})\) is jointly similar to an irreducible row co-isometry, \(Z\in\mathbb{B}_{n}^{d}\). Fix a unit vector, \(x\in\mathbb{C}^{n}\). Then there is a unique NC rational inner function, \(\mathfrak{b}=\mathfrak{b}_{T,x}\), corresponding to the pair \((T,x)\) by [32, Theorem 4.1]. Let \(S\in\operatorname{GL}_{n}\) be an invertible matrix, such that \(S^{-1}T^{t}S=Z\). Since \(\mathfrak{b}\) is NC, we have that \[\mathfrak{b}(Z)=\mathfrak{b}(S^{-1}T^{t}S)=S^{-1}\mathfrak{b}(T^{t})S.\] In particular, the dimension of the eigenspace corresponding to \(1\) of \(\mathfrak{b}(T^{t})\) and \(\mathfrak{b}(Z)\) are the same. **Lemma 3.3**.: _In the above setting, \(\mathfrak{b}(Z)\) has an eigenspace of dimension \(1\) corresponding to eigenvalue \(1\)._ Proof.: Consider the map \(\psi_{T}(X)=\sum_{j=1}^{d}T_{j}XT_{j}^{*}\). This map corresponds to the tensor \(\sum_{j=1}^{d}\overline{T_{j}}\otimes T_{j}\). Since the map is unital, we have that \(1\in\sigma\left(\sum_{j=1}^{d}\overline{T_{j}}\otimes T_{j}\right)\). Moreover, \(1\) is the Perron-Frobenius eigenvalue of \(\psi_{T}\), and since \(T\) is irreducible, the dimension of the corresponding eigenspace is \(1\). Taking the adjoint, we get the tensor \(\sum_{j=1}^{d}T_{j}^{t}\otimes T_{j}^{*}\). By Lemma 2.3, we know that the dimension of the eigenspace corresponding to \(1\) of \(\mathfrak{b}(T^{t})\) is \(1\), as well. The claim for \(Z\) now follows from the observation preceding the lemma. Let \(y\) be a unit vector in this one-dimensional eigenspace of \(\mathfrak{b}(Z)\) to eigenvalue \(1\) and define the finitely-correlated Cuntz state \(\mu=\mu_{Z,y}\in(\mathscr{A}_{d}^{\dagger})_{+}\) by \(\mu(L^{\omega}):=y^{*}Z^{\omega}y\). Let \(V\) denote the minimal row isometric dilation of \(Z\) on \(\mathscr{H}\supseteq\mathbb{C}^{n}\). By [17, Theorem 6.5], \(V\) is an irreducible Cuntz row isometry. Given any \(\mu\in(\mathscr{A}_{d}^{\dagger})_{+}\), we can, as in [32], apply a Gelfand-Naimark-Segal (GNS) construction to \((\mu,\mathbb{A}_{d})\) to obtain a GNS-Hilbert space, \(\mathbb{H}_{d}^{2}(\mu)\), and a row isometry, \(\Pi_{\mu}\), acting on \(\mathbb{H}_{d}^{2}(\mu)\), where \(\mathbb{H}_{d}^{2}(\mu)\) is defined as the completion of the free algebra, \(\mathbb{C}\langle\mathfrak{z}\rangle\), modulo vectors of zero-length, with respect to the pre-inner product, \[\langle p,q\rangle_{\mu}:=\mu(p(L)^{*}q(L)).\] Equivalence classes of free polynomials, \(p+N_{\mu}\), where \(N_{\mu}\) denotes the left ideal of zero-length vectors with respect to \(\|\cdot\|_{\mu}\), are dense in \(\mathbb{H}_{d}^{2}(\mu)\) by construction. This construction also comes equipped with a row isometry, \(\Pi_{\mu}\), defined by left multiplications by the independent variables on \(\mathbb{H}_{d}^{2}(\mu)\), \(\Pi_{\mu;j}p+N_{\mu}:=\mathfrak{z}_{j}p+N_{\mu}\). Also note that defining \(\pi_{\mu}(L_{k}):=\Pi_{\mu;k}\), \(1\leq k\leq d\), yields a \(*-\)representation of the Cuntz-Toeplitz algebra, \(\mathscr{E}_{d}\), and vice versa. Moreover, \(\Pi_{\mu}\) is a Cuntz (surjective) row isometry if and only if \(\mu\in(\mathscr{A}_{d}^{\dagger})_{+}\) has a unique positive extension to \(\mathcal{O}_{d}\) by [33, Proposition 5.11]. 
Hence, any \(\mu\in(\mathscr{A}_{d}^{\dagger})_{+}\) with GNS row isometry \(\Pi_{\mu}\), of Cuntz type can be uniquely identified with a positive state, \(\hat{\mu}\), on \(\mathcal{O}_{d}\), and applying the GNS construction to \((\mu,\mathbb{A}_{d})\) or to \((\hat{\mu},\mathcal{O}_{d})\) yields the same GNS-Hilbert space, \(\mathbb{H}_{d}^{2}(\mu)\), and the same Cuntz representation, \(\pi_{\mu}\). By [32, Proposition 3.6], if \(\mu=\mu_{Z,y}\) and we define \[\mathscr{H}_{\mu}:=\bigvee\Pi_{\mu}^{*\omega}1+N_{\mu},\qquad Z_{\mu}:=(\Pi_{ \mu}^{*}|_{\mathscr{H}_{\mu}})^{*},\] then the pair \((Z,V)\) are jointly unitarily equivalent to \((Z_{\mu},\Pi_{\mu})\) by a unitary which sends \(y\) to \(1+N_{\mu}\). In particular, \(\mathscr{H}_{\mu}\) is finite-dimensional, \(Z_{\mu}\) is an irreducible row co-isometry, and \(\Pi_{\mu}\) is its minimal row isometric dilation, and this is irreducible and Cuntz. **Lemma 3.4**.: _Let \(\pi\colon\mathcal{O}_{d}\to B(\mathscr{H})\) be a representation so that \(1\in\sigma_{p}(\pi(\mathfrak{b}))\). Then, there exists a unique isometry \(V\colon\mathbb{C}^{n}\to\mathscr{H}\), such that \(\pi(S)^{*}V=(I\otimes V)Z^{*}\)._ Proof.: Since \(\xi=\pi(\mathfrak{b})\xi=\mathfrak{b}(\pi(S))\xi\), by Proposition 2.3, we have that \(1\in\sigma_{p}\left(\sum_{j=1}^{d}\pi(S_{j})\otimes T_{j}^{*}\right)\). Now applying the interchange unitary, we have that \(1\in\sigma_{p}\left(\sum_{j=1}^{d}T_{j}^{*}\otimes\pi(S_{j})\right)\). The latter corresponds to the map \(\psi\colon B(\mathbb{C}^{n},\mathscr{H})\to B(\mathbb{C}^{n},\mathscr{H})\) given by \(\psi(X)=\sum_{j=1}^{d}\pi(S_{j})X\overline{T_{j}}\). We note that \(\bar{T}=(T^{t})^{*}\). However, we also know that \(\overline{T_{j}}=S^{*}Z_{j}^{*}S^{-1*}\). Thus, \[\psi(X)=\sum_{j=1}^{d}\pi(S_{j})X\overline{T_{j}}=(I\otimes S^{*})\left(\sum_{j= 1}^{d}\pi(S_{j})XZ_{j}^{*}\right)(I\otimes S^{-1*}).\] Hence, if we set \(\varphi(X)=\sum_{j=1}^{d}\pi(S_{j})XZ_{j}^{*}\). Then, \(1\in\sigma_{p}(\varphi)\). Therefore, by Lemma 3.2, we get that there exists a unique isometry \(V\colon\mathbb{C}^{n}\to\mathcal{H}\), such that \(\pi(S)^{*}V=(I\otimes V)Z^{*}\). **Theorem 3.5**.: _Let \(T\in\mathbb{C}_{n}^{d}\) be an irreducible row co-isometry and \(x\in\mathbb{C}^{n}\) a unit vector so that \(\mathfrak{b}=\mathfrak{b}_{T,x}\) is the unique NC rational inner function corresponding to \((T,x)\). Let \(Z\) be the irreducible row co-isometry which is jointly similar to \(\operatorname{row}(T^{\mathfrak{t}})\) via an invertible matrix, \(S\), and let \(y\in\mathbb{C}^{n}\) be the unique unit eigenvector of \(\mathfrak{b}(Z)\) to eigenvalue \(1\). Then the finitely-correlated Cuntz state \(\mu:=\mu_{Z,y}\in(\mathscr{A}_{d}^{\dagger})_{+}\) is an \(\mathbb{A}_{d}-\)peak state which peaks at \(\mathfrak{b}\in\mathbb{A}_{d}\)._ Proof.: As described above, since \(\mu(L^{\omega}):=y^{*}Z^{\omega}y\) and \(\mathfrak{b}(Z)y=y\), where the unit vector, \(y\), spans the eigenspace for \(\mathfrak{b}(Z)\) corresponding to eigenvalue \(1\), it follows that \(\mu(\mathfrak{b})=y^{*}\mathfrak{b}(Z)y=1\). Note that \(\mu\) is a pure Cuntz state since \(\pi_{\mu}\) is an irreducible representation of the Cuntz algebra. By the equality in the Cauchy-Schwarz inequality, it follows as before that \(\pi_{\mu}(\mathfrak{b})1+N_{\mu}=\mathfrak{b}(\Pi_{\mu})1+N_{\mu}=1+N_{\mu}\). Assume that \(h\in\mathbb{H}_{d}^{2}(\mu)\) is any other element such that \(\mathfrak{b}(\Pi_{\mu})h=h\). 
Note that \(\mathcal{H}_{\mu}\) is \(\Pi_{\mu}-\)co-invariant by construction so that \(\mathcal{H}_{\mu}^{\perp}\) is invariant. Consider the block decomposition of \(\Pi_{\mu}\) and \(h\) with respect to \(\mathbb{H}_{d}^{2}(\mu)=\mathcal{H}_{\mu}\oplus\mathcal{H}_{\mu}^{\perp}\): \[\mathfrak{b}(\Pi_{\mu})h=\begin{pmatrix}\mathfrak{b}(Z_{\mu})&0\\ *&*\end{pmatrix}\begin{pmatrix}h_{1}\\ h_{2}\end{pmatrix}=\begin{pmatrix}h_{1}\\ h_{2}\end{pmatrix},\] and we conclude that \(\mathfrak{b}(Z_{\mu})h_{1}=h_{1}\). Since \(y\) is the unique (up to scalars) eigenvector of \(\mathfrak{b}(Z)\) to eigenvalue \(1\), \(1+N_{\mu}\) is the unique eigenvector of \(\mathfrak{b}(Z_{\mu})\) so that \(h_{1}=\alpha 1+N_{\mu}\) for some \(\alpha\in\mathbb{C}\). It further follows that \(h_{2}=h-\alpha(1+N_{\mu})\in\mathcal{H}_{\mu}^{\perp}\) is an eigevector of \(\mathfrak{b}(\Pi_{\mu})|_{\mathcal{H}_{\mu}^{\perp}}\) to eigenvalue \(1\). By [17, Corollary 5.3], \(\mathcal{H}_{\mu}^{\perp}\simeq\mathbb{H}_{d}^{2}\otimes\mathbb{C}^{k}\) and \(\Pi_{\mu}|_{\mathcal{H}_{\mu}^{\perp}}\simeq L\otimes I_{k}\). However, since \(P_{\mathcal{H}_{\mu}^{\perp}}\mathfrak{b}(\Pi_{\mu})|_{\mathcal{H}_{\mu}^{ \perp}}\simeq\mathfrak{b}(L)\otimes I_{k}\), this is a pure isometry and we conclude that \(h_{2}=0\). That is, \(1+N_{\mu}\) is the unique eigenvector (up to non-zero scalars) of \(\mathfrak{b}(\Pi_{\mu})\) to eigenvalue \(1\). Now let \(\lambda\in K(\mathcal{O}_{d})\) be another Cuntz state which peaks at \(\mathfrak{b}\), \(\lambda(\mathfrak{b})=1\). As before equality in the Cauchy-Schwarz inequality implies that \(\mathfrak{b}(\Pi_{\lambda})1+N_{\lambda}=1+N_{\lambda}\). By Lemma 3.4 and Lemma 3.2, there is a unique isometry, \(V:\mathbb{C}^{n}\to\mathbb{H}_{d}^{2}(\lambda)\) so that \[VZ^{\omega*}=\Pi_{\lambda}^{\omega*}V.\] Hence, \[V^{*}\Pi_{\lambda}^{\omega}V=Z^{\omega},\] and \(\Pi_{\lambda}\) is a row-isometric dilation of \(Z\), which must be minimal as \(\Pi_{\lambda}\) is irreducible. Hence, by uniqueness of the minimal dilation, \(\Pi_{\lambda}\simeq\Pi_{\mu}\). Let \(U:\mathbb{H}_{d}^{2}(\lambda)\to\mathbb{H}_{d}^{2}(\mu)\) be the unitary implementing this equivalence, \(U\Pi_{\lambda}^{\alpha}=\Pi_{\mu}^{\alpha}U\). Then, \[U1+N_{\lambda} = U\mathfrak{b}(\Pi_{\lambda})1+N_{\lambda}\] \[= \mathfrak{b}(\Pi_{\mu})U1+N_{\lambda},\] so that \(U1+N_{\lambda}=:h\) is a unit eigenvector of \(\mathfrak{b}(\Pi_{\mu})\) to eigenvalue \(1\) so that by the previous arguments, \(U1+N_{\lambda}=\zeta(1+N_{\mu})\) for some \(\zeta\in\partial\mathbb{D}\). This proves that \(\lambda=\mu\) so that \(\mu=\mu_{Z,y}\) is an \(\mathbb{A}_{d}-\)peak state. The following corollary follows immediately from the preceding theorem and Lemma 2.6. **Corollary 3.6**.: _Every finitely-correlated state \(\mu\) on \(\mathscr{A}_{d}\) that arises from an irreducible finite-dimensional row coisometry and a unit vector is an exposed extreme point of the state space of \(\mathscr{A}_{d}\)._ **Corollary 3.7**.: _If \(\mu=\mu_{T,x}\), for a finite irreducible row coisometry \(T\) and unit vector \(x\), let \(Z\) be the unique irreducible finite row co-isometry which is jointly similar to \(\operatorname{row}(T^{\mathfrak{t}})\) and let \(y\) be the eigenvector of \(\mathfrak{b}_{T,x}(Z)\) corresponding to the multiplicity one eigenvalue, \(1\). 
Then the finitely-correlated state \(\mu_{Z,y}\) peaks at the NC rational inner \(\mathfrak{b}_{T,x}\) and \(\mu_{T,x}\) peaks at the NC rational inner \(\mathfrak{b}_{Z,y}\)._ If we define the unital, completely positive map, \[\operatorname{Ad}_{T,T^{*}}(A):=\sum_{j=1}^{d}T_{j}AT_{j}^{*},\] then this map is a unital quantum channel, _i.e._, \(\mathrm{Ad}_{T^{*},T}\) is also unital, if and only if \(\mathrm{row}(T^{*})\) is also a row coisometry, or, again equivalently, \(\mathrm{row}(T^{\mathrm{t}})\) is a row coisometry. We require another definition to describe the class of NC rational inner functions associated with unital quantum channels. Let \(\alpha=i_{1}\cdots i_{n}\) be a word in the alphabet \(\{1,\cdots,d\}\). We set \(\alpha^{t}=i_{n}i_{n-1}\cdots i_{1}\). Namely, \(\alpha^{t}\) is the reversal of \(\alpha\). We define a unitary on \(\mathbb{H}_{d}^{2}\) by \((\mathfrak{s}^{\alpha})^{t}=\mathfrak{s}^{\alpha^{t}}\). This unitary is important in realization theory of NC rational functions [31]. It is proved in [32, Lemma 2.2] that if \(\mathfrak{r}\) is an NC rational function, so is \(\mathfrak{r}^{\mathfrak{t}}\). However, it need not be the case that if \(\mathfrak{b}\) is an NC rational inner, that \(\mathfrak{b}^{t}\) is inner. A simple example is \(\mathfrak{b}(\mathfrak{z})=\tfrac{1}{\sqrt{2}}(1+\mathfrak{z}_{1})\mathfrak{z}_{2}\). It is an immediate calculation, that \(\mathfrak{b}(L)^{*}\mathfrak{b}(L)=I\). However, \(\mathfrak{b}^{t}(\mathfrak{z})=\tfrac{1}{\sqrt{2}}\mathfrak{z}_{2}(1+\mathfrak{z}_{1})\). Here the inner part of \(\mathfrak{b}^{t}(L)\) is \(L_{2}\), and the outer part is \(\tfrac{1}{\sqrt{2}}(1+L_{1})\). In particular, it is easily checked that \(\|\mathfrak{b}^{t}(L)\|=\sqrt{2}\) so that \(\mathfrak{b}^{t}\in\mathbb{H}_{d}^{\infty}\) is not even contractive, see [29, Example 3.4]. The following theorem identifies the class of all NC rational functions \(\mathfrak{b}\), such that \(\mathfrak{b}^{t}\) is also inner, as precisely those that arise from quantum channels. **Theorem 3.8**.: _Let \(\mathfrak{b}:=\mathfrak{b}_{T,x}\) be the NC rational inner generated by the pair \((T,x)\), where \(T\) is a finite-dimensional row co-isometry on \(\mathcal{H}\) and \(x\in\mathcal{H}\) is both \(T\) and \(T^{*}-\)cyclic. Then, \(\mathrm{row}(T^{\mathfrak{t}})\) is also a row co-isometry if and only if \(\mathfrak{b}^{\mathfrak{t}}\) is also NC rational inner and in this case \(\mathfrak{b}^{\mathfrak{t}}=\mathfrak{b}_{\mathrm{row}(T^{\mathfrak{t}}),\overline{x}}\). If \(x\) is a unit vector and both \(T\) and \(T^{\mathfrak{t}}\) are irreducible row coisometries then the NC rational Clark states \(\mu_{T,x}\) and \(\mu_{T^{\mathfrak{t}},\bar{x}}\) peak at the NC rational inners \(\mathfrak{b}^{\mathfrak{t}}\) and \(\mathfrak{b}\), respectively._ Proof.: By [32, Theorem 3.2, Remark 3.4], \((T^{*},x,x)\) is a minimal descriptor realization of \(\mathfrak{G}:=(1-\mathfrak{b})^{-1}\), so that the Taylor coefficients of \(\mathfrak{G}\) at \(0\) are \(\mathfrak{G}_{\omega}=x^{*}T^{*\omega}x\).
Hence the Taylor coefficients of \(\mathfrak{G}^{\mathfrak{t}}=(1-\mathfrak{b}^{\mathfrak{t}})^{-1}\) are equal to \[\mathfrak{G}_{\omega}^{\mathfrak{t}}=\mathfrak{G}_{\omega^{\mathfrak{t}}}=x^{*}T^{*\omega^{\mathfrak{t}}}x=x^{*}(T^{\omega})^{*}x=(T^{\omega}x)^{*}x=\overline{x}^{*}\overline{T}^{\omega}\overline{x}=\overline{x}^{*}(T^{\mathfrak{t}})^{*\omega}\overline{x}.\] This shows that \(\mathfrak{G}^{\mathfrak{t}}\) has the minimal descriptor realization \((\overline{T},\overline{x},\overline{x})\), where \(T^{\mathfrak{t}*}=\overline{T}\). If \(\mathrm{row}(T^{\mathfrak{t}})\) is also a finite row coisometry, then by [32, Theorem 4.1], it follows that \(\mathfrak{b}^{\mathfrak{t}}\) is also NC rational inner with minimal FM realization: \[A_{j}:=\overline{T}_{j}(I-\overline{x}\overline{x}^{*}),\quad B_{j}:=\overline{T}_{j}\overline{x},\quad C:=\overline{x}^{*},\quad\text{and}\quad D:=\mathfrak{b}^{\mathfrak{t}}(0)=0.\] Conversely, as above, given any NC rational inner \(\mathfrak{b}=\mathfrak{b}_{T,x}\), a minimal descriptor realization of \((1-\mathfrak{b}^{\mathfrak{t}})^{-1}=\mathfrak{G}^{\mathfrak{t}}\) is given by \((\overline{T},\overline{x},\overline{x})\). Assuming that \(\mathfrak{b}^{\mathfrak{t}}\) is also NC rational inner, [32, Theorem 3.2, Remark 3.4] implies that there is a finite row co-isometry, \(W\), and a vector, \(y\), which is both \(W\) and \(W^{*}-\)cyclic so that \((W,y,y)\) is also a minimal descriptor realization of \(\mathfrak{G}^{\mathfrak{t}}\), so that for any word, \(\omega\in\mathbb{F}^{d}\), \[y^{*}W^{*\omega}y=\overline{x}^{*}\overline{T}^{\omega}\overline{x}.\] Equivalently, if \(\|T^{\mathfrak{t}}\|_{row}\) is the row-norm of \(\mathrm{row}(T^{\mathfrak{t}})\), then for any word, \(\omega\in\mathbb{F}^{d}\), \[\frac{1}{\|T^{\mathfrak{t}}\|_{row}^{|\omega|}}y^{*}W^{*\omega}y=\frac{1}{\|T^{\mathfrak{t}}\|_{row}^{|\omega|}}\overline{x}^{*}\overline{T}^{\omega}\overline{x}.\] If \(\|\mathrm{row}(T^{\mathfrak{t}})\|>1\), then \(\frac{1}{\|T^{\mathfrak{t}}\|_{row}}\mathrm{row}(T^{\mathfrak{t}})\) and \(\frac{1}{\|T^{\mathfrak{t}}\|_{row}}W\) are both row contractions. In either case, [32, Proposition 3.6, Lemma 3.9] implies that \(\mathrm{row}(T^{\mathfrak{t}})\) and \(W\) are jointly unitarily equivalent via a unitary \(U\) which sends \(y\) to \(\overline{x}\). Hence \(\mathrm{row}(T^{\mathfrak{t}})\) is a row coisometry. _Example 3.9_.: The examples [32, Example 4.4, Example 4.5] both give examples of NC rational inners arising from finite, irreducible row coisometries \(T\) and \(S\). Namely, \[T=\left(\begin{pmatrix}0&1\\ 0&0\end{pmatrix},\begin{pmatrix}0&0\\ 1&0\end{pmatrix}\right)\]
The following example illustrates what happens if we drop the assumption that \(T\) is irreducible but require still that its minimal isometric dilation is irreducible. _Example 3.10_.: Consider the following coisometry: \[T_{1}=\frac{1}{2}\begin{pmatrix}-1&0&-1\\ -1&0&1\\ -1&0&-1\end{pmatrix},\quad T_{2}=\frac{1}{2}\begin{pmatrix}1&-1&0\\ -1&-1&0\\ -1&1&0\end{pmatrix}.\] Let us write \(e_{1}\), \(e_{2}\), and \(e_{3}\) for the vectors of the standard basis of \(\mathbb{C}^{3}\). I is now easy to check that \(T_{1}e_{1}=-\frac{1}{2}(e_{1}+e_{2}+e_{3})\), \(T_{2}e_{1}=\frac{1}{2}(e_{1}-e_{2}-e_{3})\), \(T_{2}T_{1}e_{1}=\frac{1}{2}e_{2}\). This implies that \(e_{1}\) is \(T\)-cyclic. Similarly, \(T_{1}^{*}e_{1}=-\frac{1}{2}(e_{1}+e_{3})\), \(T_{2}^{*}e_{1}=\frac{1}{2}(e_{1}-e_{2})\), and \(T_{1}^{*}T_{2}^{*}e_{1}=-\frac{1}{2}e_{3}\). This implies that \(e_{1}\) is also \(T^{*}\)-cyclic and, moreover, that \(\bigvee_{\alpha\neq\emptyset}T^{*\alpha}e_{1}=\mathbb{C}^{3}\). Therefore, by [32], the following NC rational function is an inner \[\mathfrak{r}(\mathfrak{z}_{1},\mathfrak{z}_{2})=e_{1}^{*}\left(I-\mathfrak{z} _{1}T_{1,0}^{*}-\mathfrak{z}_{2}T_{2,0}^{*}\right)^{-1}(\mathfrak{z}_{1}T_{1} ^{*}e_{1}+\mathfrak{z}_{2}T_{2}^{*}e_{1})\,.\] Here, \[T_{1,0}^{*}=T_{1}^{*}(I-e_{1}e_{1}^{*})\text{ and }T_{2,0}^{*}=T_{2}^{*}(I-e_{ 1}e_{1}^{*}).\] Therefore, we have the following expression for the pencil \[I-\mathfrak{z}_{1}T_{1,0}^{*}-\mathfrak{z}_{2}T_{2,0}^{*}=\begin{pmatrix}1& \frac{1}{2}(\mathfrak{z}_{1}+\mathfrak{z}_{2})&\frac{1}{2}(\mathfrak{z}_{1}+ \mathfrak{z}_{2})\\ 0&1+\frac{1}{2}\mathfrak{z}_{2}&-\frac{1}{2}\mathfrak{z}_{2}\\ 0&-\frac{1}{2}\mathfrak{z}_{1}&1+\frac{1}{2}\mathfrak{z}_{1}\end{pmatrix}.\] Since we know \(T_{1}^{*}e_{1}\) and \(T_{2}^{*}e_{1}\), we conclude that the expression for our function is \[\mathfrak{r}(\mathfrak{z}_{1},\mathfrak{z}_{2})=\frac{1}{2}(\mathfrak{z}_{2}- \mathfrak{z}_{1})+\frac{1}{4}(\mathfrak{z}_{1}+\mathfrak{z}_{2})\left(1&1 \right)\begin{pmatrix}1+\frac{1}{2}\mathfrak{z}_{2}&-\frac{1}{2}\mathfrak{z} _{2}\\ -\frac{1}{2}\mathfrak{z}_{1}&1+\frac{1}{2}\mathfrak{z}_{1}\end{pmatrix}^{-1} \begin{pmatrix}\mathfrak{z}_{2}\\ \mathfrak{z}_{1}\end{pmatrix}.\] In particular, we have that \(r(-1,0)=1\). Now we observe that \(T_{1}^{*}\) and \(T_{2}^{*}\) have a common eigenvector. In fact, set \[U=\begin{pmatrix}\frac{1}{\sqrt{2}}&0&\frac{1}{\sqrt{2}}\\ 0&1&0\\ \frac{1}{\sqrt{2}}&0&-\frac{1}{\sqrt{2}}\end{pmatrix}.\] Then, \[UT_{1}U=\begin{pmatrix}-1&0&0\\ 0&0&-\frac{1}{\sqrt{2}}\\ 0&0&0\end{pmatrix}\text{ and }UT_{2}U=\begin{pmatrix}0&0&0\\ -\frac{1}{2\sqrt{2}}&-\frac{1}{2}&-\frac{1}{2\sqrt{2}}\\ \frac{1}{2}&-\frac{1}{\sqrt{2}}&\frac{1}{2}\end{pmatrix}.\] We note that \(\mathfrak{r}(T^{t})\) has eigenvalue \(1\) of multiplicity \(1\). The corresponding eigenvector is \(e_{1}+e_{3}\). The above calculation shows that only the semi-simple part of \(T^{t}\) is in the closed ball. The similarity orbit of \(T^{t}\) itself never intersects the closed ball.
2303.04299
Inertia induces strong orientation fluctuations of non-spherical atmospheric particles
The orientation of non-spherical particles in the atmosphere, such as volcanic ash and ice crystals, influences their residence times, and the radiative properties of the atmosphere. Here, we demonstrate experimentally that the orientation of heavy submillimeter spheroids settling in still air exhibits decaying oscillations, whereas it relaxes monotonically in liquids. Theoretical analysis shows that these oscillations are due to particle inertia, caused by the large particle-fluid mass-density ratio. This effect must be accounted for to model solid particles in the atmosphere.
T. Bhowmick, J. Seesing, K. Gustavsson, J. Guettler, Y. Wang, A. Pumir, B. Mehlig, G. Bagheri
2023-03-08T00:31:20Z
http://arxiv.org/abs/2303.04299v2
# Inertial angular dynamics of non-spherical atmospheric particles ###### Abstract Cloud-ice crystals, volcanic ash, and microplastic are ubiquitous in the atmosphere. These non-spherical particles are small, but their mass density is much greater than that of air. Little is known about their inertial dynamics, mainly because experiments with such heavy, sub-millimetre particles in air are difficult. We tracked the inertial dynamics of heavy sub-millimetre spheroids through still air and observed that their orientations fluctuate considerably, in stark contrast to the rapid alignment seen in high-density fluids such as water. A model, that quantitatively describes the resulting transient oscillations of the particle orientation, shows that the oscillations are due to particle inertia, and allows us to study the effect of particle shape and volume, beyond the parameters of the experiment. We discuss implications for the angular dynamics of such particles in turbulent air. We conclude that the particle inertia can significantly delay the alignment and increase angular fluctuations. This has significant implications for the statistics of particle orientation, affecting settling velocities and atmospheric residence times, collision/aggregation mechanism and how the particles scatter and absorb solar radiation. non-spherical atmospheric particles, settling, orientation, particle and fluid inertia The transport, dispersion, and settling of volcanic ash [1, 2], microplastic particles [3, 4], and ice crystals in cold atmospheric clouds [5, 6, 7, 8, 9] has significant environmental impact. The modelling of these processes calls for a precise understanding of the underlying physical processes. Particles in the atmosphere are subject to gravity, viscous and inertial hydrodynamic forces and torques, as well as possible particle-particle interactions [6]. An essential parameter determining the hydrodynamic forces and torques is the particle Reynolds number, defined by \(\mathrm{Re}_{p}=av_{g}/\nu\), where \(a\) is the size of the particle, \(v_{g}\) its settling velocity and \(\nu\) the kinematic viscosity of the fluid. Only for \(\mathrm{Re}_{p}\ll 1\) do we have a sufficient understanding of the inertial forces and torques, and only for a small number of specific shapes, such as spheroids, dumbbells, and slender bodies [10, 11, 12, 13, 14, 15, 16]. The transport of non-spherical particles in the atmosphere is significantly influenced by particle orientation [1, 10, 17, 18]. This directly affects the settling velocity of the particles [1, 6, 14, 19], which in turn determines residence times and dispersion ranges in the atmosphere. The settling velocity influences, for instance, how far microplastics, dust and volcanic ash can be transported away from the source, or how much time an ice crystal spends in a cloud [1, 2, 3, 20]. In addition, the orientation affects the volume swept out by the rotating particle, which together with the settling velocity, is a key parameter determining particle-particle collision rates [21], e.g. relevant for the formation of aggregates of ice particles in clouds [22, 23] or volcanic ash [2]. The orientation also has a direct impact on the optical cross-section of particles and thus on the albedo of ash, dust or atmospheric clouds [24, 25, 26]. There are numerous studies dealing with the drag and stable orientation of non-spherical particles in viscous fluids at rest [e.g. 
1, 14, 16, 27, 28, 29, just to name a few], but measurements of the angular dynamics of particles settling in still air are scarce, e.g. for very slender fibres in still air [30] and in turbulence [31, 32, 33]. When the fluid is in motion, fluid-velocity gradients give rise to additional torques that affect how non-spherical particles explore the turbulent flow [see 19, and references therein]. However, most of the work reviewed by Voth and Soldati [19] concerns the motion of non-spherical particles with approximately the same mass density as of the fluid. In this case the angular dynamics is overdamped, i.e., the particle orientation relaxes almost instantaneously [13, 34, 35, 36, 14]. Very little is known about the angular dynamics of heavy, non-spherical particles in air, in part because it is very difficult to track sub-millimetre particles that settle rapidly in air. It is expected that the angular dynamics is underdamped in this case, that particle inertia plays an important role. It is well known, after all, that particle inertia has a significant effect on the translation of small spherical particles in turbulence [37, 38, 21], but the inertial dynamics of non-spherical particles in the atmosphere remains largely unexplored. For example, it is still highly disputed what proportion of the ice crystals align as they settle in the clouds of different types [39, 40, 41, 42, 43, 20]. In order to understand the fundamental principles of inertial-particle dynamics in the atmosphere, it is useful to simplify the problem by considering particles moving in still air. This is essentially impossible _in situ_, and also very difficult in the laboratory. We therefore focus on particles that settle freely in quiescent fluids or strictly controlled laminar flows. Apart from Ref. [30], all experiments in such flow conditions were conducted at low particle-to-fluid mass density ratios \(\mathcal{R}=\rho_{p}/\rho_{f}\), not exceeding \(\sim 15\), as shown in Fig. 1. Atmospheric particles such as ice crystals, microplastic particles, and volcanic ash, on the other hand, are in a different region of this phase diagram, at \(\mathcal{R}\sim 10^{3}\). Note that atmospheric dust particles or pollutants [45, 46], plant seeds [47], and pollen [48] also have \(\mathcal{R}\sim 10^{3}\), but correspond to small Reynolds numbers, so the inertial effects discussed here do not play a major role. Conducting laboratory experiments at high \(\mathcal{R}\) and \(\mathrm{Re}_{p}\sim 1\ldots 10\) is challenging, due to the small size of the particles and their fast dynamics. However, most atmospheric particles have large mass-density ratios, and their shapes vary greatly. Note that microplastic particles observed in the atmosphere are often fibres, but they can have other shapes too [3]. Therefore we examined the inertial angular dynamics of small, yet heavy, non-spherical particles with a range of different shapes. The experimental setup for studying the settling dynamics of heavy sub-millimetric particles in air requires (i) a well-controlled flow, (ii) a particle release mechanism that does not interfere with the flow, (iii) high magnification imaging for accurate resolution of particle dynamics, and (iv) tracking of settling particles over a sufficiently long period of time, but also far enough after release to observe dynamics unaffected by initial forces and torques. Most of these requirements are in direct conflict with each other, which makes the experiments very difficult. 
This may explain, at least in part, the scarcity of data on the angular dynamics of non-spherical atmospheric particles. For example, the high magnification required means that the depth of field is about a few millimetres, which makes it hard to keep the particle in focus for long enough as it settles. We shall see below that the angular dynamics can exhibit long transients. Measuring these requires following the particle over a long distance, which conflicts with the narrow field of view required for high resolution. In addition, producing small particles, with dimensions as small as \(50-1000\,\mu\mathrm{m}\) and precise shapes, is difficult.

Figure 1: **Parameter plane – particle Reynolds number \(\mathrm{Re}_{p}\) versus mass-density ratio \(\mathcal{R}=\rho_{p}/\rho_{f}\).** The symbols indicate parameters for experiments studying the angular dynamics of particles settling in a fluid at rest and in laminar flow [13, 14, 16, 29, 30]. The rectangles indicate typical parameter values for volcanic ash (violet) [1, 2], ice crystals in clouds (blue) [44], and microplastic particles in the atmosphere (orange) [3, 4].

Our newly developed experimental apparatus (the _Göttingen Turret_) allows us to overcome the aforementioned challenges. It consists of a novel particle injector, an air-filled settling chamber, and four high-speed cameras synchronised together with a high-intensity LED array, as shown in Fig. 2**a**-**c**. The apparatus allows us to measure the transient settling dynamics of solid particles in the size range of \(0.1-5\,\mathrm{mm}\) in quiescent air. The cameras are mounted so that two of them record one observation volume from the X and Y directions, perpendicular to each other (e.g., the top cameras TX and TY). The second perpendicular pair records a second observation volume below the first one (bottom cameras BX and BY). Each camera pair images a fall distance of \(30\,\mathrm{mm}\) at a nominal resolution of \(6.75\,\mu\mathrm{m}\). When the top and bottom camera tracks are combined, the total tracking distance is \(60\,\mathrm{mm}\) in each experiment. See _Methods_ for a complete description of the experimental setup. We used spheroidal particles because their resistance functions are known [11, 49]. The particles (with mass density \(\rho_{p}=1200\,\mathrm{kg}\,\mathrm{m}^{-3}\)[50]) were printed with a Photonic Professional GT 3D printer [51]. The particle dimensions are summarised in Table 1. Fig. 2**d** shows examples of some spheroids corresponding to \(2.2\leq\mathrm{Re}_{p}\leq 5\), where the particle volume remains constant. In total, we carried out between \(9\) and \(22\) measurements per particle shape and size, resulting in a total of \(170\) experimental runs where the particle was in focus for all four cameras. Figure 2: **Details of the experimental setup and the particles.** **a** Optical table with top cameras (TX and TY), and bottom cameras (BX and BY), the settling chamber (SC), the pulsed LED unit (LED), and other components as detailed in _Methods_. **b** Schematic view of the setup showing the mirror (M) arrangements and illumination/imaging paths. **c** The particle-injector (PI) components are installed on the top of the SC. **d** Particle shapes corresponding to different aspect ratios \(\lambda\), keeping the particle volume \(V_{p}=1.44\times 10^{-3}\,\mathrm{mm}^{3}\) constant. See Table 1 for a summary of all particle shapes and volumes analysed.
**e** Snapshots of a settling \(\lambda=5\) prolate spheroid as seen by the 4 cameras at 2932 frames per second. The presented snapshots are cropped to present a zoomed view for better visibility. The tilt angle \(\varphi\) (between the particle symmetry axis and gravity) is shown in \(5.1\mathrm{ms}\) intervals. Fig. 2e shows recorded images of a prolate spheroid (\(2a_{\parallel}=410\,\mathrm{\SIUnitSymbolMicro m}\), \(2a_{\perp}=82\,\mathrm{\SIUnitSymbolMicro m}\)) as it falls in the settling chamber. One observes that the particle exhibits a rich transient dynamics, especially in the tilt angle \(\varphi\), defined as the angle between the symmetry axis of the particle and gravity. As is well known [10, 11], the fluid-inertia torque tends to align spheroidal particles so that they settle with their broad sides down. The experimental results shown in Fig. 2e, however, indicate that this steady state is approached with decaying oscillations. We can explain these observations using a theoretical model, which rests on approximations for the hydrodynamic force and torque exerted by the fluid on the particle. These are very challenging to determine from first principles. Explicit expressions have been derived for small Reynolds numbers, and for simple shapes, such as slender rods [10, 17], and spheroids [11]. For particle Reynolds numbers up to \(\sim 50\), the quasi-steady hydrodynamic forces and torques can be reliably parameterised empirically [52, 53, 54, 15]. These parameterisations enter our model in the form of two scalar functions of the settling speed \(v_{g}\), \(C_{F}(v_{g})\) for the translational motion, and \(C_{T}(v_{g})\) for rotation (see _Methods_ for details). The theory uses a quasi-steady model for the torque. On the experimental time scales this is expected to be a good approximation, because history contributions to the torque decay rapidly [55]. The resulting model has three non-dimensional parameters: the aspect ratio of the particle, \(\lambda=a_{\parallel}/a_{\perp}\), the non-dimensional particle volume, \(\mathcal{V}=gV_{p}/\nu^{2}\), where \(V_{p}=\frac{4\pi}{3}a_{\perp}^{2}a_{\parallel}\) is the volume of the spheroid, and the mass-density ratio \(\mathcal{R}=\rho_{p}/\rho_{f}\). In terms of these parameters, the particle Reynolds number based on the Stokes settling speed is given by \(\mathrm{Re}_{p}\approx\frac{1}{6\pi}\mathcal{R}\mathcal{V}\), up to a \(\lambda\)-dependent factor of order unity (see _Methods_). The model predictions are compared with the experimental results in Fig. 3, assuming that the particles rotate in a plane, because in the experiment, most particles rotated approximately in a plane (the projection of the longest axis did not change by more than a few percent). Fig. 3 demonstrates that the model captures the observed settling dynamics very well. The predicted oscillation frequency based on terminal values is within 22% of the experimental mean (with a mean deviation of \(<7\)%), while the predicted decay rate based on terminal values is within 34% of the experiments (with a mean deviation of \(<13\)%) and the predicted terminal velocity is within 7% of the experimental mean (with a mean deviation of \(<3\)%). The deviations are largest for the decay rate, for several reasons. 
Experimentally, it was very challenging to determine the decay rates, as this required the longest tracks with sufficiently many oscillations, and optimal viewing angles (where the maximum length of the particle was fully visible in at least one camera of each pair). Note that the angular dynamics of nearly spherical particles is most difficult to track, because the particle orientation is extracted from the two-dimensional projections of the particle shape upon the camera plane. Other sources of uncertainty are small irregularities in particle shape which could conceivably affect the angular dynamics [12, 14], and out-of-plane oscillations. Panel **d** shows data points manually selected for low noise and approximately planar motion. Their scatter is smaller than indicated by the error bars which comprise statistics over all experimental runs.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Group & \(\lambda\) & \(2a_{\parallel}\) [\(\mu\mathrm{m}\)] & \(2a_{\perp}\) [\(\mu\mathrm{m}\)] & \(V_{p}\) [\(\mathrm{mm}^{3}\)] & \(\mathrm{Re}_{p}\) & \(\tau_{p}\) [\(\mathrm{ms}\)] \\ \hline I & 0.20 & 47.9 & 239.4 & \(1.44\times 10^{-3}\) & 2.8 & 42.0 \\ I & 0.50 & 88.2 & 176.4 & \(1.44\times 10^{-3}\) & 2.5 & 57.0 \\ I & 0.80 & 120.6 & 150.8 & \(1.44\times 10^{-3}\) & 2.4 & 66.7 \\ I & 1.00 & 140.0 & 140.0 & \(1.44\times 10^{-3}\) & 2.2 & 71.8 \\ I & 1.25 & 162.0 & 130.0 & \(1.44\times 10^{-3}\) & 2.6 & 77.2 \\ I & 2.00 & 222.2 & 111.0 & \(1.44\times 10^{-3}\) & 3.3 & 90.4 \\ I & 5.00 & 410.0 & 81.8 & \(1.44\times 10^{-3}\) & 5.0 & 122.9 \\ \hline II & 0.25 & 65.5 & 261.9 & \(2.35\times 10^{-3}\) & 3.8 & 62.9 \\ II & 4.00 & 399.4 & 99.9 & \(2.08\times 10^{-3}\) & 5.9 & 146.3 \\ \hline III & 0.25 & 150.0 & 600.0 & \(28.28\times 10^{-3}\) & 22.5 & 329.9 \\ III & 4.00 & 876.9 & 219.2 & \(22.07\times 10^{-3}\) & 34.3 & 704.6 \\ \hline \hline \end{tabular} \end{table} Table 1: **Characterisation of the 3D-printed particles**, based on the aspect ratio \(\lambda=a_{\parallel}/a_{\perp}\), where \(2a_{\parallel}\) is the length of the particle’s symmetry axis, and \(2a_{\perp}\) is the diameter of the particle perpendicular to the symmetry axis; \(V_{p}\) is the volume of the particle; \(\mathrm{Re}_{p}\) is the particle Reynolds number based on \(\mathrm{max}\{a_{\parallel},a_{\perp}\}\) and the observed steady-state settling speed; the Stokes time \(\tau_{p}=(2\rho_{p}/9\rho_{f})a_{\perp}a_{\parallel}/\nu\), where \(\rho_{p}\) and \(\rho_{f}\) are the mass densities of the particle and the fluid respectively, provides an estimate of the particle response time, which for some particles is much longer than the duration of the experiments presented here. The mass-density ratio \(\mathcal{R}\) remained almost constant over the duration of an experiment. Due to day-to-day changes in the ambient temperature (\(22.5\pm 0.5\,^{\circ}\mathrm{C}\)), \(\rho_{f}\) changed. We estimate that \(\mathcal{R}=1004.75\pm 1.71\) for all data reported.

We also investigated how sensitive the theoretical results are to changing the empirical functions \(C_{F}(v_{g})\) and \(C_{T}(v_{g})\). The small-\(\mathrm{Re}_{p}\) limit of the present model corresponds to setting \(C_{F}(v_{g})=C_{T}(v_{g})=1\). The results agree qualitatively with the experiments, but not quantitatively: the settling speed decreases by up to \(70\,\%\) for both oblate and prolate spheroids, the frequency decreases by up to \(20\,\%\) for oblate spheroids, and increases by up to \(10\,\%\) for prolate ones.
The decay rate only changes by a few percent. The agreement between theory and experiment in Fig. 3 demonstrates that the theoretical model captures translational and angular dynamics very accurately. Crucial ingredients are the functions \(C_{T}(v_{g})\) and \(C_{F}(v_{g})\), which allowed us to extend the range of validity of the model from \(\text{Re}_{p}\ll 1\) to particle Reynolds numbers relevant for non-spherical particles in the atmosphere.

Figure 3: **Comparison between experiments and theoretical model.** **a** Time evolution of tilt angle for spheroids (group I in Table 1) with aspect ratios \(\lambda=0.2\) and \(\lambda=5\), each showing results from one experiment (blue) and model simulation using the same initial conditions (red). **b** Terminal velocity, **c** frequency and **d** amplitude decay rate against the aspect ratio \(\lambda\). Markers show averages obtained for all experiments with error bars indicating 95% confidence bounds for groups I (\(\circ\)), II (\(\square\)), and III (\(\diamond\)) in Table 1. White markers in panel **d** show the average decay rate for individual experiments with the lowest noise levels. They were manually selected by including the experiments with the largest number of oscillations observed, where the angular dynamics remained approximately planar, and with the highest correlation coefficient when oscillation peaks are fitted with an exponential function (see _Methods_). In panels **b–d**, solid lines show results of a linear-stability analysis of the model. The shaded regions indicate by how much the theoretical predictions change as the settling speed varies from its initial to its terminal value, i.e. the lower boundary of the shaded regions in **b** and **c**, and the upper boundary in **d**. Dashed lines show the results of linear-stability analysis of the simplified pendulum equation; in panels **b** and **c** it agrees with the linear stability analysis of the full model.

For large settling speeds, the theoretical model simplifies. In this limit, the tilt angle \(\varphi\) obeys a damped-pendulum equation, \(\ddot{\varphi}+\dot{\varphi}+v_{g}^{2}C_{T}(v_{g})g(\lambda)\mathscr{R}^{3}\mathscr{V}^{2}\sin(2\varphi)/2=0\) (see _Methods_ for a full description of all terms including \(g(\lambda)\)). For \(\text{Re}_{p}\ll 1\), this equation simplifies to the form given in Refs. [36, 43, 56]. Linear stability analysis of the pendulum equation shows that the particles approach alignment exponentially, with rate \(\lambda_{\pm}=-\frac{1}{2}\pm\frac{1}{2}\sqrt{\Delta}\), with discriminant \(\Delta=1-(v_{g}^{*})^{2}C_{T}(v_{g}^{*})g(\lambda)\mathscr{R}^{3}\mathscr{V}^{2}\), and dimensionless steady-state settling speed \(v_{g}^{*}\sim 1\). For all particles in our experiments (Table 1), the values of \(\mathscr{R}^{3}\mathscr{V}^{2}g(\lambda)\) were large enough to ensure that \(\Delta<0\) (Fig. 4). A bifurcation occurs when the discriminant becomes positive, in which case the particle orientation relaxes without oscillation. The small value of \(\mathscr{R}\) in water (Fig. 1) explains why no oscillations were observed for particles settling in water [14], with \(\mathscr{R}\sim 1\). In dimensional units, the decay rate is simply proportional to the inverse Stokes damping time, \(\tau_{p}^{-1}\), indicated in Table 1. We conclude that the decay rate tends to infinity in the overdamped limit. In air, by contrast, the decay rate is much smaller, of the order of 25 Hz.
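The qualitative distinction between the two regimes can be illustrated with a minimal numerical sketch. In the script below (illustrative only; the dimensionless stiffness \(K=v_{g}^{2}C_{T}(v_{g})g(\lambda)\mathscr{R}^{3}\mathscr{V}^{2}\) is set by hand rather than computed from the empirical parameterisations) the damped-pendulum equation for the tilt angle is integrated for a large stiffness, representing a heavy particle in air, and a small stiffness, representing a particle in a dense liquid such as water.

```python
import numpy as np
from scipy.integrate import solve_ivp

def tilt_dynamics(t, y, K):
    """Damped pendulum for the tilt angle: phi'' + phi' + K*sin(2*phi)/2 = 0 (dimensionless)."""
    phi, phidot = y
    return [phidot, -phidot - 0.5 * K * np.sin(2.0 * phi)]

t_span = (0.0, 30.0)
t_eval = np.linspace(*t_span, 2000)
y0 = [1.2, 0.0]   # initial tilt angle (rad) and angular velocity

for K, label in [(50.0, "air-like (large stiffness)"),
                 (0.05, "water-like (small stiffness)")]:
    sol = solve_ivp(tilt_dynamics, t_span, y0, args=(K,), t_eval=t_eval, rtol=1e-8)
    phi = sol.y[0]
    overshoots = bool(np.any(phi < 0.0))   # does the tilt angle swing past alignment?
    print(f"{label}: K = {K}, oscillates about alignment: {overshoots}")
```

In this sketch the large-stiffness case relaxes through decaying oscillations about the aligned orientation, whereas the small-stiffness case relaxes monotonically, mirroring the observed difference between settling in air and in water.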
Now consider the very slender fibres used in the experiments reported in Ref. [30]. Since \(g(\lambda)\sim 10^{-4}\log(\lambda)^{2}/\lambda^{2}\) for large \(\lambda\), we conclude that the relevant parameter combination \(g(\lambda)\mathscr{R}^{3}\mathscr{V}^{2}\) depends on the geometrical parameters defining the spheroid as \(a_{\perp}^{6}\) (disregarding factors of \(\log\lambda\)). In particular, the model predicts that only fibres with \(a_{\perp}\) larger than approximately \(25\,\mu\mathrm{m}\) can oscillate. This explains why the fibres used in the experiments of Ref. [30], of diameter \(\sim 10\,\mu\mathrm{m}\), did not oscillate. We conclude that the angular dynamics of slender fibres in the atmosphere can be very different from that of particles of moderate aspect ratios. Similarly, in the case of very slender disks (\(\lambda\ll 1\)), \(g(\lambda)\approx 7\times 10^{-5}\lambda\). The combination of parameters \(g(\lambda)\mathscr{V}^{2}\mathscr{R}^{3}\) depends on the geometry of the particles via the product \((a_{\perp}\sqrt{\lambda})^{6}\). Our estimates indicate that oscillations are observable for thin disks when \(a_{\perp}\sqrt{\lambda}\) is larger than \(\sim 25\,\mu\mathrm{m}\). This condition is very well fulfilled for the oblate particles in Table 1, as well as for a large class of ice crystals. A potential shortcoming of this analysis is that the forces and torques acting on thin disks have not been fully tested. The effect of particle inertia on the angular dynamics is essentially the same in turbulence, which simply acts as a stochastic driving force of the angular dynamics [36, 43]. The resulting equation is that of a pendulum driven with noise \(\xi(t)\), and the tilt angle is given by \(\delta\varphi(t)=\int_{0}^{t}\mathrm{d}t_{1}\,\mathscr{F}(t_{1}-t)\xi(t_{1})\), where \(\mathscr{F}(t)\) is the fundamental solution of the pendulum equation. In the overdamped limit, \(\mathscr{F}(t)\) decays rapidly as a function of time. But it decreases more slowly when the angular dynamics becomes underdamped, resulting in a significant increase of the tilt angle variance of non-spherical particles in turbulence [43]. We stress that, over the range of values of \(\lambda\) considered in this work, the experimental results provide a validation of the model used in Ref. [43], although further experimental work is necessary to study forces and torques acting on very slender disks. Klett [56] discussed the alignment of ice crystals in turbulent ice clouds, and their significance as light scatterers. He found that the settling crystals align with tilt angle variances \(\langle\delta\varphi^{2}\rangle\propto(a^{2}/\nu)\mathscr{E}/v_{g}^{2}\), with turbulent energy-dissipation rate \(\mathscr{E}\). Gustavsson _et al._ [43] used the \(\mathrm{Re}_{p}\ll 1\) limit of the present model to obtain a much larger variance, \(\langle\delta\varphi^{2}\rangle\propto\mathrm{Re}_{\lambda}\sqrt{\mathscr{E}\nu}/v_{g}^{2}\), with Taylor-scale Reynolds number \(\mathrm{Re}_{\lambda}\). The results summarised above show that this is the correct conclusion for \(\mathrm{Re}_{p}\) up to \(\sim 10\), which is the physically relevant range for ice crystals in cold atmospheric clouds, see Fig. 1. Sassen [57] hypothesised how \(\mathrm{Re}_{p}\) affects the tilt angle variance. Our analysis shows that the tilt angle variance depends sensitively on particle shape, not only upon \(\mathrm{Re}_{p}\) (and on the mass-density ratio \(\mathscr{R}\)). 
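As a quick numerical illustration of these order-of-magnitude estimates, the short sketch below evaluates the asymptotic forms of \(g(\lambda)\) quoted above and checks the corresponding size criterion for damped oscillations. The helper function and the example particles (a fibre from Ref. [30] and an oblate spheroid from group I of Table 1) are used only for illustration; the full criterion involves \(\Delta\) and the empirical functions \(C_{T}\), \(C_{F}\) described in _Methods_.

```python
import numpy as np

# Asymptotic forms of g(lambda) quoted in the text
def g_asymptotic(lam):
    if lam > 1:                                   # slender fibres, lambda >> 1
        return 1e-4 * np.log(lam) ** 2 / lam ** 2
    return 7e-5 * lam                             # thin disks, lambda << 1

# Approximate size criterion for damped oscillations quoted in the text:
# fibres need a_perp >~ 25 um, thin disks need a_perp*sqrt(lambda) >~ 25 um.
def oscillates(a_perp_um, lam, threshold_um=25.0):
    size = a_perp_um if lam > 1 else a_perp_um * np.sqrt(lam)
    return size > threshold_um

# Fibre from Ref. [30]: diameter ~10 um (a_perp ~ 5 um), length ~2 mm (lambda ~ 200)
print(g_asymptotic(200.0), oscillates(5.0, 200.0))          # -> no oscillations expected
# Oblate spheroid from group I of Table 1: 2*a_perp = 239.4 um, lambda = 0.2
print(g_asymptotic(0.2), oscillates(239.4 / 2.0, 0.2))      # -> oscillations expected
```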
Figure 4: **Bifurcation diagram**. Particles in Table 1 are shown as \(\circ\) (group I), \(\square\) (group II), and \(\diamond\) (group III). The coloured region is a map of the parameter space for the fibres from Ref. [30] with diameters between \(7\,\mu\mathrm{m}\) and \(13\,\mu\mathrm{m}\) and lengths approximately between \(0.1\,\mathrm{mm}\) and \(2\,\mathrm{mm}\). For the mapping we approximated the fibres as slender spheroids, disregarding possible shape singularities at the fibre ends, and we used \(C_{T}=C_{F}=1\), since the parameterisations used for \(C_{F}\) and \(C_{T}\) do not cover very slender particles, but \(\mathrm{Re}_{p}\) is small. The bifurcation line distinguishes damped angular dynamics without oscillations (\(\Delta>0\)) from damped oscillations (\(\Delta<0\)). The present model is expected to break down when the wake behind the falling particles becomes asymmetric and eventually unsteady. The corresponding bifurcation diagram remains to be explored. Model studies demonstrate that the dispersion in particle orientation induces differences in settling velocities, which in turn enhance the collision rate between settling crystals [23]. It would be interesting to consider the settling of more than one particle, and to extend the analysis of the angular dynamics to study collisions, including the effect of hydrodynamic repulsion or other molecular interactions when particles come into contact. The fluctuations of the angular degrees of freedom of anisotropic particles, induced by turbulence, will affect their transport properties, as the resistance (drag) of the particles depends on their orientation with respect to the slip velocity. In particular, the time particles remain suspended in the atmosphere should be decreased as a result of turbulence. The effect is largest for particles exhibiting strong oscillations. Pollen particles, which may be too small to oscillate, have developed alternative strategies for enhancing their residence time in the atmosphere [47]. Our study opens the way to further investigations of non-spherical atmospheric particles. As an example, ice crystals in clouds come with a wide variety of sizes and shapes. In the case of hollow crystals [58], it will be interesting to investigate to what extent such particles can oscillate, as the ratio \(\mathcal{R}\) between the densities of the particle and air is likely to be reduced. In a related spirit, one can ask about the angular dynamics of non-symmetric particles. This is in particular the case for volcanic ash [1], which appears to be more generally describable in terms of ellipsoids. ## Acknowledgements TB was funded by the German Research Foundation (DFG) Walter Benjamin Position (project no. 463393443). JG was supported by funding from the European Union Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie Actions, Grant Agreement No. 675675. KG was supported by a grant from Vetenskapsradet (no. 2018-03974). BM was supported by Vetenskapsradet (grant no. 2021-4452), and acknowledges a Mary Shepard B. Upson Visiting Professorship with the Sibley School of Mechanical and Aerospace Engineering at Cornell. Statistical-model simulations were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC). We thank Eberhard Bodenschatz for providing resources and support. The Gottingen Turret was manufactured with support from the Mechanical Workshop and Electronic Workshop of the Max Planck Institute for Dynamics and Self-Organisation. 
We thank Jean-Lou Pierson for pointing out Refs. [30, 31] to us. We also thank Antonio Ibanez Landeta, Augustinus Bertens and Jan Molacek for support and fruitful discussions. ## Author contributions GB conceptualised the study. JG and GB designed and constructed the first iteration of the experimental setup. JS and GB designed and developed the final experimental setup. JG investigated different methods for printing the particles. TB and JS printed the particles and performed the experiments. JG and GB developed the codes for image analysis. GB performed camera calibrations and analysed the particle tracks. KG, AP, and BM developed the theoretical model. TB, KG, and GB performed the data analysis, and all authors contributed to the interpretation of the data, writing the initial draft, and proofreading and editing the final version of the manuscript.
2307.01616
SageFormer: Series-Aware Framework for Long-term Multivariate Time Series Forecasting
In the burgeoning ecosystem of the Internet of Things, multivariate time series (MTS) data has become ubiquitous, highlighting the fundamental role of time series forecasting across numerous applications. The crucial challenge of long-term MTS forecasting requires adept models capable of capturing both intra- and inter-series dependencies. Recent advancements in deep learning, notably Transformers, have shown promise. However, many prevailing methods either marginalize inter-series dependencies or overlook them entirely. To bridge this gap, this paper introduces a novel series-aware framework, explicitly designed to emphasize the significance of such dependencies. At the heart of this framework lies our specific implementation: the SageFormer. As a Series-aware Graph-enhanced Transformer model, SageFormer proficiently discerns and models the intricate relationships between series using graph structures. Beyond capturing diverse temporal patterns, it also curtails redundant information across series. Notably, the series-aware framework seamlessly integrates with existing Transformer-based models, enriching their ability to comprehend inter-series relationships. Extensive experiments on real-world and synthetic datasets validate the superior performance of SageFormer against contemporary state-of-the-art approaches.
Zhenwei Zhang, Linghang Meng, Yuantao Gu
2023-07-04T10:08:25Z
http://arxiv.org/abs/2307.01616v2
# SageFormer: Series-Aware Graph-Enhanced Transformers for Multivariate Time Series Forecasting ###### Abstract Multivariate time series forecasting plays a critical role in diverse domains. While recent advancements in deep learning methods, especially Transformers, have shown promise, there remains a gap in addressing the significance of inter-series dependencies. This paper introduces SageFormer, a Series-aware Graph-enhanced Transformer model designed to effectively capture and model dependencies between series using graph structures. SageFormer tackles two key challenges: effectively representing diverse temporal patterns across series and mitigating redundant information among series. Importantly, the proposed series-aware framework seamlessly integrates with existing Transformer-based models, augmenting their ability to model inter-series dependencies. Through extensive experiments on real-world and synthetic datasets, we showcase the superior performance of SageFormer compared to previous state-of-the-art approaches. ## 1 Introduction Multivariate Time Series (MTS) data, composed of multiple univariate series, serves as the foundation for MTS forecasting tasks that aim to predict future trends based on a fixed window of historical sequences. The value of MTS forecasting extends beyond academic interest, with essential applications in everyday life and numerous industries. It facilitates resource management and strategic planning across diverse domains such as energy [1], finance [2], and weather [3]. In recent years, deep learning methods [4; 5], particularly Transformer structures [6; 7; 8; 9], have achieved remarkable breakthroughs in MTS forecasting tasks compared to traditional methods (e.g., ARIMA, SSM [10]). In many Transformer-based studies, the dependencies between different series are often overlooked. These models typically amalgamate different series into hidden temporal embeddings through linear transformations (series-mixing framework, Figure 1(b)). These models primarily concentrate on temporal dependencies, largely overlooking the relationships between series [11]. Recently, some studies [12; 13] discovered that intentionally neglecting inter-series dependencies modeling (series-independent framework, Figure 1(c)) could improve prediction results due to its robustness towards distribution drifts [14]. However, the series-independent framework entirely disregards the dependencies between series, resulting in suboptimal results on specific datasets (see section 4.5). These findings highlight the challenges in modeling series dependencies, making their accurate capture in MTS forecasting tasks a promising research direction. To address this research gap, we propose SageFormer in this paper. **Present work.** In this paper, we examine inter-series dependencies in long-term MTS forecasting problems. To accurately model the inter-series dependencies, we propose Series-Aware Graph-Enhanced Transformer (**SageFormer**, Figure 1(a)), a series-aware Transformer model enhanced with graph neural networks (GNN). By learning relationships between time series using graph structures, we aim to distinguish series using global tokens and improve the modeling ability for diverse temporal patterns across various series through graph aggregation. SageFormer can function as a universal extension for Transformer-based structures, better utilizing the dependencies between series and achieving superior performance without greatly affecting model complexity. 
We contend that the proposed SageFormer addresses two challenges in inter-series dependencies modeling: 1. How can diverse temporal patterns among series be effectively represented? We introduce a series-aware approach that extends the existing series-independent framework by incorporating several global tokens before input tokens. These tokens capture global information for each variable through self-attention and facilitate series interaction via graph aggregation. The addition of global tokens enables SageFormer to learn not only individual series' temporal patterns but also focus on dependencies between series, thereby enhancing diversity and overcoming series-independent limitations (see section 4.5). 2. How can the impact of redundant information across series be avoided? We propose using sparsely connected graph structures to reduce the impact of redundant information in unrelated series. In MTS forecasting, not all information is useful due to redundancy in time and series dimensions [15]. To evaluate model effectiveness with sparse data, we designed Low-rank datasets with varying series numbers (see Section 4.5). Our model's performance remains stable as series dimensions increase, utilizing low-rank properties effectively. In contrast, the series-mixing method suffers from prediction deterioration as series dimensions grow. Our contributions are threefold: * We introduce a novel series-aware framework that serves as a universal extension for Transformer-based models. It effectively utilizes graph structures to exploit the dependencies between series without noticeably increasing the model's complexity. * We propose SageFormer, a series-aware Transformer model for long-term MTS forecasting. By integrating GNN, SageFormer efficiently captures inter-series dependencies, transcending the limitations of existing Transformer-based models in modeling these dependencies. * Experimental results demonstrate that our model attains state-of-the-art performance on both real-world and synthetic datasets. Figure 1: Illustration of three different ways of modeling series dependencies: (a) The proposed series-aware framework. Before the original input tokens are fed into the Transformer encoder, we incorporate learnable global tokens to capture the intrinsic features of each series. The embedding tokens are processed through multiple SageFormer layers, where temporal encoding and graph aggregation are performed iteratively. (b) The series-mixing framework combines all series at each timestamp into a single token vector. (c) The series-independent framework handles each series separately, improving the learning of unique temporal patterns for different series. ## 2 Related Works Multivariate Time Series Forecasting. MTS forecasting models can generally be categorized into statistical and deep models. Many forecasting methods begin with traditional tools such as the Vector Autoregressive model and the Vector Autoregressive Moving Average [16; 17]. These typical statistical MTS forecasting models assume linear dependencies between series and values. With the advancement of deep learning, various deep models have emerged and often demonstrate superior performance compared to their statistical counterparts. Temporal Convolutional Networks [5; 18] and DeepAR [19] consider MTS data as sequences of vectors, employing CNNs and RNNs to capture temporal dependencies. Transformers for MTS Forecasting. Recently, Transformer models with self-attention mechanisms have excelled in various fields [20; 21; 22; 23]. 
Numerous studies aim to enhance Transformers for MTS forecasting by addressing their quadratic complexity. Notable approaches include Informer [7], introducing ProbSparse self-attention and distilling techniques; Autoformer [8], incorporating decomposition and auto-correlation concepts; FEDformer [9], employing a Fourier-enhanced structure; and Pyraformer [24], implementing pyramidal attention modules. PatchTST [13] divides each series into patches and uses a series-independent Transformer to model temporal patterns. While these models primarily focus on reducing temporal dependencies modeling complexity, they often overlook crucial inter-series dependencies. Inter-series dependencies for MTS Forecasting.Numerous methods have been proposed to explicitly enhance inter-series dependencies in MTS forecasting. LSTnet [4] employs CNN for inter-series dependencies and RNN for temporal dependencies. GNN-based models [25; 26; 27; 28], such as MTGNN [28], utilize temporal and graph convolution layers to address both dependencies. STformer [29] flattens multivariate time series into a 1D sequence for Transformer input, while Crossformer [11] employs dimension-segment-wise embedding and a two-stage attention layer for efficient temporal and inter-series dependencies capture respectively. Most CNN and GNN-based models struggle to capture long-term temporal dependencies. STformer [29] and Crossformer [11] extend 1-D attention to 2-D, but they fail to reveal the relationships between series explicitly. Unlike the methods mentioned above, our proposed SageFormer serves as a general framework that can be applied to various Transformer-based models, utilizing graph structure learning to enhance their ability to capture inter-series dependencies. ## 3 Methodology ### Problem Definition In this paper, we concentrate on long-term MTS forecasting tasks. Let \(\mathbf{x}_{t}\in\mathbb{R}^{C}\) denote the value of \(C\) series at time step \(t\). Given a historical MTS instance \(\mathbf{X}_{t}=[\mathbf{x}_{t},\mathbf{x}_{t+1},\cdots,\mathbf{x}_{t+L-1}]\in \mathbb{R}^{C\times L}\) with length \(L\), the objective is to predict the next \(T\) steps of MTS values \(\mathbf{Y}_{t}=[\mathbf{x}_{t+L},\cdots,\mathbf{x}_{t+L+T-1}]\in\mathbb{R}^{C \times T}\). The aim is to learn a mapping \(f(\cdot):\mathbf{X}_{t}\rightarrow\mathbf{Y}_{t}\) using the proposed model (we omit the subscript \(t\) when it does not cause ambiguity). We employ graphs to represent inter-series dependencies in MTS and briefly overview relevant graph-related concepts. From a graph perspective, different series in MTS are considered nodes, and relationships among series are described using the graph adjacency matrix. Formally, the MTS data can be viewed as a signal set \(\mathcal{G}=\{\mathcal{V},\mathbf{X}_{t},\mathbf{A}\}\). The node set \(\mathcal{V}\) contains \(C\) series of MTS data and \(\mathbf{A}\in\mathbb{R}^{C\times C}\) is a weighted adjacency matrix. The entry \(a_{ij}\) indicates the dependencies between series \(i\) and \(j\). If they are not dependent, \(a_{ij}\) equals zero. The main symbols used in the paper and their meanings are detailed in Table 6 in the Appendix A. ### Overview SageFormer is designed to augment the capability of Transformer-based models in addressing inter-series dependencies. The overall architecture adheres to a Transformer encoder pipeline, conforming to the series-aware framework. 
The decoder portion of the Transformer is omitted and replaced with a linear decoder head (\(\mathrm{FlattenHead}\) in Algorithm 1), proving to be more efficient [12; 13; 30]. SageFormer's encoding workflow, summarized as Algorithm 1, encompasses three key components: (1) series-aware global tokens, (2) graph structure learning, and (3) iterative message passing. ``` Input: The input MTS history \(\mathbf{X}\) Output: The predicted MTS future \(\mathbf{Y}\). 1begin 2\(\mathcal{X}^{(0)}\leftarrow\)\(GlobalEembedding(\mathbf{X})\) ; /* series-aware global tokens */ \(A\leftarrow\) GraphLearning ; /* graph structure learning */ 3for\(c=1,\dots,C\)do 4\(\mathcal{X}^{1}_{:c}\leftarrow\mathrm{TEB}\left(\mathcal{X}^{0}_{:c}\right)\); 5for\(l=2,\dots,L-1\)do /* iterative message passing */ 6for\(m=1,...,M\)do 7\(\widehat{\mathcal{X}}^{(l)}_{:m}\leftarrow\mathrm{GNN}\left(\mathcal{X}^{(l)}_ {:m},\mathbf{A}\right)\) ; /* graph aggregation */ 8for\(c=1,\dots,C\)do 9\(\mathcal{X}^{l+1}_{c:}\leftarrow\mathrm{TEB}\left(\widehat{\mathcal{X}}^{(l)}_ {c:}\right)\) ; /* temporal encoding */ 10\(\widehat{\mathcal{X}}\leftarrow\left\{\mathcal{X}^{L}_{m:}|m>M\right\}\) 11 Return \(Y\leftarrow\mathrm{FlattenHead}\left(\hat{\mathcal{X}}\right)\) ``` **Algorithm 1**SageFormer's Workflow ### Series-aware Global Tokens Drawing inspiration from the application of the class token in natural language models [21] and Vision Transformer [31], we prepend learnable tokens for each series to encapsulate their corresponding global information. In Section 3.5, we employ these global tokens, rather than all tokens, to capture inter-series dependencies, thereby enhancing the series awareness of each sub-series. Following PatchTST [13], the input MTS \(\mathbf{X}\in\mathbb{R}^{C\times L}\) is reshaped into \(\mathcal{X}_{p}=\{\mathbf{X}_{1},\cdots,\mathbf{X}_{C}\}\in\mathbb{R}^{C\times N \times P}\), where P is the subsequence patch length, \(C\) is the number of time series, and \(N=\lfloor(L-P)/S\rfloor+2\) denotes the number of patches, \(S\) indicates the non-overlapping length between adjacent patches. \(\mathbf{X}_{c}=\{\mathbf{x}^{1}_{c},\cdots,\mathbf{x}^{N}_{c}\}\in\mathbb{R}^{ N\times P}\) represents the patched sequence for series \(c\). A consistent latent vector of size \(D\) is maintained across Transformer encoding blocks (TEB), with a trainable linear projection (\(\mathbf{E}\in\mathbb{R}^{P\times D}\)) mapping \(\mathcal{X}_{p}\) to the D-dimensional space (Equation 1). \(M\) learnable embeddings (global tokens) \(\mathbf{g}_{i}\in\mathbb{R}^{D}\) are added before the patched sequences, representing each series' global information after self-attention, resulting in an effective input sequence length of \(M+N\). The prepended global tokens are designed to facilitate interaction across series. Positional information is enhanced via 1D position embeddings \(\mathbf{E}_{pos}\). The final embedding of \(\mathbf{X}\) is \(\mathcal{X}^{(0)}\in\mathbb{R}^{C\times(N+M)\times D}\), where \[\mathcal{X}^{(0)}_{c:}=\left[\mathbf{g}_{1};\cdots;\mathbf{g}_{M};\mathbf{x}^{ 1}_{c}\mathbf{E};\cdots;\mathbf{x}^{N}_{c}\mathbf{E}\right]+\mathbf{E}_{pos}. \tag{1}\] ### Graph Structure Learning The adjacency matrix is learned end-to-end, capturing implicit relationships across series without requiring prior knowledge. 
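Before the detailed equations of the graph module (Equations 2–4 below), the following is a minimal, self-contained sketch of the main ingredients described in this section: the patch-plus-global-token embedding of Equation 1, an end-to-end learned, top-k-sparsified adjacency over the \(C\) series, and a simple multi-hop aggregation of the global tokens. This is not the authors' implementation; the class and argument names, the tanh activations, the row-normalised propagation, and the omission of end-padding when patching are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SeriesAwareEmbedding(nn.Module):
    """Patch each series and prepend M learnable global tokens (cf. Equation 1)."""

    def __init__(self, patch_len=16, stride=8, d_model=128, num_global=1, max_patches=64):
        super().__init__()
        self.patch_len, self.stride = patch_len, stride
        self.proj = nn.Linear(patch_len, d_model)                   # projection E in Equation 1
        self.global_tokens = nn.Parameter(torch.randn(num_global, d_model))
        self.pos = nn.Parameter(torch.zeros(num_global + max_patches, d_model))

    def forward(self, x):                                            # x: (batch, C, L)
        patches = x.unfold(-1, self.patch_len, self.stride)          # (batch, C, N, P); no end-padding here
        tokens = self.proj(patches)                                  # (batch, C, N, D)
        b, c, _, _ = tokens.shape
        g = self.global_tokens.expand(b, c, -1, -1)                  # (batch, C, M, D)
        out = torch.cat([g, tokens], dim=2)                          # (batch, C, M+N, D)
        return out + self.pos[: out.shape[2]]


class LearnedSparseGraph(nn.Module):
    """Adjacency learned from node embeddings, kept directed and k-sparse."""

    def __init__(self, num_series, emb_dim=32, k=16):
        super().__init__()
        self.k = min(k, num_series - 1)
        self.emb = nn.Parameter(torch.randn(num_series, emb_dim))
        self.theta1 = nn.Linear(emb_dim, emb_dim, bias=False)
        self.theta2 = nn.Linear(emb_dim, emb_dim, bias=False)

    def forward(self):
        m1 = torch.tanh(self.theta1(self.emb))
        m2 = torch.tanh(self.theta2(self.emb))
        a = torch.relu(m1 @ m2.T - m2 @ m1.T)        # antisymmetric difference -> directed edges
        # keep only the k strongest outgoing edges per node, zero out the rest
        topk = torch.topk(a, self.k, dim=-1)
        mask = torch.zeros_like(a).scatter_(-1, topk.indices, 1.0)
        return a * mask                               # (C, C) sparse, non-negative adjacency


def aggregate_global_tokens(global_tokens, adj, hops=2):
    """Multi-hop aggregation of one global token per series (M = 1), cf. Equation 4."""
    # global_tokens: (batch, C, D); simple row-normalised propagation, per-hop weights omitted
    a = adj / adj.sum(-1, keepdim=True).clamp(min=1e-6)
    out, h = global_tokens, global_tokens
    for _ in range(hops):
        h = torch.einsum("ij,bjd->bid", a, h)
        out = out + h
    return out
```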
In MTS forecasting tasks, we postulate that inter-series dependencies are unidirectional (e.g., power load affects oil temperature, but not vice versa), resulting in a directed relationship represented by the learned graph. Specifically, the entire graph structure learning module can be described by the following equations: \[\mathbf{M}_{1} =\mathrm{act}_{1}(\mathbf{E}\mathbf{\Theta}_{1});\;\mathbf{M}_{2}= \mathrm{act}_{1}(\mathbf{E}\mathbf{\Theta}_{2}) \tag{2}\] \[\mathbf{A}^{\prime} =\mathrm{Relu}(\mathbf{M}_{1}\mathbf{M}_{2}^{T}-\mathbf{M}_{2} \mathbf{M}_{1}^{T}) \tag{3}\] Node embeddings of series are learned through randomly initialized \(\mathbf{E}\in\mathbb{R}^{N\times C}\). Subsequently, \(\mathbf{E}\) is transformed into \(\mathbf{M}\in\mathbb{R}^{N\times C}\) using \(\mathbf{\Theta}\in\mathbb{R}^{C\times C}\), with the nonlinear \(\mathrm{act}_{1}\) (Equation 2). Following the approach of MTGNN [28], Equation 3 is employed to learn unidirectional dependencies. For each node, its top k nearest nodes are selected as neighbors, setting the weights of non-connected nodes to zero, yielding the final sparse adjacency matrix \(\mathbf{A}\in\mathbb{R}^{C\times C}\). ### Iterative Message Passing The embedding tokens (outlined in Section 3.3) are processed by SageFormer encoder layers, where temporal encoding and graph aggregation are conducted iteratively (Figure 2). This approach aims to disseminate the global information gathered during the GNN phase among all tokens within each series. As a result, the model captures inter-series dependencies through iterative message passing. Graph AggregationThe graph aggregation phase aims to fuse each series' information with its neighbors' information, thereby enhancing each series with related patterns. For each series in the \(l\)-th layer, we take the first \(M\) embeddings as the global tokens of layer \(l\): \(\mathbf{G}_{i}^{(l)}\leftarrow\mathcal{X}_{i:i}^{(l)}\in\mathbb{R}^{C\times D}, \;i\leq M\). The global tokens of layer \(l\) are gathered from all series and passed into the GNN for graph aggregation. For simplicity, we employ the same model as [25; 28] for graph aggregation: \[\mathbf{\widehat{G}}_{i}=\sum_{d=0}^{D}\mathbf{\tilde{A}}^{d}\mathbf{G}_{i} \mathbf{W}_{d},\;i\leq M \tag{4}\] Equation 4 represents multi-hop information fusion on the graph, where \(D\) denotes the depth of graph aggregation and \(\mathbf{\tilde{A}}\) is the graph Laplacian matrix. Each of the embeddings \(\mathbf{\widehat{G}}_{i}\) is dispatched to its original series and then concatenated with series tokens, resulting in graph-enhanced embeddings \(\mathcal{\tilde{X}}^{(l)}\). Temporal EncodingThe graph-enhanced embeddings can later be processed by any Transformer component ( Transformer[20], Informer [7], FEDformer [9], etc.). We choose the vanilla Transformer encoder [20] as our backbone. The output of the TEB functions as input token-level embeddings for the following encoding layer. Previously aggregated information from the GNN is disseminated to other tokens within each series via self-attention, enabling access to related series information. This process enhances the expressiveness of our model compared to series-independent models. Figure 2: Illustration of the iterative message-passing process in SageFormer. Each layer begins with graph aggregation, where global tokens from all series are gathered and processed by the multi-hop GNN component (leftmost rectangle). 
Graph-enhanced global tokens are then dispatched to their original series and encoded by TEB. The weights of each TEB are shared among all series. ## 4 Experiments ### Experimental Setup Datasets. To evaluate our proposed SageFormer, extensive experiments have been conducted on eight mainstream real-world datasets, including Weather, Traffic, Electricity, ILI (Influenza-Like Illness), and four ETT (Electricity Transformer Temperature) datasets. Details of these multivariate datasets are shown in Table 1. Among all datasets, Traffic and Electricity have more series, which can better reflect the effectiveness of the proposed method. Baselines and Task Settings. We compare our proposed method with nine popular models for long-term MTS forecasting problems as baselines, including three models that explicitly utilize inter-series dependencies: Crossformer [11], MTGNN [28], and LSTnet [4]; two series-independent neural models: DLinear [12] and PatchTST [13]; and four series-mixing transformer-based models: Transformer [20], Informer [7], Autoformer [8], and Non-stationary Transformer [32]. Implementation details. For model training and evaluation, we adopt the same settings as in [30]. The entire dataset is rolled with a stride of 1 to generate different input-output pairs, and Train/Val/Test sets are zero-mean normalized using the mean and standard deviation of the training set. Performance is evaluated over varying future window sizes on each dataset. The past sequence length is set as 36 for ILI and 96 for the others. Mean Square Error (MSE) and Mean Absolute Error (MAE) serve as evaluation metrics. All experiments are conducted five times, and the mean of the metrics is reported. Details regarding datasets, baselines, implementation, and hyper-parameters can be found in Appendix A. 
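The evaluation protocol just described (rolling windows with stride 1, normalisation with training-set statistics, MSE/MAE metrics) is straightforward to reproduce; a minimal sketch follows. The window lengths, array names, and toy data are placeholders, not the authors' data pipeline.

```python
import numpy as np

def make_windows(series, lookback, horizon):
    """Roll a (T, C) array with stride 1 into (X, Y) pairs of shape (n, C, lookback) / (n, C, horizon)."""
    xs, ys = [], []
    for t in range(len(series) - lookback - horizon + 1):
        xs.append(series[t:t + lookback].T)                          # past window, series-major
        ys.append(series[t + lookback:t + lookback + horizon].T)     # future window to predict
    return np.stack(xs), np.stack(ys)

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

def mae(pred, target):
    return float(np.mean(np.abs(pred - target)))

# toy multivariate series: T time steps, C channels (placeholder data)
rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 7)).cumsum(axis=0)

# chronological split; z-score normalisation uses *training* statistics only
n_train = int(0.7 * len(data))
mean, std = data[:n_train].mean(axis=0), data[:n_train].std(axis=0) + 1e-8
data_norm = (data - mean) / std

X, Y = make_windows(data_norm, lookback=96, horizon=96)
print(X.shape, Y.shape)                                   # (n, 7, 96) (n, 7, 96)
print(mse(Y, np.zeros_like(Y)), mae(Y, np.zeros_like(Y))) # metrics for a trivial zero forecast
```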
### Main Results \begin{table} \begin{tabular}{c|c c c c c c|c c c|c c c|c c c c|c c c c} \hline \hline \multirow{2}{*}{Models} & \multicolumn{3}{c|}{**SageFormer**} & \multicolumn{3}{c|}{**Crosformer**} & \multicolumn{3}{c|}{**MTGNN**} & \multicolumn{3}{c|}{**LSTnet**} & \multicolumn{3}{c|}{**PatchTST**} & \multicolumn{3}{c|}{**DLinear**} & \multicolumn{3}{c|}{**Stationary**} & \multicolumn{3}{c|}{**Autoformer**} & \multicolumn{3}{c}{**Informer**} & \multicolumn{3}{c}{**Transformer**} \\ & **(Ours)** & [11] & [28] & [4] & [13] & [12] & [32] & [8] & [7] & [20] & [20] & \\ \cline{2-19} Metric & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE \\ \hline Traffic & **0.436** & **0.285** & 0.570 & 0.312 & 0.592 & 0.317 & 0.736 & 0.450 & 0.471 & 0.298 & 0.625 & 0.383 & 0.624 & 0.340 & 0.628 & 0.379 & 0.854 & 0.416 & 0.661 & 0.363 \\ \hline Electricity & **0.175** & **0.273** & 0.314 & 0.366 & 0.333 & 0.378 & 0.440 & 0.494 & 0.200 & 0.288 & 0.212 & 0.300 & 0.193 & 0.296 & 0.227 & 0.338 & 0.311 & 0.397 & 0.272 & 0.367 \\ \hline Weather & **0.249** & **0.275** & 0.256 & 0.305 & 0.290 & 0.348 & 0.768 & 0.672 & 0.256 & 0.279 & 0.265 & 0.317 & 0.288 & 0.314 & 0.388 & 0.382 & 0.634 & 0.548 & 0.611 & 0.557 \\ \hline ETTm1 & **0.387** & **0.399** & 0.509 & 0.507 & 0.566 & 0.537 & 1.947 & 1.206 & 0.389 & 0.401 & 0.403 & 0.407 & 0.481 & 0.456 & 0.588 & 0.517 & 0.961 & 0.734 & 0.936 & 0.728 \\ \hline ETTm2 & **0.277** & **0.322** & 1.433 & 0.747 & 1.287 & 0.751 & 2.639 & 1.280 & 0.280 & 0.328 & 0.350 & 0.401 & 0.306 & 0.347 & 0.327 & 0.371 & 1.410 & 0.810 & 1.478 & 0.873 \\ \hline ETTh1 & **0.431** & **0.433** & 0.615 & 0.563 & 0.679 & 0.605 & 2.113 & 1.237 & 0.443 & 0.443 & 0.456 & 0.452 & 0.570 & 0.537 & 0.496 & 0.487 & 0.140 & 0.795 & 0.919 & 0.759 \\ \hline ETH2 & **0.374** & **0.403** & 2.170 & 1.175 & 2.618 & 1.308 & 4.382 & 2.008 & 0.381 & 0.404 & 0.559 & 0.515 & 0.526 & 0.516 & 0.450 & 0.459 & 4.431 & 1.729 & 4.492 & 1.691 \\ \hline Exchange & **0.354** & **0.400** & 0.756 & 0.645 & 0.786 & 0.674 & 1.681 & 0.197 & **0.354** & **0.400** & **0.354** & 0.414 & 0.461 & 0.454 & 0.613 & 0.539 & 1.550 & 0.998 & 1.386 & 0.898 \\ \hline ILI & 2.113 & **0.877** & 3.417 & 1.214 & 4.861 & 1.507 & 5.300 & 1.657 & 2.065 & 0.882 & 2.616 & 1.090 & **2.077** & 0.914 & 0.306 & 1.161 & 5.137 & 1.544 & 4.784 & 1.471 \\ \hline \hline \end{tabular} \end{table} Table 2: Long-term forecasting task. Bold/underline indicates the best/second. Blue background marks the models explicitly utilizing inter-series dependencies; green marks series-independent neural models; yellow marks series-mixing transformer-based models. All the results are averaged from 4 different prediction lengths, that is \(\{24,36,48,60\}\) for ILI and \(\{96,192,336,720\}\) for the others. See Table 8 in Appendix A for the full results. 
Long-term forecasting results. Table 2 presents the forecasting results for the proposed SageFormer and other baseline models. The table shows that the proposed model consistently achieves state-of-the-art performance across all benchmarks and prediction lengths. Notably, SageFormer significantly outperforms other deep models on datasets with a large number of series, attaining a **7.4%** average MSE reduction (\(0.471\to 0.436\)) on Traffic and a **9.3%** average MSE reduction (\(0.193\to 0.175\)) on Electricity compared to previous state-of-the-art results. Our model exhibits substantial improvement on every dataset, particularly compared to models that explicitly utilize inter-series dependencies. This indicates that our proposed method effectively enhances the model's ability to capture relationships among multiple series. Framework generality. Furthermore, our model serves as a versatile extension for Transformer-based architectures. To validate this, we apply the SageFormer framework to three prominent Transformers and report the performance enhancement of each model in Table 3. Our method consistently improves the forecasting ability of different models, demonstrating that SageFormer is an effective, universally applicable framework. By leveraging graph structures, it can better utilize the interdependencies among various sequences, ultimately achieving superior predictive performance. ### Ablation Study The ablation studies were conducted to address two primary concerns: 1) the impact of graph aggregation and 2) the impact of series-aware global tokens. We designate SageFormer variants without specific components as shown in Table 4. First, the experiments validated the effectiveness of the graph structure in our time series prediction model. Removing the graph aggregation module from each encoder layer resulted in a substantial decline in prediction accuracy. On the Traffic dataset, the average decrease was 7.3%, and on the seven-series ETTh1 dataset, it was 2.8%, showing that graph structures enhance performance more in datasets with numerous series. 
Second, series-aware global tokens enhanced the model's prediction accuracy while reducing computational overhead. If all tokens (not just global tokens) participated in graph propagation calculations, the model's performance would decline by 6.3% and 1.6% on the Traffic and ETTh1 datasets, respectively. Lastly, we discovered that techniques like sparse constraints and directed graphs in graph construction were more effective for larger datasets (e.g., Traffic). In comparison, they had little impact on smaller datasets' prediction results. This finding suggests that applying sparse constraints can mitigate the impact of variable redundancy on the model while conserving computational resources. \begin{table} \begin{tabular}{c|c c|c c|c c|c c|c c|c c} \hline \hline Dataset & \multicolumn{2}{c}{Transformer} & \multicolumn{2}{c}{**+Ours**} & \multicolumn{2}{c}{Informer} & \multicolumn{2}{c}{**+Ours**} & \multicolumn{2}{c}{FEDformer} & \multicolumn{2}{c}{**+Ours**} \\ \cline{2-13} Metric & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE \\ \hline Traffic & 0.661 & 0.362 & **0.549** & **0.293** & 0.610 & 0.376 & **0.578** & **0.318** & 0.764 & 0.416 & **0.707** & **0.396** \\ Electricity & 0.272 & 0.367 & **0.202** & **0.290** & 0.214 & 0.327 & **0.207** & **0.305** & 0.311 & 0.397 & **0.223** & **0.319** \\ Weather & 0.611 & 0.557 & **0.290** & **0.348** & 0.309 & 0.360 & **0.285** & **0.323** & 0.634 & 0.548 & **0.269** & **0.304** \\ ETTh1 & 0.919 & 0.759 & **0.459** & **0.456** & 0.440 & 0.460 & **0.433** & **0.442** & 1.040 & 0.795 & **0.673** & **0.577** \\ \hline \hline \end{tabular} \end{table} Table 3: Performance promotion by applying the proposed framework to Transformer and its variants. We report all prediction lengths’ averaged MSE/MAE (same as Table 2). Full results (under all prediction lengths) are given in Table 7 in Appendix A. ### Effect of Hyper-parameters In this section, we examine the impact of four hyperparameters on our proposed SageFormer model: global token length, depth of graph aggregation, the number of nearest neighbors, and the depth of encoder layers. We conduct a sensitivity analysis on the Traffic dataset (Figure 3). For each of the four tasks, SageFormer consistently delivers stable performance, regardless of the selected value. **Global token length** (Figure 3(a)): The model's performance remains consistent across all prediction windows, irrespective of the value of M. To optimize computational efficiency, we set M=1. **Depth of graph aggregation** (Figure 3(b)): The model demonstrates robust performance with varying graph aggregation depths. To balance accuracy and efficiency, we set d=3. **Number of nearest neighbors** (Figure 3(c)): Larger k values generally yield better results, but performance declines when a fully connected graph is utilized. This suggests sequence redundancy in MTS forecasting tasks, so we select k=16. **SageFormer encoder layers** (Figure 3(d)): Increasing the number of encoding layers results in a higher parameter count and longer computation time. No significant reduction is observed after the model surpasses three layers, leading us to set the model's layers to 3. ### Synthetic Datasets Directed Cycle Graph Dataset. In this section, we investigate the adjacency matrices inferred by SageFormer using a synthetic dataset consisting of N=10 nodes. 
Each series value \(x_{i,t}\) is sampled from another series \((i-1\bmod N)\) with temporal lag \(\tau=10\), resulting in a directed cycle graph for the adjacency matrix. The dataset details are provided in Appendix A. Figure 4(a) represents the actual inter-series dependencies alongside our inferred results, effectively demonstrating that our method successfully recovers these dependencies. As Figure 4(b) indicates, our proposed series-aware framework outperforms the previous series-mixing and series-independent frameworks, achieving the lowest MAE and MSE test losses. Importantly, the series-independent framework's performance is notably poor in this context, with an MSE exceeding 1. This deficiency stems from its disregard for the significant inter-series dependencies inherent in this dataset, particularly given that each sub-series is nearly equivalent to white noise in this dataset. Low-rank Dataset. To assess the effectiveness of different models in handling sparse data, we designed multiple Low-rank MTS datasets with varying numbers of series. Inspired by the Discrete Sine Transformation, we generate arbitrary signals as the sum of distinct sinusoids combined with Gaussian noise. The same sinusoids are shared among different nodes, creating the low-rank property. The dataset details are provided in Appendix A. Figure 4(c) presents the prediction MAE results for datasets with varying numbers of series (\(N\)) using the series-mixing method and our approach. It can be observed that the prediction performance of the series-mixing method deteriorates rapidly as the number of series increases since it encodes all series information into the same token. In contrast, the MAE of our method does not increase with the growth in the number of series, indicating that our designed approach can effectively exploit the low-rank characteristics of the dataset. \begin{table} \begin{tabular}{l|c c c c|c|c c c c|c} \hline \hline Dataset & \multicolumn{5}{c|}{Traffic} & \multicolumn{5}{c}{ETTh1} \\ \cline{2-11} Prediction Lengths & 96 & 192 & 336 & 720 & AVG & 96 & 192 & 336 & 720 & AVG \\ \hline SageFormer & **0.408** & **0.421** & **0.438** & **0.477** & **0.436** & 0.377 & **0.423** & **0.459** & **0.465** & **0.431** \\ \hline - Graph Aggregation & 0.446 & 0.452 & 0.466 & 0.506 & 0.468 & **0.372** & 0.439 & 0.468 & 0.491 & 0.443 \\ - Global Tokens & 0.420 & 0.440 & 0.457 & 0.507 & 0.456 & 0.381 & 0.431 & 0.462 & 0.478 & 0.438 \\ - Sparse Graph & 0.416 & 0.441 & 0.445 & 0.484 & 0.447 & 0.377 & 0.423 & 0.459 & 0.467 & 0.432 \\ - Directed Graph & 0.415 & 0.442 & 0.449 & 0.494 & 0.450 & 0.379 & 0.424 & 0.461 & 0.468 & 0.433 \\ \hline \hline \end{tabular} \end{table} Table 4: Model architecture ablations (MSE metrics are reported). _-Graph Aggregation_: Elimination of graph aggregation in the encoder; _-Global Tokens_: Graph information propagation applied to all tokens without global tokens; _-Sparse Graph_: Removal of the k-nearest neighbor constraint in graph structure learning; _-Directed Graph_: Modification of graph structure learning from directed to undirected graphs. Figure 3: Evaluation on hyper-parameter impact. (a) MSE against the length of global tokens on the Traffic dataset. (b) MSE against the graph aggregation depth on the Traffic dataset. (c) MSE against the number of nearest neighbors on the Traffic dataset. (d) MSE against SageFormer encoder layers on the Traffic dataset. 
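For readers who want to reproduce experiments of this kind, a minimal sketch of the two synthetic datasets described above is given below. The exact generation procedure (noise levels, scaling, splits) is specified in Appendix A; the parameter values, noise amplitudes, and function names used here are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def directed_cycle_dataset(n_series=10, length=5000, lag=10, noise=0.1):
    """x[t, i] follows series (i-1 mod n_series) delayed by `lag` steps, plus noise."""
    x = noise * rng.normal(size=(length, n_series))
    x[:lag] = rng.normal(size=(lag, n_series))            # random initial segment
    parent = np.roll(np.arange(n_series), 1)              # parent[i] = (i-1) mod n_series
    for t in range(lag, length):
        x[t] += x[t - lag, parent]
    adj = np.roll(np.eye(n_series), 1, axis=1)            # ground truth: edge i -> (i+1) mod N
    return x, adj

def low_rank_dataset(n_series=32, length=5000, n_modes=5, noise=0.1):
    """All series are different mixtures of the same few sinusoids (low-rank structure)."""
    t = np.arange(length)
    basis = np.stack([np.sin(2 * np.pi * (k + 1) * t / 200.0) for k in range(n_modes)], axis=1)
    weights = rng.normal(size=(n_modes, n_series))
    return basis @ weights + noise * rng.normal(size=(length, n_series))

x_cycle, adj = directed_cycle_dataset()
x_lr = low_rank_dataset()
print(x_cycle.shape, adj.sum(), x_lr.shape)   # (5000, 10) 10.0 (5000, 32)
```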
### Computational Efficiency Analysis We compared the computational efficiency of our model, SageFormer, with other Transformer-based models (Table 5). Although SageFormer's complexity is theoretically quadratic in the historical series length \(T\), a large patch length \(P\) in practice brings its runtime close to linear complexity models. An additional \(O(C^{2})\) complexity is due to standard graph convolution operations, but techniques exist to reduce this to linear complexity [33; 34]. In the decoder part, the complexity of SageFormer is simplified to linear, owing to the streamlined design of the linear decoder head. We also evaluated running time and memory consumption on the Traffic dataset, which has the most variables. SageFormer balances running time and memory usage well, achieving a running time of \(0.31\pm 0.03\) seconds per batch and consuming \(12.42\) GB of memory. This result is slightly slower compared to the PatchTST [13] model but is faster than the Crossformer [11] model. These outcomes suggest that our proposed SageFormer model presents a competitive trade-off between efficiency and prediction accuracy. \begin{table} \begin{tabular}{c|c c c c} \hline \hline Methods & Encoder layer & Decoder layer & Time(s/batch) & Memory(GB) \\ \hline Transformer [20] & \(O(T^{2})\) & \(O\left(\tau(\tau+T)\right)\) & \(0.07\pm 0.01\) & \(2.39\) \\ Informer [7] & \(O(T\log T)\) & \(O\left(\tau(\tau+\log T)\right)\) & \(0.04\pm 0.04\) & \(2.31\) \\ FEDformer [9] & \(O(T)\) & \(O\left(\tau+T/2\right)\) & \(0.18\pm 0.03\) & \(3.13\) \\ PatchTST [13] & \(O(CT^{2}/P^{2})\) & \(O(\tau)\) & \(0.22\pm 0.04\) & \(11.44\) \\ Crossformer [11] & \(O(CT^{2}/P^{2})\) & \(O(C\tau(\tau+T)/P^{2})\) & \(0.82\pm 0.05\) & \(22.53\) \\ \hline SageFormer **(ours)** & \(O(CT^{2}/P^{2}+C^{2})\) & \(O(\tau)\) & \(0.31\pm 0.03\) & \(12.42\) \\ \hline \hline \end{tabular} \end{table} Table 5: Computational complexity per layer of Transformer-based models. \(T\) denotes the length of the historical series, \(\tau\) represents the length of the prediction window, \(C\) is the number of series, and \(P\) corresponds to the segment length of each patch. Figure 4: Evaluation on synthetic datasets. (a) The left side displays the heat map of the actual adjacency matrix, while the right side presents the inferred adjacency matrix by SageFormer, illustrating the effectiveness of our proposed method in learning the inherent graph structure; (b) Prediction results of three different methods on the Directed Cycle Graph dataset; (c) Prediction MAE results for low-rank datasets with varying numbers of series (\(N\)). We selected the Nonstationary Transformer for the series-mixing method, and for the series-independent method, we chose PatchTST as a representative. ## 5 Conclusion and Future Work This paper presented SageFormer, a novel approach for modeling inter-series dependencies in long-term Multivariate Time Series (MTS) forecasting tasks. By amalgamating graph neural networks (GNN) with Transformer structures, SageFormer can effectively capture diverse temporal patterns and harness dependencies among various series. Our model has demonstrated impressive versatility through extensive experimentation, delivering state-of-the-art performance on real-world and synthetic datasets. SageFormer thus presents a promising solution to overcome the limitations of series 
dependencies modeling in MTS forecasting tasks and exhibits potential for further advancements and applications in other domains involving inter-series dependencies. We also acknowledged the limitations of our work and briefly delineated potential avenues for future research. While SageFormer achieves exceptional performance in long-term MTS forecasting, the dependencies it captures do not strictly represent causality. As a result, some dependencies may prove unreliable in practical scenarios due to the non-stationary nature of time series. Our primary focus on enhancing long-term forecasting performance has led to some degree of overlooking the interpretability of the graph structure. Moving forward, our work's graph neural network component could be improved to learn causal relationships between variables and reduce its complexity. The framework proposed in this paper could also be applied to non-Transformer models in the future.
2301.02091
Fast-Scrambling and Operator Confinement Using an Auxiliary Qubit
We introduce a minimal model for realizing a fast-to-slow scrambling transition mediated by an auxiliary central qubit (c-qubit). The c-qubit is coupled to a spin-$1/2$ Ising model with local Ising interactions and tunable c-qubit-spin coupling. Each spin becomes next-nearest neighbor to all others through the c-qubit, which mediates effective all-to-all interactions. As the interaction with the c-spin increases, we find a surprising transition from super-ballistic scrambling and information growth to continuously restricted sub-ballistic entanglement and operator growth. This slow growth occurs on intermediate timescales that extend exponentially with increasing coupling and system size, indicative of logarithmic entanglement growth. We find that in the slow-scrambling regime, the c-qubit Ising interaction allows commuting operators to grow support on all sites rapidly, while operators orthogonal to the interaction become echoed out. This projects local operators to lie in a restricted subspace and prevents extensive operator entanglement growth. We provide exact dynamics of small systems working with non-equilibrium, effective infinite temperature states, and additionally contribute analytic early-time expansions that support the observed rapid scrambling to quantum Zeno-like crossover. Tracing out the central qubit provides a unique translation from the full, closed unitary dynamics to a simple open system construction consisting of a typical spin-chain with hidden qubit degree of freedom.
Joseph Szabo, Nandini Trivedi
2023-01-05T14:52:28Z
http://arxiv.org/abs/2301.02091v1
# Fast-Serambling and Operator Confinement Using an Auxiliary Qubit ###### Abstract We introduce a minimal model for realizing a fast-to-slow scrambling transition mediated by an auxiliary central qubit (c-qubit). The c-qubit is coupled to a spin-\(1/2\) Ising model with local Ising interactions and tunable c-qubit-spin coupling. Each spin becomes next-nearest neighbor to all others through the c-qubit, which mediates effective all-to-all interactions. As the interaction with the c-spin increases, we find a surprising transition from super-ballistic scrambling and information growth to continuously restricted sub-ballistic entanglement and operator growth. This slow growth occurs on intermediate timescales that extend exponentially with increasing coupling and system size, indicative of logarithmic entanglement growth. We find that in the slow-scrambling regime, the c-qubit Ising interaction allows commuting operators to grow support on all sites rapidly, while operators orthogonal to the interaction become echoed out. This projects local operators to lie in a restricted subspace and prevents extensive operator entanglement growth. We provide exact dynamics of small systems working with non-equilibrium, effective infinite temperature states, and additionally contribute analytic early-time expansions that support the observed rapid scrambling to quantum Zeno-like crossover. Tracing out the central qubit provides a unique translation from the full, closed unitary dynamics to a simple open system construction consisting of a typical spin-chain with hidden qubit degree of freedom. ## I Introduction Operator scrambling and entanglement entropy spreading are unambiguous discriminators of purely quantum mechanical nonequilibrium dynamics; fascinating properties underlying quantum thermalization, dynamical phase transitions, and topological order [1; 2; 3; 4; 5; 6; 7]. In the Heisenberg picture, quantum operator scrambling details how initially localized operators propagate over spatiotemporal degrees of freedom due to noncommutative many-body interactions. In the complementary Schrodinger picture, entanglement entropy captures growing information complexity: from initially classical states to those with nontrivial entangled structure. Quantum information dynamics bridge both theoretical and experimental communities as primary measures for quantum complexity and expressivity [8; 9; 10; 11; 12]. These concepts combined with quantum simulation/circuit devices have coalesced into many enriching, recent experiments [13; 14; 15; 10]. The accelerating pace of results and drive to continually advance the corresponding theory extend these successes to further research at the intersection of quantum chaos, thermalization, and computability, extending from qubits to black-holes and quantum gravity [16; 17; 18; 19; 20; 21; 22; 23; 24]. The primary research thrusts among the quantum information dynamics community fall along the lines of uncovering the minimal mechanisms behind myriad information dynamical phases and understanding the fate of quantum to classical thermalization. In studying generalized quantum information dynamics, there are typically two disparate perspectives: closed and open quantum systems. Closed quantum systems exhibit rich scrambling physics ranging from frozen [25] to fast [18] dynamics, with the typical questions relating to how well-preserved is such physics under driving and dissipation contributions from an external environment [26; 27; 28; 29]. 
The environment is oftentimes reduced to a memoryless, effective Markovian description, which hinges on assumptions including weak-coupling and a separation in the timescales associated with system and environment [30; 31]. Though solving for the exact dynamics for a full complex environment is beyond the capabilities of current devices, taking into account the structure and interaction with the environment poses interesting research questions: what is the fate of entangled information within the system, how can a structured environments drive effective interactions and information dynamics, and how does the environment serve as a probe in an information theoretic/entropic capacity? A simple avenue for exploring the impact of a structured quantum environment is by considering composite, unitary models. Focusing on a particular _subsystem_ of a full closed quantum system and tracing over the additional degrees of freedom (DoFs), then termed the _environment_, captures the subsystem's effective dynamics/interactions. This is a popular focus of study as it provides an open quantum system perspective for the subsystem and allows full consideration of the environment's structure and interaction topology. This construction allows us to specifically evaluate how variable structured environments impact the overall quantum information dynamical phase as expressed by the underlying subsystem. Previous work considered the validity of Markovian assumptions provided variable system-environment coupling, and here we are looking to add an information scrambling perspective. Considering a system-environment construction in this manner directly applies to those codes/models investigating the information physics of auxiliary bits or those systems with inherent auxiliary DoFs such as mechanical or optical modes [32; 33; 34; 35]. Studying composite systems in this manner provides an interesting framework extending current quantum infor mation dynamic research. Significant recent results focus on the range of interactions, the speed and nature of information propagation, the role of inherent symmetries, and the effect of emergent symmetries in the cases of many-body localization, Floquet periodic driving, etc. The same phenomena can be similarly cast as an environment mediated effect. The tunability of the system-environment network topology and the inherent environment structure and interaction symmetries then allows for systematic investigation into the particular contribution on the overall dynamics. In this paper we explore these aforementioned questions by considering the simplest environment extension; an auxiliary central qubit (c-qubit) coupled to 1-d chain system of interacting spin-1/2 (qubit) objects. Tracing out the c-qubit and considering the dynamics of the 1-d system provides a translation from a full, unitary model, to an effective long-range, non-Hermitian spin chain with a hidden qubit degree of freedom. This provides a single long-range quantum channel for transmitting information but at the same time imposes a shared two-fold DoF across all spins. Though only a small addition to well-understood nearest-neighbor qubit model, we observe an abundance of exciting repercussions. 
The system-environment coupling expresses various regimes: in the weakly coupled regime, the c-qubit provides little feedback and acts as a free channel for information to pass unimpeded; while when strongly coupled to the low dimensional qubit environment, the c-qubit acts as a strong drive and imposes an effective hidden symmetry on the underlying spin system and generates _disorder-free localization_. We liken the physics observed here to that seen in systems undergoing quantum measurement or strong Floquet driving. Considering the c-qubit as a hidden degree of freedom provides unique insight into how the quantum scrambling dynamics of the underlying spin chain maps to an extended unitary model provided one additional qubit. Central qubit or a higher dimensional qudit/cavity/register are popular theoretical and experimental tools for providing non-invasive many-body measurements [36; 37; 38; 39], evaluating Hermitian and non-Hermitian response [40; 38], generating effective interactions [41], and studying the fundamentals of decoherence and information transport [42; 43]. Here we particularly focus on the dual effect of a tunable central qubit by investigating the operator and entanglement growth in a nonintegrable, ring-star Ising model. The model includes homogeneous spin-spin interactions in a mixed magnetic field. We find an extremely surprising fast-to-slow quantum information spreading transition that occurs due to the nonlocal and coherent nature of the c-qubit. We summarize this result in Fig.1(d), where in the weakly coupled regime (regime I), the central qubit mediates rapid scrambling with a timescale that decreases with system size (green, upper curve in). In the strong coupling regime (regime II), the scrambling time increases exponentially with system size (red, lower curve). The mechanism behind this transition is the interplay between the noncommuting, extensive c-qubit Ising interactions and transverse field \(h_{c}\). The metric here presented for scrambling is \(e^{S_{N}(t)}/2^{L+1}\), which provides a measure of the span of the quantum wavefunction throughout the full Hilbert space. As we detail in what follows, in the strong coupling regime the central qubit rapidly saturates its entanglement with the surrounding spin-chain environment and becomes strongly driven by this extensive interaction. This strong interaction rapidly aliases operators orthogonal to the central qubit Ising interaction on the central qubit and even more surprisingly within the spin chain. The long lifetime of states and operators that commute with the central Ising interaction leads to slow multi-particle entanglement growth and operator complexity. Our work agrees with previous research that finds an extensively scaling, nonlocal interaction leads to rapid scrambling, where the rate increases with system size [44; 45; 46]. At the same time we find a surprising limit where the purely quantum nature of the c-qubit imparts a coherent effect that slows operator decoherence/entanglement and subsequent spreading. This phenomena mirrors what is seen in strongly driven Floquet systems, where periodic driving can impart an effective symmetry in all eigenstates and leads to prethermalization and correspondingly slow entanglement spreading. We liken the projective action of the central qubit in this time independent Hamiltonian to the quantum Zeno effect where quantum measurement leads to a ballistic to sub-ballistic entanglement growth transition. 
Here the c-qubit imparts a highly nonlocal effect on operator projection in contrast to a local purification/disorder network that redefines local spreading dynamics. We illuminate this c-qubit physics by examining the growth of out-of-time-order correlators (OTOCs) and the von Neumann entanglement entropy for sufficiently high-energy initial product states. ## II Scrambling metrics Many recent works have made significant progress on establishing the family of scrambling dynamics that occur in various lattice models and geometric random circuit designs, as characterized by the growth of OTOCs. The OTOC generically given as \[C_{VW}=\langle[\hat{W}(j,t),\hat{V}(i,0)]^{\dagger}[\hat{W}(j,t),\hat{V}(i,0) ]\rangle, \tag{1}\] examines how an initially prepared unitary operator \(\hat{V}\) on site-\(i\) commutes after Heisenberg evolution with operator \(\hat{W}\) after time \(t\) (here assumed a local operator on site-\(j\)). The operator spreading picture is unique to quantum systems, where in working with pure states, no information is truly lost but transforms into many-body degrees of freedom that become increasingly inaccessible provided control over an initial localized region. Studying OTOCs and the timescales associated with scrambling dynamics provides a conjugate perspective as compared to entanglement entropy measures and transitions. Where OTOCs and specifically infinite temperature OTOCs examine the light cone established by Heisenberg evolution and depend more strongly on the commutivity graph, entanglement entropy examines how the wavefunction over a bipartition of Hilbert space spreads throughout. Here we specifically focus on the von Neumann bipartite entanglement entropy given as \[S_{vN}=-\sum_{k}\lambda_{k}\log(\lambda_{k}), \tag{2}\] where \(\lambda_{k}\) are eigenvalues of the reduced density matrix \(\rho_{A|B}\) (RDM) obtained by integrating out subsystem \(\mathcal{A}\) or \(\mathcal{B}\) with corresponding Hilbert spaces \(\mathcal{H}_{A},\mathcal{H}_{B}\). OTOCs and entanglement identify similar physics and previous colloquial conceptions of the two established quantum scrambling as a unifying framework behind them; where, scrambling represents the time for an OTOC between arbitrary sites to become \(O(1)\) and entanglement entropy to become \(O(L)\). In the case of OTOCs, this limit is not rigorous enough and only provides a best-case scenario for operators traversing the system rather than providing a timescale for nontrivial operator strings to span the system [47] (extensive operator entanglement). Rigorous relationships between OTOCs and Renyi-2 entropy have been established [48; 49; 50] and special cases have been studied in particular optical Hamiltonians [51; 24; 52]. Unitary scrambling physics generally falls into two categories: systems that thermalize rapidly and those that fail to do so. The former are known as fast-scramblers; ergodic systems typically with variable all-to-all range interactions that spread quantum information throughout the full Hilbert space in \(t_{sc}\sim\log(N)\). Models such as Sachdev-Ye-Kitaev (SYK) and non-integrable infinite range Ising and XY models are known to exhibit fast-scrambling physics [53; 54; 45; 46; 55]. Systems that fail to thermalize with \(t_{sc}\sim e^{L}\) are slow-scramblers, non-ETH obeying systems and candidates for highly coherent quantum information storage. 
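For the small systems treated later, both diagnostics can be evaluated directly with dense linear algebra. The following is a minimal numpy/scipy sketch (the helper names are ours; it is not the optimized QuSpin/Krylov machinery used for the larger sizes quoted below): Eq. (1) at infinite temperature via the normalized trace, and Eq. (2) from the Schmidt spectrum of a pure state.

```python
import numpy as np
from scipy.linalg import expm

# Single-qubit Paulis and identity.
X = np.array([[0., 1.], [1., 0.]], dtype=complex)
Y = np.array([[0., -1j], [1j, 0.]], dtype=complex)
Z = np.array([[1., 0.], [0., -1.]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def embed(op, site, n):
    """Kronecker-embed a single-site operator `op` at `site` in an n-spin register."""
    out = np.array([[1.0 + 0j]])
    for j in range(n):
        out = np.kron(out, op if j == site else I2)
    return out

def otoc(H, V, W, t):
    """Infinite-temperature OTOC of Eq. (1): <[W(t),V]^dag [W(t),V]>, with <.> = Tr[.]/dim."""
    U = expm(-1j * t * H)
    Wt = U.conj().T @ W @ U          # Heisenberg-evolved W(t)
    comm = Wt @ V - V @ Wt
    return float(np.real(np.trace(comm.conj().T @ comm)) / H.shape[0])

def svn(psi, n_a, n):
    """Von Neumann entropy of Eq. (2) for the bipartition (first n_a spins | rest)."""
    lam = np.linalg.svd(psi.reshape(2**n_a, 2**(n - n_a)), compute_uv=False)**2
    lam = lam[lam > 1e-14]
    return float(-np.sum(lam * np.log(lam)))
```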
There are multiple vectors through which non-ETH physics occurs: integrability, disorder-free localization [56; 57], quantum scarring [58; 59; 60; 61], and/or higher order exact or proximate conservation laws [62; 63; 64]. The origins of much of this work stem from the dramatic quantum correlations observed in quantum simulation experiments. Typical models accessible to simulation and experiment are semiclassical in nature with infinite or long-range interactions. These models exhibit the characteristic unitary scrambling features we detailed previously, yet continue to enrich the discussion with new puzzling results. The Lipkin-Meshkov-Glick (LMG) model and the Dicke model exhibit strict conservation of the total spin moment \(\hat{S}^{2}\) such that the effective number of degrees of freedom is \(\mathcal{O}(L)\), compared to \(\mathcal{O}(2^{L})\)[65; 66; 24]. These systems have been observed to spread information rapidly, while the complexity saturation value remains low. This is in stark contrast to fast scrambling models like SYK, where infinite-range connectivity allows for rapid and complex quantum information scrambling. One immediately puzzling question is: how do long-range interactions, tending toward generating semiclassical behavior, compete with local chaotic quantum dynamics to allow a fast-to-slow scrambling transition? A complete understanding of quantum information physics hinges not only on understanding the unitary dynamical contribution to experimental results, but also on understanding non-Hermitian processes. These are inherent to quantum simulation platforms and represent an exotic next frontier for theory and experiment as we move toward fully expressive quantum circuits and computation. More generalized quantum dynamical behavior has been explored in recent studies consisting of non-Hermitian operations: composite system-environments undergoing quantum measurement [67; 68; 69; 70; 71; 72], light-matter interactions [24], and dissipative and driven systems [73; 74; 75; 76]. The most extraordinary findings reveal that these non-unitary dynamics generate effective inter-system interactions and impose effective static long-lived symmetries: Floquet periodically driven systems are akin to various unitary scrambling phases. 

## III Model 

We consider the Hamiltonian for the c-qubit or ring-star Ising model: \[H=\sum_{i=0}^{L-1}\left(\lambda\sigma_{i}^{z}\sigma_{c}^{z}-J\sigma_{i}^{z}\sigma_{i+1}^{z}+h\sigma_{i}^{x}+g\sigma_{i}^{z}\right)+h_{c}\sigma_{c}^{x}+g_{c}\sigma_{c}^{z}, \tag{3}\] where \(\lambda\) represents the uniform spin-c-qubit interaction, and \(J,h,g\) define the much-studied nonintegrable mixed-field Ising chain (\(h\) transverse, \(g\) longitudinal). Here we take \(J=1.0,h=h_{c}=1.05,g=g_{c}=0.45\) unless otherwise noted. This is a well characterized nonintegrable point for polarized state evolution and operator dynamics, allowing us to benchmark the impact of the c-qubit [77; 46]. Previous works examined operator dynamics and entanglement growth in random unitary circuits (RUCs) applied in a star-graph network. RUCs on the star-graph generated OTOC dynamics that saturated at \(t\propto\log(L)\)[78; 47]. In contrast, an interesting section of [78] examines dynamics of time-independent, star-Ising Hamiltonians which consist of Ising interactions and a non-commuting field \(h_{c}\) strictly on the central qubit. 
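As a concrete illustration, Eq. (3) can be assembled by Kronecker products for small rings. The sketch below reads \(h,h_{c}\) as transverse (\(x\)) and \(g,g_{c}\) as longitudinal (\(z\)) fields, with the central-qubit fields counted once; the size and coupling chosen are purely illustrative, and the helper names are ours.

```python
import numpy as np

X = np.array([[0., 1.], [1., 0.]], dtype=complex)
Z = np.array([[1., 0.], [0., -1.]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def pauli_string(ops, n):
    """Tensor product over an n-spin register; `ops` maps site index -> 2x2 matrix."""
    out = np.array([[1.0 + 0j]])
    for j in range(n):
        out = np.kron(out, ops.get(j, I2))
    return out

def ring_star_ising(L, lam, J=1.0, h=1.05, g=0.45, h_c=1.05, g_c=0.45):
    """Dense ring-star Ising Hamiltonian of Eq. (3); ring sites 0..L-1, central qubit at index L."""
    n, c = L + 1, L
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(L):
        H += lam * pauli_string({i: Z, c: Z}, n)            # ring / central-qubit Ising coupling
        H += -J * pauli_string({i: Z, (i + 1) % L: Z}, n)   # periodic mixed-field Ising ring
        H += h * pauli_string({i: X}, n) + g * pauli_string({i: Z}, n)
    H += h_c * pauli_string({c: X}, n) + g_c * pauli_string({c: Z}, n)
    return H

# Example: small ring suitable for dense diagonalization.
H = ring_star_ising(L=8, lam=0.6)
```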
In contrast to fast scrambling RUC networks, the time-independent c-qubit Hamiltonian dynamics generate novel operator confinement on the c-qubit with a coherent lifetime \(\tau\propto\frac{h^{2}}{\lambda L}\). This defines the time for significant operator weight to decohere into operator subspaces that no longer commute with the Ising interaction, subsequently allowing slow scrambling to all other sites. Given this contrast between the fast-scrambling RUCs and the slow Hamiltonian dynamics, we ask: can the c-qubit coupling mediate rapid scrambling in a truly time-independent nonintegrable model and, if so, when/how does this picture break down? 

## IV Analytic insights 

Before discussing the exact numerics on the full nonintegrable ring-star Ising model, we first motivate the reasons underlying a dynamical transition from fast-to-slow scrambling and the corresponding mechanism. We first summarize typical operator growth in local spin models and then review the Ising star graph dynamics. 

### Local Ising Model 

In the completely local Ising chain limit \(\lambda=0\), we can gain a heuristic understanding of the characteristic light cone spreading by performing an early-time expansion of the dynamical correlator \(\langle\sigma_{i}^{z}(t)\sigma_{j}^{z}\rangle\). Using the early-time expansion of the Heisenberg evolution we can write \(e^{i\hat{H}t}\sigma_{i}^{z}e^{-i\hat{H}t}\) using the Baker-Campbell-Hausdorff (BCH) expansion \[e^{i\hat{H}t}\sigma_{i}^{z}e^{-i\hat{H}t}=\sum_{m=0}^{\infty} \frac{(it)^{m}}{m!}[H,S_{i}^{z}]_{m}\] \[[A,B]_{m}=[A,[A,B]_{m-1}];\quad[A,B]_{0}=B\] \[[H,S_{i}^{z}]_{1}=it[hS_{i}^{x},S_{i}^{z}]=-ithS_{i}^{y}\] \[[H,S_{i}^{z}]_{2}=\frac{(it)^{2}}{2}[H,S_{i}^{y}]\] \[=\frac{-t^{2}}{2}(-\lambda S_{c}^{z}S_{i}^{x}+hS_{i}^{z}-gS_{i}^{x}-J(S_{i-1}^{z}S_{i}^{x}+S_{i+1}^{z}S_{i}^{x})). \tag{4}\] Before continuing this expansion to higher orders we observe a trend in operator growth and can approximate the weight as \[[S_{i}^{z}(t),S_{j}^{z}]\propto\frac{t^{|i-j|}}{|i-j|!}\mathcal{O}(1) \tag{5}\] Naively, we then expect the OTOC, or squared commutator, to generically follow \[C_{zz}(i-j,t)\sim\frac{t^{2|i-j|}}{(|i-j|!)^{2}}\mathcal{O}(1) \tag{6}\] This approximation is valid for times \(t\lesssim|i-j|/v_{B}\), where \(v_{B}\) is the characteristic butterfly velocity. Before this time, operator growth is suppressed with an exponent that grows with distance, which allows for optimized simulations using matrix product operator (MPO) dynamics by just keeping track of the operator wavefront [79]. Though many extensive theoretical works provide rigorous estimates on the form of operator growth for translationally invariant models, here we simply examine the asymptotic form. This exponentially growing operator weight with exponent like \(r=|i-j|\) leads to the development of a linear light cone with butterfly velocity exactly calculated as \(2eJ\) (\(e\) being Euler's number) for \(h/J>1\)[80]. Following this light cone in the nonintegrable Ising model, any localized operator spans the full operator Hilbert space \(\{\hat{X},\hat{Y},\hat{Z},\hat{I}\}\) within region \(r\). An interesting extension of the OTOC is the integrated OTOC (iOTOC), which is the integral over \(r\), and the bipartite OTOC [81, 50]. Both provide a novel characterization of the operator complexity within a region \(r\). 
This also provides an insightful way to understand the corresponding entanglement dynamics for pure-state wavefunctions at energy density \(k_{B}T\), since in accordance with ETH the states and operators should exhibit ergodic equilibration. In the nonintegrable Ising model, an operator saturates the reduced Hilbert space \(4^{|i-j|}\) after a time \(t_{sat}=\frac{|i-j|}{v_{B}}\), so simply integrating over Eq.6 provides an exponentially growing iOTOC for all \(C_{vw}\). This operator growth then guarantees ballistically growing entanglement entropy \(S_{vN}\sim\sum_{v,w}\log[\text{iOTOC}(v,w)]\). 

Figure 1: (a) Illustration of the ring-star model achieved in (b) optically dressed trapped-ion experiments and (c) through nonlocal unitary gates in a circuit realization. (d) Central qubit mediated dynamics: local interaction (black), c-qubit induced fast scrambling (green), and c-qubit inhibited scrambling (red). Information scrambling approximated as wavefunction spread in Hilbert space or \(\sim e^{S_{vN}(t)}\) to show dramatic prethermal-like approach to full thermalization. (e) Qualitative depiction of (d) or how a well defined initial state grows to fill state/operator Hilbert space with a varying rate depending on dynamical phase. (f) von Neumann entanglement entropy \(S_{vN}(t)\) following a quench from polarized state \(|+\,y\rangle\) for the ring-star Ising model under varying \(\lambda\). 

### Star-Ising model 

In the star limit, we set \(J=0\) and tune external fields \(h,g\) to allow more expansive operator evolution. First working with only \(\lambda\neq 0\), we then add complexity to arrive at the full, nonintegrable ring-star Ising model. For \(|\lambda|>0\), operator growth between leaves of the graph is trivial. The dynamics are exactly solvable as \([\sigma_{i}^{z},\hat{H}]=0\) for all sites \(i\). Similarly, the commutativity graph representing the Hamiltonian and how local operators propagate is completely disconnected, with vertices \(\sigma_{i}^{z},\sigma_{L}^{z}\) for all \(i\in L\) and no bonds in between. Operators evolve simply under the central Ising interaction and the two-time correlator behaves as: \[\langle S_{i}^{x}(t)S_{i}^{x}\rangle=\cos 2\lambda t. \tag{7}\] And, again exactly in this case, the OTOC goes as: \[C_{xx}(i,i,t)=2\sin^{2}(2\lambda t). \tag{8}\] Though this limit admits a trivial result, it allows us to understand the key coherent property of the c-qubit. We see that operators orthogonal to the Ising interaction \(\lambda\), initially prepared on the leaves, propagate to the c-qubit after a time \(t=\pi/\lambda\). Because all terms in the Hamiltonian commute with this interaction, \([\sigma_{i}^{z}\sigma_{0}^{z},\sigma_{j}^{z}\sigma_{0}^{z}]=0\), the action of operator development from leaves to c-qubit does not decohere into nontrivial, orthogonal operators \(\{\hat{X},\hat{Y}\}\). Due to this coherence, operators oscillate between the initial node and the c-qubit. If the operator was initially prepared on the c-qubit, it fluctuates onto all nodes, but instead of becoming a many-body operator, it is a collective superposition of \(L\) unique two-body operators \(\sum_{j}\sigma_{0}^{x,y}\sigma_{j}^{x}\in\mathcal{H}(4^{\otimes L})\). This superposition, though nonlocal on a timescale \(\tau=\mathcal{O}(1)\), has minimal operator entanglement and the complexity of operator strings is fixed to be a maximum of 2. 
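A quick numerical check of Eq. (7) (taking the operators in Eq. (3) to be Pauli matrices): in the \(\lambda\)-only star limit, every term in the Hamiltonian commutes with \(\sigma_{i}^{x}\) except the single bond \(\lambda\sigma_{i}^{z}\sigma_{c}^{z}\), so a two-spin (leaf plus central qubit) reduction reproduces the infinite-temperature correlator exactly. A minimal sketch:

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0., 1.], [1., 0.]], dtype=complex)
Z = np.array([[1., 0.], [0., -1.]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Only the lambda * sigma_i^z sigma_c^z bond fails to commute with sigma_i^x,
# so a two-spin reduction captures Eq. (7) in the lambda-only star limit.
lam = 0.7
H2 = lam * np.kron(Z, Z)
Xi = np.kron(X, I2)

for t in np.linspace(0.0, 3.0, 7):
    U = expm(-1j * t * H2)
    corr = np.trace(U.conj().T @ Xi @ U @ Xi).real / 4.0   # infinite-T two-time correlator
    print(f"t={t:4.2f}  <x(t)x>={corr:+.6f}  cos(2*lam*t)={np.cos(2*lam*t):+.6f}")
```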
In this scenario, similarly, minimal entanglement entropy develops as the number of unique operator states is of \(\mathcal{L}\) in a Hilbert space of \(4^{L}\). Quench experiments on such systems with nonzero homogeneous magnetic fields will only permit half-system entanglement to grow like \(\log L\) since we have effectively a large semiclassical spin system coupled to a single two-level qubit [82]. The simplest extension we can make is to now introduce a transverse field on the c-qubit spin, \(h_{c}\neq 0\). This was similarly analyzed in a disordered star-Ising system for small values of \(h/\lambda L\)[78]. For \(|h_{c}|>0\), the model retains the same integrability, as any \(\{z_{1},...z_{L}\}\) is an eigenstate of the system and will not evolve under unitary dynamics. We can then reduce the problem to solving for the evolution of the central qubit in a mixed \(x-z\) field where the effective longitudinal strength given by \(\sum_{i}\sigma_{i}^{z}\). Though fully solvable, we find illuminating operator dynamics in the infinite temperature limit. Solving exactly for \(\langle\sigma_{i}^{x}(t)\sigma_{i}^{x}\rangle\) as was first provided in [78], we observe how the coherent oscillation \(\langle\sigma_{i}^{x}(t)\sigma_{i}^{x}\rangle=\cos(2\lambda t)\) for \(h_{c}=0\), evolves for increasing \(h_{c}\) and leads to operator decoherence. Operator decoherence, in this sense, is that as auxiliary operators such as \(\sigma_{e}^{x}\to\sigma_{e}^{x},\sigma_{e}^{y}\), they then no longer commute with the c-qubit Ising interaction and operator weight then grows on sites \(j+i\). The superposition of these growing dynamical correlations on sites \(j+i\) similarly decreases the probability of finding \(\sigma_{i}^{x}\) on site\(-i\) after time \(t\), leading to a decay in the autocorrelation function on site \(i\). For \(\lambda L>>h\), it was shown numerically that early time operator dynamics go as \[\langle\sigma_{i}^{x}(t)\sigma_{i}^{x}\rangle\sim\cos(2\lambda t)e^{-\sqrt{ \hat{\pi}}\cdot\frac{h^{2}}{\lambda}t} \tag{9}\] using the memory matrix formalism [78]. Regardless of whether \(\lambda\) is a Gaussian random variable or homogeneous, we arrive at the same exponential dependence on system size. In the case of Gaussian random variables, this approximation was observed to be faster than the true decay rate. The key physics being that the \(L-\)site interactions with the c-qubit lead to an extensive coherence time, which we can evaluate explicitly by tracing over the set of eigenstates \(\mathbb{Z}\): \[\langle\sigma_{i}^{x}(t)\sigma_{i}^{x}\rangle=\frac{1}{2^{L}}\text{Tr}_{Z}[ \langle z_{c},...z_{L}|e^{i\hat{H}t}\sigma_{i}^{x}e^{-i\hat{H}t}\sigma_{i}^{x} |z_{c},...z_{L}\rangle] \tag{10}\] \[=\langle z_{c}|\otimes\langle\mathbb{Z}_{m}|e^{i\hat{H}t}\sigma_{i}^{x}e^{-i \hat{H}t}|\mathbb{Z}_{m}\rangle\otimes|z_{c}\rangle \tag{11}\] \[\hat{H}|\mathbb{Z}_{m}\rangle\otimes|z_{c}\rangle=e^{i(h_{c}\sigma_{e}^{x}+ \lambda\sum_{i>0}z_{i}\sigma_{i}^{x})t}|z_{c}\rangle \tag{12}\] \[=\mathcal{I}\cos(2\omega_{Z_{m}}t)+\frac{(h_{c}\sigma_{c}^{x}+\lambda Z_{m} \sigma_{c}^{x})}{\omega_{Z_{m}}}\sin(2\omega_{Z_{m}}t)|z_{0}\rangle. \tag{13}\] We use same simplified notation as [78], where \(\omega_{Z}=\sqrt{h^{2}+\lambda^{2}Z_{m}^{2}}\), with \(Z_{m}=\sum_{i>0}z_{i}\) and \(\mathbb{Z}=\mathbb{Z}[m]\) is just \(z-\)eigenstate with magnetization \(m\in[-L,L]\). 
As the set of states \(\{z_{1}...z_{L}\}\) commutes with the Hamiltonian, we reduce the Heisenberg evolution to that of a 2-level system and average over the respective density of states, which is simply a binomial distribution in the infinite temperature limit. Going back to the full evolution we then have \[\langle\sigma_{i}^{x}(t)\sigma_{i}^{x}\rangle=\frac{1}{2^{L}}\sum_{z_{c}}\sum_{|m|}^{L}\binom{L}{|m|}\langle z_{c}|\big[\mathcal{I}\cos(2\omega_{Z_{m}}t)+\frac{(h_{c}\sigma_{c}^{x}+\lambda Z_{m}\sigma_{c}^{x})}{\omega_{Z_{m}}}\sin(2\omega_{Z_{m}}t)\big]\times\] \[\qquad\qquad\qquad\qquad\qquad\qquad\big[\mathcal{I}\cos(2\omega_{Z_{m}^{-}}t)+\frac{(h_{c}\sigma_{c}^{x}+\lambda Z_{m}^{-}\sigma_{c}^{x})}{\omega_{Z_{m}^{-}}}\sin(2\omega_{Z_{m}^{-}}t)\big]|z_{c}\rangle \tag{14}\] \[=\frac{1}{2^{L-1}}\sum_{m}\binom{L}{|m|}\cos(2\omega_{Z_{m}}t)\cos(2\omega_{Z_{m}^{-}}t)+\frac{(h_{c}^{2}+\lambda^{2}Z_{m}Z_{m}^{-})}{\omega_{Z_{m}}\omega_{Z_{m}^{-}}}\sin(2\omega_{Z_{m}}t)\sin(2\omega_{Z_{m}^{-}}t). \tag{15}\] \(\sigma_{i}^{x}\) flips the single spin state \(z_{i}\) and leads to two unique frequencies \(\omega_{Z_{m}}\) and \(\omega_{Z_{m}^{-}}\), separated like \(\lambda\) for \(h=0\), with \(Z_{m}^{-}=\sum_{j\neq i>0}z_{j}-z_{i}\). \(h_{c}=0\) provides the exact, non-scrambling result \(\langle\sigma_{i}^{x}(t)\sigma_{i}^{x}\rangle=\cos(2\lambda t)\), and for \(\lambda=0\) we have \(\langle\sigma_{i}^{x}(t)\sigma_{i}^{x}\rangle=\cos^{2}(2h_{c}t)+\sin^{2}(2h_{c}t)=1\). Starting in the \(h_{c}=0\) limit and moving toward high transverse magnetic field \(h_{c}\gg\lambda L\), we numerically integrate Eq.15 and study the autocorrelation function on site-\(i\) (Fig.2). In the low field limit, we expect exponentially small modifications to pure cosine oscillations, as we have recreated from [78], and in the high field limit, \(\sigma_{i}^{x}\) should completely break down as \(\sigma_{c}^{x}\) decoheres and operator weight spreads to all sites rapidly. As \(\sigma_{c}^{x}\to\{\sigma_{c}^{y},\sigma_{c}^{z}\}\), operator weight can be distributed to all sites \(j\neq i\), and the rapid rotation of operators on the c-qubit aliases away coherent growth on site-\(i\). In Fig.2(a) we see that coherent oscillations are apparent for \(h/\lambda L\lesssim\mathcal{O}(1)\) out to \(t\lambda\sim 30\), with slow decay with increasing \(h\). Above \(\log[h/\lambda L]>0\), oscillations are barely visible and dynamics are dominated by exponential decay with no revival even out to times \(t\lambda\in[0,200]\). This is more clearly depicted in Fig.2(b), where we take the long-time average of the autocorrelation function: \[A(i,t_{0})=\frac{1}{t_{0}}\int_{0}^{t_{0}}\lvert\langle\sigma_{i}^{x}(t)\sigma_{i}^{x}\rangle\rvert\,dt. \tag{16}\] The autocorrelation function transitions sharply at \(h/\lambda L\sim\mathcal{O}(10^{-1})\), where \(\sigma_{i}^{x}\) no longer has significant weight on the initial site. For small \(h\), the autocorrelation function is nearly a pure \(\cos 2\lambda t\) with a decay rate that grows with \(h\). Near the transition, \(A(i,t_{0})\) sharply decays to \(\sim 1/L\) for \(h/\lambda L>\mathcal{O}(10^{-1})\). We provide a Fourier analysis of Fig.2(a) in (c), where the transition is more clearly resolved. The oscillatory part (\(\epsilon_{0}\)) decreases (elongates in time) above \(h/\lambda L=1\) with an exponential decay rate that peaks at the transition point. 
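These curves can be cross-checked without the sector bookkeeping of Eqs. (10)-(15) by brute-force evolution of a small star graph. The sketch below computes the infinite-temperature autocorrelator directly and a discrete approximation to the long-time average of Eq. (16); the system size, coupling, and field grid are illustrative only, and the function names are ours.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0., 1.], [1., 0.]], dtype=complex)
Z = np.array([[1., 0.], [0., -1.]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def embed(op, site, n):
    out = np.array([[1.0 + 0j]])
    for j in range(n):
        out = np.kron(out, op if j == site else I2)
    return out

def star_autocorr(L, lam, h_c, times, i=0):
    """Infinite-temperature <sigma_i^x(t) sigma_i^x> for the star-Ising model (J = h = g = 0 on the leaves)."""
    n = L + 1                                  # leaves 0..L-1, central qubit at index L
    H = sum(lam * embed(Z, j, n) @ embed(Z, L, n) for j in range(L))
    H = H + h_c * embed(X, L, n)
    Xi = embed(X, i, n)
    out = []
    for t in times:
        U = expm(-1j * t * H)
        out.append(np.trace(U.conj().T @ Xi @ U @ Xi).real / 2**n)
    return np.array(out)

times = np.linspace(0.0, 40.0, 200)
L, lam = 6, 1.0
for h_c in [0.05, 0.2, 0.6, 2.0]:              # sweep across the h_c / (lam * L) crossover
    c = star_autocorr(L, lam, h_c, times)
    A = np.abs(c).mean()                       # discrete stand-in for Eq. (16)
    print(f"h_c={h_c:5.2f}   A(i,t0) = {A:.3f}")
```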
Above the transition point, the dynamics are no longer captured by a decaying cosine function, but crossover to exponentially damped operator dynamics. The large magnetic field on the central site initially prevents operator weight from leaving site-\(i\) as shown by the elongating decay in (a). At this point the magnetic field is dominating the operator dynamics and rapidly rotates \(\sigma_{c}^{x}\) into eventual two-body operators that live in a superposition on all sites. In this regime, operator weight on the initial site saturates with a scaling like \(1/L\). Adding a mix of on-site frustrating magnetic fields is not sufficient to fully scramble information. Applying fields along the \(x,z\)-direction, local operators do not spread throughout full operator Hilbert space as the system can still be described as a collective \(S=L/2\)-spin interacting with a qubit. Similarly the steady-state entanglement entropy remains independent of \(\lambda\) and system size, while the growth rate is determined by \(\min[1/\lambda,1/h_{c}]\) and does not slow when when deep into the coherent regime \(\lambda L>h_{c}\) [see Supplemental for greater details]. ## V Numerical results After discussing the nearest neighbor mixed-Ising chain and the star Ising model, we now seek to understand the dynamical behavior in the full, ring-star system. Here we study the dynamics of the star-local model using exact diagonalization for systems up to \(L+1=13\) and Krylov subspace expansion techniques for evaluating the Schrodinger ODE for sizes up to \(L+1=22\). We employ periodic boundary conditions \(i=L=0\). In the previous section we revealed that a dynamical operator growth transition occurs as a function of \(h_{c}/\lambda\) on the Ising star graph that allows for rapid operator growth from the central qubit or coherently protected operator dynamics on the central qubit. When the operator dynamics are coherently oscillating on the central site, this leads to slow two-time correlator growth. Here we investigate the fate of this transition, its effect on the secondary, local Ising channel for quantum scrambling, and whether fast-scrambling is achievable in simple local-nonlocal construction. We know that the mixed-field Ising model is nonintegrable and capable of scrambling information throughout the full operator Hilbert space, but does a simple nonlocal qubit rapidly enhance this process? Firstly, we calculate the average adjacency level ratio \(\langle\tilde{r}\rangle\) of the full Hamiltonian Eq. 3 [see Appendix Fig.S4]. We solve the full spectrum exactly considering the parity conserving and \(k=0\) sector of the Ising chain (\(L=15\)) and confirm that for the ring-star model and nonintegrable Ising model that all points in phase space we consider indeed follow GOE random matrix statistics with \(\langle\bar{r}\rangle\approx 0.53\). Though the level spacing provides a first check of nonintegrable dynamics, it is not sufficient in capturing strongly coherent effects inherent to fractionalized regions of the Hilbert space or rare states that exhibit confinement or slow growth [83]. ### Entanglement Growth The first test as to the general information scrambling capacity of this model is to understand the entanglement entropy dynamics. We numerically investigate entanglement growth \(S_{vN}(t)\) as a function of c-qubit coupling. 
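Before turning to the entanglement data, we note that the adjacent-gap-ratio diagnostic quoted above reduces to a few lines once a spectrum from a single symmetry sector is in hand. A minimal sketch (our naming; the symmetry-sector bookkeeping is left to the user, and the commented example reuses the dense builder sketched after Eq. (3)):

```python
import numpy as np

def gap_ratio(energies):
    """Mean adjacent-gap ratio <r> = <min(s_n, s_{n+1}) / max(s_n, s_{n+1})>.
    `energies` should be the sorted spectrum of a single symmetry sector;
    unresolved symmetries (e.g. ring translation/reflection) bias <r> toward Poisson."""
    e = np.sort(np.asarray(energies))
    s = np.diff(e)
    s = s[s > 1e-12]                     # drop (near-)degeneracies
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return float(np.mean(r))

# Reference values: <r> ~ 0.5307 (GOE) vs ~ 0.386 (Poisson).
# Example using the dense builder sketched after Eq. (3):
# evals = np.linalg.eigvalsh(ring_star_ising(L=9, lam=0.6))
# print(gap_ratio(evals))
```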
To understand the infinite temperature information dynamics of the system, we work with a product state with energy density equivalent to that of an infinite temperature state \(\langle H_{i}\rangle=0\). As the system exhibits GOE random matrix statistics for all parameters of the Hamiltonian tuned here, we expect that any such effective infinite temperature pure state will indeed obey ETH. The state we work with here is the polarized \(|+y\rangle\) spin-state, which has been used to exemplify high-energy quench dynamics of the mixed-field Ising chain previously [46]. In the limit \(\lambda=0\), we have the local, nonintegrable Ising model which exhibits ballistic entanglement growth for times up to \(L/v_{k}\), where \(v_{k}\) is the maximal dispersion associated with quasiparticle momentum \(k\). This timescale is upper-bounded by \(L/(2J)\) which provides the characteristic timescale of nearest-neighbor interactions multiplied by the length of the system. The factor of two comes from periodic boundary conditions and simply captures the longest path between spins. In the \(\lambda=0\) curve in Fig.3(c) we see that for increasing system size, entanglement saturation occurs on a timescale that scales linearly with \(L\) and the maximal quasiparticle velocity remains independent of system size. Entanglement saturates like \(L\log(2)\) as the effective infinite temperature state explores the full Hilbert space of the system. With the structure of correlations reaching the scale of the Hilbert space \(O(2^{L})\) we then expect entanglement to be similar to the log of full operator space complexity. As we slowly increase the nonlocal coupling to the central c-qubit, we expect that nonintegrability is maintained and now the shortest path between sites is next-nearest neighbor, mediated by the central qubit. The timescale for the c-qubit mediated entanglement spread is \(2/\lambda\), where Figure 3: _Entanglement spreading in the ring-star model:_ (a) \(S_{vN}(t)\) as a function of \(\lambda\), (b) plotted on a semilog axis in time (\(tJ\)), and (c) \(\lambda\) reparameterized to \(\lambda/\sqrt{L}\), system size \(L\). (d) Polynomial fit of the intermediate time entanglement growth behavior \(S_{vN}\propto t^{\alpha}\). System size \(L+1=22\) and \(J,h,g=[1.0,1.05,0.45]\). Figure 2: _Autocorrelation_\(\langle\sigma_{i}^{x}(t)\sigma_{i}^{x}\rangle\): (a) Eq.15 vs. time (\(t\lambda\)) as a function of \(h,L=40,\lambda=1.0\). (b) Long-time average of (a) \(A(i,t_{0}=200)\) as a function of \(h\) and system size \(L\). (c) Curve fit of smoothed results in (a) consisting of coherent oscillating component with frequency \(\epsilon_{0}\) and exponential decay parameter \(\epsilon_{1}\). \(\epsilon_{0}\) shown in the top, solid color pallette remains equal to \(2\lambda\) roughly until reaching the critical point \(\lambda=h/L\). Vertical lines in (c) correspond to critical point \(h_{c}=L\lambda=[0.5,1.0,1.5,2.0]\). the factor of 2 comes from the second order interaction with the central c-qubit that allows correlations to develop between sites \(i,j\neq 0\). In the extremely weak regime \(\lambda<<J,h,g\), the c-qubit qubit can similarly be thought of as a cavity, where the finite size Hilbert space can be disregarded. Including the c-qubit modifies the early-time entanglement growth, as operators grow ballistically due to local transport and super-ballistically with an additional nearest-neighbor c-qubit mediated contribution. 
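The quench protocol just described can be reproduced at small sizes with dense evolution. The following is a minimal sketch (illustrative size and coupling, not the Krylov implementation used for \(L+1=22\)); it prepares the polarized \(|+y\rangle\) product state, evolves under Eq. (3), and records the half-register entanglement entropy.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0., 1.], [1., 0.]], dtype=complex)
Z = np.array([[1., 0.], [0., -1.]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def pauli_string(ops, n):
    out = np.array([[1.0 + 0j]])
    for j in range(n):
        out = np.kron(out, ops.get(j, I2))
    return out

def ring_star_ising(L, lam, J=1.0, h=1.05, g=0.45, h_c=1.05, g_c=0.45):
    n, c = L + 1, L
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(L):
        H += lam * pauli_string({i: Z, c: Z}, n)
        H += -J * pauli_string({i: Z, (i + 1) % L: Z}, n)
        H += h * pauli_string({i: X}, n) + g * pauli_string({i: Z}, n)
    H += h_c * pauli_string({c: X}, n) + g_c * pauli_string({c: Z}, n)
    return H

def svn_half(psi, n):
    """Half-register von Neumann entropy (first n//2 spins vs the rest)."""
    lam_sq = np.linalg.svd(psi.reshape(2**(n // 2), -1), compute_uv=False)**2
    lam_sq = lam_sq[lam_sq > 1e-14]
    return float(-np.sum(lam_sq * np.log(lam_sq)))

L, coupling = 9, 0.6                           # illustrative size and c-qubit coupling
n = L + 1
plus_y = np.array([1.0, 1j]) / np.sqrt(2)      # single-spin |+y>
psi = plus_y.copy()
for _ in range(n - 1):
    psi = np.kron(psi, plus_y)                 # polarized |+y> product state, <H> = 0

H = ring_star_ising(L, coupling)
dt, steps = 0.05, 200
U = expm(-1j * dt * H)
S = []
for _ in range(steps):
    psi = U @ psi
    S.append(svn_half(psi, n))
# S traces out S_vN(t) on the grid t = dt, 2*dt, ..., as in Fig. 3.
```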
We see in Fig.3(a) that for \(L+1=20\) as \(\lambda\to h\) the linear coefficient of entanglement growth grows continuously and the saturation value is modified by the addition of a single qubit. In this enhanced rapid scrambling regime, if we reparameterize the central qubit coupling \(\lambda\rightarrow\lambda/\sqrt{L}\) such that the effective spin-spin interaction is not extensive \(J_{\text{eff}}\sim\lambda^{2}.0/L\), we find that the coefficient for linear growth is roughly independent of system size (Fig.3(b) \(\lambda=\frac{2.5}{\sqrt{L}}\)). Entanglement entropy growth rate that increases with system size is indicative of fast-scrambling behavior [46]. For \(\lambda>h\), we see a surprising entanglement transition. The early time growth defined as \(t<2\frac{1}{\lambda}\) exhibits rapid entanglement growth but becomes sub-ballistic at intermediate times \(2\frac{1}{\lambda}<t<t_{sat}\). In Fig.3(b) the width of this intermediate timescale grows exponentially with increasing \(\lambda\) as exhibited by the logarithmic scale in time, while \(S_{vN}(t\rightarrow\infty)\) remains largely unchanged. We perform a polynomial fit of the entanglement growth in Fig.3(d) over the region \(\log[2]<S_{vN}(t)<S_{vN\ast pat}\) and find that in the local and fast-scrambling regime, entanglement continues to grow ballistically with exponent \(\alpha\sim 1.0-1.1\). For \(\lambda>\lambda_{c}\), \(\alpha\) monotonically decreases with \(\lambda\) as expected if \(S_{vN}\propto\frac{1}{\lambda}\log[t]\). In order to determine the location of the phase transition and relevant scalings, we analyze the entanglement entropy at a fixed time \(t^{*}\). We choose a time such that the \(S_{vN}(t^{*},\lambda=0)\) is roughly \(\frac{1}{2}S_{vN}(t\rightarrow\infty)\). In Fig.4(a) we plot \(S_{vN}(t^{*})\) as a function of two magnetic fields \(h\) depicted by solid/dashed curves and as a function of system size \(L\). We extract the maxima of Fig.4(a) and perform a size and field scaling and find that the fast-to-slow scrambling transition value occurs like \(\lambda_{c}\propto L^{(-0.52\pm 0.05)}h_{c}^{(0.52\pm 0.02)}\), (b,c) respectively. In the high \(h_{c}\) limit, the transition becomes nearly independent of system size (\(h_{c}=2.5\) curve in Fig.4(c)) and in the low field limit, is dominated by system size: \(h_{c}=0.05-0.75\) curves in Fig.4(c). Greater details on identifying the entanglement transition are included in the Supplemental Information. This contrasts the dynamical transition observed in the star Ising model with a transition that occurs at \(\lambda_{c}=h_{c}/L\). The novel entanglement transition mediated by the c-qubit dynamics is extremely surprising in that not only does the nonlocal coupling provide a secondary channel for distributing entanglement, but in the strong coupling regime it inhibits growth of even the local Ising interactions with which it commutes. The problem is similarly interesting in that it is completely disorder-free, so the slow information growth in the system can be attributed to purely coherent effects. In order to shed more light on how this central coupling serves to rapidly/slowly scramble quantum information we examine the transverse and longitudinal OTOCs (\(C_{xx},C_{zz}\)). 
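Before examining the OTOCs, we note that the scaling fit behind Fig. 4(b,c) amounts to a two-parameter linear regression in log variables. A minimal sketch (the arrays are placeholders for the measured crossover points; `fit_crossover_scaling` is our name, not a routine from the paper):

```python
import numpy as np

def fit_crossover_scaling(lam_c, L, h_c):
    """Least-squares fit of lambda_c = A * L**gamma * h_c**kappa in log variables.
    All three arguments are 1-d arrays of measured crossover points (placeholders here)."""
    lam_c, L, h_c = (np.asarray(a, dtype=float) for a in (lam_c, L, h_c))
    design = np.column_stack([np.ones_like(lam_c), np.log(L), np.log(h_c)])
    coef, *_ = np.linalg.lstsq(design, np.log(lam_c), rcond=None)
    logA, gamma, kappa = coef
    return np.exp(logA), gamma, kappa

# Example usage with measured (lambda_c, L, h_c) triples extracted from curves like Fig. 4(a):
# A, gamma, kappa = fit_crossover_scaling(lam_c_values, L_values, h_c_values)
```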
### Operator Spreading Starting from an initial state selected from the Haar measure, as to approximate infinite temperature state, we examine how operators initially prepared on site \(i=0\) spread under the influence of Ising interactions, where site \(c\) represents the central qubit. When calculating the OTOC we simplify Eq.1, as the local \(\hat{X}\),\(\hat{Z}\) operators are Hermitian and initially commute on all sites at \(t=0\). We then explicitly evaluate \[C_{\text{VW}}=1-\langle\hat{W}(j,t)\hat{V}(i,0)\hat{W}(j,t)\hat{V}(i,0)\rangle \tag{17}\] In Fig.5(a), when considering only local interactions we see that operator weight develops according to a ballistic Figure 4: _Identifying entanglement dynamics crossover:_ (a) Entanglement entropy calculated at time \(t^{*}-3.5tJ\) plotted as a function of c-qubit coupling \(\lambda\) (\(S_{vN}(t^{*})/t^{*}-v_{b}\)) for three transverse magnetic fields and various system sizes (\(h_{c}\in[1.05,2.5];L\in[11-21,2]\). For nonzero \(\lambda\), entanglement growth rate varies as a function of \(h_{c}\),\(L\). Crossover point \(\lambda_{c}\) determined as maxima of the curves in (a) with \(\lambda_{c}(L)\) and \(\lambda_{c}(h_{c})\) plotted in (b,c), respectively. In the respective \(h_{c}\) dominant and \(L\) dominant regimes, the critical point find \(\lambda_{c}\propto L^{\gamma}h_{c}^{\kappa}\) with \(\gamma=-0.52\pm 0.02(1)\) and \(\kappa=0.52\pm 0.05(1)\). light cone with a bubble-like profile that saturates on all sites following the wavefront: \(C_{zz}(r,t)\propto\frac{t^{2\nu}}{t^{2}}\). With small, nonzero \(\lambda\) (b) the light cone remains apparent but now spreads on top of growing operator weight distributed by the central qubit. The central qubit super-ballistically distributes operator weight to all sites on a timescale like \(\frac{2}{\lambda}\). The growth timescale on the c-qubit is half that of the bulk spins, as the \(\sigma_{z}\) operator must decohere on the initial site and again on the central site and hence no longer commutes with the Ising interaction. For \(\lambda<<h_{c}\), \(1/\lambda\) sets the timescale for this process. In the strong coupling regime (c), a weak light cone remains but transfers a small fraction of the original operator weight. Operator spreading becomes increasingly restricted where the lightcone profile is only visible for few \(tJ\) and \(C_{z}z(j,t)\) grows increasingly slowly on sites far from the initialized operator. This behavior is puzzling; the highly nonlocal coupling c-qubit with increasing interaction leads to unintuitively slow local operator spreading. Here we can draw insight from the analytic results on the star-Ising model (Fig.2). The \(L\)-body interaction on the central qubit acts to coherently project operators into \(\sigma^{z}\), while rapidly aliasing orthogonal operators. Once \(\sigma^{z}\) grows on the central qubit, it develops a coherent lifetime with operator weight decaying like \(\sim e^{-1/\lambda}\). \(C_{zz}\) characterizes how operators no longer commute with \(\sigma^{z}\), and in the same vain as the star-Ising model, we expect operators on the central site to be strongly projected into the \(z\)-subspace and coherently oscillate like \(2\lambda\). As the central c-qubit is highly/fully entangled with the remaining chain, the coherent projection on the central site then similarly restricts how rapidly the many-body state of the \(L\)-spins decoheres from the \(z\)-subspace. 
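The space-time maps of Fig. 5 follow from Eq. (17) evaluated in a Haar-random pure state. A minimal dense sketch (our naming, small illustrative sizes, and recomputing the propagator at each time for simplicity):

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0., 1.], [1., 0.]], dtype=complex)
Z = np.array([[1., 0.], [0., -1.]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def embed(op, site, n):
    out = np.array([[1.0 + 0j]])
    for j in range(n):
        out = np.kron(out, op if j == site else I2)
    return out

def haar_state(n, rng):
    """Haar-random pure state as an infinite-temperature proxy."""
    v = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
    return v / np.linalg.norm(v)

def otoc_eq17(H, V, W, psi, t):
    """C_VW = 1 - Re<psi| W(t) V W(t) V |psi> for Hermitian, initially commuting V, W (Eq. 17)."""
    U = expm(-1j * t * H)
    Wt = U.conj().T @ W @ U
    return 1.0 - np.real(np.vdot(psi, Wt @ (V @ (Wt @ (V @ psi)))))

# Example: C_zz(i=0 -> j, t) across a small ring-star system, using the
# ring_star_ising builder sketched after Eq. (3):
# rng = np.random.default_rng(0); n = L + 1; psi = haar_state(n, rng)
# C = [[otoc_eq17(H, embed(Z, 0, n), embed(Z, j, n), psi, t) for j in range(n)] for t in times]
```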
We expect \(C_{zz}(t)\) to exhibit slow behavior as \(z\)-operators are strongly driven on all sites. We then examine \(C_{xx}(t)\), which captures the rapid growth of \(\hat{Z}\). In Fig.5(d) we see that \(C_{xx}(t)\) continues to grow like \(\mathcal{O}(1)\) across all sites, with strong, oscillatory behavior on the central qubit (\(i-j=c\)). \(C_{xx}(j,t)\) on all sites similarly oscillates at frequency \(2\lambda\) but shifted by half a period compared to \(C_{xx}(c,t)\). On the initial site \(i\), as \(\sigma_{i}^{z}\) decoheres under the transverse magnetic field it no longer commutes with itself nor the Ising interaction. This leads to a nonzero \(C_{zz}(t)\) on site-\(i\) and subsequent operator weight growing on nearest neighbors and the central qubit. Once operator weight grows on the c-qubit, the associated decoherence time of \(\sigma_{c}^{z}\) is exponential in \(\lambda\) and as weak operator weight leaks onto bulk spins, the same coherent oscillations and slow decay is imparted on the local spin-chain. We gain further insight by examining the individual curves that make up the density plot in Fig.5 as Fig.6. We see the polynomial growth associated with the light-cone spreading on top of the exponential c-qubit mediated growth (a-c). In the slow scrambling regime (Fig.6(d)), we find that for \(\lambda=3.0\) that \(C_{xx}(c,t)\) becomes \(\mathcal{O}(1)\) at \(t_{c}=\pi/2\lambda\) (dashed, red) and after time \(t_{\lambda}=\pi/\lambda\) the rate of change of \(C_{xx}(i-j,t)\) decreases dramatically (solid). Once \(\sigma^{z}\) exists on the central Figure 5: _Space-time OTOC spreading:_ (a-c) OTOC for longitudinal spin component \(C_{xx}\) for \(\lambda=0,0.6,2.6\), respectively. Initial linear light cone spreads on top of c-qubit mediated operator weight that spreads super-ballistically on all sites after an initial wait time that scales with \(min[1/h_{x},1/\lambda]\). For \((\lambda/\sqrt{L})/h\geq 1\), \(C_{zz}\) is suppressed on all sites beyond a weak initial light cone. (d) In contrast, for \(\lambda=2.6\), transverse OTOC \(C_{xx}\) rapidly fluctuates on the initial site-\(i\) and spreads to nearest neighbors \(i+1\) and the central site \(L\) on the order of \(t=1/\lambda\). The light cone profile is no longer visible and \(C_{xx}\) rapidly oscillates with frequency \(2\lambda\) for sites \(|i-j|>1\). System size \(L+1=14\) and \(J=1.0,h=1.05,g=0.45\). qubit, orthogonal operators on the c-qubit and bulk sites rapidly fluctuate and lead to an effective wait time, slowing the amount of operator growth under the action of the Ising-\(zz\) interaction. A complementary perspective for operators orthogonal to \(\hat{Z}\) is observed by \(C_{zz}\) (Fig.7. Looking at the OTOC in the slow scrambling regime, operator weight on the c-qubit and bulk sites grows like \(h^{2}t^{6}\), as captured by early-time expansion, up to time \(t_{\lambda}=\pi/2\lambda\) [see Supplemental for greater details]. For \(t>t_{\lambda}\), operators become essentially projected onto \(z\)-operator subspace due to the coherent lifetime of operators on the central qubit. Fitting the OTOC to \(at^{\beta}\) shows that operator weight decoheres on the c-qubit with exponent \(\beta_{c}\sim 0.45\), while on bulk sites it is nearly linear with \(\beta_{j}\sim 0.85\). On all sites, the coefficient of growth becomes exponentially suppressed in \(\lambda\) with \(\log[\alpha]\propto-\lambda\). 
This can be understood heuristically: once operator weight lies on the central site, it coherently oscillates between sites like \(2\lambda\) and leads to a waiting time during which little to no operator weight has decohered from the \(z\)-subspace as it oscillates between \(z-I\). The amount of operator weight that decoheres in-between these waiting periods is \(\int_{t_{\lambda}}^{t_{\lambda}}e^{-\lambda}\) so the rate of operator growth into this orthogonal subspace is independent of time and leads generically to \(C_{xx}(j!=c,t)\propto e^{-\lambda}t\). This slow growth for simple two-body body operators outside of the \(z-\)subspace continues to be exponentially suppressed for greater complexity many-body operators. The sublinear growth of OTOCs into the bulk of operator Hilbert space then generically provides \(\log[t]\) entanglement entropy growth, as we have observed for quantum quenches from the \(|+y\rangle\) state. ## VI Discussion Here we have shown how the collective interactions between many constituent spins and a central auxiliary qubit is both able to rapidly scramble information across all degrees of freedom and restrict state/operator growth from exploring the full Hilbert space. From an entanglement entropy picture, the central bit entangles rapidly with its surrounding environment, the state space of the qubit simply being \(q=2\). Treating it as a 2-level system in an effective bath, the extensive bath interactions and on-site magnetic field frustrate the central qubit and lead to a rapidly fluctuating spin-moment when \(N\lambda\sim h_{c}\). In the operator picture this leads to rapidly decohering operators that quickly lie in a superposition of states \(\{\tilde{X},\tilde{Y},\tilde{I},\tilde{Z}\}\). For \(\lambda>h_{c}\), \(\{\tilde{Z}\}\) operators are strongly driven to the qubit and therefore do not fully decohere while \(\{\tilde{X},\tilde{Y}\}\) become essentially echoed-out. This picture is more clearly developed when we again turn-off local interactions, \(J=0\). In this case only the central bit is able to mediate entanglement throughout the system. As a function of \(\lambda/h_{c}\) the half-chain entanglement entropy grows with a velocity like \(1/\lambda\) until \(\lambda\)\(h_{c}\) and surprisingly saturates with an entropy that is independent of system size. The entropy growth rate and saturation value do not change when crossing over into the strong-coupling regime. Though the half-system entropy value is identical, we know from an operator picture that scrambling occurs slowly on the leaves of the star graph, leaving the central qubit to scramble rapidly while other sites scramble slowly. From both the operator and entropy picture, we know that the central bit is highly entangled with the system and the saturation entropy remains fixed regardless of coupling; therefore once the central bit becomes highly entangled, it similarly cannot scramble quantum information throughout. The saturation in information capacity of the star-graph remains fixed as a function of \(\lambda\), so by tuning \(\lambda\) we tune whether entanglement is frozen in a highly entangled auxiliary bit or distributed globally. Understanding the information dynamics of the star-graph then informs the surprising behavior observed in the fully interacting star-local Ising system. 
Once the central qubit becomes highly entangled with it's environment, it similarly acts to project operators into the \(z-\)subspace, inhibiting the local, nonintegrable Ising interactions from fully scrambling the system and reaching full thermalization quickly. As the commutativity graph that translates operators from site-\(i\) to site-\(j\) requires operation of the Ising-\(zz\) interactions, which similarly commutes with the central spin coupling, local operator growth spreading is inhibited on the same decoherence timescale. The local channel is effectively turned off by the slow operator decoherence. If the local interactions were orthogonal to the central coupling, we expect different operator spreading results. Here we take a simple extension of our model and change the form of the auxiliary coupling to be Ising-\(xx\) which we term a compass, ring-star Ising model. Now the the auxiliary coupling projects the system into \(\hat{X}\) subspace, which no longer commutes with the local \(-zz\) interactions. This should allow many-body operators to propagate throughout but inhibit the growth of operator strings containing \(\hat{Y},\hat{Z}\). Translating the operator picture to the entropy picture, we are allowing higher order many-body operators to develop through local channels, so the intermediate time saturation value should be significantly larger and scale extensively with system size, the logarithmically slow growth of orthogonal operator strings should then extend to long times as operators can decohere to larger regions of the Hilbert space. In Fig. 8 we observe exactly this behavior. Noting that the magnitude of the orthogonal field on the c-qubit changes from \(h\to g=0.45\), the inhibited operator growth goes as \(\lambda_{x}/g^{2}\). The entanglement entropy (a) following a quench from \(|+y\rangle\) transitions from local spreading, fast-spreading, to confined for \(\lambda_{x}>0.5\). For times \(tJ<10\), the system exhibits ballistic operator growth regardless of coupling but reaches an intermediate saturation value \(S_{vN}\approx 3\) with slow growth extending out to late times. The OTOC agrees identically, where we have the same ballistic spreading lightcone with fully scrambled operators following the wavefront (b). With increasing coupling (c) we observe a small amount of operator weight that spreads super-ballistically across all sites, a well defined wavefront, and a suppression in the OTOC behind the wavefront. This becomes more dramatic deeper into the confined regime (d,e), where the wavefront becomes the only region where operators orthogonal to \(\hat{X}\) may be found. As the local Ising interaction is orthogonal to the operator subspace in which the c-qubit protects, the wavefront may freely propagate but operators behind the wavefront are aliased to predominantly span strings of \(\hat{X}\) and \(\hat{I}\) operators. ## VII Conclusion We have interrogated the quantum entanglement and operator dynamics in a central spin-bath design. The local Ising chain acts as a structured, thermalizing bath that when coupled to a nonlocal central qubit, is able to rapidly scramble quantum information across the system up until the information capacity of the c-qubit is quenched. When the central bit is rapidly quenched in the strong coupling regime, it leads to operator confinement in sectors of the Hilbert space spanned by \(\hat{I}\) and operators parallel to the central qubit coupling. 
Entanglement entropy grows slowly out to late times as observed in effective infinite temperature states. This model admits unique limits; where, for weak coupling it exhibits super-ballistic OTOC spreading and fast-scrambling as supported by previous works examining a similar star-graph structure, and a confined quantum Zeno-like limit in which the complexity of operator growth is inhibited by extensive, coherent interaction with the two level central mode. This confined regime mirrors a larger class of models which admit prethermal or localization physics: disorder driven many-body localization, projective or weak quantum measurement, floquet-driven prethermalization. Our star-local work is similar in that the c-qubit drives operator fluctuations on the leaves on a timescale like \(1/\lambda\) once \(\lambda\) is greater than noncommuting fields on the central site. In contrast, the qubit retains its infinite-range information sharing capacity compared to the external drive scenario in floquet physics. In the measurement theory sense, once the central qubit becomes nearly maximally entangled with the Ising spin chain bath, its as if the bath projects the central spin Figure 8: _Information spreading in the transverse ring-star model (\(\lambda\sum_{i}^{L-1}\sigma_{i}^{x}\sigma_{c}^{x},J=1.0\)): (a) Early-time entanglement entropy growth crosses over from ballistic to sub-ballistic with increasing \(\lambda\), plotted on a semi-log scale. Entropy depicts a prethermal phase with entanglement that saturates around \(t\), then exhibits characteristic MBL, logarithmic growth \(\propto\log(t)\) out to late times \(tJ=200\). Early-time entropy growth increases smoothly with \(\lambda\) until reaching a discontinuity at \(\lambda\sim 1\). (e, d, e, f) OTOC for transverse spin component for \(\lambda=0,0.84,2.0,3.0\), respectively. In contrast to the parallel ring-star Ising model, the light cone becomes increasingly visible as \(\lambda\) increases. (c) In the nonintegrable, purely local case, operator scrambling saturates following the light cone. As c-qubit coupling grows, operators orthogonal to \(\sigma^{x}\) rapidly fluctuate, so that following the light cone, \(\sigma^{x}\) operator weight dominates each site. (e) The light cone boundary eventually becomes the only region where operators are orthogonal to the c-qubit coupling and propagate under local Ising interactions. System size \(L=13\) and \(J=1.0,h=1.05,g=0.45\)._ which in turn freezes the information sharing dynamics with which the bath is entangled. And finally in the MBL case, instead of local integrals of motion that are nearly integrable with exponentially small overlap, here we have global operators orthogonal to the auxiliary coupling that have a logarithmically slow thermalization timescale. This work provides a novel mechanism for exploring a host of quantum information dynamics. Here we outline the existence of a dynamical transition that occurs under static Hamiltonian dynamics, and in future work it would be fruitful to investigate the nature of the transition and how similar physics can be observed in RUCs. Tracing out the central qubit to produce non-Hermitian physics may similarly illuminate how this model relates to strong periodic driving prethermalization. As the inverse problem, it would be interesting if exotic non-Hermitian physics has a hidden unitary quantum system analog, where multiple ancillary qubit degrees of freedom produce the observed non-Hermitian dynamics. 
Future work should also seek to understand how this transition persists with a larger central qudit, toward infinite-dimensional bosonic fields or multiple dissipative modes. It would be an interesting engineering application if uniform coupling to a bosonic mode or central qubit could protect against dephasing errors in the environmental qubit platform. 

## VIII Acknowledgements 

J.C.S and N.T. would like to thank Sumilan Banerjee, Chandrasekhar Ramanathan, Brian Skinner, Xiaozhou Feng, Shi Feng, and Sayantan Roy for useful discussions. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-FG02-07ER46423. Computations were done using the QuSpin python package [84] on the Unity cluster at the Ohio State University.
2306.06250
Strategic Apple Tasting
Algorithmic decision-making in high-stakes domains often involves assigning decisions to agents with incentives to strategically modify their input to the algorithm. In addition to dealing with incentives, in many domains of interest (e.g. lending and hiring) the decision-maker only observes feedback regarding their policy for rounds in which they assign a positive decision to the agent; this type of feedback is often referred to as apple tasting (or one-sided) feedback. We formalize this setting as an online learning problem with apple-tasting feedback where a principal makes decisions about a sequence of $T$ agents, each of which is represented by a context that may be strategically modified. Our goal is to achieve sublinear strategic regret, which compares the performance of the principal to that of the best fixed policy in hindsight, if the agents were truthful when revealing their contexts. Our main result is a learning algorithm which incurs $O (\sqrt{T})$ strategic regret when the sequence of agents is chosen stochastically. We also give an algorithm capable of handling adversarially-chosen agents, albeit at the cost of $O(T^{(d+1)/(d+2)})$ strategic regret (where $d$ is the dimension of the context). Our algorithms can be easily adapted to the setting where the principal receives bandit feedback -- this setting generalizes both the linear contextual bandit problem (by considering agents with incentives) and the strategic classification problem (by allowing for partial feedback).
Keegan Harris, Chara Podimata, Zhiwei Steven Wu
2023-06-09T20:46:31Z
http://arxiv.org/abs/2306.06250v2
# Strategic Apple Tasting ###### Abstract Algorithmic decision-making in high-stakes domains often involves assigning _decisions_ to agents with _incentives_ to strategically modify their input to the algorithm. In addition to dealing with incentives, in many domains of interest (e.g. lending and hiring) the decision-maker only observes feedback regarding their policy for rounds in which they assign a positive decision to the agent; this type of feedback is often referred to as _apple tasting_ (or _one-sided_) feedback. We formalize this setting as an online learning problem with apple-tasting feedback where a _principal_ makes decisions about a sequence of \(T\)_agents_, each of which is represented by a _context_ that may be strategically modified. Our goal is to achieve sublinear _strategic regret_, which compares the performance of the principal to that of the best fixed policy in hindsight, _if the agents were truthful when revealing their contexts_. Our main result is a learning algorithm which incurs \(\tilde{\mathcal{O}}(\sqrt{T})\) strategic regret when the sequence of agents is chosen _stochastically_. We also give an algorithm capable of handling _adversarially-chosen_ agents, albeit at the cost of \(\tilde{\mathcal{O}}(T^{(d+1)/(d+2)})\) strategic regret (where \(d\) is the dimension of the context). Our algorithms can be easily adapted to the setting where the principal receives _bandit_ feedback--this setting generalizes both the linear contextual bandit problem (by considering agents with incentives) and the strategic classification problem (by allowing for partial feedback). ## 1 Introduction Algorithmic systems have recently been used to aid in or automate decision-making in high-stakes domains (including lending and hiring) in order to, e.g., improve efficiency or reduce human bias [11, 1]. When subjugated to algorithmic decision-making in high-stakes settings, individuals have an incentive to _strategically_ modify their observable attributes to appear more qualified. Such behavior is often observed in practice. For example, credit scores are often used to predict the likelihood an individual will pay back a loan on time if given one. Online articles with titles like _"9 Ways to Build and Improve Your Credit Fast"_ are ubiquitous and offer advice such as "pay credit card balances strategically" in order to improve one's credit score with minimal effort [38]. In hiring, common advice ranges from curating a list of keywords to add to one's resume, to using white font in order to "trick" automated resume scanning software [18, 2]. If left unaccounted for, such strategic manipulations could result in individuals being awarded opportunities for which they are not qualified for, possibly at the expense of more deserving candidates. As a result, it is critical to keep individuals' incentives in mind when designing algorithms for learning and decision-making in high-stakes settings. In addition to dealing with incentives, another challenge of designing learning algorithms for high-stakes settings is the possible _selection bias_ introduced by the way decisions are made. In particular, decision-makers often only have access to feedback about the deployed policy from individuals that have received positive decisions (e.g., the applicant is given the loan, the candidate is hired to the job and then we can evaluate how good our decision was). In the language of online learning, this type of feedback is known as _apple tasting_ (or _one-sided_) feedback. 
_When combined, these two complications (incentives & one-sided feedback) have the potential to amplify one other, as algorithms can learn only when a positive decision is made, but individuals have an incentive to strategically modify their attributes in order to receive such positive decisions, which may interfere with the learning process._ ### Contributions We formalize our setting as a game between a _principal_ and a sequence of _\(T\) strategic agents_, each with an associated _context_\(\mathbf{x}_{t}\) which describes the agent. At every time \(t\in\{1,\dots,T\}\), the principal deploys a _policy_\(\pi_{t}\), a mapping from contexts to binary _decisions_ (e.g., whether to accept/reject a loan applicant). Given policy \(\pi_{t}\), agent \(t\) then presents a (possibly modified) context \(\mathbf{x}_{t}^{\prime}\) to the algorithm, and receives a decision \(a_{t}=\pi_{t}(\mathbf{x}_{t}^{\prime})\). If \(a_{t}=1\), the principal observes _reward_\(r_{t}(a_{t})=r_{t}(1)\); if \(a_{t}=0\) they receive no feedback. (\(r_{t}(0)\) is assumed to be known and constant across rounds.) Our metric of interest is _strategic regret_, i.e., regret with respect to the best fixed policy in hindsight, _if agents were truthful when reporting their contexts_. Our main result is an algorithm which achieves \(\tilde{O}(\sqrt{T})\) strategic regret (with polynomial per-round runtime) when there is sufficient randomness in the distribution over agents (Algorithm 1). At a high level, our algorithm deploys a linear policy at every round which is appropriately shifted to account for the agents' strategic behavior. We identify a _sufficient_ condition under which the data received by the algorithm at a given round is "clean", i.e. has not been strategically modified. Algorithm 1 then online-learns the relationship between contexts and rewards by only using data for which it is sure is clean. The regret of Algorithm 1 depends on an exponentially-large constant \(c(d,\delta)\approx(1-\delta)^{-d}\) due to the one-sided feedback available for learning, where \(d\) is the context dimension and \(\delta\in(0,1)\) is a parameter which represents the agents' ability to manipulate. While this dependence on \(c(d,\delta)\) is insignificant when the number of agents \(T\to\infty\) (i.e. is very large), it may be problematic for the principal whenever \(T\) is either small or unknown. To mitigate this issue, we show how to obtain \(\tilde{O}(d\cdot T^{2/3})\) strategic regret by playing a modified version of the well-known _explore-then-commit_ algorithm (Algorithm 4). At a high level, Algorithm 4 "explores" by always assigning action 1 for a fixed number of rounds (during which agents do not have an incentive to strategize) in order to collect sufficient information about the data-generating process. It then "exploits" by using this data learn a strategy-aware linear policy. Finally, we show how to combine Algorithm 1 and Algorithm 4 to achieve \(\tilde{O}(\min\{c(d,\delta)\cdot\sqrt{T},d\cdot T^{2/3}\})\) strategic regret whenever \(T\) is unknown. While the assumption of stochastically-chosen agents is well-motivated in general, it may be overly restrictive in some specific settings. Our next result is an algorithm which obtains \(\tilde{O}(T^{(d+1)/(d+2)})\) strategic regret when agents are chosen _adversarially_ (Algorithm 3). Algorithm 3 uses a variant of the popular Exp3 algorithm to trade off between a carefully constructed set of (exponentially-many) policies [5]. 
As a result, it achieves sublinear strategic regret when agents are chosen adversarially, but requires an exponentially-large amount of computation at every round. Finally, we note that while our primary setting of interest is that of one-sided feedback, all of our algorithms can be easily extended to the more general setting in which the principal receives _bandit feedback_ at each round, i.e. \(r_{t}(0)\) is not constant and must be learned from data. To the best of our knowledge, we are the first to consider strategic learning in the contextual bandit setting. ### Related work **Strategic responses to algorithmic decision-making** There is a growing line of work at the intersection of economics and computation on algorithmic decision-making with incentives, under the umbrella of _strategic classification_ or _strategic learning_[21, 16, 15, 32, 44, 3, 8, 10, 22, 25, 24, 20, 30, 34, 35, 23, 17, 27]. In its most basic form, a principal makes either a binary or real-valued pre diction about a strategic agent, and receives _full feedback_ (e.g., the agent's _label_) after the decision is made. While this setting is similar to ours, it crucially ignores the one-sided feedback structure present in many strategic settings of interest. In our running example of hiring, full feedback would correspond to a company not offering an applicant a job, and yet still getting to observe whether they would have been a good employee! As a result, such methods are not applicable in our setting. Concurrent work [14] studies the effects of bandit feedback in the related problem of _performance prediction_[39], which considers data distribution shifts at the _population level_ in response to the deployment of a machine learning model. In contrast, our focus is on strategic responses to machine learning models at the _individual level_ under apple tasting and bandit feedback. **Apple tasting and online learning** Helmbold et al. [26] introduce the notion of apple-tasting feedback for online learning. In particular, they study a binary prediction task over "instances" (e.g., fresh/rotten apples), in which a positive prediction is interpreted as accepting the instance (i.e. "tasting the apple") and a negative prediction is interpreted as rejecting the instance (i.e., _not_ tasting the apple). The learner only gets feedback when the instance is accepted (i.e., the apple is tasted). While we are the first to consider classification under incentives with apple tasting feedback, similar feedback models have been studied in the context of algorithmic fairness [8], partial-monitoring games [4], and recidivism prediction [19]. A related model of feedback is that of _contaminated controls_[33], which considers learning from (1) a treated group which contains only _treated_ members of the agent population and (2) a "contaminated" control group with samples from the _entire_ agent population (not just those under _control_). Technically, our results are also related to a line of work in contextual bandits which shows that greedy algorithms without explicit exploration can achieve sublinear regret as long as the underlying context distribution is sufficiently diverse [41, 7, 31, 45, 40]. **Bandits and agents** Finally, a complementary line of work to ours is that of _Bayesian incentive-compatible_ (BIC) exploration in multi-armed bandit problems [36, 28, 42, 29, 37]. 
Under such settings, the goal of the principal is to _persuade_ a sequence of \(T\) agents with incentives to explore across several different actions with bandit feedback. In contrast, in our setting it is the principal, not the agents, who is the one taking actions with partial feedback. As a result there is no need for persuasion, but the agents now have an incentive to strategically modify their behavior in order to receive a more desirable decision/action. ## 2 Setting and background We consider a game between a _principal_ and a sequence of \(T\)_agents_. Each agent is associated with a _context_\(\mathbf{x}_{t}\in\mathcal{X}\subseteq\mathbb{R}^{d}\), which characterizes their attributes (e.g., a loan applicant's credit history/report). At time \(t\), the principal commits to a _policy_\(\pi_{t}:\mathcal{X}\to\{1,0\}\), which maps from contexts to binary _decisions_ (e.g., whether to accept/reject the loan application). We use \(a_{t}=1\) to denote the the principal's positive decision at round \(t\) (e.g., agent \(t\)'s loan application is approved), and \(a_{t}=0\) to denote a negative decision (e.g., the loan application is rejected). Given \(\pi_{t}\), agent \(t\)_best-responds_ by strategically modifying their context within their _effort budget_ as follows: **Definition 2.1** (Agent best response; lazy tiebreaking).: _Agent \(t\) best-responds to policy \(\pi_{t}\) by modifying their context according to the following optimization program._ \[\mathbf{x}_{t}^{\prime}\in \arg\max_{\mathbf{x}^{\prime}\in\mathcal{X}}\ \mathbbm{1}\{\pi_{t}(\mathbf{x}^{\prime})=1\}\] \[s.t. \|\mathbf{x}^{\prime}-\mathbf{x}_{t}\|_{2}\leq\delta\] _Furthermore, we assume that if an agent is indifferent between two (modified) contexts, they choose the one which requires the least amount of effort to obtain (i.e., agents are lazy when tiebreaking)._ In other words, every agent wants to receive a positive decision, but has only a limited ability to modify their (initial) context (represented by \(\ell_{2}\) budget \(\delta\)). Such an effort budget may be induced by time or monetary constraints and is a ubiquitous model of agent behavior in the strategic learning literature (e.g., [32, 22, 15, 9]). We focus on _linear thresholding policies_ where the principal assigns action \(\pi(\mathbf{x}^{\prime})=1\), if and only if \(\langle\boldsymbol{\beta},\mathbf{x}^{\prime}\rangle\geq\gamma\) for some \(\boldsymbol{\beta}\in\mathbb{R}^{d}\), \(\gamma\in\mathbb{R}\). We refer to \(\langle\boldsymbol{\beta},\mathbf{x}^{\prime}_{t}\rangle=\gamma\) as the _decision boundary_. For linear thresholding policies, the agent's best-response according to Definition 2.1 is to modify their context in the direction of \(\boldsymbol{\beta}/\|\boldsymbol{\beta}\|_{2}\) until the decision-boundary is reached (if it can indeed be reached). While we present our results for _lazy tiebreaking_ for ease of exposition, all of our results can be readily extended to the setting in which agents best-respond with a "trembling hand", i.e. _trembling hand tiebreaking_. Under this setting, we allow agents who strategically modify their contexts to "overshoot" the decision boundary by some bounded amount, which can be either stochastic or adversarially-chosen. See Appendix D for more details. The principal observes \(\mathbf{x}^{\prime}_{t}\) and plays action \(a_{t}=\pi_{t}(\mathbf{x}^{\prime}_{t})\) according to policy \(\pi_{t}\). If \(a_{t}=0\), the principal receives some known, _constant_ reward \(r_{t}(0):=r_{0}\in\mathbb{R}\). 
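To make the best response concrete, the following is a minimal numpy sketch of the lazy best response in Definition 2.1 against a linear threshold policy with slope \(\boldsymbol{\beta}\) and threshold \(\gamma\); the function and variable names are ours, and for brevity the sketch ignores the requirement that the modified context remain in \(\mathcal{X}\).

```python
import numpy as np

def lazy_best_response(x, beta, gamma, delta):
    """Agent's lazy best response (Definition 2.1) to the linear threshold
    policy pi(x') = 1 iff <beta, x'> >= gamma, under an l2 effort budget delta."""
    score = beta @ x
    if score >= gamma:
        return x.copy()                               # already accepted: spend no effort
    gap = (gamma - score) / np.linalg.norm(beta)      # distance to the decision boundary
    if gap <= delta:
        return x + gap * beta / np.linalg.norm(beta)  # move just onto the boundary
    return x.copy()                                   # boundary out of reach: report truthfully
```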
On the other hand, if the principal assigns action \(a_{t}=1\), we assume that the reward the principal receives is linear in the agent's _unmodified_ context, i.e., \[r_{t}(1):=\langle\boldsymbol{\theta}^{(1)},\mathbf{x}_{t}\rangle+\epsilon_{t} \tag{1}\] for some _unknown_ \(\boldsymbol{\theta}^{(1)}\in\mathbb{R}^{d}\), where \(\epsilon_{t}\) is i.i.d. zero-mean sub-Gaussian random noise with (known) variance \(\sigma^{2}\). Note that \(r_{t}(1)\) is observed _only_ when the principal assigns action \(a_{t}=1\), and _not_ when \(a_{t}=0\). Following Helmbold et al. [26], we refer to such feedback as _apple tasting_ (or _one-sided_) feedback. Mapping to our lending example, the reward a bank receives for rejecting a particular loan applicant is the same across all applicants, whereas their reward for a positive decision could be anywhere from a large, negative reward (e.g., if a loan is never repaid) to a large, positive reward (e.g., if the loan is repaid on time, with interest). Figure 1: Summary of our model. The most natural measure of performance in our setting is that of _Stackelberg regret_, which compares the principal's reward over \(T\) rounds with that of the optimal policy _given that agents strategize_. **Definition 2.2** (Stackelberg regret).: _The Stackelberg regret of a sequence of policies \(\{\pi_{t}\}_{t\in[T]}\) on agents \(\{\mathbf{x}_{t}\}_{t\in[T]}\) is_ \[\operatorname{Reg}_{\mathtt{Stackel}}(T):=\sum_{t\in[T]}r_{t}(\tilde{\pi}^{*}(\tilde{\mathbf{x}}_{t}))-\sum_{t\in[T]}r_{t}(\pi_{t}(\mathbf{x}^{\prime}_{t}))\] _where \(\tilde{\mathbf{x}}_{t}\) is the best-response from agent \(t\) to policy \(\tilde{\pi}^{*}\) and \(\tilde{\pi}^{*}\) is the optimal-in-hindsight policy, given that agents best-respond according to Definition 2.1._ A stronger measure of performance is that of _strategic regret_, which compares the principal's reward over \(T\) rounds with that of the optimal policy _had agents reported their contexts truthfully_. **Definition 2.3** (Strategic regret).: _The strategic regret of a sequence of policies \(\{\pi_{t}\}_{t\in[T]}\) on agents \(\{\mathbf{x}_{t}\}_{t\in[T]}\) is_ \[\operatorname{Reg}_{\mathtt{strat}}(T):=\sum_{t\in[T]}r_{t}(\pi^{*}(\mathbf{x}_{t}))-\sum_{t\in[T]}r_{t}(\pi_{t}(\mathbf{x}_{t}^{\prime}))\] _where \(\pi^{*}(\mathbf{x}_{t})=1\) if \(\langle\boldsymbol{\theta}^{(1)},\mathbf{x}_{t}\rangle\geq r_{0}\) and \(\pi^{*}(\mathbf{x}_{t})=0\) otherwise._ **Proposition 2.4**.: _Strategic regret is a stronger performance notion compared to Stackelberg regret, i.e., \(\operatorname{Reg}_{\mathtt{Stackel}}(T)\leq\operatorname{Reg}_{\mathtt{strat}}(T)\)._ Proof.: The proof follows from the corresponding regret definitions and the fact that the principal's reward is determined by the original (unmodified) agent contexts. \[R_{\mathtt{Stackel}}(T) :=\sum_{t\in[T]}r_{t}(\tilde{\pi}^{*}(\tilde{\mathbf{x}}_{t}))-\sum_{t\in[T]}r_{t}(\pi_{t}(\mathbf{x}_{t}^{\prime}))\] \[=\sum_{t\in[T]}r_{t}(\tilde{\pi}^{*}(\tilde{\mathbf{x}}_{t}))-\sum_{t\in[T]}r_{t}(\pi^{*}(\mathbf{x}_{t}))+\sum_{t\in[T]}r_{t}(\pi^{*}(\mathbf{x}_{t}))-\sum_{t\in[T]}r_{t}(\pi_{t}(\mathbf{x}_{t}^{\prime}))\] \[\leq 0+R_{\mathtt{strat}}(T)\] Because of Proposition 2.4, we focus on strategic regret, and use the shorthand \(\operatorname{Reg}_{\mathtt{strat}}(T)=\operatorname{Reg}(T)\) for the remainder of the paper.
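As an illustration of the benchmark, the sketch below computes strategic regret in expectation over the noise (so \(r_{t}(1)\) is replaced by \(\langle\boldsymbol{\theta}^{(1)},\mathbf{x}_{t}\rangle\)); the names are ours, and the action sequence is whatever the deployed policies produced on the (possibly manipulated) contexts.

```python
import numpy as np

def expected_strategic_regret(theta1, r0, X_true, actions):
    """Strategic regret (Definition 2.3) in expectation over the noise.
    X_true holds the *unmodified* contexts x_t (rewards never depend on x'_t);
    actions[t] is the decision the deployed policy actually produced."""
    rew1 = X_true @ theta1                          # E[r_t(1)] for each agent
    optimal = np.maximum(rew1, r0).sum()            # pi*(x_t) = 1 iff <theta1, x_t> >= r0
    realized = np.where(actions == 1, rew1, r0).sum()
    return optimal - realized
```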
Strategic regret is a strong notion of optimality, as we are comparing the principal's performance with that of the optimal policy for an easier setting, in which agents do not strategize. Moreover, the apple tasting feedback introduces additional challenges which require new algorithmic ideas to solve, since the principal needs to assign actions to both (1) learn about \(\boldsymbol{\theta}^{(1)}\) (which can only be done when action 1 is assigned) and (2) maximize rewards in order to achieve sublinear strategic regret. See Figure 1 for a summary of the setting we consider. We conclude this section by pointing out that our results also apply to the more challenging setting of _bandit feedback_, in which \(r_{t}(1)\) is defined as in Equation (1), \(r_{t}(0):=\langle\boldsymbol{\theta}^{(0)},\mathbf{x}_{t}\rangle+\epsilon_{t}\) and only \(r_{t}(a_{t})\) is observed at each time-step. We choose to highlight our results for apple tasting feedback since this is the type of feedback received by the principal in our motivating examples. Finally, we note that \(\widetilde{\mathcal{O}}(\cdot)\) hides polylogarithmic factors, and that all proofs can be found in the Appendix. ## 3 Strategic classification with apple tasting feedback In this section, we present our main results: provable guarantees for online classification of strategic agents under apple tasting feedback. Our results rely on the following assumption. **Assumption 3.1** (Bounded density ratio).: _Let \(f_{U^{d}}:\mathcal{X}\to\mathbb{R}_{\geq 0}\) denote the density function of the uniform distribution over the \(d\)-dimensional unit sphere. We assume that agent contexts \(\{\mathbf{x}_{t}\}_{t\in[T]}\) are drawn i.i.d. from a distribution over the \(d\)-dimensional unit sphere with density function \(f:\mathcal{X}\to\mathbb{R}_{\geq 0}\) such that \(\frac{f(\mathbf{x})}{f_{U^{d}}(\mathbf{x})}\geq c_{0}>0\), \(\forall\mathbf{x}\in\mathcal{X}\).1_ Footnote 1: Our restriction to the _unit_ sphere is without loss of generality. All of our results and analysis extend readily to the setting where contexts are drawn from a distribution over the \(d\)-dimensional sphere with radius \(R>0\). Assumption 3.1 is a condition on the _initial_ agent contexts \(\{\mathbf{x}_{t}\}_{t\in[T]}\), _before_ they are strategically modified. Indeed, one would expect the distribution over _modified_ agent contexts to be highly discontinuous in a way that depends on the sequence of policies deployed by the principal. Furthermore, none of our algorithms need to know the value of \(c_{0}\). As we will see in the sequel, this assumption allows us to handle apple tasting feedback by _relying on the inherent diversity in the agent population for exploration_; a growing area of interest in the online learning literature (see references in Section 1.2). Moreover, such assumptions often hold in practice. For example, in the related problem of (non-strategic) contextual bandits (we will later show how our results extend to the strategic version of this problem), Bietti et al. [12] find that a greedy algorithm with no explicit exploration achieved the second-best empirical performance across a large number of datasets when compared to many popular contextual bandit algorithms. In our settings of interest (e.g. lending, hiring), such an assumption is reasonable if there is sufficient diversity in the applicant pool. In Section 4 we show how to remove this assumption, albeit at the cost of worse regret rates and exponential computational complexity. 
At a high level, our algorithm (formally stated in Algorithm 1) relies on three key ingredients to achieve sublinear strategic regret: 1. A running estimate of \(\mathbf{\theta}^{(1)}\) is used to compute a linear policy, which separates agents who receive action \(1\) from those who receive action \(0\). Before deploying, we shift the decision boundary by the effort budget \(\delta\) to account for the agents strategizing. 2. We maintain an estimate of \(\mathbf{\theta}^{(1)}\) (denoted by \(\widehat{\mathbf{\theta}}^{(1)}\)) and only updating it when \(a_{t}=1\) and we can ensure that \(\mathbf{x}^{\prime}_{t}=\mathbf{x}_{t}\). 3. We assign actions "greedily" (i.e. using no explicit exploration) w.r.t. the shifted linear policy. Shifted linear policyIf agents were _not_ strategic, assigning action \(1\) if \(\langle\widehat{\mathbf{\theta}}^{(1)}_{t},\mathbf{x}_{t}\rangle\geq r_{0}\) and action \(0\) otherwise would be a reasonable strategy to deploy, given that \(\widehat{\mathbf{\theta}}^{(1)}_{t}\) is our "best estimate" of \(\mathbf{\theta}^{(1)}\) so far. Recall that the strategically modified context \(\mathbf{x}^{\prime}_{t}\) is s.t., \(\|\mathbf{x}^{\prime}_{t}-\mathbf{x}_{t}\|\leq\delta\). Hence, in Algorithm 1, we shift the linear policy by \(\delta\|\widehat{\mathbf{\theta}}^{(1)}\|_{2}\) to account for strategically modified contexts. Now, action \(1\) is only assigned if \(\langle\widehat{\mathbf{\theta}}^{(1)}_{t},\mathbf{x}_{t}\rangle\geq\delta\| \widehat{\mathbf{\theta}}^{(1)}\|_{2}+r_{0}\). This serves two purposes: (1) It makes it so that any agent with unmodified context \(\mathbf{x}\) such that \(\langle\widehat{\mathbf{\theta}}^{(1)}_{t},\mathbf{x}\rangle<r_{0}\) cannot receive action \(1\), no matter how they strategize. (2) It forces some agents with contexts in the band \(r_{0}\leq\langle\widehat{\mathbf{\theta}}^{(1)}_{t},\mathbf{x}\rangle<\delta\| \widehat{\mathbf{\theta}}^{(1)}\|_{2}+r_{0}\) to strategize in order to receive action \(1\). This is the type of strategizing we want to incentivize. Estimating \(\mathbf{\theta}^{(1)}\)After playing action \(1\) for the first \(d\) rounds, Algorithm 1 forms an initial estimate of \(\mathbf{\theta}^{(1)}\) via ordinary least squares (OLS). Note that since the first \(d\) agents will receive action \(1\) regardless of their context, they have no incentive to modify and thus \(\mathbf{x}^{\prime}_{t}=\mathbf{x}_{t}\) for \(t\leq d\). In future rounds, the algorithm's estimate of \(\mathbf{\theta}^{(1)}\) is only updated whenever \(\mathbf{x}^{\prime}_{t}\) lies _strictly_ on the positive side of the linear decision boundary. We call these contexts _clean_, and can infer that \(\mathbf{x}_{t}^{\prime}=\mathbf{x}_{t}\) due to the lazy tiebreaking assumption in Definition 2.1. **Condition 3.2** (Sufficient condition for \(\mathbf{x}^{\prime}=\mathbf{x}\)).: _Given a shifted linear policy parameterized by \(\boldsymbol{\beta}^{(1)}\in\mathbb{R}^{d}\), we say that a context \(\mathbf{x}^{\prime}\) is clean if \(\langle\boldsymbol{\beta}^{(1)},\mathbf{x}^{\prime}\rangle>\delta\|\boldsymbol{ \beta}^{(1)}\|_{2}+r_{0}\)._ Greedy action assignmentBy assigning actions greedily according to the current (shifted) linear policy, we are relying on the diversity in the agent population for implicit exploration (i.e., to collect more datapoints to update our estimate of \(\boldsymbol{\theta}^{(1)}\)). 
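The sketch below summarizes these three ingredients (it is an illustration, not the pseudocode of Algorithm 1); `get_context` and `get_reward` are hypothetical environment hooks returning the reported context \(\mathbf{x}^{\prime}_{t}\) under the deployed policy and the realized reward \(r_{t}(1)\), respectively.

```python
import numpy as np

def greedy_shifted_ols(get_context, get_reward, T, d, delta, r0):
    """Sketch: shifted linear policy + OLS restricted to clean contexts + greedy actions."""
    X_clean, y_clean = [], []
    theta_hat = np.zeros(d)
    for t in range(T):
        if t < d:
            # Warm-up: action 1 regardless of context, so agents report truthfully.
            x = get_context(t, policy=None)
            X_clean.append(x); y_clean.append(get_reward(t))
        else:
            thresh = delta * np.linalg.norm(theta_hat) + r0       # shifted decision boundary
            x = get_context(t, policy=(theta_hat, thresh))        # possibly manipulated context
            if theta_hat @ x >= thresh:                           # greedy: assign action 1
                r = get_reward(t)
                if theta_hat @ x > thresh:                        # Condition 3.2: context is clean
                    X_clean.append(x); y_clean.append(r)
        if len(X_clean) >= d:                                     # refit OLS on clean data only
            theta_hat, *_ = np.linalg.lstsq(np.array(X_clean), np.array(y_clean), rcond=None)
    return theta_hat
```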
As we will show, this implicit exploration is sufficient to achieve \(\widetilde{\mathcal{O}}(\sqrt{T})\) strategic regret under Assumption 3.1, albeit at the cost of an exponentially-large (in \(d\)) constant which depends on the agents' ability to manipulate (\(\delta\)). We are now ready to present our main result: strategic regret guarantees for Algorithm 1 under apple tasting feedback. **Theorem 3.3** (Informal; detailed version in Theorem B.1).: _With probability \(1-\gamma\), Algorithm 1 achieves the following performance guarantee:_ \[\operatorname{Reg}(T)\leq\widetilde{\mathcal{O}}\left(\frac{1}{c_{0}\cdot c_{1}(d,\delta)\cdot c_{2}(d,\delta)}\sqrt{d\sigma^{2}T\log(4dT/\gamma)}\right)\] _where \(c_{1}(d,\delta):=\mathbb{P}_{\mathbf{x}\sim U^{d}}(\mathbf{x}[1]\geq\delta)\geq\Theta\left(\frac{(1-\delta)^{d/2}}{d^{2}}\right)\) for sufficiently large \(d\) and \(c_{2}(d,\delta):=\mathbb{E}_{\mathbf{x}\sim U^{d}}[\mathbf{x}[2]^{2}|\mathbf{x}[1]\geq\delta]\geq\left(\frac{3}{4}-\frac{1}{2}\delta-\frac{1}{4}\delta^{2}\right)^{3}\), where \(\mathbf{x}[i]\) denotes the \(i\)-th coordinate of a vector \(\mathbf{x}\)._ Proof sketch.: Our analysis begins by using properties of the strategic agents and shifted linear decision boundary to upper-bound the per-round strategic regret for rounds \(t>d\) by a term proportional to \(\|\widehat{\boldsymbol{\theta}}_{t}^{(1)}-\boldsymbol{\theta}^{(1)}\|_{2}\), i.e., our instantaneous estimation error for \(\boldsymbol{\theta}^{(1)}\). Next we show that \[\|\widehat{\boldsymbol{\theta}}_{t}^{(1)}-\boldsymbol{\theta}^{(1)}\|_{2}\leq\frac{\left\|\sum_{s=1}^{t}\mathbf{x}_{s}\epsilon_{s}\mathbb{1}\{\mathcal{I}_{s}^{(1)}\}\right\|_{2}}{\lambda_{min}(\sum_{s=1}^{t}\mathbf{x}_{s}\mathbf{x}_{s}^{\top}\mathbb{1}\{\mathcal{I}_{s}^{(1)}\})}\] where \(\lambda_{min}(M)\) is the minimum eigenvalue of (symmetric) matrix \(M\), and \(\mathcal{I}_{s}^{(1)}=\{\langle\widehat{\boldsymbol{\theta}}_{s}^{(1)},\mathbf{x}_{s}\rangle\geq\delta\|\widehat{\boldsymbol{\theta}}_{s}^{(1)}\|_{2}+r_{0}\}\) is the event that Algorithm 1 assigns action \(a_{s}=1\) and can verify that \(\mathbf{x}_{s}^{\prime}=\mathbf{x}_{s}\). We upper-bound the numerator using a variant of Azuma's inequality for martingales with subgaussian tails. Next, we use properties of Hermitian matrices to show that \(\lambda_{min}(\sum_{s=1}^{t}\mathbf{x}_{s}\mathbf{x}_{s}^{\top}\mathbb{1}\{\mathcal{I}_{s}^{(1)}\})\) is lower-bounded by two terms: one which may be bounded w.h.p. by using the extension of Azuma's inequality for matrices, and one of the form \(\sum_{s=1}^{t}\lambda_{min}(\mathbb{E}_{s-1}[\mathbf{x}_{s}\mathbf{x}_{s}^{\top}\mathbb{1}\{\mathcal{I}_{s}^{(1)}\}])\), where \(\mathbb{E}_{s-1}\) denotes the expected value conditioned on the filtration up to time \(s\). Note that up until this point, we have only used the fact that contexts are drawn i.i.d. from a _bounded_ distribution. Using Assumption 3.1 on the bounded density ratio, we can lower bound \(\lambda_{min}(\mathbb{E}_{s-1}[\mathbf{x}_{s}\mathbf{x}_{s}^{\top}\mathbb{1}\{\mathcal{I}_{s}^{(1)}\}])\) by \(\lambda_{min}(\mathbb{E}_{U^{d},s-1}[\mathbf{x}_{s}\mathbf{x}_{s}^{\top}\mathbb{1}\{\mathcal{I}_{s}^{(1)}\}])\), _where the expectation is taken with respect to the uniform distribution over the \(d\)-dimensional ball_. We then use properties of the uniform distribution to show that \(\lambda_{min}(\mathbb{E}_{U^{d},s-1}[\mathbf{x}_{s}\mathbf{x}_{s}^{\top}\mathbb{1}\{\mathcal{I}_{s}^{(1)}\}])\geq\mathcal{O}(c_{0}\cdot c(d,\delta))\).
Putting everything together, we get that \(\|\widehat{\boldsymbol{\theta}}_{t}^{(1)}-\boldsymbol{\theta}^{(1)}\|_{2}\leq(c_{0}\cdot c(d,\delta)\cdot\sqrt{t})^{-1}\) with high probability. Via a union bound and the fact that \(\sum_{t\in[T]}\frac{1}{\sqrt{t}}\leq 2\sqrt{T}\), we get that \(\operatorname{Reg}(T)\leq\widetilde{\mathcal{O}}(\frac{1}{c_{0}\cdot c(d,\delta)}\sqrt{T})\). Finally, we use tools from high-dimensional geometry to lower bound the volume of a spherical cap and we show that for sufficiently large \(d\), \(c_{1}(d,\delta)\geq\Theta\left(\frac{(1-\delta)^{d/2}}{d^{2}}\right).\) ### High-dimensional contexts While we typically think of the number of agents \(T\) as growing and the context dimension \(d\) as constant in our applications of interest, there may be situations in which \(T\) is either unknown or small. Under such settings, the \(1/c(d,\delta)\) dependence in the regret bound (where \(c(d,\delta)=c_{1}(d,\delta)\cdot c_{2}(d,\delta)\)) may become problematic if \(\delta\) is close to \(1\). This begs the question: "Why restrict the OLS estimator in Algorithm 1 to use only clean contexts (as defined in Condition 3.2)?" Perhaps unsurprisingly, we show in Appendix B that the estimate \(\widehat{\boldsymbol{\theta}}^{(1)}\) given by OLS will be inconsistent if even a constant fraction of agents strategically modify their contexts. Given the above, it seems reasonable to restrict ourselves to learning procedures which only use data from agents for which the principal can be sure that \(\mathbf{x}^{\prime}=\mathbf{x}\). Under such a restriction, it is natural to ask whether there exists some sequence of linear policies which maximizes the number of points of the form \((\mathbf{x}^{\prime}_{t},r_{t}(1))\) for which the principal can be sure that \(\mathbf{x}^{\prime}_{t}=\mathbf{x}_{t}\). Again, the answer is no: **Proposition 3.4**.: _For any sequence of linear policies \(\{\boldsymbol{\beta}_{t}\}_{t}\), the expected number of clean points is:_ \[\mathbb{E}_{\mathbf{x}_{1},\ldots,\mathbf{x}_{T}\sim U^{d}}\left[\sum_{t\in[T]}\mathbbm{1}\left\{\langle\mathbf{x}_{t},\boldsymbol{\beta}_{t}\rangle>\delta\|\boldsymbol{\beta}_{t}\|_{2}\right\}\right]=c_{1}(d,\delta)\cdot T\] _when (initial) contexts are drawn uniformly from the \(d\)-dimensional unit sphere._ The proof follows from the rotational invariance of the uniform distribution over the unit sphere. Intuitively, Proposition 3.4 implies that any algorithm which wishes to learn \(\boldsymbol{\theta}^{(1)}\) using clean samples will only have \(c_{1}(d,\delta)\cdot T\) datapoints in expectation. Observe that this dependence on \(c_{1}(d,\delta)\) arises as a direct result of the agents' ability to strategize. We remark that a similar constant often appears in the regret analysis of BIC bandit algorithms (see Section 1.2). Much like our work, [36] find that their regret rates depend on a constant which may be arbitrarily large, depending on how hard it is to persuade agents to take the principal's desired action in their setting. The authors conjecture that this dependence is an inevitable "price of incentive-compatibility".
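The constant \(c_{1}(d,\delta)\) appearing in Proposition 3.4 and Theorem 3.3 is straightforward to estimate numerically; a minimal Monte Carlo sketch (for illustration only):

```python
import numpy as np

def clean_fraction(d, delta, n=200_000, seed=0):
    """Monte Carlo estimate of c_1(d, delta) = P_{x ~ U^d}(x[1] >= delta),
    the expected fraction of clean points in Proposition 3.4."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)   # uniform on the unit sphere
    return float(np.mean(x[:, 0] >= delta))

# For example, clean_fraction(3, 0.5) is about 0.25 (exactly (1 - delta)/2 when d = 3),
# and the estimate decays rapidly as d grows with delta held fixed.
```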
While our results do not rule out better strategic regret rates in \(d\) for more complicated algorithms (e.g., those which deploy non-linear policies), it is often unclear how strategic agents would behave in such settings, both in theory (Definition 2.1 would require agents to solve a non-convex optimization with potentially no closed-form solution) and in practice, making the analysis of such nonlinear policies difficult in strategic settings. We conclude this section by showing that polynomial dependence on \(d\) is possible, at the cost of \(\widetilde{\mathcal{O}}(T^{2/3})\) strategic regret. Specifically, we provide an algorithm (Algorithm 2) which obtains the following regret guarantee whenever \(T\) is small or unknown, using Algorithm 1 and a variant of the explore-then-commit algorithm (Algorithm 4) as subroutines: **Theorem 3.5** (Informal; details in Theorem B.13).: _Algorithm 2 incurs expected strategic regret_ \[\mathbb{E}[\mathrm{Reg}(T)]=\widetilde{\mathcal{O}}\left(\min\left\{\frac{d^{5/2}}{(1-\delta)^{d/2}}\cdot\sqrt{T},d\cdot T^{2/3}\right\}\right),\] _where the expectation is taken with respect to the sequence of contexts \(\{\mathbf{x}_{t}\}_{t\in[T]}\) and random noise \(\{\epsilon_{t}\}_{t\in[T]}\)._ The algorithm proceeds by playing a "strategy-aware" variant of explore-then-commit (Algorithm 4) with a doubling trick until the switching time \(\tau^{*}=g(d,\delta)\) is reached. Note that \(g(d,\delta)\) is a function of both \(d\) and \(\delta\), _not_ \(c_{0}\). If round \(\tau^{*}\) is indeed reached, the algorithm switches over to Algorithm 1 for the remaining rounds. Extension to bandit feedbackAlgorithm 1 can be extended to handle bandit feedback by explicitly keeping track of an estimate \(\widehat{\boldsymbol{\theta}}^{(0)}\) of \(\boldsymbol{\theta}^{(0)}\) via OLS, assigning action \(a_{t}=1\) if and only if \(\langle\widehat{\boldsymbol{\theta}}^{(1)}_{t}-\widehat{\boldsymbol{\theta}}^{(0)}_{t},\mathbf{x}^{\prime}_{t}\rangle\geq\delta\cdot\|\widehat{\boldsymbol{\theta}}^{(1)}_{t}-\widehat{\boldsymbol{\theta}}^{(0)}_{t}\|_{2}\), and updating the OLS estimate of \(\widehat{\boldsymbol{\theta}}^{(0)}\) whenever \(a_{t}=0\) (since agents will not strategize to receive action \(0\)). Algorithm 2 may be extended to bandit feedback by "exploring" for twice as long in Algorithm 4, in addition to using the above modifications. In both cases, the strategic regret rates are within a constant factor of the rates obtained in Theorem 3.3 and Theorem 3.5. ## 4 Beyond stochastic contexts In this section, we allow the sequence of initial agent contexts to be chosen by an (oblivious) _adversary_. This requires new algorithmic ideas, as the regression-based algorithms of Section 3 suffer _linear_ strategic regret under this adversarial setting. Our algorithm (Algorithm 3) is based on the popular EXP3 algorithm [6]. At a high level, Algorithm 3 maintains a probability distribution over "experts", i.e., a discretized grid \(\mathcal{E}\) over carefully-selected policies. In particular, each grid point \(\mathbf{e}\in\mathcal{E}\subseteq\mathbb{R}^{d}\) represents an "estimate" of \(\boldsymbol{\theta}^{(1)}\), and corresponds to a slope vector which parameterizes a (shifted) linear threshold policy, like the ones considered in Section 3. We use \(a_{t,\mathbf{e}}\) to refer to the action played by the principal at time \(t\), had they used the linear threshold policy parameterized by expert \(\mathbf{e}\).
At every time-step, (1) the adversary chooses an agent \(\mathbf{x}_{t}\), (2) a slope vector \(\mathbf{e}_{t}\in\mathcal{E}\) is selected according to the current distribution, (3) the principal commits to assigning action \(1\) if and only if \(\langle\mathbf{e}_{t},\mathbf{x}^{\prime}_{t}\rangle\geq\delta\|\mathbf{e}_{t }\|_{2}\), (4) the agent strategically modifies their context \(\mathbf{x}_{t}\rightarrow\mathbf{x}^{\prime}_{t}\), and (5) the principal assigns an action \(a_{t}\) according to the policy and receives the associated reward \(r_{t}(a_{t})\) (under apple tasting feedback). Algorithm EXP4, which maintains a distribution over experts and updates the loss of _all_ experts based on the current action taken, is not directly applicable in our setting as the strategic behavior of the agents prevents us from inferring the loss of each expert at every time-step [5]. This is because if \(\mathbf{x}^{\prime}_{t}\neq\mathbf{x}_{t}\) under the thresholding policy associated with expert \(\mathbf{e}\)), it is generally not possible to "back out" \(\mathbf{x}_{t}\) given \(\mathbf{x}^{\prime}_{t}\), which prevents us from predicting the counterfactual context the agent would have modified to had the principal been using expert \(\mathbf{e}^{\prime}\) instead. As a result, we use a modification of the standard importance-weighted loss estimator to update the loss of _only the policy played by the algorithm_ (and therefore the distribution over policies). Our regret guarantees for Algorithm 3 are as follows: **Theorem 4.1** (Informal; detailed version in Theorem C.1).: _Algorithm 3 incurs expected strategic regret \(\mathbb{E}[\mathrm{Reg}(T)]=\widetilde{\mathcal{O}}(T^{(d+1)/(d+2)})\)._ Proof sketch.: The analysis is broken down into two parts. In the first part, we bound the regret w.r.t. the best policy on the grid. In the second, we bound the error incurred for playing policies on the grid, rather than the continuous space of policies. We refer to this error as the _Strategic Discretization Error_ (\(SDE(T)\)). The analysis of the regret on the grid mostly follows similar steps to the analysis of EXP3 / EXP4. The important difference is that we shift the reward obtained by \(a_{t}\), by a factor of \(1+\lambda\), where \(\lambda\) is a (tunable) parameter of the algorithm. This shifting (which does not affect the regret, since all the losses are shifted by the same fixed amount) guarantees that the losses at each round are non-negative and bounded with high probability. Technically, this requires bounding the tails of the subgaussian of the noise parameters \(\epsilon_{t}\). We now shift our attention to bounding \(SDE(T)\). The standard analysis of the discretization error in the non-strategic setting does not go through for our setting, since an agent may strategize very differently with respect to two policies which are "close together" in \(\ell_{2}\) distance, depending on the agent's initial context. Our analysis proceeds with a case-by-case basis. Consider the best expert \(\mathbf{e}^{*}\) in the grid. If \(a_{t,\mathbf{e}^{*}}=\pi^{*}(\mathbf{x}_{t})\) (i.e., the action of the best expert matches that of the optimal policy), there is no discretization error in round \(t\). Otherwise, if \(a_{t,\mathbf{e}^{*}}\neq\pi^{*}(\mathbf{x}_{t})\), we show that the per-round \(SDE\) is upper-bounded by a term which looks like twice the discretization upper-bound for the non-strategic setting, plus an additional term. 
We show that this additional term must always be non-positive by considering two subcases (\(a_{t,\mathbf{e}^{*}}=1\), \(\pi^{*}(\mathbf{x}_{t})=0\) and \(a_{t,\mathbf{e}^{*}}=0\), \(\pi^{*}(\mathbf{x}_{t})=1\)) and using properties about how agents strategize against the deployed algorithmic policies. Computational complexityWhile both Algorithm 1 and Algorithm 2 have \(\mathcal{O}(d^{3})\) per-iteration computational complexity, Algorithm 3 must maintain and update a probability distribution over a grid of size exponential in \(d\) at every time-step, making it hard to use in practice if \(d\) is large. We view the design of computationally efficient algorithms for adversarially-chosen contexts as an important direction for future research. Extension to bandit feedbackAlgorithm 3 may be extended to the bandit feedback setting by maintaining a grid over estimates of \(\boldsymbol{\theta}^{(1)}-\boldsymbol{\theta}^{(0)}\) (instead of over \(\boldsymbol{\theta}^{(1)}\)). No further changes are required. ## 5 Conclusion We study the problem of classification under incentives with apple tasting feedback. Such one-sided feedback is often what is observed in real-world strategic settings including lending and hiring. Our main result is a "greedy" algorithm (Algorithm 1) which achieves \(\widetilde{\mathcal{O}}(\sqrt{T})\) strategic regret when the initial agent contexts are generated _stochastically_. The regret of Algorithm 1 depends on a constant \(c_{1}(d,\delta)\) which scales exponentially in the context dimension, which may be problematic in settings for which the number of agents is small or unknown. To address this, we provide an algorithm (Algorithm 2) which combines Algorithm 1 with a strategy-aware version of the explore-then-commit algorithm using a doubling trick to achieve \(\widetilde{\mathcal{O}}(\min\{\frac{\sqrt{dT}}{c_{1}(d,\delta)},d\cdot T^{2/3}\})\) expected strategic regret whenever \(T\) is unknown. Finally, we relax the assumption of stochastic contexts and allow for contexts to be generated adversarially. Algorithm 3 achieves \(\widetilde{\mathcal{O}}(T^{\frac{d+1}{d+2}})\) expected strategic regret whenever agent contexts are generated adversarially by running EXP3 over a discretized grid of strategy-aware policies, but has exponential-in-\(d\) per-round computational complexity. All of our results also apply to the more general setting of bandit feedback, under slight modifications to the algorithms. There are several directions for future work: Unclean dataThe regret of Algorithm 1 depends on a constant which is exponentially large in \(d\), due to the fact that it only learns using clean data (Condition 3.2). While learning using unclean data will generally produce an inconsistent estimator, it would be interesting to see if the principal could leverage this data to remove the dependence on this constant. Alternatively, lower bounds which show that using unclean data will not improve regret would also be interesting. Efficient algorithms for adversarial contextsOur algorithm for adversarially-chosen agent contexts suffers exponential-in-\(d\) per-round computational complexity, which makes it unsuitable for use in settings with high-dimensional contexts. Deriving polynomial-time algorithms with sub-linear strategic regret for this setting is an exciting (but challenging) direction for future research. 
More than two actionsFinally, it would be interesting to extend our algorithms for strategic learning under bandit feedback to the setting in which the principal has _three or more_ actions at their disposal. While prior work [23] implies an impossibility result for strategic regret minimization with three or more actions, other (relaxed) notions of optimality (e.g., sublinear _Stackelberg_ regret; recall Definition 2.2) may still be possible.
2310.18788
PrObeD: Proactive Object Detection Wrapper
Previous research in $2D$ object detection focuses on various tasks, including detecting objects in generic and camouflaged images. These works are regarded as passive works for object detection as they take the input image as is. However, convergence to global minima is not guaranteed to be optimal in neural networks; therefore, we argue that the trained weights in the object detector are not optimal. To rectify this problem, we propose a wrapper based on proactive schemes, PrObeD, which enhances the performance of these object detectors by learning a signal. PrObeD consists of an encoder-decoder architecture, where the encoder network generates an image-dependent signal termed templates to encrypt the input images, and the decoder recovers this template from the encrypted images. We propose that learning the optimum template results in an object detector with an improved detection performance. The template acts as a mask to the input images to highlight semantics useful for the object detector. Finetuning the object detector with these encrypted images enhances the detection performance for both generic and camouflaged objects. Our experiments on MS-COCO, CAMO, COD$10$K, and NC$4$K datasets show improvement over different detectors after applying PrObeD. Our models/codes are available at https://github.com/vishal3477/Proactive-Object-Detection.
Vishal Asnani, Abhinav Kumar, Suya You, Xiaoming Liu
2023-10-28T19:25:01Z
http://arxiv.org/abs/2310.18788v1
# PrObeD: Proactive Object Detection Wrapper ###### Abstract Previous research in \(2D\) object detection focuses on various tasks, including detecting objects in generic and camouflaged images. These works are regarded as passive works for object detection as they take the input image as is. However, convergence to global minima is not guaranteed to be optimal in neural networks; therefore, we argue that the trained weights in the object detector are not optimal. To rectify this problem, we propose a wrapper based on proactive schemes, PrObeD, which enhances the performance of these object detectors by learning a signal. PrObeD consists of an encoder-decoder architecture, where the encoder network generates an image-dependent signal termed templates to encrypt the input images, and the decoder recovers this template from the encrypted images. We propose that learning the optimum template results in an object detector with an improved detection performance. The template acts as a mask to the input images to highlight semantics useful for the object detector. Finetuning the object detector with these encrypted images enhances the detection performance for both generic and camouflaged. Our experiments on MS-COCO, CAMO, COD\(10\)K, and NC\(4\)K datasets show improvement over different detectors after applying PrObeD. Our models/codes are available at [https://github.com/vishal3477/Proactive-Object-Detection](https://github.com/vishal3477/Proactive-Object-Detection). ## 1 Introduction Generic \(2D\) object detection (GOD) has improved from earlier traditional detectors [15, 20, 64, 65] to the deep-learning-based object detectors [8, 10, 26, 52, 58, 10]. Advancements in deep-learning-based methods underwent many architectural change over recent years, including one-stage [54, 54, 43, 52, 53, 46], two-stage [58, 23, 24, 23], CNN-based [54, 14, 16, 21, 22, 5, 12], transformer-based [74, 8], and diffusion-based [10] methods. All these methods aim to predict the \(2D\) bounding box of the objects in the images and their category class. Another emerging area related to generic object detection is camouflaged object detection [17, 18, 27, 28, 29, 34, 40] (COD). COD aims to detect and segment objects blended with the background [17, 18] via object-level mask supervision. Applications of COD include medical [19, 45], surveillance [11] and autonomous driving [69]. Early COD detectors exploit hand-crafted features [61, 50] and optical flow [33], while current methods are deep-learning-based. These methods utilize attention [9, 63], joint learning [40], image gradient [34], and transformers [70, 48]. All these methods take input images as is for the detection task and hence are called passive methods. However, there is a line of research on proactive methods for a wide range of vision tasks such as disruption [59, 60], tagging [68], manipulation detection [1], and localization [2]. Proactive methods use signals, called templates, to encrypt the input images and pass the encrypted images as the input to the network. These are trained in an end-to-end manner by using either a fixed [68] or learnable template [1; 2; 59; 60] to improve the performance. A major advantage of proactive schemes is that such methods generalize better on unseen data/models [1; 2]. Motivated by this, we propose a plug-and-play Proactive Object Detection wrapper, PrObeD, to improve GOD and COD detectors. Designing PrObeD as a proactive scheme involves several challenges and key factors. 
First, the proactive wrapper needs to be a plug-and-play module that can be applied to both GOD and COD detectors. Secondly, the encryption process should be intuitive to benefit the object detection task; _e.g._, an ideal template for detection should highlight the foreground objects in the input image. Lastly, the choice of supervision to estimate the template for encryption is hard to formulate. Previous proactive methods [1; 2] use learnable but image-independent templates for manipulation and localization tasks. However, the object detection task is scene-specific; therefore, the ideal template should be image-dependent. Based on this key insight, we propose a novel plug-and-play proactive wrapper that can be applied to object detectors to enhance detection performance. The PrObeD wrapper utilizes an encoder network to learn an image-dependent template. The learned template encrypts the input images by applying a transformation, defined as an element-wise multiplication between the template and the input image. The decoder network recovers the templates from the encrypted images. We utilize regression losses for supervision and leverage the ground-truth object map to guide the learning process, thereby imparting valuable object semantics to be integrated into the template. We then fine-tune the proactive wrapper with the GOD and COD detectors to improve their detection performance. Extensive experiments on MS-COCO, CAMO, COD\(10\)K, and NC\(4\)K datasets show that PrObeD improves the detection performance for both GOD and COD detectors. In summary, the contributions of this work include: * We propose a novel proactive approach \(PrObeD\) for the object detection task. To the best of our knowledge, this is the first work to develop a proactive approach to \(2D\) object detection. * We mathematically prove that the proactive method results in a better-converged model than the passive detector under assumptions and, consequently, a better object detector. * PrObeD wraps around both GOD and COD detectors and improves detection performance on MS-COCO, CAMO, COD10K, and NC\(4\)K datasets. Figure 1: **(a) Passive _vs._ Proactive object detection. A learnable template encrypts the input images, which are further used to train the object detector. (b) PrObeD serves as a wrapper on both generic and camouflaged object detectors, enhancing the detection performance. (c) For the linear regression model under additive noise and other assumptions, the converged weights of the proactive detector are closer to the optimal weights as compared to the converged weights of the passive detector. See Sec. 3.2 for details and proof.** ## 2 Related works Proactive Schemes.Earlier works add signals like perturbation [60], adversarial noise [59], and one-hot encoding [68] messages while focusing on tasks like disruption [59; 60] and deepfake tagging [68]. Asnani _et al_. [1] propose to learn an optimized template for binary detection by unseen generative models. Recently, MaLP [2] adds the learnable template to perform generalized manipulation localization for unknown generative models. Unlike these works, PrObeD uses image-dependent templates and is a plug-and-play wrapper for a different task of object detection. Generic Object DetectionDetection of generic objects, instead of specific object categories such as pedestrians [7], apples [13], and others [37, 4, 38], has been a long-standing objective of computer vision. RCNN [24, 25] employs the extraction of object proposals. He _et al_.
[31] propose a spatial pooling layer to extract a fixed-length representation of all the objects. Modifications of RCNN [23, 41, 58, 72] increase the inference speed. Feature pyramid network [42] detects objects with a wide variety of scales. The above methods are mostly two-stage, so inference is an issue. Single-stage detectors like YOLO [5, 52, 53, 66, 54], SSD [46], HRNet [67] and RetinaNet [43] increase the speed and simplicity of the framework compared to the two-stage detector. Recently, transformer-based methods [74, 8] use a global-scale receptive field. Chen _et al_. [10] use diffusion models to denoise noisy boxes at every forward step. PrObeD functions as a wrapper around the pre-existing object detector, facilitating its transformation into an enhanced object detector. The comparison of PrObeD with prior works is summarized in Tab. 1. Camouflaged Object DetectionEarly COD works rely on hand-crafted features like co-occurrence matrices [61], \(3D\) convexity [50], optical flow [33], covariance matrix [35], and multivariate calibration components [57]. Later on, [9, 63] incorporate an attention-based cross-level fusion of multi-scale features to recover contextual information. Mei _et al_. [49] take motivation by predators to identify camouflaged objects using a position and focus ideology. SINet [18] uses a search and identification module to perform localization. SINET-v2 [17] uses group-reversal attention to extract the camouflaged maps. [36] explores uncertainty maps and [75] utilizes cube-like architecture to integrate multi-layer features. ANet [39], LSR [47], and JCSOD [40] employ joint learning with different tasks to improve COD. Lately, [12, 48, 70] apply a transformer-based architecture for difficult-aware learning, uncertainty modeling, and temporal consistency. Zhai _et al_. [73] use a graph learning model to disentangle input into different features for localization. DGNet [34] uses image gradients to exploit intensity changes in the camouflaged object from the background. Unlike these methods, PrObeD uses proactive methods to improve camouflaged object detection. ## 3 Proposed Approach Our method originates from understanding what makes proactive schemes effective. We first overview the two detection problems: GOD and COD in Sec. 3.1. We next derive Lemma 1, where we show that the proactive schemes with the multiplicative transformation of images are better than passive schemes by comparing the deviation of trained network weights from the optimal. Based on this result, we derive that Average Precision (AP) from the proactive model is better than AP from the passive model in Theorem 1. At last, we present our proactive scheme-based wrapper, PrObeD, in Sec. 3.3, which builds upon the Theorem 1 to improve generic 2D objects and camouflaged detection. 
\begin{table} \begin{tabular}{l|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Proactive} & \multirow{2}{*}{Task} & \multicolumn{2}{c|}{Template} & \multirow{2}{*}{COD} & \multirow{2}{*}{GOD} & \multirow{2}{*}{Plug-Play} \\ \cline{1-1} \cline{6-7} Faster R-CNN [58] & \(\bigtimes\) & & Object Detection & - & & \(\bigtimes\) & ✓ & \(\bigtimes\) \\ YOLO [52] & \(\bigtimes\) & Object Detection & - & - & \(\bigtimes\) & ✓ & \(\bigtimes\) \\ DeTR [8] & \(\bigtimes\) & Object Detection & - & - & \(\bigtimes\) & ✓ & \(\bigtimes\) \\ DGNet [34] & \(\bigtimes\) & Object Detection & - & - & ✓ & \(\bigtimes\) & \(\bigtimes\) \\ SINet-v2 [17] & \(\bigtimes\) & Object Detection & - & - & ✓ & \(\bigtimes\) & \(\bigtimes\) \\ JCSOD [40] & \(\bigtimes\) & Object Detection & - & - & ✓ & \(\bigtimes\) & \(\bigtimes\) \\ OGAN [60] & ✓ & Disrupt & \(1\) & Learnable & - & - & \(\bigtimes\) \\ Ruiz _et al_. [59] & ✓ & Disrupt & \(1\) & Learnable & - & - & \(\bigtimes\) \\ Yeh _et al_. [71] & ✓ & Disrupt & \(1\) & Learnable & - & - & \(\bigtimes\) \\ FakeTagger [68] & ✓ & Tagging & \(\geq 1\) & Fixed, Id-dependent & - & - & \(\bigtimes\) \\ Asnani _et al_. [1] & ✓ & Manipulation Detection & \(\geq 1\) & Learnable set, Image-independent & - & - & ✓ \\ MaLP [2] & ✓ & Manipulation Localization & \(\geq 1\) & Learnable set, Image-independent & - & - & ✓ \\ PrObeD (Ours) & ✓ & Object Detection & \(\geq 1\) & Learnable, Image-dependent & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of PrObeD with prior works. ### Background #### 3.1.1 Passive Object Detection Although generic \(2D\) object detection and camouflage detection are similar problems, they have different objective functions. Therefore, we treat them as two different problems and define their objectives separately. **Generic 2D Object Detection.** Let \(\mathbf{I}_{j}\) be the set of input images given to the generic 2D object detector \(\mathcal{O}\) with trainable parameters \(\theta\). Most of these detectors output two sets of predictions per image: (1) bounding box coordinates, \(\mathcal{O}(\mathbf{I}_{j})_{1}=\hat{T}\in\mathbb{R}^{4}\), (2) class logits, \(\mathcal{O}(\mathbf{I}_{j})_{2}=\hat{C}\in\mathbb{R}^{C}\), where \(N\) is the number of foreground object categories. If the ground-truth bounding box coordinates are \(T_{j}\), and the ground-truth category label is \(C\), the objective function of such detector is: \[\min_{\theta}\bigg{\{}\sum_{j}\Big{(}||\mathcal{O}(\mathbf{I}_{j};\theta)_{1}-T_{j }||_{2}\Big{)}-\sum_{j}\sum_{i=1}^{N}\Big{(}C_{j}^{i}\cdot\log(\mathcal{O}( \mathbf{I}_{j};\theta)_{2}))\Big{)}\bigg{\}}. \tag{1}\] **Camouflaged Object Detection.** Let \(\mathbf{I}_{j}\) be the input image set given to the camouflaged object detector \(\mathcal{O}\) with trainable parameters \(\theta\), and \(\mathbf{G}_{j}\) be the ground-truth segmentation map. Prior passive works predict a segmentation map with the following objective: \[\min_{\theta}\bigg{\{}\sum_{j}\Big{(}\Big{|}\Big{|}\mathcal{O}(\mathbf{I}_{j}; \theta)-\mathbf{G}_{j}\Big{|}\Big{|}_{2}\Big{)}\bigg{\}}. \tag{2}\] #### 3.1.2 Proactive Object Detection Proactive schemes [1, 2] encrypt the input images with the template to aid manipulation detection/localization. Such schemes take an input image \(\mathbf{I}_{j}\in\mathbb{R}^{H\times W\times 3}\) and learns a template \(\mathbf{S}_{j}\in\mathbb{R}^{H\times W}\). PrObeD uses image-dependent templates to improve object detection. 
Given an input image \(\mathbf{I}_{j}\in\mathbb{R}^{H\times W\times 3}\), PrObeD learns to output a template \(\mathbf{S}_{j}\in\mathbb{R}^{H\times W}\), which can be used by a transformation \(\mathcal{T}\) resulting in encrypted images \(\mathcal{T}(\mathbf{I}_{j})\). PrObeD uses element-wise multiplication as the transformation \(\mathcal{T}\), which is defined as: \[\mathcal{T}(\mathbf{I}_{j})=\mathcal{T}(\mathbf{I}_{j};\mathbf{S}_{j})=\mathbf{I}_{j}\odot\bm {S}_{j}. \tag{3}\] ### Mathematical Analysis of Passive and Proactive Detectors PrObeD optimizes the template to improve the performance of the object detector. We argue that this template helps arrive at a better global minima representing the optimal parameters \(\theta\). We now define the following lemma to support our argument: **Lemma 1**.: _Converged weights of proactive and passive detectors. Consider a linear regression model that regresses an input image \(\mathbf{I}_{j}\) under an additive noise setup to obtain the 2D coordinates. Assume the noise under consideration \(e\) is a normal random variable \(\mathcal{N}(0,\sigma^{2})\). Let \(\mathbf{w}\) and \(\mathbf{w}^{*}\) denote the trained weights of the pretrained linear regression model and the optimal weights of the linear regression model. Also, assume SGD optimizes the model parameters with decreasing step size \(s\) such that the steps are square summable i.e., \(\mathcal{S}=\lim\limits_{t\rightarrow\infty}\sum\limits_{k=1}^{t}s_{k}^{2}\) exist, and the noise is independent of the image. Then, there exists a template \(\mathbf{S}_{j}\in[0,1]\) for the image \(\mathbf{I}_{j}\) such that the multiplicative transformation of images as the input results in a trained weight \(\mathbf{w}^{\prime}\) closer to the optimal weight than the originally trained weight \(\mathbf{w}\). In other words,_ \[\mathbb{E}(||\mathbf{w}^{\prime}-\mathbf{w}^{*}||_{2})<\mathbb{E}(||\mathbf{w}-\mathbf{w}^{*} ||_{2}). \tag{4}\] The proof of Lemma 1 is in supplementary. We use the variance of the gradient of the encrypted images to arrive at this lemma. We next use Lemma 1 to derive the following theorem: **Theorem 1**.: _AP comparison of proactive and passive detectors. Consider a linear regression model that regresses an input image \(\mathbf{I}_{j}\) under an additive noise setup to obtain the 2D coordinates. Assume the noise under consideration \(e\) is a normal random variable \(\mathcal{N}(0,\sigma^{2})\). Let \(\mathbf{w}\) and \(\mathbf{w}^{*}\) denote the trained weights of the pretrained linear regression model and the optimal weights of the linear regression model. Also, assume SGD optimizes the model parameters with decreasing step size \(s\) such that the steps are square summable i.e., \(\mathcal{S}=\lim\limits_{t\rightarrow\infty}\sum\limits_{k=1}^{t}s_{k}^{2}\) exist, and the noise is independent of the image. Then, the AP of the proactive detector is better than the AP of the passive detector._ The proof of Theorem 1 is in the supplementary. We use the Lemma 1 and the non-decreasing nature of AP w.r.t. IoU to arrive at this theorem. Next, we adapt the objectives of Eqs. 
(1) and (2) to incorporate the proactive methods as follows: \[\min_{\theta,\mathbf{S}_{j}}\bigg{\{}\sum_{j}\Big{(}||\mathcal{O}(\mathcal{T}(\mathbf{I}_{j};\mathbf{S}_{j});\theta)_{1}-T_{j}||_{2}\Big{)}-\sum_{j}\sum_{i=1}^{N}\Big{(}C_{j}^{i}\cdot\text{log}(\mathcal{O}(\mathcal{T}(\mathbf{I}_{j};\mathbf{S}_{j});\theta)_{2})\Big{)}\bigg{\}}, \tag{5}\] \[\min_{\theta,\mathbf{S}_{j}}\bigg{\{}\sum_{j}\Big{(}\Big{|}\Big{|}\mathcal{O}(\mathcal{T}(\mathbf{I}_{j};\mathbf{S}_{j});\theta)-\mathbf{G}_{j}\Big{|}\Big{|}_{2}\Big{)}\bigg{\}}. \tag{6}\] ### PrObeD Our proposed approach comprises three stages: template generation, template recovery, and detector fine-tuning. First, we use an encoder network to generate an image-dependent template for image encryption. This encrypted image is further used to recover the template through a decoder network. Finally, the object detector is fine-tuned using the encrypted images. All three stages are trained in an end-to-end fashion. While all the stages are used for training PrObeD, we specifically use only stages \(1\) and \(3\) for inference. We will now describe each stage in detail. #### 3.3.1 Proactive Wrapper Our proposed approach consists of three stages, as shown in Fig. 2. However, only the first two stages are part of our proposed proactive wrapper, which can be applied to an object detector to improve its performance. Figure 2: **Overview of PrObeD. PrObeD consists of three stages: (1) template generation, (2) template recovery, and (3) detector fine-tuning. The templates are generated by encoder network \(\mathcal{E}\) to encrypt the input images. The decoder network \(\mathcal{D}\) is used to recover the template from the encrypted images. Finally, the encrypted images are used to fine-tune the object detector to perform detection. We train all the stages in an end-to-end manner. However, for inference, we only use stages \(1\) and \(3\). Best viewed in color.** **Stage 1: Template Generation.** Prior works learn a set of templates [1; 2] in their proactive schemes. This set of templates is enough to perform the respective downstream tasks as the generative model manipulates the template, which is easy to capture with a set of learnable templates. However, for object detection tasks, every image has unique object characteristics such as size, appearance, and color that can vary significantly. This variability present in the images may exceed the descriptive capacity of a finite set of templates, thereby necessitating the use of image-specific templates to accurately represent the range of object features present in each image. In other words, a fixed set of templates may not be sufficiently flexible to capture the diversity of visual features across the given set of input images, thus demanding more adaptable, image-dependent templates. Motivated by the above argument, we propose to generate the template \(\mathbf{S}_{j}\) for every image using an encoder network. We hypothesize that highlighting the area of the key foreground objects would be beneficial for object detection. Therefore, for GOD, we use the ground-truth bounding boxes \(T^{G}\) to generate the pseudo ground-truth segmentation map.
Specifically, for any image \(\mathbf{I}_{j}\), if the bounding box coordinates are \(T^{G}_{j}=\{x_{1},x_{2},y_{1},y_{2}\}\), we define the pseudo ground-truth segmentation map as: \[\forall m\in[0,H],n\in[0,W],\text{ we have}\] \[\mathbf{G}_{j}(m,n)=1\text{ if }x_{1}\leq m\leq x_{2}\text{ and }y_{1}\leq n\leq y_{2},\text{ otherwise }0\] However, for COD, the dataset already has the ground-truth segmentation map \(\mathbf{G}_{j}\), which we use as the supervision for the encoder to output the templates with semantic information of the image to be restricted only in the region of interest for the detector. For both GOD and COD, we minimize the cosine similarity (Cos) between \(\mathbf{S}_{j}\) and \(\mathbf{G}_{j}\) as the supervision for the encoder network. The encoder loss \(J_{E}\) is as follows: \[J_{E}=1-\text{Cos}(\mathbf{S}_{j},\mathbf{G}_{j})=1-\text{Cos}(\mathcal{E}(\mathbf{I}_{j} ),\mathbf{G}_{j}). \tag{7}\] This generated template acts as a mask for the input image to highlight the object region of interest for the detector. We use this template with the transformation \(\mathcal{T}\) to encrypt the input image as \(\mathcal{T}(\mathbf{I}_{j};\mathbf{S}_{j})=\mathbf{I}_{j}\odot\mathbf{S}_{j}\). As we start from the pretrained model of object detector \(\mathcal{O}\), we initialize the bias of the last layer of the encoder as 0 so that for the first few iterations, \(\mathbf{S}_{j}\approx\mathbf{1}\). This is to ensure that the distribution of \(\mathbf{I}_{j}\) and \(\mathcal{T}(\mathbf{I}_{j};\mathbf{S}_{j})\) remains similar for the first few iterations, and \(\mathcal{O}\) doesn't encounter a sudden change in its input distribution. Stage 2: Template Recovery.So far, we have discussed the generation of template \(\mathbf{S}_{j}\) using \(\mathcal{E}\), which will be used as a mask to encrypt the input image. The encrypted images are used for two purposes: (1) recovery of templates and (2) fine-tuning of the object detector. The main intuition of recovering the templates is from the prior works on image steganalysis [55, 56] and proactive schemes [1, 2]. Motivated by these works, we draw the following insight: _"To properly learn the optimal template and embed it onto the input images, it is beneficial to recover the template from encrypted images."_ To perform recovery, we exploit an encoder-decoder approach. Using this approach leverages the strengths of the encoder network \(\mathcal{E}\) for feature extraction, capturing the most useful salient details, and the decoder network \(\mathcal{D}\) for information recovery, allowing for efficient and effective encryption and decryption of the template. We also empirically show that not using the decoder to recover the templates harms the object detection performance. To supervise \(\mathcal{D}\) in recovering \(\mathbf{S}_{j}\) from \(\mathcal{T}(\mathbf{I}_{j};\mathbf{S}_{j})\), we propose to maximize the cosine similarity between the recovered template, \(\mathbf{S}^{{}^{\prime}}_{j}\) and \(\mathbf{S}_{j}\). The decoder loss is as follows: \[J_{D}=1-\text{Cos}(\mathbf{S}^{{}^{\prime}}_{j},\mathbf{S}_{j})=1-\text{Cos}(\mathcal{ D}(\mathcal{T}(\mathbf{I}_{j};\mathbf{S}_{j})),\mathbf{S}_{j}). \tag{8}\] Stage 3: Detector Fine-tuning.Due to our encryption, the distribution of the images input to the pretrained \(\mathcal{O}\) changes. Thus, we fine-tune \(\mathcal{O}\) on the encrypted images \(\mathcal{T}(\mathbf{I}_{j};\mathbf{S})\). 
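The sketch below illustrates stages 1 and 2 of the wrapper in PyTorch (for illustration only; it assumes the encoder and decoder are any networks mapping \(3\times H\times W\) images to single-channel maps of the same spatial size, that box coordinates are integer pixel indices following the index convention in the text, and that the names are ours):

```python
import torch
import torch.nn.functional as F

def box_to_pseudo_mask(box, H, W):
    """Pseudo ground-truth map G for GOD: 1 inside the box (x1, x2, y1, y2), 0 elsewhere.
    Per-image maps would be stacked (and given a channel dim) to form a batch."""
    x1, x2, y1, y2 = box
    G = torch.zeros(H, W)
    G[x1:x2 + 1, y1:y2 + 1] = 1.0
    return G

def cosine_loss(a, b):
    """1 - Cos(a, b) over flattened maps, as in Eqs. (7) and (8)."""
    return 1.0 - F.cosine_similarity(a.flatten(1), b.flatten(1), dim=1).mean()

def wrapper_forward(encoder, decoder, images, G):
    """One pass of stages 1 and 2; images: (B, 3, H, W), G: (B, 1, H, W)."""
    S = encoder(images)              # image-dependent template (stage 1)
    enc = images * S                 # encryption T(I; S) = I * S, Eq. (3)
    S_rec = decoder(enc)             # recovered template (stage 2)
    J_E = cosine_loss(S, G)          # encoder loss, Eq. (7)
    J_D = cosine_loss(S_rec, S)      # decoder loss, Eq. (8)
    return enc, J_E, J_D
```

The encrypted batch is then passed to the object detector, whose own loss is combined with \(J_{E}\) and \(J_{D}\) for end-to-end training, as described next.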
As proposed in Theorem 1, given the encrypted images \(\mathcal{T}(\mathbf{I}_{j};\mathbf{S})\), we use the pretrained detector \(\mathcal{O}\) with parameters \(\theta\) to arrive at a better local minimum. Therefore, the general objectives of GOD and COD in Eq. (5) and Eq. (6) change as follows: \[\min_{\theta,\theta_{\mathcal{E}},\theta_{\mathcal{D}}}\bigg{\{}\sum_{j}\Big{(}\|\mathcal{O}(\mathcal{T}(\mathbf{I}_{j};\mathcal{E}(\mathbf{I}_{j};\theta_{\mathcal{E}}));\theta,\theta_{\mathcal{D}})_{1}-T_{j}\|_{2}-\sum_{i=1}^{N}\big{(}C^{i}_{j}\,\text{log}(\mathcal{O}(\mathcal{T}(\mathbf{I}_{j};\mathcal{E}(\mathbf{I}_{j};\theta_{\mathcal{E}}));\theta,\theta_{\mathcal{D}})_{2})\big{)}\Big{)}\bigg{\}}, \tag{9}\] \[\min_{\theta,\theta_{\mathcal{E}},\theta_{\mathcal{D}}}\bigg{\{}\sum_{j}\Big{(}\Big{\|}\mathcal{O}(\mathcal{T}(\mathbf{I}_{j};\mathcal{E}(\mathbf{I}_{j};\theta_{\mathcal{E}}));\theta,\theta_{\mathcal{D}})-\mathbf{G}_{j}\Big{\|}_{2}\Big{)}\bigg{\}}. \tag{10}\] We use the detector-specific loss function \(J_{OBJ}\) of \(\mathcal{O}\) along with the encoder and decoder losses in Eq. (7) and Eq. (8) to train all three stages. The overall loss function \(J\) to train PrObeD is as follows: \[J=\lambda_{OBJ}J_{OBJ}+\lambda_{E}J_{E}+\lambda_{D}J_{D}. \tag{11}\]

## 4 Experiments

We apply PrObeD to two categories of object detectors: GOD and COD.

**GOD Baselines.** For GOD, we apply PrObeD to four detectors with varied architectures: two-stage, one-stage, and transformer-based detectors, namely, Faster R-CNN [58], YOLO [52], Sparse R-CNN, and DeTR [8]. We use these works as baselines for three reasons: (1) varied architecture types, (2) their increased prevalence in the community, and (3) varied timelines (from earlier to recent detectors). We use the PyTorch [51] code of the respective detectors for our GOD experiments and use the corresponding GODs as our baselines. For YOLOv5 and DeTR, we use the official repositories released by the authors; for Faster R-CNN, we use the public repository "Faster R-CNN.pytorch". For the other GOD detectors, we use the Detectron2 library for the pre-trained detector. We use the ResNet101 backbone for Faster R-CNN, Sparse R-CNN and DeTR, and CSPDarknet53 for YOLOv5.

**COD Baselines.** For COD, we apply PrObeD to the current SoTA camouflage detector DGNet [34] and use DGNet as our baseline. For all object detectors, we use the pretrained models released by the authors and fine-tune them with PrObeD. Please see the supplementary for more details.

**Datasets.** Our experiments use the MS-COCO \(2017\)[44] dataset for GOD, while we use the CAMO [39], COD\(10\)K [17], and NC4K [47] datasets for COD. We use the following splits of these datasets:

* MS-COCO \(2017\) Val Split [44]: It includes \(118{,}287\) images for training and \(5K\) for testing.
* COD\(10\)K Val Split [17]: It includes \(4{,}046\) camouflaged images for training and \(2{,}026\) for testing.
* CAMO Val Split [39]: It includes \(1K\) camouflaged images for training and \(250\) for testing.
* NC4K Val [47]: It includes \(4{,}121\) NC4K images. We use it for generalization testing as in [34].

**Evaluation Metrics.** We use mean average precision averaged over multiple IoU thresholds in \([0.5,0.95]\) (AP) for GOD as in [44]. We also report results at a threshold of \(0.5\) (AP\({}_{50}\)), at a threshold of \(0.75\) (AP\({}_{75}\)), and at different object sizes: small (AP\({}_{S}\)), medium (AP\({}_{M}\)), and large (AP\({}_{L}\)).
For COD, we use E-measure \(E_{m}\), S-measure \(S_{m}\), weighted F1 score \(wF_{\beta}\) and mean absolute error \(MAE\) as [34]. ### GOD Results **Quantitative Results.** Tab. 2 shows the results of applying PrObeD on GOD networks. PrObeD improves the average precision of all three detectors. The performance gain is significant for Faster R-CNN. As Faster R-CNN is an older detector, it was at a worse minima to start with. PrObeD improves the convergence weight of Faster R-CNN by a significant margin, thereby improving the performance. We further experiment with two variations of Faster R-CNN, namely, Faster R-CNN + \begin{table} \begin{tabular}{l|c c c|c c c} \hline Method & AP \(\uparrow\) & AP\({}_{50}\) \(\uparrow\) & AP\({}_{75}\) \(\uparrow\) & AP\({}_{S}\) \(\uparrow\) & AP\({}_{M}\) \(\uparrow\) & AP\({}_{L}\) \(\uparrow\) \\ \hline Faster R-CNN [58] & \(19.3\) & \(42.5\) & \(16.9\) & \(1.8\) & \(17.9\) & \(39.3\) \\ Faster R-CNN [58]\(+\)PrObeD & \(\mathbf{31.7}\) & \(\mathbf{52.6}\) & \(\mathbf{33.3}\) & \(\mathbf{11.0}\) & \(\mathbf{35.5}\) & \(\mathbf{51.1}\) \\ \hline Faster R-CNN \(+\) FPN [42] & \(37.3\) & \(58.0\) & \(40.6\) & \(21.4\) & \(41.0\) & \(48.4\) \\ Faster R-CNN \(+\) FPN [42] \(+\) Seg. Mask [30] & \(38.2\) & \(60.3\) & \(41.7\) & \(22.1\) & \(43.2\) & \(\mathbf{51.2}\) \\ Faster R-CNN \(+\) FPN [42] \(+\) PrObeD & \(\mathbf{38.5}\) & \(\mathbf{60.4}\) & \(\mathbf{41.9}\) & \(\mathbf{22.5}\) & \(\mathbf{43.4}\) & \(49.8\) \\ \hline Sparse R-CNN [62] & \(37.6\) & \(55.6\) & \(40.2\) & \(20.5\) & \(39.6\) & \(52.9\) \\ Sparse R-CNN [62]\(+\) PrObeD & \(\mathbf{39.2}\) & \(\mathbf{57.5}\) & \(\mathbf{41.5}\) & \(\mathbf{21.7}\) & \(\mathbf{40.1}\) & \(\mathbf{53.6}\) \\ \hline YOLOv5 [52] & \(48.9\) & \(67.6\) & \(53.1\) & \(31.8\) & \(54.4\) & \(62.3\) \\ YOLOv5 [52]\(+\) PrObeD & \(\mathbf{49.4}\) & \(\mathbf{67.9}\) & \(\mathbf{53.5}\) & \(\mathbf{32.0}\) & \(\mathbf{55.1}\) & \(\mathbf{62.6}\) \\ \hline DeTR [8] & \(41.9\) & \(62.3\) & \(44.1\) & \(20.3\) & \(45.8\) & \(61.0\) \\ DeTR [8]\(+\) PrObeD & \(\mathbf{42.1}\) & \(\mathbf{62.6}\) & \(\mathbf{44.4}\) & \(\mathbf{20.4}\) & \(\mathbf{46.0}\) & \(\mathbf{61.3}\) \\ \hline \end{tabular} \end{table} Table 2: GOD results on MS-COCO val split. PrObeD improves the performance of all GOD at all thresholds and across all categories. \begin{table} \begin{tabular}{l|c c c c|c c c c|c c c} \hline \multirow{2}{*}{Method} & \multicolumn{4}{c|}{CAMO} & \multicolumn{4}{c|}{COD\(10\)K} & \multicolumn{4}{c}{NC4K} \\ \cline{2-13} & E\({}_{m}\)\(\uparrow\) & S\({}_{m}\)\(\uparrow\) & wF\({}_{\beta}\)\(\uparrow\) & MAE\(\downarrow\) & E\({}_{m}\)\(\uparrow\) & S\({}_{m}\)\(\uparrow\) & wF\({}_{\beta}\)\(\uparrow\) & MAE\(\downarrow\) & E\({}_{m}\)\(\uparrow\) & S\({}_{m}\)\(\uparrow\) & wF\({}_{\beta}\)\(\uparrow\) & MAE\(\downarrow\) \\ \hline DGNet [34] & \(0.859\) & \(0.791\) & \(0.681\) & \(0.079\) & \(0.833\) & \(0.776\) & \(0.603\) & \(0.046\) & \(0.876\) & \(0.815\) & \(0.710\) & \(0.059\) \\ \(\pm\) PrObeD & \(\mathbf{0.871}\) & \(\mathbf{0.797}\) & \(\mathbf{0.702}\) & \(\mathbf{0.071}\) & \(\mathbf{0.869}\) & \(\mathbf{0.803}\) & \(\mathbf{0.661}\) & \(\mathbf{0.037}\) & \(\mathbf{0.900}\) & \(\mathbf{0.838}\) & \(\mathbf{0.755}\) & \(\mathbf{0.049}\) \\ \hline \end{tabular} \end{table} Table 3: COD results on CAMO, COD\(10\)K and NC4K datasets. PrObeD outperforms DGNet on all datasets and metrics. FPN and Sparse-RCNN. We observe an increase in the performance of both detectors. 
PrObeD also improves newer detectors like YOLOv5 and DeTR, although the gains are smaller compared to Faster R-CNN. We believe this happens because the newer detectors leave little room for improvement, so PrObeD improves their performance only slightly. We next compare PrObeD with a work that leverages a segmentation map as a mask for object detection. We compare our performance with Mask R-CNN [30], which uses an image segmentation branch to help with object detection. Tab. 2 shows that the gains using Mask R-CNN are lower than those using our proactive wrapper.

**Qualitative Results.** Fig. 3 shows qualitative results for the MS-COCO \(2017\) dataset. PrObeD clearly improves the performance of pretrained Faster R-CNN for three types of errors: missed predictions, false positives, and localization errors. PrObeD has fewer missed predictions, fewer false positives, and better bounding box localization. We also visualize the generated and recovered templates. We see that the template captures the object semantics of the input images. When the template is multiplied with the input image, it highlights the foreground objects, thereby making the object detector's task easier.

**Error Analysis.** We show the error analysis [6] for GOD in Section \(4\) of the supplementary. We observe that all GOD detectors make mistakes mainly due to five types of errors: classification, localization, duplicate detection, background detection, and missed detection. The main reason for the degraded performance is the errors in which the foreground-background boundary is missed. These errors include localization, background detection, and missed detection. Our proactive wrapper significantly corrects these errors, as the template has object semantics, which, when multiplied with the input image, highlights the foreground objects, consequently simplifying the task of object detection.

Figure 4: **Qualitative COD Results** on CAMO, COD10K, and NC4K datasets from top to bottom, after applying PrObeD. (a) input images, (b) ground-truth camouflaged map, (c) DGNet [34] predictions, (d) DGNet [34] + PrObeD predictions, (e) generated PrObeD template, and (f) recovered PrObeD template. The PrObeD template has the semantics of the camouflaged object, which aids DGNet in detection.

Figure 3: **Qualitative GOD Results** on the MS-COCO \(2017\) dataset. (a) ground-truth annotations, (b) Faster R-CNN [58] predictions, (c) Faster R-CNN [58] + PrObeD predictions, (d) generated template, and (e) recovered template. We highlight the objects responsible for improvement in (c) as compared to (b). The yellow box represents better localization, the blue box represents false positives, and the red box represents missed predictions. PrObeD improves on all these errors made by (b).

### COD Results

**Quantitative Results.** Tab. 3 shows the results of applying PrObeD to DGNet [34] on three different datasets. PrObeD, when applied on top of DGNet, outperforms DGNet on all four metrics for all datasets. The biggest gains appear on the COD\(10\)K and NC\(4\)K datasets. This is impressive as these datasets have more diverse testing images than CAMO. As NC\(4\)K is only a testing set, the higher performance of PrObeD demonstrates its superior generalizability as compared to DGNet [34]. This result agrees with the observation in [1; 2], where proactive-based approaches exhibit improved generalization on manipulation detection and localization tasks.

**Qualitative Results.** Fig.
4 visualizes the predicted camouflaged map for DGNet before and after applying PrObeD on testing samples of all three datasets. PrObeD improves the predicted camouflaged map, with less blurriness along the boundaries and better localization of the camouflaged object. As observed before for GOD, the generated and recovered template has the semantics of the camouflaged objects, which after multiplication intensifies the foreground object, resulting in better segmentation by DGNet. ### Ablation Study **Comparison with Proactive Works.** The prior proactive works perform a different task of image manipulation detection and localization. Therefore, these works are not directly comparable to our proposed proactive wrapper, which performs a different task of object detection as described in Tab. 1. However, manipulation localization and COD both involve a prediction of a localization map, segmentation, and fakeness map, respectively. This inspires us to experiment with MaLP [2] for the task of COD. We train the localization module of MaLP supervised with the COD datasets. The results are shown in Tab. 4. We see that MaLP is not able to perform well for all three datasets. MaLP is designed for estimating universal templates rather than templates tailored to specific images. It shows the significance of image-specific templates in object detection. While MaLP's design with image-independent templates is effective for localizing image manipulation, applying it to object detection has a negative impact on performance. **Framework Design.** PrObeD consists of blocks to improve the object detector. Tab. 5 ablates different versions of PrObeD to highlight the importance of each block in our design. PrObeD utilizes an encoder network \(\mathcal{E}\) to learn image-dependent templates aiding the detector. We remove the encoder \(\mathcal{E}\) from our network, replacing it with a fixed template. We observe that the performance deteriorates by a large margin. Next, we make this template learnable as proposed in PrObeD, but only a single template would be used for all the input images. This choice also results in worse performance, highlighting that image-dependent templates are necessary for object detection. Finally, we remove the decoder network \(\mathcal{D}\), which is used to recover the template from the encrypted images. Although this results in a better performance than the pretrained Faster R-CNN, we observe a drop as compared to PrObeD. Therefore, as discussed in Sec. 3.3, the recovery of templates is indeed a necessary and beneficial step for boosting the performance of the proactive schemes. 
\begin{table} \begin{tabular}{l|l l|l l|l l|l l l|l l} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c|}{CAMO} & \multicolumn{4}{c|}{COD\(10\)K} & \multicolumn{4}{c}{NC\(4\)K} \\ \cline{2-13} & \(\text{E}_{m}\uparrow\) & \(\text{S}_{m}\uparrow\) & \(\text{wF}_{\beta}\uparrow\) & MAE\(\downarrow\) & \(\text{E}_{m}\uparrow\) & \(\text{S}_{m}\uparrow\) & \(\text{wF}_{\beta}\uparrow\) & MAE\(\downarrow\) & \(\text{E}_{m}\uparrow\) & \(\text{S}_{m}\uparrow\) & \(\text{wF}_{\beta}\uparrow\) & MAE\(\downarrow\) \\ \hline MaLP [2] & \(0.474\) & \(0.514\) & \(0.218\) & \(0.254\) & \(0.491\) & \(0.520\) & \(0.150\) & \(0.202\) & \(0.503\) & \(0.548\) & \(0.228\) & \(0.222\) \\ PrObeD & \(\mathbf{0.871}\) & \(\mathbf{0.797}\) & \(\mathbf{0.702}\) & \(\mathbf{0.071}\) & \(\mathbf{0.869}\) & \(\mathbf{0.803}\) & \(\mathbf{0.661}\) & \(\mathbf{0.037}\) & \(\mathbf{0.900}\) & \(\mathbf{0.838}\) & \(\mathbf{0.755}\) & \(\mathbf{0.049}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Performance comparison with proactive works. MaLP [2] has a significantly deteriorated performance than PrObeD. \begin{table} \begin{tabular}{l|l|l l l|l l l l} \hline \hline \multicolumn{1}{c|}{Changed} & \multicolumn{1}{c|}{From\(\blackminus\)To} & \multicolumn{1}{c|}{AP} & \(\uparrow\) & \(\text{AP}_{50}\uparrow\) & \(\text{AP}_{75}\uparrow\) & \(\text{AP}_{S}\uparrow\) & \(\text{AP}_{M}\uparrow\) & \(\text{AP}_{L}\uparrow\) \\ \hline \multirow{2}{*}{Template} & \multicolumn{1}{c|}{Image Dependent\(\blackminus\)Fixed} & \(17.6\) & \(37.9\) & \(15.1\) & \(1.3\) & \(15.4\) & \(39.5\) \\ & \multicolumn{1}{c|}{Image Dependent\(\blackminus\)Universal} & \(19.4\) & \(42.6\) & \(17.1\) & \(1.9\) & \(18.0\) & \(39.4\) \\ \hline Decoder & Yes\(\blackminus\)No & \(25.2\) & \(46.1\) & \(26.2\) & \(5.3\) & \(26.6\) & \(24.1\) \\ \hline Transformation & Multiply–\(\blackminus\)Add & \(19.2\) & \(42.3\) & \(20.1\) & \(1.7\) & \(17.9\) & \(39.1\) \\ \hline PrObeD & - & & \(\mathbf{31.7}\) & \(\mathbf{52.6}\) & \(\mathbf{33.3}\) & \(\mathbf{11.0}\) & \(\mathbf{35.5}\) & \(\mathbf{51.1}\) \\ \hline \hline \end{tabular} \end{table} Table 5: **Ablation studies** of PrObeD using Faster R-CNN GOD on MS-COCO \(2017\) dataset. Removing the encoder/decoder network or adding the template results in degraded performance. **Encryption Process.** PrObeD includes an encryption process as described in Eq. (3), which involves multiplying the template with the input image. This process makes the template act as a mask, highlighting the foreground for better detection. However, prior proactive works [1, 2] consider adding templates to achieve better results. Thus, we ablate by changing the encryption process to template addition. Tab. 5 shows that template addition degrades performance by a significant margin w.r.t. our multiplication scheme. This shows that encryption is a key step in formulating proactive schemes, and the same encryption process may not work for all tasks. **More Training Time.** We perform an ablation to show that the performance gain of the detector is due to our proactive wrapper instead of training for more iterations of the pretrained object detector. Results in Tab. 6 show that although more training iterations for the detector has a performance gain, it's not enough to get the significant margin in performance as achieved by PrObeD. This shows that extra training can help, but only up to a certain extent. 
**Inference Time.** Tab. 6 reports the computational overhead of applying PrObeD to different object detectors, averaged across \(1,000\) images on an NVIDIA V100 GPU. Our encoder network has \(17\) layers, which adds extra inference cost. For detectors with bulky architectures like Faster R-CNN (ResNet101) and DeTR (transformer), the overhead is quite small, \(8.7\%\) and \(7.2\%\), respectively. This additional cost is minor compared to the performance gain of the detectors, especially Faster R-CNN. For a lighter detector like YOLOv5, the overhead increases to \(29.1\%\). There is thus a trade-off when applying PrObeD to detectors with varied architectures: PrObeD is more beneficial to bulky two-stage and transformer-based detectors than to one-stage detectors.

## 5 Conclusion

Under stated assumptions, we mathematically prove that the proactive method results in a better-converged model than the passive detector and, consequently, a better 2D object detector. Based on this finding, we propose a proactive scheme wrapper, PrObeD, which enhances the performance of camouflaged and generic object detectors. The wrapper outputs an image-dependent template using an encoder network, which encrypts the input images. These encrypted images are then used to fine-tune the object detector. Extensive experiments on the MS-COCO, CAMO, COD\(10\)K, and NC\(4\)K datasets show that PrObeD improves the overall object detection performance for both GOD and COD detectors.

**Limitations.** Our proposed scheme has the following limitations. First, PrObeD does not provide a significant gain for recent object detectors such as YOLO and DeTR. Second, the proactive wrapper should be thoroughly tested on other object detectors to further establish the generalizability of PrObeD. Finally, we only experiment with simple multiplication and addition as the encryption scheme; a more sophisticated encryption process might further improve the object detectors' performance. We leave these directions for future work.
\begin{table} \begin{tabular}{l|c|c c|c c|c|c} \hline \hline Method & Iterations & \(\text{AP}\uparrow\) & \(\text{AP}_{50}\uparrow\) & \(\text{AP}_{75}\uparrow\) & \(\text{AP}_{S}\uparrow\) & \(\text{AP}_{M}\uparrow\) & \(\text{AP}_{L}\uparrow\) & Time (\(ms\)) \\ \hline Faster R-CNN [58] & \(1\times\) & \(19.3\) & \(42.5\) & \(16.9\) & \(1.8\) & \(17.9\) & \(39.3\) & \(161.1\) \\ Faster R-CNN [58] & \(2\times\) & \(20.1\) & \(46.6\) & \(21.5\) & \(3.3\) & \(20.3\) & \(41.2\) & \\ Faster R-CNN [58] \(+\)PrObeD & \(2\times\) & \(\mathbf{31.7}\) & \(\mathbf{52.6}\) & \(\mathbf{33.3}\) & \(\mathbf{11.0}\) & \(\mathbf{35.5}\) & \(\mathbf{51.1}\) & \(175.3\) (\(\uparrow\) 8.7\%) \\ \hline YOLOv5 [52] & \(1\times\) & \(48.9\) & \(67.6\) & \(53.1\) & \(31.8\) & \(54.4\) & \(62.3\) & \\ YOLOv5 [52] & \(2\times\) & \(48.8\) & \(67.7\) & \(53.0\) & \(31.8\) & \(54.7\) & \(62.4\) & \\ YOLOv5 [52] \(+\)PrObeD & \(2\times\) & \(\mathbf{49.4}\) & \(\mathbf{67.9}\) & \(\mathbf{53.5}\) & \(\mathbf{32.0}\) & \(\mathbf{55.1}\) & \(\mathbf{62.6}\) & \(62.7\) (\(\uparrow\) 29.1\%) \\ \hline DeTR [8] & \(1\times\) & \(41.9\) & \(62.3\) & \(44.1\) & \(20.3\) & \(45.8\) & \(61.0\) & \(194.2\) \\ DeTR [8] & \(2\times\) & \(41.9\) & \(62.4\) & \(44.0\) & \(20.1\) & \(45.9\) & \(61.1\) & \(194.2\) \\ DeTR [8] \(+\)PrObeD & \(2\times\) & \(\mathbf{42.1}\) & \(\mathbf{62.6}\) & \(\mathbf{44.4}\) & \(\mathbf{20.4}\) & \(\mathbf{46.0}\) & \(\mathbf{61.3}\) & \(208.4\) (\(\uparrow\) 7.2\%) \\ \hline \hline \end{tabular} \end{table} Table 6: **Ablation of training iterations** on Faster R-CNN. YOLOv5, and DeTR for more iterations similar to after applying PrObeD. We also report the inference time for all the detectors before and after applying PrObeD. Training object detectors proactively with PrObeD results in more performance gain compared to training passively for more iterations. PrObeD adds an overhead cost on top of the inference cost of detectors.
2304.13742
TR0N: Translator Networks for 0-Shot Plug-and-Play Conditional Generation
We propose TR0N, a highly general framework to turn pre-trained unconditional generative models, such as GANs and VAEs, into conditional models. The conditioning can be highly arbitrary, and requires only a pre-trained auxiliary model. For example, we show how to turn unconditional models into class-conditional ones with the help of a classifier, and also into text-to-image models by leveraging CLIP. TR0N learns a lightweight stochastic mapping which "translates" between the space of conditions and the latent space of the generative model, in such a way that the generated latent corresponds to a data sample satisfying the desired condition. The translated latent samples are then further improved upon through Langevin dynamics, enabling us to obtain higher-quality data samples. TR0N requires no training data nor fine-tuning, yet can achieve a zero-shot FID of 10.9 on MS-COCO, outperforming competing alternatives not only on this metric, but also in sampling speed -- all while retaining a much higher level of generality. Our code is available at https://github.com/layer6ai-labs/tr0n.
Zhaoyan Liu, Noel Vouitsis, Satya Krishna Gorti, Jimmy Ba, Gabriel Loaiza-Ganem
2023-04-26T18:00:00Z
http://arxiv.org/abs/2304.13742v1
# TR0N: Translator Networks for 0-Shot Plug-and-Play Conditional Generation ###### Abstract We propose TR0N, a highly general framework to turn pre-trained unconditional generative models, such as GANs and VAEs, into conditional models. The conditioning can be highly arbitrary, and requires only a pre-trained auxiliary model. For example, we show how to turn unconditional models into class-conditional ones with the help of a classifier, and also into text-to-image models by leveraging CLIP. TR0N learns a lightweight stochastic mapping which "translates" between the space of conditions and the latent space of the generative model, in such a way that the generated latent corresponds to a data sample satisfying the desired condition. The translated latent samples are then further improved upon through Langevin dynamics, enabling us to obtain higher-quality data samples. TR0N requires no training data nor fine-tuning, yet can achieve a zero-shot FID of \(10.9\) on MS-COCO, outperforming competing alternatives not only on this metric, but also in sampling speed - all while retaining a much higher level of generality. Our code is available at [https://github.com/layer6ai-labs/tr0n](https://github.com/layer6ai-labs/tr0n). Machine Learning, ICML ## 1 Introduction Large machine learning models have recently achieved remarkable success across various tasks (Brown et al., 2020; Jia et al., 2021; Nichol et al., 2022; Chowdhery et al., 2022; Rombach et al., 2022; Yu et al., 2022; Ramesh et al., 2022; Saharia et al., 2022; Reed et al., 2022). Nonetheless, training such models requires massive computational resources. Properly and efficiently leveraging existing large pre-trained models is thus of paramount importance. Yet, tractably combining the capabilities of these models in a plug-and-play manner remains a generally open problem. Mechanisms to achieve this task should ideally be modular and model-agnostic, such that one can easily swap out a model component for one of its counterparts (e.g. interchanging a GAN (Goodfellow et al., 2014) for a VAE (Kingma and Welling, 2014; Rezende et al., 2014), or swapping CLIP (Radford et al., 2021) for a new state-of-the-art text/image model). In this work, we study conditional generation through the lens of combining pre-trained models. Conditional generative models aim to learn a conditional distribution of data given some conditioning variable \(c\). They are typically trained from scratch on pairs of data with corresponding \(c\) (e.g. images \(x\), with corresponding class labels or text prompts fed through a language model \(c\)) (Mirza and Osindero, 2014; Sohn et al., 2015). Our goal is to take an arbitrary pre-trained unconditional pushforward generative model (Salmona et al., 2022; Ross et al., 2022) - i.e. a model \(G\) which transforms latent variables \(z\) sampled from a prior \(p(z)\) to data samples \(x=G(z)\) - and turn it into a conditional model. To this end, we propose TR0N, a Figure 1: Images generated by TR0N from corresponding text captions, obtained by finding adequate points on the latent space of a pre-trained GAN. Neither fine-tuning nor training data are used. **Top row**: BigGAN pre-trained on ImageNet. **Bottom row**: StyleGAN2 pre-trained on FFHQ. highly general framework to make pre-trained unconditional generative models conditional. TR0N assumes access to a pre-trained auxiliary model \(f\) that maps each data point \(x\) to its corresponding condition \(c=f(x)\), e.g. \(f\) could be a classifier, or a CLIP encoder. 
TR0N also assumes access to a function \(E(z,c)\) such that latents \(z\) for which \(G(z)\) "better satisfies" a condition \(c\) are assigned smaller values. Using this function, for a given \(c\), TR0N performs \(T\) steps of gradient minimization of \(E(z,c)\) over \(z\) to find latents that, after applying \(G\), will generate desired conditional data samples. However, we show that naively initializing the optimization of \(E\) is highly suboptimal. With this in mind, TR0N starts by learning a network that we use to better initialize the optimization process. We refer to this network as the translator network since it "translates" from a condition \(c\) to a corresponding latent \(z\) such that \(E(z,c)\) is small, essentially amortizing the optimization problem. Importantly, the translator network is trained _without fine-tuning \(G\) or \(f\) nor using a provided dataset_. In this sense, TR0N is a zero-shot method wherein the only trainable component is a lightweight translator network. Importantly, TR0N avoids the highly expensive training of a conditional model from scratch and is model-agnostic: we can use any \(G\) and any \(f\), which also makes it straightforward to update any of these components whenever a newer state-of-the-art version becomes available. We outline the procedure to train the translator network on the left panel of Figure 2. Once the translator network is trained, we use its output to initialize the optimization of \(E\). This reclaims any performance lost due to the amortization gap (Cremer et al., 2018; Kim et al., 2018), resulting in better local optima and faster convergence than naive initialization. In reality, TR0N is a stochastic method: the translator network is a conditional distribution \(q_{\theta}(z|c)\) that assigns high density to latents \(z\) such that \(E(z,c)\) is small, and we add noise during the gradient optimization of \(E\), which allows us to interpret TR0N as sampling with Langevin dynamics (Welling and Teh, 2011) using an efficient initialization scheme. We exemplify how to sample with TR0N on the right panel of Figure 2. Our contributions are: \((i)\) introducing translator networks and a particularly efficient parameterization of them, allowing for various ways to initialize Langevin dynamics; \((ii)\) framing TR0N as a highly general framework, whereas previous related works mostly focus on a single task with specific choices of \(G\) and \(f\); and \((iii)\) showing that TR0N empirically outperforms competing alternatives across tasks in image quality and computational tractability, while producing diverse samples; and that it can achieve an FID (Heusel et al., 2017) of \(10.9\) on MS-COCO (Lin et al., 2014). ## 2 Background Joint text/image modelsIn this work, we leverage pre-trained joint text/image models as a particular choice for both the auxiliary model \(f\) and to construct \(E\), enabling TR0N to be conditioned on either free-form text prompts or on image semantics. Recent joint text/image models such as CLIP learn a joint representation space \(\mathcal{C}_{\text{CLIP}}\) for images and texts. CLIP includes an image encoder \(f^{\text{img}}:\mathcal{X}\rightarrow\mathcal{C}_{\text{CLIP}}\) and a text encoder \(f^{\text{txt}}:\mathcal{T}\rightarrow\mathcal{C}_{\text{CLIP}}\), where \(\mathcal{X}\) is the space of images and \(\mathcal{T}\) is the space of text prompts, which are trained in such a way that images and texts that are semantically aligned are mapped to similar representations. 
More specifically, CLIP is such that the negative cosine similarity \(U_{\text{sim}}(f^{\text{img}}(x),f^{\text{txt}}(y))\) is small for semantically aligned image/text pairs \((x,y)\in\mathcal{X}\times\mathcal{T}\), and large for semantically unaligned pairs, where \(U_{\text{sim}}(c^{\prime},c)=-c^{\top}c^{\prime}/(\|c^{\prime}\|_{2}\|c\|_{2})\). Pushforward modelsWe use the term _pushforward model_ to refer to any generative model whose samples Figure 2: **Left panel**: The stochastic translator network learns to recover \(z\) from \(c=f(G(z))\). **Right panel**: The (stochastic) output \(z^{(0)}\) of the trained translator – which is such that \(G(z^{(0)})\) “roughly satisfies” condition \(c\) – initializes Langevin dynamics over \(E(z,c)\) which further improves the sample so as to better match \(c\). In the depicted example, \(G\) is a GAN trained on ImageNet, \(f\) the CLIP image encoder, \(c\) the CLIP text embedding corresponding to the given text prompt, and \(E(z,c)\) the negative cosine similarity between \(f(G(z))\) and \(c\). \(x\in\mathcal{X}\) can be obtained as \(x=G(z)\), where \(z\in\mathcal{Z}\) is a latent variable sampled from some (typically not trainable) prior \(p(z)\), and \(G:\mathcal{Z}\rightarrow\mathcal{X}\) is a neural network. Many models fall into this category, including generative adversarial networks (GANs), variational autoencoders (VAEs), normalizing flows Dinh et al. (2017); Durkan et al. (2019) and variants thereof Brehmer and Cranmer (2020); Caterini et al. (2021); Ross and Cresswell (2021), and more Tolstikhin et al. (2018); Loaiza-Ganem et al. (2022). We focus on GANs and VAEs since they use a low-dimensional latent space \(\mathcal{Z}\), which will later make the translator network's task easier. Our main goal is to turn a pre-trained unconditional pushforward model \((p(z),G)\) into a conditional model \((p(z|c),G)\). EBMs and Langevin dynamicsWe will later formalize the goal of TR0N as sampling from a distribution \(p(z|c)\) defined only up to proportionality, i.e. \(p(z|c)\propto e^{-\beta E(z,c)}\), where \(E:\mathcal{Z}\times\mathcal{C}\rightarrow\mathbb{R}\) is called the energy function, and the hyperparameter \(\beta>0\) controls the degree to which small values of \(E(z,c)\) correspond to large values of \(p(z|c)\), and vice-versa. We hereafter refer to this formulation as an energy-based model (EBM). While the energy function in EBMs is typically learnable Xie et al. (2016); Du and Mordatch (2019), in our work we define and fix an energy function that allows us to enforce the requirement that "applying \(G\) to a sample from \(p(z|c)\) satisfies condition \(c\)". Langevin dynamics is a method that allows us to sample from EBMs by constructing a Markov chain \((z^{(0)},z^{(1)},\dots)\) given by \[z^{(t+1)}=z^{(t)}-\frac{\beta\lambda^{(t)}}{2}\nabla_{z}E\left(z^{(t)},c\right) +\sqrt{\lambda^{(t)}}\epsilon^{(t)}, \tag{1}\] where the sequence \((\lambda^{(0)},\lambda^{(1)},\dots)\) is a hyperparameter, and \(\epsilon^{(t)}\sim\mathcal{N}(\epsilon;0,I)\). Under mild conditions and by sending \(\lambda^{(t)}\) to \(0\) at an appropriate rate, the limiting distribution of this Markov chain as \(t\rightarrow\infty\) is \(p(z|c)\). Langevin dynamics can be interpreted as gradient descent on \(E\) with added noise, and has been successfully applied to sample and train deep EBMs, where in practice it is common to deviate from theory and set \(\lambda^{(t)}=\lambda>0\) for all \(t\) (i.e. 
a single scalar hyperparameter \(\lambda\) is used) for improved empirical performance. Also, while in theory convergence does not depend upon the starting point \(z^{(0)}\), in practice this choice can greatly speed up convergence Hinton (2002); Nijkamp et al. (2020); Yoon et al. (2021), just as with gradient descent Boyd and Vandenberghe (2004); Glorot and Bengio (2010). ## 3 Tr0n ### Plug-and-play components of TR0N TR0N requires three key components to ensure that it can operate as a plug-and-play framework. First, TR0N takes an arbitrary pre-trained pushforward model \((p(z),G)\). TR0N also assumes access to a pre-trained auxiliary model \(f:\mathcal{X}\rightarrow\mathcal{C}\) that maps data to its corresponding condition. For example, if our goal is to condition on class labels, \(f\) would be a classifier, and \(\mathcal{C}\) the space of probability vectors of appropriate length. If we aim to condition on text, \(f\) could be given by the CLIP image encoder \(f^{\text{img}}\) - although we will see later that a different choice of \(f\) led us to improved empirical performance in this setting - and \(\mathcal{C}\) the latent space of CLIP, \(\mathcal{C}_{\text{CLIP}}\). The final component of TR0N is a function \(E:\mathcal{Z}\times\mathcal{C}\rightarrow\mathbb{R}\) which measures how much \(G(z)\) satisfies condition \(c\), an intuitive choice being \[E(z,c)=U(f(G(z)),c), \tag{2}\] where \(U:\mathcal{C}\times\mathcal{C}\rightarrow\mathbb{R}\) measures discrepancy between conditions, for example: when \(f\) is a classifier, \(U\) could be the categorical cross entropy; and when \(f\) is the image encoder from CLIP, \(U\) could be the negative cosine similarity, \(U_{\text{sim}}\). However, other choices of \(E\) are possible, as we will show in our experiments. ### Overview of TR0N Translator networksTR0N uses the aforementioned components to train the translator network which, given \(c\), aims to output a \(z\) with small \(E(z,c)\). This can be intuitively understood as amortizing the minimization of \(E\) with a neural network so as to not have to run a minimizer from scratch for every \(c\). Since there can be many latents \(z\) for which \(G(z)\) satisfies \(c\) (i.e. \(E(z,c)\) is small), we propose to have the translator be a distribution \(q_{\theta}(z|c)\), parameterized by \(\theta\). This way, the translator can assign high density to all the latents \(z\) such that \(E(z,c)\) is small. We will detail how we instantiate \(q_{\theta}(z|c)\) with a neural network in subsection 3.4, but highlight that any choice of conditional density is valid. Importantly, since we have access to the unconditional model \((p(z),G)\), we can generate synthetic data \(G(z)\) with \(z\sim p(z)\); and since we have access to \(f\), we can obtain the condition corresponding to \(G(z)\), namely \(c=f(G(z))\). Together, this means that the translator can be trained through maximum likelihood _without the need for a provided training dataset_, through \[\theta^{*}=\operatorname*{arg\,min}_{\theta}\mathbb{E}_{p(z)}\left[-\log q_{ \theta}\left(z|c=f(G(z))\right)\right]. \tag{3}\] We summarize the above objective in Algorithm 1. Error correctionThe translator is trained to stochastically recover \(z\) from \(c=f(G(z))\), so that intuitively it places high densities on latents which have low \(E(z,c)\) values. Yet, the translator is not directly trained to minimize \(E\), and thus having an error correction step, over which \(E\) is explicitly optimized, is beneficial to further improve its output. 
Thus, for a given \(c\), we run \(T\) steps of gradient descent on \(E(z,c)\) over \(z\), which we initialize with the help of the translator. Initializing optimization with \(q_{\theta}(z|c)\) rather than naively (e.g Gaussian noise) significantly speeds up convergence, and as we will see in our experiments, can also lead to better local optima. Importantly, we can use the translator in various ways to initialize optimization. For example, we can sample \(M\) times from \(q_{\theta^{*}}(z|c)\), and use the sample with the lowest \(E(z,c)\) value (which would be impossible if the translator was deterministic). We will detail another way to leverage the translator network to initialize optimization in subsection 3.4. In practice, we add Gaussian noise to gradient descent. Together with the stochasticity of the translator, this ensures diverse samples. Lastly, we transform the final latent, \(z^{(T)}\), through \(G\) to obtain a conditional sample from TR0N. We summarize this procedure in Algorithm 2. ``` Input:\(p(z)\), \(G\), \(f\), \(q_{\theta}(z|c)\), and batch size \(B\) while not converged do Sample \(z_{i}\sim p(z)\) for \(i=1,\dots,B\) \(c_{i}\gets f(G(z_{i}))\) for \(i=1,\dots,B\) \(\Delta\leftarrow\nabla\theta\frac{1}{B}\sum_{i=1}^{B}-\log q_{\theta}(z_{i}|c_ {i})\) Use \(\Delta\) to update \(\theta\), e.g. with ADAM (Kingma & Ba, 2015) ``` **Algorithm 1** TR0N training ### TR0N as an EBM sampler TR0N can be formalized as sampling from an EBM with Langevin dynamics. Defining the distribution \(p(z|c)\), which we call the _conditional prior_, as \(p(z|c)\propto e^{-\beta E(z,c)}\), Algorithm 2 uses Langevin dynamics (1) to sample from \(p(z|c)\), initialized with the help of \(q_{\theta^{*}}(z|c)\). Thus, TR0N can be interpreted as a sampling algorithm for the conditional push-forward model \((p(z|c),G)\). Again, \(G\) remains fixed throughout, and conditioning is achieved only through the prior \(p(z|c)\). In this view, the translator network \(q_{\theta^{*}}(z|c)\) can be understood as a rough approximation to \(p(z|c)\), as both of these distributions assign large densities to latents \(z\) for which \(E(z,c)\) is small. This is precisely why the translator provides a good initialization for Langevin dynamics: the more \(z^{(0)}\) "comes from \(p(z|c)\)", the faster (1) will converge. Why maximum-likelihood?If our goal is for the translator to be close to the conditional prior, i.e. \(q_{\theta^{*}}(z|c)\approx p(z|c)\), then a natural question is why train the translator through (3), which does not involve \(p(z|c)\), rather than by minimizing some discrepancy between these two distributions? The answer is that, since the target \(p(z|c)\) is specified only up to proportionality and true samples from it are not readily available (better sampling from \(p(z|c)\) is in fact what we designed TR0N to achieve), minimizing commonly-used discrepancies such as the KL divergence or the Wasserstein distance is not tractable. The only discrepancy we are aware of that could be used in this setting is the Stein discrepancy, which has also been used to train EBMs (Grathwohl et al., 2020). However, in preliminary experiments we observed very poor results by attempting to minimize this discrepancy. In contrast, the maximum-likelihood objective (3) is straightforward to optimize, and obtained strong empirical performance in our experiments. 
### GMMs to parameterize translator networks While clearly any choice of conditional density model \(q_{\theta}(z|c)\) can be used in TR0N, we choose a Gaussian mixture model (GMM), as it has several advantages that we will discuss shortly. More specifically, we use a neural network, parameterized by \(\eta\), which maps conditions \(c\in\mathcal{C}\) to the mean \((\mu_{\eta,k}(c))_{k=1}^{K}\in\mathcal{Z}^{K}\) and weight \(w_{\eta}(c)\in\mathbb{R}^{K}\) parameters of a Gaussian mixture, i.e. \[q_{\theta}(z|c)=\sum_{k=1}^{K}w_{\eta,k}(c)\mathcal{N}(z;\mu_{\eta,k}(c), \text{diag}(\sigma^{2})), \tag{4}\] where \(w_{\eta}(c)\) has positive entries which add up to one (enforced with a softmax), and \(\theta=(\eta,\sigma)\), i.e. \(\sigma\) is learnable. We use a simple multilayer perceptron with multiple heads to parameterize this neural network. Our GMM choice for the stochastic translator has four important benefits: \((i)\) It is a very lightweight model, and thus achieves our goal of being much more tractable to train than any of the pre-trained components \(G\) and \(f\), which we once again highlight remain fixed throughout. \((ii)\) Sampling from a GMM is very straightforward and can be done very quickly. \((iii)\) Empirically, we found that using more complicated density models \(q_{\theta}(z|c)\) such as normalizing flows did not result in improved performance. We hypothesize that, since Langevin dynamics acts as an error correction step, \(q_{\theta^{*}}(z|c)\) just needs to approximate, rather than perfectly recover, \(p(z|c)\). \((iv)\) Finally, taking \(q_{\theta}(z|c)\) as a GMM allows using the translator to initialize Langevin dynamics in ways that are not straightforward to extend to a non-GMM setting. In particular, we found that sometimes (when diversity is not as paramount), rather than initializing (1) as described in Algorithm 2, better performance could be achieved by directly using the GMM parameters. That is, we initialize at the GMM mean, \(z^{(0)}=\sum_{k}w_{\eta^{\prime},k}(c)\mu_{\eta^{\prime},k}(c)\). Note that the mean of more complex distributions might not be so easily computable. Further, we found that when initializing this way, optimizing the weights and means directly yielded better performance, i.e we write \(z^{(t)}\) as \(z^{(t)}=\sum_{k}w_{k}^{(t)}\mu_{k}^{(t)}\), and perform Langevin dynamics as \[(w^{(t+1)},\mu^{(t+1)})= \tag{5}\] \[(w^{(t)},\mu^{(t)})-\frac{\beta\lambda}{2}\nabla_{(w,\mu)}E\left( z^{(t)},c\right)+\sqrt{\lambda}\epsilon^{(t)},\] where \(w^{(0)}=w_{\eta^{\prime}}(c)\), \(\mu_{k}^{(0)}=\mu_{\eta^{\prime},k}(c)\) for \(k=1,\ldots,K\), and the size of \(\epsilon^{(t)}\) is appropriately changed from (1). ### TR0N for Bayesian inference In some settings, the auxiliary model \(f\) might provide a probabilistic model \(p(c|x)\). For example, when \(f\) is a classifier, \(p(c|x)=f_{c}(x)\).1 Combined with the pushforward model, this provides a latent/data/condition joint distribution \(p(z,x,c)=p(z)\delta_{G(z)}(x)p(c|x)\), where \(\delta_{G(z)}(x)\) denotes a point mass on \(x\) at \(G(z)\). For Bayesian inference, it might be of interest to sample from the corresponding posterior \(p(x|c)\), which is equivalent to sampling from \(p(z|c)\) and transforming the result through \(G\). That is, in this scenario, the conditional prior \(p(z|c)\) is a proper posterior distribution of latents given a condition. TR0N can sample from this posterior by using specific choices of \(\beta\) and \(E\). 
While these choices provide a probabilistically principled way of combining \((p(z),G)\) and \(f\) into a conditional model, we find that non-Bayesian choices obtain stronger empirical results. We nonetheless believe that TR0N being compatible with Bayesian inference is worth highlighting. Due to space constraints, we include additional details in Appendix A. Footnote 1: We slightly abuse notation here and use \(c\) interchangeably as either a one-hot vector, or as the corresponding integer index. ## 4 Related Work Several methods aim to obtain a conditional generative model by combining pre-trained models, although none of them shares all of the advantages of TR0N. Notably, almost all the works we discuss below are shown to work for a single task, unlike TR0N which is widely applicable. Non-zero-shot methodsZhou et al. (2021) and Wang et al. (2022) leverage CLIP to train text-to-image models without text data, but unlike TR0N, still require a training dataset of images and relatively longer training times. Wang & Torr (2022) propose a method to turn a classifier into a conditional generative model which also requires training data to train a masked autoencoder. Nie et al. (2021) condition GANs through a similar EBM as us, but use data to train \(f\), do not condition on text, and do not use translator networks. Zhang & Agrawala (2023) add conditioning to pre-trained diffusion models (Ho et al., 2020), but require training data to do so. Deterministic optimizationThe works of Nguyen et al. (2016), Liu et al. (2021), Patashnik et al. (2021), and Li et al. (2022b) can be thought of as deterministic versions of our EBM, where rather than sampling from \(p(z|c)\propto e^{-\beta E(z,c)}\), the energy \(E(z,c)\) is directly minimized over \(z\). These methods do not account for the fact that there can be many latents \(z\) such that \(G(z)\) satisfies condition \(c\), and thus can be less diverse than TR0N. Additionally, these methods do not have a translator network, and with the exception of FuseDream (Liu et al., 2021), naively initialize optimization, resulting in reduced empirical performance and needing more gradient steps for optimization to converge. We also note that FuseDream's initialization scheme - which we detail in Appendix B for completeness - requires many forward passes through \(G\) and \(f\), and remains much more computationally demanding than TR0N's. Stochastic methodsAnsari et al. (2021) apply Langevin dynamics on the latent space of a GAN, but do so to iteratively refine its samples, rather than for conditional sampling. Nguyen et al. (2017) use a similar EBM to ours, but do not use a translator network and initialize Langevin dynamics naively, once again resulting in significantly decreased empirical performance as compared to TR0N. Wu et al. (2022) also define a similar EBM to ours, which is approximated with a normalizing flow for each different \(c\), meaning that a different model has to be trained for each condition, resulting in a method that is far less scalable than TR0N. Finally, Pinkney & Li (2022) propose clip2latent, which can be understood as using a diffusion model instead of a GMM as the translator network, making clip2latent more expensive to train than TR0N. Importantly, they perform no error correction step whatsoever, and thus do not leverage important information contained in the gradient of \(E\). ## 5 Experiments All our experimental details - including which translator-based initialization we used for each experiment - are provided in Appendix B. 
### Conditioning on class labels We demonstrate TR0N's ability to make an unconditional model on CIFAR-10 (Krizhevsky, 2009) into a class-conditional one. To highlight the flexibile plug-and-play nature of TR0N, we use two different pushforward models \(G\): an NVAE (Vahdat & Kautz, 2020), and an AutoGAN (Gong et al., 2019) - we use this somewhat non-standard choice of GAN since most publicly available GANs pre-trained on CIFAR-10 are class-conditional. Here, \(\mathcal{C}\) is the space of probability vectors of length \(10\), we take \(f\) as a ResNet50 classifier (He et al., 2015), and use \(E\) as in (2) with \(U\) given by the cross-entropy loss, \(U_{\text{ent}}(c^{\prime},c)\coloneqq-\sum_{j}c_{j}\log c^{\prime}_{j}\). Figure 3 shows qualitative results: we can see that, for both pushforward models, TRON not only obtains samples from each of the 10 classes, but that it achieves this without sacrificing neither image quality nor diversity. We also make quantitative comparisons between each unconditional model (i.e. NVAE and AutoGAN) and the resulting conditional models provided by TRON. To make the comparison equitable, we sample unconditionally from TRON models by first sampling one of the \(10\) classes uniformly at random, and then sampling from the corresponding conditional. Results are shown in Table 1, by measuring image quality and diversity through both the FID score and the inception score (Salimans et al., 2016), and the quality of conditioning through the average probability that the ResNet50 assigns to the intended class of TRON samples. TRON not only makes the models conditional as these probabilities are very close to \(1\), especially for the AutoGAN-based model, but it also improves their FID and inception scores (IS): TRON leverages the classifier \(f\) not only to make a conditional model, but also to improve upon its underlying pre-trained pushforward model. Table 1 also includes some ablations: \((i)\) removing the error correction (Langevin dynamics) step altogether, which results in heavily degraded FID and IS for the NVAE-based model, and much worse conditioning for both models; \((ii)\) removing the translator, which is equivalent to a stochastic version (i.e. with Langevin dynamics instead of gradient descent) of the method of Nguyen et al. (2016), and which significantly hurts FID, IS, and conditioning performance, highlighting the relevance of translator networks; \((iii)\) using a deterministic translator rather than a stochastic one (see Appendix B for details), which significantly hurts FID and IS due to a lack of diversity since Langevin dynamics is always initialized at the same point for a given condition; and \((iv)\) using ADAM instead of gradient descent to update latents in Algorithm 2, which not only removes the formal interpretation of TRON as an EBM sampler, but also worsens performance across metrics. Finally, we include additional results in Appendix C using the Bayesian choice of \(\beta\) and \(E\) mentioned in subsection 3.5. ### Conditioning on text Natural imagesWe now show TRON's capability to turn unconditional models into text-to-image models. Here, we use \(\mathcal{C}\) as the latent space of CLIP, \(\mathcal{C}_{\text{CLIP}}\), and to condition on a text prompt \(y\in\mathcal{T}\), we simply use the text encoder, \(c=f^{\text{tr}}(y)\). First, we take \(G\) as a BigGAN2(Brock et al., 2018) pre-trained on ImageNet (Deng et al., 2009), and use two different choices of \(f\) leveraging CLIP. 
The first choice \begin{table} \begin{tabular}{l c c c} \hline \hline Model & FID \(\downarrow\) & IS \(\uparrow\) & Avg. prob. \(\uparrow\) \\ \hline NVAE & \(41.70\) & \(6.95\) & \(-\) \\ TRON:NVAE+ResNet50 & \(\mathbf{19.79}\) & \(\mathbf{8.64}\) & \(\mathbf{0.75}\) \\ TRON:NVAE+ResNet50 (no EC) & \(40.80\) & \(6.95\) & \(0.20\) \\ TRON:NVAE+ResNet50 (no T) & \(36.74\) & \(7.75\) & \(0.54\) \\ TRON:NVAE+ResNet50 (DT) & \(77.11\) & \(7.15\) & \(0.48\) \\ TRON:NVAE+ResNet50 (ADAM) & \(20.24\) & \(8.31\) & \(0.51\) \\ \hline AutoGAN & \(12.45\) & \(8.53\) & \(-\) \\ TRON:AutoGAN+ResNet50 & \(\mathbf{10.69}\) & \(\mathbf{8.91}\) & \(0.95\) \\ TRON:AutoGAN+ResNet50 (no EC) & \(11.00\) & \(8.66\) & \(0.41\) \\ TRON:AutoGAN+ResNet50 (no T) & \(14.30\) & \(8.37\) & \(0.68\) \\ TRON:AutoGAN+ResNet50 (DT) & \(123.23\) & \(5.48\) & \(\mathbf{0.97}\) \\ TRON:AutoGAN+ResNet50 (ADAM) & \(11.08\) & \(8.88\) & \(0.93\) \\ \hline \hline \end{tabular} \end{table} Table 1: FID, IS, and average probability assigned to the intended class of generated samples by a ResNet50 on CIFAR-10. “no EC” stands for “no error correction”, “no T” for “no translator”, “DT” for “deterministic translator”, and “ADAM” and for changing the optimizer in Langevin dynamics. Figure 3: Samples from NVAE (**first panel**), TRON:NVAE+ResNet50 (**second panel**), AutoGAN (**third panel**), and TRON:AutoGAN+ResNet50 (**fourth panel**). Rows on the second and fourth panels correspond to classes: TRON learns to diversely sample in a class-conditional way, while retaining the image quality of the underlying unconditional model. Best viewed while zoomed-in. is simply the image encoder of CLIP, \(f^{\text{img}}\). We focus our comparisons against FuseDream - which to the best of our knowledge is the best performing competing method.3 Footnote 3: While FuseDream is a deterministic method, its provided implementation (optionally) adds noise during gradient optimization: [https://github.com/gnobitab/FuseDream](https://github.com/gnobitab/FuseDream). As our second choice of \(f\), we also leverage a pre-trained caption model \(h:\mathcal{X}\rightarrow\mathcal{T}\) followed by CLIP's text encoder, i.e. \(f=f^{\text{text}}\circ h\), further demonstrating the plug-and-play nature of TR0N. The idea behind this choice is that CLIP's image and text encoders have been shown to not perfectly map images and text to the same regions of \(\mathcal{C}_{\text{CLIP}}\)(Liang et al., 2022). Adding the caption model \(h\) - which maps images to text descriptions - allows us to use the text encoder within \(f\), i.e. the same encoder used to obtain \(c\), resulting in better matching latents. This choice of \(f\) is a novel empirical contribution for zero-shot text-to-image generation. We use BLIP (Li et al., 2022) for the caption model \(h\). For both choices of \(f\), we follow FuseDream and use the negative augmented CLIP score \(E_{\text{CLIP}}\) as \(E\), which is given by \(E_{\text{CLIP}}(z,c)\coloneqq\mathbb{E}_{p(\phi)}[U_{\text{sim}}(f^{\text{ img}}(\phi[G(z)]),c)]\), where \(\phi[x]\) is a differentiable data-augmentation (Zhao et al., 2020) of \(x\), and \(p(\phi)\) a pre-specified distribution over data-augmentations. Like Liu et al. (2021), we find that using the data augmentations helps avoid adversarial examples with small values of \(E(z,c)\) which nonetheless do not satisfy \(c\). Note that \(E_{\text{CLIP}}\) always uses the image encoder from CLIP, regardless of which \(f\) we use to train the translator network. 
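As a rough illustration, \(E_{\text{CLIP}}\) can be written as a Monte-Carlo average over a few random differentiable augmentations. In the sketch below, the augmentations (brightness jitter and a random resized crop), the number of samples `n_aug`, and the `clip_image_encoder` placeholder for \(f^{\text{img}}\) are illustrative assumptions, not the exact DiffAug policy or CLIP interface used in our experiments.

```python
import torch
import torch.nn.functional as F

def random_diff_augment(x):
    """One random differentiable augmentation of a batch of images x in [0, 1].
    Illustrative only: brightness jitter plus a random resized crop, standing in
    for the differentiable augmentation policy used by FuseDream and TR0N."""
    x = (x + 0.2 * (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5)).clamp(0, 1)
    h = w = int(0.8 * x.size(-1))
    top = int(torch.randint(0, x.size(-2) - h + 1, (1,)))
    left = int(torch.randint(0, x.size(-1) - w + 1, (1,)))
    crop = x[..., top:top + h, left:left + w]
    return F.interpolate(crop, size=x.shape[-2:], mode="bilinear", align_corners=False)

def augmented_clip_energy(z, c, G, clip_image_encoder, n_aug=4):
    """E_CLIP(z, c): Monte-Carlo average of the negative cosine similarity between
    CLIP embeddings of augmented generator outputs G(z) and the target embedding c."""
    x = G(z)                                     # generated images, assumed in [0, 1]
    energies = []
    for _ in range(n_aug):                       # estimate of the expectation over p(phi)
        emb = clip_image_encoder(random_diff_augment(x))
        energies.append(-F.cosine_similarity(emb, c.unsqueeze(0), dim=-1))
    return torch.stack(energies).mean(0)         # one energy per latent in the batch
```

This energy is differentiable with respect to \(z\) through \(G\) and the augmentations, so it can be plugged directly into the Langevin updates of Algorithm 2.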
We compare TR0N against FuseDream on the MS-COCO dataset, which contains text/image pairs. For each text, we generate a corresponding image with both methods, and then compute both the FID and augmented CLIP score. Results are displayed in Figure 5 for various computational budgets (the higher the budget, the bigger \(T\), i.e. the longer Langevin dynamics is iterated for). As a consequence of FuseDream's expensive initialization scheme, TR0N can achieve similar performance much faster. This is true for our first choice of \(f\), where TR0N uses the same components as FuseDream (red vs orange lines), emphasizing once again the relevance of the translator, as also evidenced by the light blue lines in Figure 5, which correspond to TR0N with no translator (or equivalently, FuseDream with naive initialization). It is also true for our second choice of \(f\) (with a caption model), which allows TR0N to not only be faster than FuseDream (which cannot incorporate this \(f\) as it has no translator), but also outperform it (blue vs orange lines). We once again perform ablations over different design choices of the Figure 4: Samples from TR0N:BigGAN+CLIP (BLIP). We can see samples are diverse for all captions. Figure 5: Comparisons between TR0N:BigGAN+CLIP and FuseDream on MS-COCO as a function of time required to generate a sample. **Top panel**: FID score (in log scale), lower is better. **Bottom panel**: augmented CLIP score, higher is better. translator, which we include in Appendix C. Figure 1 and Figure 4 show text-to-image samples from TR0N. Although BigGAN was trained on ImageNet and remains fixed throughout, the images that TR0N manages to produce from it using text prompts are highly out-of-distribution for this dataset: TR0N's ability to efficiently leverage CLIP to explore the GAN's latent space \(\mathcal{Z}\) is noteworthy. We include additional samples in Appendix C, showing both how images evolve throughout Langevin dynamics, and failure cases of TR0N. By using the same \(G\) and version of CLIP as FuseDream, the previous experiments show that TR0N outperforms it thanks to its methodology, rather than an improved choice of networks. Yet, these networks can be improved. To further strengthen TR0N, we upgrade: \(G\) to a StyleGAN-XL (Sauer et al., 2022) - also pre-trained on ImageNet, CLIP to its LAION2B (Schuhmann et al., 2022) version, and the caption model to BLIP-2 (Li et al., 2023) (using BLIP-2 instead of BLIP as in other experiments again highlights the plug-and-play nature of TR0N). Table 2 shows quantitative results, where we can see that these updates significantly boost the performance of TR0N, to the point of making it competitive with very large models requiring text/image data and much more compute to train. While this StyleGAN-XL-based version of TR0N achieves particularly strong results on MS-COCO in terms of FID, we find that the images it produces are not consistently better, visually, than those from the BigGAN-based model. Samples and further discussion can be found in Appendix C. Facial imagesTo further highlight the wide applicability of TR0N, we show it can be used for other text-to-image tasks. We now use a StyleGAN2 (Karras et al., 2020) and an NVAE as \(G\), both pre-trained on FFHQ (Karras et al., 2019). We use CLIP's image encoder \(f^{\text{img}}\) as \(f\) (we do not use a caption model here as the descriptions of faces it outputs are too generic to be useful), and use the negative augmented clip score \(E_{\text{CLIP}}\) as \(E\). 
We compare against clip2latent, which uses the same setup with the StyleGAN2, but with a diffusion model instead of a GMM as a translator network, and no error correction procedure. Figure 1 and Figure 6 show qualitative results. We can see that TR0N produces images that are much more semantically aligned with the input text, which further corroborates that using a GMM as the translator is enough, while also emphasizing the relevance of error-correcting through Langevin dynamics.

\begin{table} \begin{tabular}{l c} \hline \hline Model & FID \(\downarrow\) \\ \hline DALL-E (Ramesh et al., 2021) & \(\approx 27.5^{\dagger}\) \\ StyleGAN-T (Sauer et al., 2023) & \(13.9^{\dagger}\) \\ Latent Diffusion (Rombach et al., 2022) & \(12.6^{\dagger}\) \\ GLIDE (Nichol et al., 2022) & \(12.2^{\dagger}\) \\ DALL-E 2 (Ramesh et al., 2022) & \(10.4^{\dagger}\) \\ Imagen (Saharia et al., 2022) & \(7.3^{\dagger}\) \\ Parti (Yu et al., 2022) & \(\mathbf{7.2^{\dagger}}\) \\ \hline FuseDream (Liu et al., 2021) & \(16.3^{*}\) \\ TR0N:BigGAN+CLIP (BLIP) & \(15.0\) \\ TR0N:StyleGAN-XL+LAION2BCLIP (BLIP-2) & \(\mathbf{10.9}\) \\ \hline \hline \end{tabular} \({}^{\dagger}\) Score as reported by the authors, not computed by us. \({}^{*}\) Liu et al. (2021) report an FID of \(21.9\) since they use ADAM instead of Langevin dynamics. \end{table} Table 2: FID score on MS-COCO. The top part of the table shows models trained directly for text-to-image generation using paired text/image data (these are sometimes called zero-shot as they were not trained on MS-COCO, but are not zero-shot in the same way as TR0N). The bottom part shows zero-shot methods that require only pre-trained models and no provided dataset.

Figure 6: Comparison between TR0N:StyleGAN2+CLIP (**top row**) and clip2latent (**bottom row**). Numbers above each image correspond to average augmented CLIP score (higher is better) plus/minus standard error over \(10\) samples from the given caption. Thanks to the error correction step, TR0N better semantically matches the input text in its generated images than clip2latent.

We highlight that the pushforward models were pre-trained on FFHQ - not CelebA (Liu et al., 2015) - and thus likely have not seen celebrities such as Cristiano Ronaldo, Denzel Washington, and Muhammad Ali: we believe TR0N's performance is once again noteworthy. We omit large scale quantitative comparisons here for several reasons: First, text descriptions of FFHQ images are highly generic, which makes it challenging to compute FID against FFHQ. Second, the FID score has recently been shown to be particularly poor at evaluating facial images (Kynkaanniemi et al., 2022). We thus only include the average augmented CLIP score for the used text prompts in Figure 6. We include additional samples for the NVAE-based TR0N model in Appendix C.

### Conditioning on image semantics

We follow Ramesh et al. (2022) and consider two tasks which involve conditioning on image semantics: For the first, given an image \(x^{\prime}\), the goal is to generate diverse images \(x\) which share semantics with \(x^{\prime}\). Here, \(\mathcal{C}\) is still the latent space of CLIP, \(\mathcal{C}_{\text{CLIP}}\), and \(f\) is CLIP's image encoder, \(f^{\text{img}}\). Instead of obtaining conditions \(c\) from a text prompt, we take \(c=f^{\text{img}}(x^{\prime})\). We use both BigGAN and StyleGAN2 as \(G\), and still use the negative augmented CLIP score, \(E_{\text{CLIP}}\), as \(E\).
For the second task, instead of computing \(c\) from a single image \(x^{\prime}\), we compute it by interpolating between the encodings \(f^{\text{img}}(x^{\prime}_{1})\) and \(f^{\text{img}}(x^{\prime}_{2})\) of two given images, \(x^{\prime}_{1}\) and \(x^{\prime}_{2}\). Results are shown in Figure 7, where we can see that TR0N produces meaningful samples and interpolations: this highlights that TR0N allows for arbitrary conditioning - not just class labels or text prompts. We show additional samples in Appendix C.

## 6 Conclusions, limitations, and future work

In this paper we introduced TR0N, a highly general and simple-to-train framework to turn pre-trained unconditional generative models into conditional ones by learning a stochastic map from conditions to latents, whose output is used to initialize Langevin dynamics. TR0N is quick to sample from, outperforms competing methods, and has a remarkable ability to generate images outside of the distribution used to train \(G\). Despite the empirical performance of TR0N being good, it is inevitably limited by that of the pre-trained model \((p(z),G)\). Diffusion models have been shown to outperform GANs, but have no low-dimensional latent space \(\mathcal{Z}\) that the translator can map to, and thus applying TR0N in this setting is not straightforward. We thus believe extending TR0N to diffusion models to be an interesting direction for future work. We also hope that our ideas can be extended to initialize Langevin dynamics in other EBM settings. Given our results on CIFAR-10 where TR0N improved upon its pre-trained unconditional model, we also believe that further exploring how large pre-trained models can be used to improve upon existing generative models - rather than endowing them with conditional capabilities - is a promising research avenue. Finally, here we focused exclusively on generating images, but combining large pre-trained models is of interest outside of this task. For example, zero-shot conditional text generation (Su et al., 2022) is a relevant problem, and we hope that the ideas behind TR0N can be extended to this task.

**Broader impact.** Generative models have many applications, including among others: audio generation (van den Oord et al., 2016; Engel et al., 2017), chemistry (Gomez-Bombarelli et al., 2018), neuroscience (Sussillo et al., 2016; Gao et al., 2016; Loaiza-Ganem et al., 2019), and text generation (Bowman et al., 2016; Devlin et al., 2019; Brown et al., 2020). Each of these applications can have meaningful and positive effects on society, but can also be potentially misused for unethical purposes. Text-to-image generation is no exception, and thus the possibility exists that TR0N could be misemployed to generate inappropriate or deceitful content. We do highlight however that other powerful text-to-image models exist and are publicly available, and as such we do not foresee TR0N enabling nefarious actors to abuse text-to-image models in previously unavailable ways.

## Acknowledgements

We thank Harry Braviner for early discussions, Anthony Caterini for comments on a preliminary draft, and the anonymous reviewers, whose feedback helped improve our paper.

Figure 7: TR0N samples conditioning on image semantics with \(G\) as a BigGAN (**first and second panels**, \(x^{\prime}\) is highlighted in red), and interpolations with \(G\) as a StyleGAN2 (**third panel**, \(x^{\prime}_{1}\) and \(x^{\prime}_{2}\) are highlighted in red).
2305.04117
Weighted HOM-Problem for Nonnegative Integers
The HOM-problem asks whether the image of a regular tree language under a given tree homomorphism is again regular. It was recently shown to be decidable by Godoy, Giménez, Ramos, and Àlvarez. In this paper, the N-weighted version of this problem is considered and its decidability is proved. More precisely, it is decidable in polynomial time whether the image of a regular N-weighted tree language under a nondeleting, nonerasing tree homomorphism is regular.
Andreas Maletti, Andreea-Teodora Nász, Erik Paul
2023-05-06T18:59:41Z
http://arxiv.org/abs/2305.04117v1
# Weighted HOM-Problem for Nonnegative Integers ###### Abstract The HOM-problem asks whether the image of a regular tree language under a given tree homomorphism is again regular. It was recently shown to be decidable by Godoy, Gimenez, Ramos, and Alvarez. In this paper, the \(\mathbb{N}\)-weighted version of this problem is considered and its decidability is proved. More precisely, it is decidable in polynomial time whether the image of a regular \(\mathbb{N}\)-weighted tree language under a nondeleting, nonerasing tree homomorphism is regular. Weighted Tree Automaton, Decision Problem, Subtree Equality Constraint, Tree Homomorphism, HOM-Problem, Weighted Tree Grammar, Weighted HOM-Problem Digital Object Identifier 10.4230/LIPIcs.CVIT.2016.23 _Andreea-Teodora Nasz_: Research financially supported by a scholarship awarded to T. Nasz by the Free State of Saxony. ## 1 Introduction The prominent model of nondeterministic finite-state string automata has seen a variety of extensions in the past few decades. Notably, their qualitative evaluation was generalized to a quantitative one by means of weighted automata in [21]. Those automata have been extensively studied [20], not least because of their ability to neatly represent process factors such as costs, consumption of resources or time, and probabilities related to the processed input. Semirings [13, 14] present themselves as a well suited algebraic structure for the evaluation of the weights because of their generality as well as their reasonable computational efficiency that is derived from distributivity. Parallel to this development, finite-state automata have been generalized to process other forms of inputs such as infinite words [18] and trees [2]. Finite-state tree automata and the _regular tree languages_ they generate have been widely researched since their introduction in [4, 22, 23]. These models have proven to be useful in a variety of application areas including natural language processing [15], image generation [5], and compiler construction [24]. In many cases, applications require the integration of both the quantitative evaluation and trees as a more expressive input structure, which led to the development of several weighted tree automaton (WTA) models. An extensive overview can be found in [9, Chapter 9]. Finite-state tree automata have several serious limitations including their inability to ensure the equality of two subtrees of any size in an accepted tree. These restrictions are well-known [10], and the mentioned drawback was addressed in [17], where an extension was proposed that is capable of explicitly requiring certain subtrees to be equal or different. These models are highly convenient in the study of tree transformations [9], which can implement subtree duplication, and they are also the primary tool used in the seminal paper [11], where the decidability of the HOM-problem was established. The HOM-problem, a previously long-standing open question in the study of tree languages, asks whether the image of a regular tree language under a given tree homomorphism is also regular. The image need not be regular since tree homomorphisms can generate copies of subtrees. Indeed, if this copying ability is removed from the tree homomorphism (e.g., linear tree homomorphisms), then the image is always regular [10]. 
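The copying behaviour that makes the problem nontrivial is easy to see on a toy example (an editorial illustration, not taken from the paper): a homomorphism that duplicates a subtree produces an image whose trees contain two equal subtrees of unbounded size, which a finite-state tree automaton cannot verify.

```python
def apply_hom(t):
    """Toy duplicating tree homomorphism on trees given as (label, [children]):
    h(sigma(x1)) = delta(x1, x1), and every other symbol is kept unchanged."""
    label, children = t
    kids = [apply_hom(c) for c in children]
    if label == "sigma":                     # the copying symbol
        return ("delta", [kids[0], kids[0]])
    return (label, kids)

# Applying h to sigma(gamma^n(alpha)) yields delta(gamma^n(alpha), gamma^n(alpha)),
# so membership in the image requires comparing two unboundedly large subtrees.
t = ("sigma", [("gamma", [("gamma", [("alpha", [])])])])
print(apply_hom(t))
```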
The classical (Boolean) HOM-problem was recently solved in [11, 12], where the image is represented by a tree automaton with constraints, for which it is then determined whether it generates a regular tree language. The problem was later shown to be EXPTIME-complete [3]. In the weighted case, decidability of the HOM-problem remains open. Previous research on the preservation of regularity in the weighted setting [1, 6, 7, 8] focuses on cases that explicitly exclude the copying power of the homomorphism. In the present work, we prove that the HOM-problem for regular \(\mathbb{N}\)-weighted tree languages can easily be decided in polynomial time. Our proof outline is inspired by [11]: Consider a regular \(\mathbb{N}\)-weighted tree language and a nondeleting, nonerasing tree homomorphism. First, we represent this image efficiently using an extension (WTGh) of weighted tree automata [16]. The question is now regularity of this WTGh, and the idea behind our contribution is the reduction of its (non)regularity to something more tangible: the large duplication property. In turn, we prove decidability in polynomial time of the large duplication property directly in Lemma 11. If the WTGh for the homomorphic image does not have this property, then we give an effective construction of an equivalent \(\mathbb{N}\)-weighted WTG (albeit in exponential time), thus proving its regularity. Otherwise, we use a pumping lemma presented in [16] and isolate a strictly nonregular part from the WTGh. The most challenging part of our proof and our main technical contribution is showing that the remaining part of the homomorphic image cannot compensate for this nonregular behavior. For this, we employ Ramsey's theorem [19] to identify a witness for the nonregularity of the whole weighted tree language. ## 2 Preliminaries We denote the set of nonnegative integers by \(\mathbb{N}\). For \(i,j\in\mathbb{N}\) we let \([i,j]=\{k\in\mathbb{N}\mid i\leq k\leq j\}\) and \([j]=[1,j]\). Let \(Z\) be an arbitrary set. The cardinality of \(Z\) is denoted by \(|Z|\), and the set of words over \(Z\) (i.e., the set of ordered finite sequences of elements of \(Z\)) is denoted by \(Z^{*}\). ### Trees, Substitutions, and Contexts A _ranked alphabet_\((\Sigma,\mathrm{rk})\) consists of a finite set \(\Sigma\) and a mapping \(\mathrm{rk}\colon\Sigma\to\mathbb{N}\) that assigns a rank to each symbol of \(\Sigma\). If there is no risk of confusion, then we denote the ranked alphabet \((\Sigma,\mathrm{rk})\) by \(\Sigma\) alone. We write \(\sigma^{(k)}\) to indicate that \(\mathrm{rk}(\sigma)=k\). Moreover, for every \(k\in\mathbb{N}\) we let \(\Sigma_{k}=\mathrm{rk}^{-1}(k)\) and \(\mathrm{rk}(\Sigma)=\max\,\{k\in\mathbb{N}\mid\Sigma_{k}\neq\emptyset\}\) be the maximal rank of symbols of \(\Sigma\). Let \(X=\{x_{i}\mid i\in\mathbb{N}\}\) be a countable set of (formal) variables. For every \(n\in\mathbb{N}\), we let \(X_{n}=\{x_{i}\mid i\in[n]\}\). Given a ranked alphabet \(\Sigma\) and a set \(Z\), the set \(T_{\Sigma}(Z)\) of \(\Sigma\)_-trees indexed by \(Z\)_ is the smallest set such that \(Z\subseteq T_{\Sigma}(Z)\) and \(\sigma(t_{1},\ldots,t_{k})\in T_{\Sigma}(Z)\) for every \(k\in\mathbb{N}\), \(\sigma\in\Sigma_{k}\), and \(t_{1},\ldots,t_{k}\in T_{\Sigma}(Z)\). We abbreviate \(T_{\Sigma}(\emptyset)\) simply by \(T_{\Sigma}\), and any subset \(L\subseteq T_{\Sigma}\) is called a _tree language_. Let \(\Sigma\) be a ranked alphabet, \(Z\) a set, and \(t\in T_{\Sigma}(Z)\). 
The set \(\mathrm{pos}(t)\) of _positions of \(t\)_ is defined by \(\mathrm{pos}(z)=\{\varepsilon\}\) for all \(z\in Z\) and by \(\mathrm{pos}(\sigma(t_{1},\ldots,t_{k}))=\{\varepsilon\}\cup\{iw\mid i\in[k],w \in\mathrm{pos}(t_{i})\}\) for all \(k\in\mathbb{N}\), \(\sigma\in\Sigma_{k}\), and \(t_{1},\ldots,t_{k}\in T_{\Sigma}(Z)\). With their help, we define the _size_'size(\(t\))' and _height_ '\(\mathrm{ht}(t)\)' of \(t\) as \(\mathrm{size}(t)=|\mathrm{pos}(t)|\) and \(\mathrm{ht}(t)=\max_{w\in\mathrm{pos}(t)}|w|\). Positions are partially ordered by the standard prefix order \(\leq\) on \([\operatorname{rk}(\Sigma)]^{*}\), and they are totally ordered by the ascending lexicographic order \(\preceq\) on \([\operatorname{rk}(\Sigma)]^{*}\), in which prefixes are larger; i.e., \(\varepsilon\) is the largest element. More precisely, for \(v,w\in\operatorname{pos}(t)\) if there exists \(u\in[\operatorname{rk}(\Sigma)]^{*}\) with \(vu=w\), then we write \(v\leq w\), call \(v\) a _prefix_ of \(w\), and let \(v^{-1}w=u\) because \(u\) is uniquely determined if it exists. Provided that \(u=u_{1}\cdots u_{n}\) with \(u_{1},\ldots,u_{n}\in[\operatorname{rk}(\Sigma)]\) we also define the _path_\([v,\ldots,w]\)_from \(v\) to \(w\)_ as the sequence \((v,vu_{1},vu_{1}u_{2},\ldots,w)\) of positions. Any two positions that are \(\leq\)-incomparable are called _parallel_. Given \(t,t^{\prime}\in T_{\Sigma}(Z)\) and \(w\in\operatorname{pos}(t)\), the _label_\(t(w)\) of \(t\) at \(w\), the _subtree_\(t|_{w}\) of \(t\) at \(w\), and the _substitution_\(t[t^{\prime}]_{w}\) of \(t^{\prime}\) into \(t\) at \(w\) are defined by \(z(\varepsilon)=z|_{\varepsilon}=z\) and \(z[t^{\prime}]_{\varepsilon}=t^{\prime}\) for all \(z\in Z\) and by \(t(\varepsilon)=\sigma\), \(t(iw^{\prime})=t_{i}(w^{\prime})\), \(t|_{\varepsilon}=t\), \(t|_{iw^{\prime}}=t_{i}|_{w^{\prime}}\), \(t[t^{\prime}]_{e}=t^{\prime}\), and \(t[t^{\prime}]_{iw^{\prime}}=\sigma\big{(}t_{1},\ldots,t_{i-1},t_{i}[t^{\prime }]_{w^{\prime}},t_{i+1},\ldots,t_{k}\big{)}\) for all trees \(t=\sigma(t_{1},\ldots,t_{k})\) with \(k\in\mathbb{N}\), \(\sigma\in\Sigma_{k}\), \(t_{1},\ldots,t_{k}\in T_{\Sigma}(Z)\), all \(i\in[k]\), and all \(w^{\prime}\in\operatorname{pos}(t_{i})\). For all sets \(S\subseteq\Sigma\cup Z\) of symbols, we let \(\operatorname{pos}_{S}(t)=\{w\in\operatorname{pos}(t)\mid t(w)\in S\}\), and we write \(\operatorname{pos}_{s}(t)\) instead of \(\operatorname{pos}_{\{s\}}(t)\) for every \(s\in\Sigma\cup Z\). The set of variables occuring in \(t\) is \(\operatorname{var}(t)=\{x\in X\mid\operatorname{pos}_{x}(t)\neq\emptyset\}\). Finally, consider \(n\in\mathbb{N}\) and a mapping \(\theta^{\prime}\colon X_{n}\to T_{\Sigma}(Z)\). Then by substitution, \(\theta^{\prime}\) induces a mapping \(\theta\colon T_{\Sigma}(Z)\to T_{\Sigma}(Z)\) defined by \(\theta(x)=\theta^{\prime}(x)\) for every \(x\in X_{n}\), \(\theta(z)=z\) for every \(z\in Z\setminus X_{n}\), and \(\theta(\sigma(t_{1},\ldots,t_{k}))=\sigma(\theta(t_{1}),\ldots,\theta(t_{k}))\) for all \(k\in\mathbb{N}\), \(\sigma\in\Sigma_{k}\), and \(t_{1},\ldots,t_{k}\in T_{\Sigma}(Z)\). For \(t\in T_{\Sigma}(Z)\), we denote \(\theta(t)\) by \(t\theta\) or, more commonly, by \(t[x_{1}\leftarrow\theta^{\prime}(x_{1}),\ldots,x_{n}\leftarrow\theta^{\prime} (x_{n})]\). Let \(\square\notin\Sigma\). A _context_ is a tree \(C\in T_{\Sigma}(\square)\) with \(\operatorname{pos}_{\square}(C)\neq\emptyset\). More specifically, we call \(C\) an _\(n\)-context_ if \(n=|\operatorname{pos}_{\square}(C)|\). 
For an \(n\)-context \(C\) and \(t_{1},\ldots,t_{n}\in T_{\Sigma}\), we define the substitution \(C[t_{1},\ldots,t_{n}]\) as follows. Let \(\operatorname{pos}_{\square}(C)=\{w_{1},\ldots,w_{n}\}\) be the occurrences of \(\square\) in \(C\) in lexicographic order \(w_{1}\prec\cdots\prec w_{n}\). Then we let \(C[t_{1},\ldots,t_{n}]=C[t_{1}]_{w_{1}}\cdots[t_{n}]_{w_{n}}\).

### Tree Homomorphisms and Weighted Tree Grammars

Given ranked alphabets \(\Sigma\) and \(\Gamma\), let \(h^{\prime}\colon\Sigma\to T_{\Gamma}(X)\) be a mapping with \(h^{\prime}(\sigma)\in T_{\Gamma}(X_{k})\) for all \(k\in\mathbb{N}\) and \(\sigma\in\Sigma_{k}\). We extend \(h^{\prime}\) to \(h\colon T_{\Sigma}\to T_{\Gamma}\) by \(h(\alpha)=h^{\prime}(\alpha)\in T_{\Gamma}(X_{0})=T_{\Gamma}\) for all \(\alpha\in\Sigma_{0}\) and \(h(\sigma(t_{1},\ldots,t_{k}))=h^{\prime}(\sigma)[x_{1}\gets h(t_{1}),\ldots,x_{k}\gets h(t_{k})]\) for all \(k\in\mathbb{N}\), \(\sigma\in\Sigma_{k}\), and \(t_{1},\ldots,t_{k}\in T_{\Sigma}\). The mapping \(h\) is called the _tree homomorphism induced by \(h^{\prime}\)_, and we identify \(h^{\prime}\) and its induced tree homomorphism \(h\). For the complexity analysis of our decision procedure, we define the size of \(h\) as \(\operatorname{size}(h)=\sum_{\sigma\in\Sigma}|\operatorname{pos}(h(\sigma))|\). We call \(h\) _nonerasing_ (respectively, _nondeleting_) if \(h^{\prime}(\sigma)\notin X\) (respectively, \(\operatorname{var}(h^{\prime}(\sigma))=X_{k}\)) for all \(k\in\mathbb{N}\) and \(\sigma\in\Sigma_{k}\). In this contribution, we will only consider nonerasing and nondeleting tree homomorphisms \(h\colon T_{\Sigma}\to T_{\Gamma}\), which are therefore _input finitary_; i.e., the preimage \(h^{-1}(u)\) is finite for every \(u\in T_{\Gamma}\) since \(|t|\leq|u|\) for every \(t\in h^{-1}(u)\). Any mapping \(A\colon T_{\Sigma}\to\mathbb{N}\) is called an _\(\mathbb{N}\)-weighted tree language_, and we define the weighted tree language \(h_{A}\colon T_{\Gamma}\to\mathbb{N}\) for every \(u\in T_{\Gamma}\) by \(h_{A}(u)=\sum_{t\in h^{-1}(u)}A(t)\) and call it the _image of \(A\) under \(h\)_. This definition relies on the tree homomorphism being input-finitary; otherwise the defining sum is not finite, so the value \(h_{A}(u)\) is not necessarily well-defined. A _weighted tree grammar with equality constraints_ (WTGc) [16] is a tuple \((Q,\Sigma,F,P,\operatorname{wt})\), in which \(Q\) is a finite set of _states_, \(\Sigma\) is a ranked alphabet of _input symbols_, \(F\colon Q\to\mathbb{N}\) assigns a _final weight_ to every state, \(P\) is a finite set of _productions_ of the form \((\ell,q,E)\) with \(\ell\in T_{\Sigma}(Q)\setminus Q\), \(q\in Q\), and a finite subset \(E\subseteq\mathbb{N}^{*}\times\mathbb{N}^{*}\), and \(\operatorname{wt}\colon P\to\mathbb{N}\) assigns a _weight_ to every production. A production \(p=(\ell,q,E)\in P\) is usually written \(p=\ell\stackrel{{ E}}{{\longrightarrow}}q\) or \(p=\ell\stackrel{{ E}}{{\longrightarrow}}_{\operatorname{wt}(p)}q\), and the tree \(\ell\) is called its _left-hand side_, \(q\) is its _target state_, and \(E\) are its _equality constraints_, respectively. Equality constraints \((v,v^{\prime})\in E\) are also written as \(v=v^{\prime}\). A state \(q\in Q\) is _final_ if \(F(q)\neq 0\). Next, we recall the _derivation semantics_ of WTGc from [16]. Let \((v,v^{\prime})\in\mathbb{N}^{*}\times\mathbb{N}^{*}\) be an equality constraint and \(t\in T_{\Sigma}\).
The tree \(t\) satisfies \((v,v^{\prime})\) if and only if \(v,v^{\prime}\in\operatorname{pos}(t)\) and \(t|_{v}=t|_{v^{\prime}}\), and for a finite set \(C\subseteq\mathbb{N}^{*}\times\mathbb{N}^{*}\) of equality constraints, we write \(t\models C\) if \(t\) satisfies all \((v,v^{\prime})\in C\). Let \(G=(Q,\Sigma,F,P,\operatorname{wt})\) be a WTGc. A _sentential form (for \(G\))_ is a tree \(\xi\in T_{\Sigma}(Q)\). Given an input tree \(t\in T_{\Sigma}\), sentential forms \(\xi,\zeta\in T_{\Sigma}(Q)\), a production \(p=\ell\stackrel{{ E}}{{\longrightarrow}}q\in P\), and a position \(w\in\operatorname{pos}(\xi)\), we write \(\xi\Rightarrow_{G,t}^{p,w}\zeta\) if \(\xi|_{w}=\ell\), \(\zeta=\xi[q]_{w}\), and \(t|_{w}\models E\); i.e., the equality constraints \(E\) are fulfilled on \(t|_{w}\). A sequence \(d=(p_{1},w_{1})\cdots(p_{n},w_{n})\in(P\times\mathbb{N}^{*})^{*}\) is a _derivation (of \(G\)) for \(t\)_ if there exist \(\xi_{0},\ldots,\xi_{n}\in T_{\Sigma}(Q)\) such that \(\xi_{0}=t\) and \(\xi_{i-1}\Rightarrow_{G,t}^{p_{i},w_{i}}\xi_{i}\) for all \(i\in[n]\). We call \(d\)_left-most_ if additionally \(w_{1}\prec w_{2}\prec\cdots\prec w_{n}\). Note that the sentential forms \(\xi_{0},\ldots,\xi_{n}\) are uniquely determined if they exist, and for any derivation \(d\) for \(t\) there exists a unique permutation of \(d\) that is a left-most derivation for \(t\). We call \(d\)_complete_ if \(\xi_{n}\in Q\), and in this case we also call it a derivation _to \(\xi_{n}\)_. The set of all complete left-most derivations for \(t\) to \(q\in Q\) is denoted by \(D_{G}^{q}(t)\). A complete derivation to some final state is called _accepting_. If for every \(p\in P\), there exists a tree \(t\in T_{\Sigma}\), a final state \(q\) and a derivation \(d=(p_{1},w_{1})\cdots(p_{m},w_{m})\in D_{G}^{q}(t)\) such that \(F(q)\cdot\operatorname{wt}_{G}(d)\neq 0\) and \(p\in\{p_{1},\ldots,p_{m}\}\); i.e. if every production is used in some accepting derivation, then \(G\) is _trim_. Let \(d=(p_{1},w_{1})\cdots(p_{n},w_{n})\in D_{G}^{q}(t)\) for some \(t\in T_{\Sigma}\) and \(i\in[n]\). Moreso, let \(\{j_{1},\ldots,j_{\ell}\}\) be the set \(\{j\in[n]\mid w_{i}\leq w_{j}\}\) with the indices \(j_{1}<\cdots<j_{\ell}\) of those positions of which \(w_{i}\) is a prefix. We refer to \((p_{j_{1}},w_{i}^{-1}w_{j_{1}}),\ldots,(p_{j_{\ell}},w_{i}^{-1}w_{j_{\ell}})\) as the _derivation for \(t|_{w_{i}}\) incorporated in \(d\)_. Conversely, for \(w\in\mathbb{N}^{*}\) we abbreviate the derivation \((p_{1},ww_{1})\cdots(p_{n},ww_{n})\) by \(wd\). The _weight_ of a derivation \(d=(p_{1},w_{1})\cdots(p_{n},w_{n})\) is defined as \(\operatorname{wt}_{G}(d)=\prod_{i=1}^{n}\operatorname{wt}(p_{i})\). The weighted tree language generated by \(G\), written \(\llbracket G\rrbracket\colon T_{\Sigma}\to\mathbb{N}\), is defined for all \(t\in T_{\Sigma}\) by \[\llbracket G\rrbracket(t)=\sum_{q\in Q,\,d\in D_{G}^{q}(t)}F(q)\cdot \operatorname{wt}_{G}(d)\enspace.\] For \(t\in T_{\Sigma}\) and \(q\in Q\), we will often use the value \(\operatorname{wt}_{G}^{q}(t)\) defined as \(\operatorname{wt}_{G}^{q}(t)=\sum_{d\in D_{G}^{q}(t)}\operatorname{wt}_{G}(d)\). Using distributivity, \(\llbracket G\rrbracket(t)\) then simplifies to \(\llbracket G\rrbracket(t)=\sum_{q\in Q}F(q)\cdot\operatorname{wt}_{G}^{q}(t)\). We call two WTGc _equivalent_ if they generate the same weighted tree language. 
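As an illustration of these definitions (an editorial addition), the sketch below represents trees as (label, children) pairs and implements positions, subtree extraction, and the satisfaction check \(t\models E\); positions are encoded as strings of single-digit child indices purely for brevity.

```python
def positions(t):
    """All positions of a tree t = (label, [children]); the root is the empty word."""
    _, children = t
    pos = [""]
    for i, child in enumerate(children, start=1):
        pos.extend(str(i) + w for w in positions(child))
    return pos

def subtree(t, w):
    """The subtree t|_w for a position w given as a string of child indices."""
    for i in w:
        _, children = t
        t = children[int(i) - 1]
    return t

def satisfies(t, constraints):
    """Check t |= E for a set E of equality constraints (v, v')."""
    ps = set(positions(t))
    return all(v in ps and u in ps and subtree(t, v) == subtree(t, u)
               for (v, u) in constraints)

# delta(gamma(alpha), gamma(alpha)) satisfies the constraint 1 = 2.
alpha = ("alpha", [])
t = ("delta", [("gamma", [alpha]), ("gamma", [alpha])])
assert satisfies(t, {("1", "2")})
```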
We call a WTGc \((Q,\Sigma,F,P,\operatorname{wt})\) a _weighted tree grammar_ (WTG) if \(E=\emptyset\) for every production \(\ell\stackrel{{ E}}{{\longrightarrow}}q\in P\); i.e., no production utilizes equality constraints. Instead of \(\ell\stackrel{{\emptyset}}{{\longrightarrow}}q\) we also simply write \(\ell\to q\). Moreover, we call a WTGc a _weighted tree automaton with equality constraints_ (WTAc) if \(\operatorname{pos}_{\Sigma}(\ell)=\{\varepsilon\}\) for every production \(\ell\stackrel{{ E}}{{\longrightarrow}}q\in P\), and a _weighted tree automaton_ (WTA) if it is both a WTG and a WTAc. The classes of WTGc and WTAc are equally expressive, and they are strictly more expressive than the class of WTA [16]. We call a weighted tree language _regular_ if it is generated by a WTA and _constraint-regular_ if it is generated by a WTGc. Productions with weight \(0\) are obviously useless, so we may assume that \(\operatorname{wt}(p)\neq 0\) for every production \(p\). Finally, we define the size of a WTGc as follows. Let \(G=(Q,\Sigma,F,P,\operatorname{wt})\) be a WTGc and \(p=\ell\stackrel{{ E}}{{\longrightarrow}}q\in P\) be a production. We define the _height of \(p\)_as \(\operatorname{ht}(p)=\operatorname{ht}(\ell)\) and its _size_as \(\operatorname{size}(p)=|\operatorname{pos}(\ell)|\), the _height of \(P\) as \(\operatorname{ht}(P)=\max_{p\in P}\operatorname{ht}(p)\) and its _size_as \(\operatorname{size}(P)=\sum_{p\in P}\operatorname{size}(p)\), and finally the _height of \(G\) as \(\operatorname{ht}(G)=|Q|\cdot\operatorname{ht}(P)\) and its _size_as \(\operatorname{size}(G)=|Q|+\operatorname{size}(P)\). It is known [16] that WTGc can be used to represent homomorphic images of regular weighted tree languages. Let \(A\colon T_{\Sigma}\to\mathbb{N}\) be a regular weighted tree language (effectively given by a WTA) and \(h\colon T_{\Sigma}\to T_{\Gamma}\) be a tree homomorphism. Following [16, Theorem 5] we can construct a WTGc \(G=(Q,\Gamma,F,P,\operatorname{wt})\) of a specific shape such that \(\llbracket G\rrbracket=h_{A}\). More precisely, the constructed WTGc \(G\) has a designated nonfinal _sink state_\(\bot\in Q\) such that \(F(\bot)=0\) as well as \(p_{\gamma}=\gamma(\bot,\ldots,\bot)\rightarrow\bot\in P\) and \(\operatorname{wt}(p_{\gamma})=1\) for every \(\gamma\in\Gamma\). In addition, every production \(p=\ell\xrightarrow{E}q\in P\) satisfies the following two properties. First, \(E\subseteq\operatorname{pos}_{Q}(\ell)^{2}\); i.e., all equality constraints point to the \(Q\)-labeled positions of its left-hand side. Without loss of generality, we can assume that the set \(E\) of equality constraints is reflexive, symmetric, and transitive; i.e., an equivalence relation on a subset \(D\subseteq\operatorname{pos}_{Q}(\ell)\), so not all occurrences of states need to be constrained. Second, \(\ell(v)=\bot\) and \(\ell(w)\neq\bot\) for every \(v\in[w^{\prime}]_{E}\setminus\{w\}\) and \(w^{\prime}\in D\), where \(w=\min_{\preceq}[w^{\prime}]_{E}\); i.e., all but the lexicographically least position in each equivalence class of \(E\) are guarded by state \(\bot\). Essentially, the WTGc \(G\) performs its checks (and charges weights) exclusively on the lexicographically least occurrences of equality-constrained subtrees. All the other subtrees, which by means of the constraint are forced to coincide with another subtree, are simply ignored by the WTGc, which formally means that they are processed in the designated sink state \(\bot\). 
In the following, we will use \(\bot\) to indicate such a sink state, and write \(Q\cup\{\bot\}\) to explicitly indicate its presence. In [16] WTGc of the special shape just discussed were called eq-restricted, but since these will be the primary objects of interest in this work, we simply call them WTGh here. The constructive proof of the following statement can be found in the appendix. [see [16, Theorem 5]] Let \(G=(Q,\Sigma,F,P,\operatorname{wt})\) be a trim WTA and \(h\colon T_{\Sigma}\to T_{\Gamma}\) be a nondeleting and nonerasing tree homomorphism. Then there exists a trim WTGh \(G^{\prime}\) with \([\![G^{\prime}]\!]=h_{[\![G]\!]}\). Moreover, \(\operatorname{size}(G^{\prime})\in\mathcal{O}\big{(}\operatorname{size}(G) \cdot\operatorname{size}(h)\big{)}\) and \(\operatorname{ht}(G^{\prime})\in\mathcal{O}\big{(}\operatorname{size}(h) \big{)}\). Let \(G=(Q\cup\{\bot\},\Gamma,F,P,\operatorname{wt})\) with \(Q=\{q,q_{f}\}\), \(\Gamma=\{\alpha^{(0)},\gamma^{(1)},\delta^{(3)}\}\), \(F(q)=F(\bot)=0\) and \(F(q_{f})=1\), and the following set \(P\) of productions. \[\Big{\{}\alpha\rightarrow_{1}q,\;\gamma(q)\rightarrow_{2}q,\;\delta\big{(}q, \gamma(\bot),q\big{)}\xrightarrow{1=21}q_{f},\quad\alpha\rightarrow_{1}\bot, \;\gamma(\bot)\rightarrow_{1}\bot,\;\delta(\bot,\bot,\bot)\rightarrow_{1}\bot \Big{\}}\] The WTGc \(G\) is a WTGh. It generates the homomorphic image \([\![G]\!]=h_{A}\) for the tree homomorphism \(h\) induced by the mapping \(\alpha\mapsto\alpha\), \(\gamma\mapsto\gamma(x_{1})\), and \(\sigma\mapsto\delta\big{(}x_{2},\gamma(x_{2}),x_{1}\big{)}\) applied to the regular weighted tree language \(A\colon T_{\Sigma}\rightarrow\mathbb{N}\) given by \(A(t)=2^{\operatorname{\text{\rm{I}op}},(t)}\) for every \(t\in T_{\Sigma}\) with \(\Sigma=\{\alpha^{(0)},\gamma^{(1)},\sigma^{(2)}\}\). The weighted tree language \([\![G]\!]\) is itself not regular because its support is clearly not a regular tree language. The restrictions in the definition of a WTGh allow us to trim it effectively using a simple reachability algorithm. For more details, we refer the reader to the appendix. Let \(G=(Q\cup\{\bot\},\Sigma,F,P,\operatorname{wt})\) be a WTGh. An equivalent, trim WTGh \(G^{\prime}\) can be constructed in polynomial time. ## 3 Substitutions in the Presence of Equality Constraints This short section recalls from [16] some definitions together with a pumping lemma for WTGh, which will be essential for deciding the integer-weighted HOM-problem. First, we need to refine the substitution of trees such that it complies with existing constraints. [see [16] and cf. [11]] Let \(G=(Q\cup\{\bot\},\Sigma,F,P,\operatorname{wt})\) be a WTGh, and let \(d=(p_{1},w_{1})\cdots(p_{m},w_{m})\) be a complete left-most derivation for a tree \(t\in T_{\Sigma}\) to a state \(q\in Q\cup\{\bot\}\). Furthermore, let \(j\in[m]\) such that \(q_{j}=\bot\) if and only if \(q=\bot\); i.e., the target state \(q_{j}\) of production \(p_{j}\) is \(\bot\) if and only if \(q=\bot\). We note that automatically \(q_{j}=\bot\) whenever \(q=\bot\). Finally, let \(d^{\prime}\) be a derivation to \(q_{j}\) for some tree \(t^{\prime}\in T_{\Sigma}\), and let \(d^{\prime}_{\bot}\) be the derivation of \(G\) for \(t^{\prime}\) where every occurring state is \(\bot\). We define the substitution \(d[\![d^{\prime}]\!]_{w_{j}}\) of \(d^{\prime}\) into \(d\) at \(w_{j}\) recursively as follows. 
* _If_ \(w_{j}=\varepsilon\) _(i.e.,_ \(j=m\)_), then we define_ \(d[\![d^{\prime}]\!]_{w_{j}}=d^{\prime}\)_._ * _Otherwise, let_ \(p_{m}=\ell\stackrel{{ E}}{{\longrightarrow}}q\) _be the production utilized last,_ \(\operatorname{pos}_{Q}(\ell)=\{v_{1},\ldots,v_{n}\}\)_, and let_ \(d_{1},\ldots,d_{n}\) _be the derivations for_ \(t|_{v_{1}},\ldots,t|_{v_{n}}\) _incorporated in_ \(d\)_, respectively. Obviously there exists_ \(s\in[n]\) _such that_ \(v_{s}\leq w_{j}\)_. Let_ \(\hat{w}=v_{s}^{-1}w_{j}\)_, which is a position occurring in_ \(d_{s}\)_. Correspondingly, we define_ \(d_{s}^{\prime}=d_{s}[\![d^{\prime}]\!]_{\hat{w}}\) _and for every_ \(i\in[n]\setminus\{s\}\)_, we define_ \(d_{i}^{\prime}=d_{i}[\![d^{\prime}_{\perp}]\!]_{\hat{w}}\) _if_ \((v_{i},v_{s})\in E\) _and otherwise_ \(d_{i}^{\prime}=d_{i}\)_. Then_ \(d[\![d^{\prime}]\!]_{w_{j}}\) _is obtained by reordering the derivation_ \((v_{1}d_{1}^{\prime})\cdots(v_{n}d_{n}^{\prime})(p_{m},w_{m})\) _such that it is left-most._ _The tree derived by_ \(d[\![d^{\prime}]\!]_{w_{j}}\) _is denoted by_ \(t[\![t^{\prime}]\!]_{w_{j}}^{d}\) _or simply_ \(t[\![t^{\prime}]\!]_{w_{j}}\)_, if the original derivation for_ \(t\) _is clear from the context._ **Example 6**.: Consider the WTGh \(G\) of Example 3 and the following tree \(t\) it generates into which we want to substitute the tree \(t^{\prime}=\gamma(\alpha)\) at position \(w=11\). We consider the following complete left-most derivation for \(t\) to \(q_{f}\). \[d=\left(\alpha\to q,11\right)\left(\gamma(q)\to q,1\right) \quad\left(\alpha\to\bot,211\right)\left(\gamma(\bot)\to\bot,21\right)\] \[\quad\left(\alpha\to q,31\right)\left(\gamma(q)\to q,3\right) \left(\delta\big{(}q,\gamma(\bot),q\big{)}\stackrel{{ 1=21}}{{\longrightarrow}}q_{f},\varepsilon\right)\] Moreover, let \(d^{\prime}=\left(\alpha\to q,1\right)\left(\gamma(q)\to q,\varepsilon\right)\) and \(d_{\perp}^{\prime}=\left(\alpha\to\bot,1\right)\left(\gamma(\bot)\to\bot, \varepsilon\right)\). With the notation of Definition 5, in the first step we have \(v_{1}=1\), \(v_{2}=21\), \(v_{3}=3\), \(d_{1}=d_{3}=d^{\prime}\), \(d_{2}=d_{\perp}^{\prime}\), and \(\hat{w}=v_{1}^{-1}w=1\). Respecting the only constraint \(1=21\), we set \(d_{1}^{\prime}=d_{1}[\![d^{\prime}]\!]_{\hat{w}}=d^{\prime}[\![d^{\prime}]\!]_{1}\), \(d_{2}^{\prime}=d_{2}[\![d^{\prime}_{\perp}]\!]_{\hat{w}}=d_{\perp}^{\prime}[\![d ^{\prime}_{\perp}]\!]_{1}\), and \(d_{3}^{\prime}=d_{3}=d^{\prime}\). Eventually, \(d_{1}^{\prime}\!=\!(\alpha\to q,11)(\gamma(q)\to q,1)(\gamma(q)\to q,\varepsilon)\) and \(d_{2}^{\prime}\!=\!(\alpha\to\bot,11)(\gamma(\bot)\to\bot,1)(\gamma(\bot) \to\bot,\varepsilon)\). Hence, we obtain the following derivation \(d[\![d^{\prime}]\!]_{11}\) for our new tree \(t[\![t^{\prime}]\!]_{11}\). \[d[\![d^{\prime}]\!]_{11}=\left(\alpha\to q,11\right)\left(\gamma(q) \to q,11\right)\left(\gamma(q)\to q,1\right)\left(\alpha\to\bot,2111\right) \left(\gamma(\bot)\to\bot,211\right)\] \[\quad\left(\gamma(\bot)\to\bot,21\right)\quad\left(\alpha\to q,31 \right)\left(\gamma(q)\to q,3\right)\left(\delta\big{(}q,\gamma(\bot),q \big{)}\stackrel{{ 1=21}}{{\longrightarrow}}q_{f},\varepsilon\right)\] Although \(t|_{31}=\alpha\) also coincides with the subtree \(t|_{11}=\alpha\) we replaced, these two subtrees are not equality-constrained, so the simultaneous substitution does not affect \(t|_{31}\). 
The substitution of Definition 5 allows us to prove a pumping lemma for the class of WTGh: If \(d\) is an accepting derivation of a WTGh \(G=(Q\cup\{\bot\},\Sigma,F,P,\operatorname{wt})\) for a tree \(t\) with \(\operatorname{ht}(t)>\operatorname{ht}(G)\), then there exist at least \(|Q\setminus\{\bot\}|+1\) positions \(w_{1}>\cdots>w_{|Q|+1}\) in \(t\) at which \(d\) applies productions with non-sink target states. By the pigeonhole principle, there thus exist two positions \(w_{i}>w_{j}\) in \(t\) at which \(d\) applies productions with the same non-sink target state. Employing the substitution we just defined, we can substitute \(t|_{w_{j}}\) into \(w_{i}\) and obtain a derivation of \(G\) for \(t[\![t_{w_{j}}]\!]_{w_{i}}\). This process can be repeated to obtain an infinite sequence of trees strictly increasing in size. Formally, the following lemma was proved in [16]. **Lemma 7** ([16, Lemma 4]).: _Let \(G=(Q\cup\{\bot\},\Sigma,F,P,\mathrm{wt})\) be a WTGh. Consider some tree \(t\in T_{\Sigma}\) and non-sink state \(q\in Q\setminus\{\bot\}\) such that \(\mathrm{ht}(t)>\mathrm{ht}(G)\) and \(D^{q}_{G}(t)\neq\emptyset\). Then there are infinitely many pairwise distinct trees \(t_{0},t_{1},\dots\) such that \(D^{q}_{G}(t_{i})\neq\emptyset\) for all \(i\in\mathbb{N}\)._ **Example 8**.: Recall the WTGh \(G\) of Example 3. We have \(\mathrm{ht}(P)=2\) and \(\mathrm{ht}(G)=4\), but for simplicity, we choose the smaller tree \(t=\delta(\gamma(\alpha),\gamma(\gamma(\alpha)),\gamma(\alpha))\), which we also considered in Example 6, since it also allows pumping. The derivation \(d\) presented in Example 6 for \(t\) applies the productions \((\alpha\to q)\) at \(11\) and \(\gamma(q)\to q\) at \(1\), so we substitute \(t|_{1}=\gamma(\alpha)\) at \(11\) to obtain \(t[\![\gamma(\alpha)]\!]_{11}\). In fact, this is exactly the substitution we illustrated in Example 6. ## 4 The Decision Procedure Let us now turn to the \(\mathbb{N}\)-weighted version of the HOM-problem. In the following, we show that the regularity of the homomorphic image of a regular \(\mathbb{N}\)-weighted tree language is decidable in polynomial time. More precisely, we prove the following theorem. **Theorem 9**.: _The weighted HOM-problem over \(\mathbb{N}\) is polynomial; i.e. for fixed ranked alphabets \(\Gamma\) and \(\Sigma\), given a trim WTA \(H\) over \(\Gamma\), and a nondeleting, nonerasing tree homomorphism \(h\colon T_{\Gamma}\to T_{\Sigma}\), it is decidable in polynomial time whether \(h_{[\![H]\!]}\) is regular._ For the proof, we follow the general outline of the unweighted case [11]. Given a regular weighted tree language \(A\) (represented by a trim WTA) and a tree homomorphism \(h\), we begin by constructing a trim WTGh \(G\) for its image \([\![G]\!]=h_{A}\) applying Theorem 2. We then show that \([\![G]\!]\) is regular if and only if for all derivations of \(G\) the equality constraints occurring in the derivation only apply to subtrees of height at most \(\mathrm{ht}(G)\). In other words, if there exists a production \(\ell\stackrel{{ E}}{{\longrightarrow}}q\) in \(G\) such that for some equality constraint \((u,v)\in E\) with non-sink state \(q=\ell(u)\) there exists a tree \(t\in T_{\Sigma}\) with \(\mathrm{ht}(t)>\mathrm{ht}(G)\) and \(D^{q}_{G}(t)\neq\emptyset\), then \([\![G]\!]\) is not regular, and if no such production exists, then \([\![G]\!]\) is regular. There are thus three parts to our proof. First, we show that the existence of such a production is decidable in polynomial time. 
Then we show that \([\![G]\!]\) is regular if no such production exists. Finally, we show that \([\![G]\!]\) is not regular if such a production exists. For convenience, we attach a name to the property described here. **Definition 10**.: _Let \(G=(Q\cup\{\bot\},\Sigma,F,P,\mathrm{wt})\) be a trim WTGh. We say that \(G\) has the large duplication property if there exist a production \(\ell\stackrel{{ E}}{{\longrightarrow}}q\in P\), an equality constraint \((u,v)\in E\) with \(\ell(u)\neq\bot=\ell(v)\), and a tree \(t\in T_{\Sigma}\) such that \(\mathrm{ht}(t)>\mathrm{ht}(G)\) and \(D^{\ell(u)}_{G}(t)\neq\emptyset\)._ We start with the decidability of the large duplication property. **Lemma 11**.: _Consider a fixed ranked alphabet \(\Sigma\). The following is decidable in polynomial time: Given a trim WTGh \(G\), does it satisfy the large duplication property?_ Proof.: Let \(G=(Q\cup\{\bot\},\Sigma,F,P,\mathrm{wt})\) and construct the directed graph \(G=(Q,E)\) with edges \(E=\bigcup_{\ell\stackrel{{ E}}{{\longrightarrow}}q\in P}\{(q^{ \prime},q)\ |\ q^{\prime}\in Q,\mathrm{pos}_{q^{\prime}}(\ell)\neq\emptyset\}\). Clearly, the large duplication property is equivalent to the condition that there exists a production \(\ell\stackrel{{ E}}{{\longrightarrow}}q\in P\), an equality constraint \((u,v)\in E\) with \(\ell(u)\neq\bot=\ell(v)\), and a state \(q^{\prime}\in Q\setminus\{\bot\}\) such that there exists a cycle from \(q^{\prime}\) to \(q^{\prime}\) in \(G\) and a path from \(q^{\prime}\) to \(q\) in \(G\). This equivalent condition can be checked in polynomial time. The equivalence of the two statements is easy to establish. If the large duplication property holds, then the pumping lemma [16, Lemma 4] exhibits the required cycle and path. Conversely, if the cycle and path exist, then the pumping lemma [16, Lemma 4] can be used to derive arbitrarily tall trees for which a derivation exists. Next, we show that a WTGh \(G\) generates regular \(\llbracket G\rrbracket\) if it does not satisfy the large duplication property. To this end, we construct the _linearization_ of \(G\). The linearization of a WTGh \(G\) is a WTG that simulates all derivations of \(G\) which only ensure the equivalence of subtrees of height at most \(\operatorname{ht}(G)\). This is achieved by replacing every production \(\ell\stackrel{{ E}}{{\longrightarrow}}q\) in \(G\) by the collection of all productions \(\ell^{\prime}\to q\) which can be obtained by substituting each position constrained by \(E\) with a compatible tree of height at most \(\operatorname{ht}(G)\) that satisfies the equality constraints of \(E\). Note that positions in \(\ell\) that are unconstrained by \(E\) are unaffected by these substitutions. Formally, we define the linearization following [11, Definition 7.1]. Let \(G=(Q\cup\{\bot\},\Sigma,F,P,\operatorname{wt})\) be a WTGh. The linearization \(\operatorname{lin}(G)\) of \(G\) is the WTG \(\operatorname{lin}(G)=(Q\cup\{\bot\},\Sigma,F,P_{\operatorname{lin}}, \operatorname{wt}_{\operatorname{lin}})\), where \(P_{\operatorname{lin}}\) and \(\operatorname{wt}_{\operatorname{lin}}\) are defined as follows. 
For \(\ell^{\prime}\in T_{\Sigma}(Q)\setminus Q\) and \(q\in Q\), we let \((\ell^{\prime}\to q)\in P_{\operatorname{lin}}\) if and only if there exist a production \((\ell\stackrel{{ E}}{{\longrightarrow}}q)\in P\), positions \(w_{1},\ldots,w_{k}\in\operatorname{pos}_{Q\cup\{\bot\}}(\ell)\), and trees \(t_{1},\ldots,t_{k}\in T_{\Sigma}\) such that \(t_{i}=t_{j}\) if \((w_{i},w_{j})\in E\) for all \(i,j\in[k]\), \(\ell^{\prime}=\ell[t_{1}]_{w_{1}}\cdots[t_{k}]_{w_{k}}\), and \(D_{G}^{\ell(w_{i})}(t_{i})\neq\emptyset\) and \(\operatorname{ht}(t_{i})\leq\operatorname{ht}(G)\) for all \(i\in[k]\). For every such production \(\ell^{\prime}\to q\) we define \(\operatorname{wt}_{\operatorname{lin}}(\ell^{\prime}\to q)\) as the sum over all weights \[\operatorname{wt}(\ell\stackrel{{ E}}{{\longrightarrow}}q)\cdot\prod_{i\in[k]}\operatorname{wt}_{G}^{\ell(w_{i})}(t_{i})\] for all \((\ell\stackrel{{ E}}{{\longrightarrow}}q)\in P\), \(w_{1},\ldots,w_{k}\in\operatorname{pos}_{Q\cup\{\bot\}}(\ell)\), and \(t_{1},\ldots,t_{k}\in T_{\Sigma}\) as above. If a trim WTGh \(G\) does not satisfy the large duplication property, then every equality constraint in every derivation of \(G\) only ensures the equality of subtrees of height at most \(\operatorname{ht}(G)\). Thus, \(\operatorname{lin}(G)\) and \(G\) generate the same weighted tree language \(\llbracket G\rrbracket=\llbracket\operatorname{lin}(G)\rrbracket\), which is then regular because \(\operatorname{lin}(G)\) is a WTG. Thus we summarize: Let \(G\) be a trim WTGh and suppose that \(G\) does not satisfy the large duplication property. Then \(\llbracket G\rrbracket\) is a regular weighted tree language. Finally, we show that if a WTGh \(G=(Q\cup\{\bot\},\Sigma,F,P,\operatorname{wt})\) satisfies the large duplication property, then \(\llbracket G\rrbracket\) is not regular. For this, we first show that if \(G\) satisfies the large duplication property, then we can decompose it into two WTGh \(G_{1}\) and \(G_{2}\) such that \(\llbracket G\rrbracket=\llbracket G_{1}\rrbracket+\llbracket G_{2}\rrbracket\) and at least one of \(\llbracket G_{1}\rrbracket\) and \(\llbracket G_{2}\rrbracket\) is not regular. To conclude the desired statement, we then show that the sum \(\llbracket G\rrbracket=\llbracket G_{1}\rrbracket+\llbracket G_{2}\rrbracket\) is also not regular. For the decomposition, consider the following idea. Assume that there exists a production \(p=(\ell\stackrel{{ E}}{{\longrightarrow}}q)\in P\) as in the large duplication property such that \(F(q)\neq 0\). Then we create two copies \(G_{1}\) and \(G_{2}\) of \(G\) as follows. In \(G_{1}\) we set all final weights to \(0\), add a new state \(f\) with final weight \(F(q)\), and add the new production \((\ell\stackrel{{ E}}{{\longrightarrow}}f)\) with the same weight as \(p\). On the other hand, in \(G_{2}\) we set the final weight of \(q\) to \(0\), add a new state \(f\) with final weight \(F(q)\), and for every production \(p^{\prime}=(\ell^{\prime}\stackrel{{ E^{\prime}}}{{\longrightarrow}}q)\in P\) except \(p\), we add the new production \(\ell^{\prime}\stackrel{{ E^{\prime}}}{{\longrightarrow}}f\) to \(G_{2}\) with the same weight as \(p^{\prime}\).
Then \(\llbracket G\rrbracket=\llbracket G_{1}\rrbracket+\llbracket G_{2}\rrbracket\) because every derivation of \(G\) whose last production is \(p\) is now a derivation of \(G_{1}\) to \(f\), and every other derivation is either directly a derivation of \(G_{2}\) or, in the case of other derivations to \(q\), a derivation of \(G_{2}\) to \(f\). By our assumption on the production \(p=(\ell\stackrel{{ E}}{{\longrightarrow}}q)\), there exist a tree \(t\in T_{\Sigma}\) with \(\operatorname{ht}(t)>\operatorname{ht}(G)\) and a constraint \((u,v)\in E\) with \(\ell(u)\neq\bot=\ell(v)\) and \(D_{G}^{\ell(u)}(t)\neq\emptyset\). Thus, every tree \(t^{\prime}\) generated by \(G_{1}\) satisfies \(t^{\prime}|_{u}=t^{\prime}|_{v}\), and by Lemma 7, there exist infinitely many pairwise distinct trees with a derivation to \(\ell(u)\). The support (i.e., set of nonzero weighted trees) of \(\llbracket G_{1}\rrbracket\) is therefore not a regular tree language. This implies that \(\llbracket G_{1}\rrbracket\) is not regular as the support of every regular weighted tree language over \(\mathbb{N}\) is a regular tree language [9]. In general, we cannot expect that a production \(\ell\xrightarrow{E}q\) as in the large duplication property exists with \(F(q)\neq 0\). For details on the general case, we refer the reader to the appendix. Let \(G=(Q\cup\{\bot\},\Sigma,F,P,\mathrm{wt})\) be a trim WTGh that satisfies the large duplication property. Then there exist two trim WTGh \(G_{1}=(Q_{1}\cup\{\bot\},\Sigma,F_{1},P_{1},\mathrm{wt}_{1})\) and \(G_{2}=(Q_{2}\cup\{\bot\},\Sigma,F_{2},P_{2},\mathrm{wt}_{2})\) such that \(\llbracket G\rrbracket=\llbracket G_{1}\rrbracket+\llbracket G_{2}\rrbracket\) and for some \(f\in Q_{1}\) we have * \(F_{1}(f)\neq 0\) _and_ \(F_{1}(q)=0\) _for all_ \(q\in Q_{1}\setminus\{f\}\)_, and_ * _there exists exactly one production_ \(p_{\mathrm{f}}=(\ell_{\mathrm{f}}\xrightarrow{E_{\mathrm{f}}}f)\in P_{1}\) _with target state_ \(f\)_, and for this production there exists_ \((u,v)\in E_{\mathrm{f}}\) _with_ \(\ell_{\mathrm{f}}(u)\neq\ell_{\mathrm{f}}(v)=\bot\) _and an infinite sequence of pairwise distinct trees_ \(t_{0},t_{1},t_{2},\ldots\in T_{\Sigma}\) _such that_ \(D_{G_{1}}^{\ell_{\mathrm{f}}(u)}(t_{i})\neq\emptyset\) _for all_ \(i\in\mathbb{N}\)_. We present an example for the decomposition in Lemma 14. Consider the trim WTGh \(G=(Q\cup\{\bot\},\Sigma,F,P,\mathrm{wt})\) with \(Q=\{q_{0},\bar{q},q_{\mathrm{f}}\}\), \(\Sigma=\{\alpha^{(0)},\gamma^{(1)},\sigma^{(2)},\gamma_{1}^{(1)},\gamma_{2}^{(1)}\}\), final weights \(F(q_{\mathrm{f}})=1\) and \(F(q_{0})=F(\bar{q})=F(\bot)=0\), and the set \(P=P_{\bot}\cup P^{\prime}\) defined by \(P^{\prime}=\big{\{}\,\alpha\to_{1}q_{0},\,\gamma(q_{0})\to_{1}q_{0},\,\sigma(q_{0},\bot)\xrightarrow{1=2}_{2}\bar{q},\,\gamma_{1}(\bar{q})\to_{2}\bar{q},\,\gamma_{2}(\bar{q})\to_{2}\bar{q},\,\sigma(\bar{q},q_{0})\to_{2}q_{\mathrm{f}}\,\big{\}}\) and the usual productions targeting \(\bot\) in \(P_{\bot}\). Trees of the form \(\gamma(\cdots(\gamma(\alpha))\cdots)\) of arbitrary height are subject to the constraint \(1=2\), so \(G\) satisfies the large duplication property. We consider \(t^{\prime}\) as in Figure 1 and use its (unique) derivation in \(G\). Following the approach sketched above, we choose a new state \(f\) and define \(G_{1}=(Q\cup\{f\}\cup\{\bot\},\Sigma,F_{1},P_{1},\mathrm{wt}_{1})\), where \(F_{1}(f)=1\) and \(F_{1}(q)=0\) for every \(q\in Q\cup\{\bot\}\), and \(P_{1}=P\cup\{p_{\mathrm{f}}\}\) with the new production \(p_{\mathrm{f}}\) depicted in Figure 1, which joins all the productions of \(G\) used to derive \(t^{\prime}\), from the one evoking the large duplication property to the one targeting a final state. It remains to construct a WTGh \(G_{2}\) such that \(\llbracket G\rrbracket=\llbracket G_{1}\rrbracket+\llbracket G_{2}\rrbracket\). All productions of \(G\) still occur in \(G_{2}\), but \(q_{\mathrm{f}}\) is not final anymore. Instead, we add a state \(f\) with \(F_{2}(f)=F(q_{\mathrm{f}})=1\) and make sure that this state adopts all other accepting derivations that formerly led to \(q_{\mathrm{f}}\). For this, we handle first the derivations that coincide with the derivation for \(t^{\prime}\) at the juncture positions \(\varepsilon\) and \(1\), but not at \(2\). This leads to two new productions \(p_{1}^{1}\) and \(p_{2}^{1}\).
Following the approach sketched above, we choose a new state \(f\) and define \(G_{1}=(Q\cup\{f\}\cup\{\bot\},\Sigma,F_{1},P_{1},\mathrm{wt}_{1})\), where \(F_{1}(f)=1\) and \(F_{1}(q)=0\) for every \(q\in Q\cup\{\bot\}\), and \(P_{1}=P\cup\{p_{\mathrm{f}}\}\) with the new production \(p_{\mathrm{f}}\) depicted in Figure 1, which joins all the productions of \(G\) used to derive \(t^{\prime}\), from the one evoking the large duplication property to the one targeting a final state. It remains to construct a WTGh \(G_{2}\) such that \(\llbracket\!\llbracket G\rrbracket=\llbracket\!\llbracket G_{1}\rrbracket+ \llbracket\!\llbracket G_{2}\rrbracket\). All productions of \(G\) still occur in \(G_{2}\), but \(q_{\mathrm{f}}\) is not final anymore. Instead, we add a state \(f\) with \(F_{2}(f)=F(q_{\mathrm{f}})=1\) and make sure that this state adopts all other accepting derivations that formerly led to \(q_{\mathrm{f}}\). For this, we handle first the derivations that coincide with the derivation for \(t^{\prime}\) at the juncture positions \(\varepsilon\) and \(1\), but not at \(2\). This leads to the following new productions \(p_{1}^{1}\) and \(p_{2}^{1}\): \[\begin{array}{ Next we cover the derivations that differ from the derivation for \(t^{\prime}\) at the position \(1\) but coincide with it at the root. This leads to the new productions Apart from the production incorporated at the root of \(p_{\text{f}}\), no other production of \(G\) targets \(q_{\text{f}}\) directly, so no more productions are added to \(P_{2}\). Finally, we define the WTGN \(G_{2}=(Q\cup\{f\}\cup\{\bot\},\Sigma,F_{2},P_{2},\text{wt}_{2})\) with \(F_{2}(f)=F(q_{\text{f}})=1\), \(F_{2}(q_{\text{f}})=F_{2}(q_{0})=F_{2}(\bar{q})=F_{2}(\bot)=0\), and \(P_{2}=P\cup\{p_{1}^{1},p_{2}^{1}\}\cup\{p_{1}^{2},p_{2}^{2}\}\). It remains to show that the existence of a decomposition \(\llbracket G\rrbracket=\llbracket G_{1}\rrbracket+\llbracket G_{2}\rrbracket\) as in Lemma 14 implies the nonregularity of \(\llbracket G\rrbracket\). For this, we employ the following idea. Consider a ranked alphabet \(\Sigma\) containing a letter \(\sigma\) of rank \(2\), a WTA \(G^{\prime}=(Q,\Sigma,F,P,\text{wt})\) over \(\Sigma\) (which exemplifies \(G_{2}\)), and a sequence \(t_{0},t_{1},t_{2},\ldots\in T_{\Sigma}\) of pairwise distinct trees. At this point, we assume that \(P\) contains all possible productions, but we may have \(\text{wt}(p)=0\) for \(p\in P\). Using the initial algebra semantics [9], we can find a matrix representation for the weights assigned by \(G^{\prime}\) to trees of the form \(\sigma(t_{i},t_{j})\) as follows. We enumerate the states \(Q=\{q_{1},\ldots,q_{n}\}\) and for every \(i\in\mathbb{N}\) define a (column) vector \(\nu_{i}\in\mathbb{N}^{n}\) by \((\nu_{i})_{k}=\text{wt}_{G^{\prime}}^{q_{i}}(t_{i})\) for \(k\in[n]\). Furthermore, we define a matrix \(N\in\mathbb{N}^{n\times n}\) by \(N_{kh}=\sum_{q\in Q}F(q)\cdot\text{wt}(\sigma(q_{k},q_{h})\to q)\) for \(k,h\in[n]\). Then \(\llbracket G^{\prime}\rrbracket(\sigma(t_{i},t_{j}))=\nu_{i}^{\text{T}}N\nu_{j}\) for all \(i,j\in\mathbb{N}\), where \(\nu_{i}^{\text{T}}\) is the transpose of \(\nu_{i}\). We employ this matrix representation to show that the sum of \(\llbracket G^{\prime}\rrbracket\) and the (nonregular) characteristic function \(1_{L}\) of the tree language \(L=\{\sigma(t_{i},t_{i})\mid i\in\mathbb{N}\}\) is not regular. We proceed by contradiction and assume that \(\llbracket G^{\prime}\rrbracket+1_{L}\) is regular. 
Thus we can find an analogous matrix representation using a matrix \(N^{\prime}\) and vectors \(\nu_{i}^{\prime}\) for \(\llbracket G^{\prime}\rrbracket+1_{L}\). Since the trees \(t_{0},t_{1},t_{2},\dots\) are pairwise distinct, we can write \[\big{(}\llbracket G^{\prime}\rrbracket+1_{L}\big{)}\big{(}\sigma(t_{i},t_{j}) \big{)}=(\nu_{i}^{\prime})^{\text{T}}N^{\prime}\nu_{j}^{\prime}=\llbracket G ^{\prime}\rrbracket\big{(}\sigma(t_{i},t_{j})\big{)}+\delta_{ij}=\nu_{i}^{ \text{T}}N\nu_{j}+\delta_{ij}\] for all \(i,j\in\mathbb{N}\), where \(\delta_{ij}\) denotes the Kronecker delta. The vectors \(\nu_{i}^{\prime}\) and \(\nu_{i}\) contain nonnegative integers, so we may consider the concatenated vectors \(\langle\nu_{i}^{\prime},\nu_{i}\rangle\) as vectors of \(\mathbb{Q}^{m}\) where \(m\in\mathbb{N}\) is the sum of number of states of \(G^{\prime}\) and of the WTA we assumed recognizes \(\llbracket G^{\prime}\rrbracket+1_{L}\). Since \(\mathbb{Q}^{m}\) is a finite dimensional \(\mathbb{Q}\)-vector space, the \(\mathbb{Q}\)-vector space spanned by the family \((\langle\nu_{i}^{\prime},\nu_{i}\rangle)_{i\in\mathbb{N}}\) is also finite dimensional. We may thus select a finite generating set from \((\langle\nu_{i}^{\prime},\nu_{i}\rangle)_{i\in\mathbb{N}}\). For simplicity, we assume that \(\langle\nu_{1}^{\prime},\nu_{1}\rangle,\ldots,\langle\nu_{K}^{\prime},\nu_{K }\rangle\) form such a generating set. Thus there exist \(a_{1},\ldots,a_{K}\in\mathbb{Q}\) with \(\langle\nu_{K+1}^{\prime},\nu_{K+1}\rangle=\sum_{i\in[K]}a_{i}\langle\nu_{i}^{ \prime},\nu_{i}\rangle\). Applying the usual distributivity laws for matrix multiplication, we reach a contradiction as follows. \[\big{(}\llbracket G^{\prime}\rrbracket+1_{L}\big{)}\big{(}\sigma(t _{K+1},t_{K+1})\big{)} =\,(\nu_{K+1}^{\prime})^{\text{T}}N^{\prime}\nu_{K+1}^{\prime}\,= \sum_{i\in[K]}a_{i}(\nu_{i}^{\prime})^{\text{T}}N^{\prime}\nu_{K+1}^{\prime}\] \[=\sum_{i\in[K]}a_{i}\nu_{i}^{\text{T}}N\nu_{K+1}=\nu_{K+1}^{\text {T}}N\nu_{K+1}=\llbracket G^{\prime}\rrbracket\big{(}\sigma(t_{K+1},t_{K+1}) \big{)}\] For the general case, we do not want to assume that \(\llbracket G_{2}\rrbracket\) is regular, so we cannot assume to have a matrix representation as we had for \(\llbracket G^{\prime}\rrbracket\) above. In order to make our idea work, we identify a set of trees for which the behavior of \(\llbracket G_{1}\rrbracket+\llbracket G_{2}\rrbracket\) resembles that of \(\llbracket G^{\prime}\rrbracket+1_{L}\); more precisely, we construct a context \(C\) and a sequence \(t_{0},t_{1},t_{2},\dots\) of pairwise distinct trees such that \((\llbracket\![G_{1}]\!]+\llbracket\![G_{2}]\!])(C(t_{i},t_{j}))=\nu_{i}^{(1)}N\nu_{j}^{( 2)}+\delta_{ij}\mu_{i}\) for all \(i,j\in\mathbb{N}\) and additionally, \(\mu_{i}>0\) for all \(i\in\mathbb{N}\). Unfortunately, working with a 2-context \(C\) may be insufficient if \(G_{1}\) uses constraints of the form \(\{v=v^{\prime},v^{\prime}=v^{\prime\prime}\}\), where more than two positions are constrained to be pairwise equivalent. Therefore, we have to consider more general \(n\)-contexts \(C\) and then identify a sequence of trees such that the equation above is satisfied on \(C(t_{i},t_{j},t_{j},\ldots,t_{j})\). We are now ready for the final theorem. For this, we will use the following version of Ramsey's theorem [19]. For a set \(X\), we denote by \(\binom{X}{2}\) the set of all subsets of \(X\) of size 2. Let \(k\geq 1\) be an integer and \(f\colon\binom{\mathbb{N}}{2}\to[k]\) a mapping. 
There exists an infinite subset \(E\subseteq\mathbb{N}\) such that \(f|_{\binom{E}{2}}\equiv i\) for some \(i\in[k]\). Let \(G=(Q\cup\{\bot\},\Sigma,F,P,\mathrm{wt})\) be a trim WTGh. If \(G\) satisfies the large duplication property, then \(\llbracket G\rrbracket\) is not regular. Proof.: By Lemma 14 there exist two trim WTGh \(G_{1}=(Q_{1}\cup\{\bot\},\Sigma,F_{1},P_{1},\mathrm{wt}_{1})\) and \(G_{2}=(Q_{2}\cup\{\bot\},\Sigma,F_{2},P_{2},\mathrm{wt}_{2})\) with \(\llbracket G\rrbracket(t)=\llbracket G_{1}\rrbracket(t)+\llbracket G_{2}\rrbracket(t)\) for all \(t\in T_{\Sigma}\). Additionally, there exists \(f\in Q_{1}\) with \(F_{1}(f)\neq 0\) and \(F_{1}(q)=0\) for all \(q\in Q_{1}\setminus\{f\}\) and there exists exactly one production \(p_{\mathrm{f}}=(\ell_{\mathrm{f}}\stackrel{{ E_{\mathrm{f}}}}{{\longrightarrow}}f)\in P_{1}\) whose target state is \(f\). Finally, for this production \(p_{\mathrm{f}}\) there exists \((u,v)\in E_{\mathrm{f}}\) with \(\ell_{\mathrm{f}}(u)\neq\ell_{\mathrm{f}}(v)=\bot\) and an infinite sequence \(t_{0},t_{1},t_{2},\ldots\in T_{\Sigma}\) of pairwise distinct trees with \(D_{G_{1}}^{\ell_{\mathrm{f}}(u)}(t_{i})\neq\emptyset\) for all \(i\in\mathbb{N}\). Let \(t\in T_{\Sigma}\) be such that \(D_{G_{1}}^{f}(t)\neq\emptyset\), and let \(w_{1},\ldots,w_{r}\) be an enumeration of all positions that are equality-constrained to \(u\) via \(E_{\mathrm{f}}\), where we assume that \(w_{1}=u\). We define a context \(C=t[\Box]_{w_{1}}\cdots[\Box]_{w_{r}}\). Then \(\llbracket G_{1}\rrbracket(C(t_{i},t_{j},t_{j},\ldots,t_{j}))>0\) if and only if \(i=j\). Let us establish some additional notation. Let \(k,h\in\mathbb{N}\) and assume there is \(q\in Q_{2}\) with \(F_{2}(q)\neq 0\) and \(d=(p_{1},w_{1})\cdots(p_{m},w_{m})\in D_{G_{2}}^{q}(C(t_{k},t_{h},t_{h},\ldots,t_{h}))\). Let \(p_{i}=\ell_{i}\stackrel{{ E_{i}}}{{\longrightarrow}}q_{i}\) for every \(i\in[m]\), and for a set \(X\subseteq\mathrm{pos}(C(t_{k},t_{h},t_{h},\ldots,t_{h}))\), we let \(i_{1}<\cdots<i_{n}\) be such that \(w_{i_{1}},\ldots,w_{i_{n}}\) is an enumeration of \(\{w_{1},\ldots,w_{m}\}\cap X\); i.e., all positions in \(X\) to which \(d\) applies productions. We set \(d|_{X}=(p_{i_{1}},w_{i_{1}})\cdots(p_{i_{n}},w_{i_{n}})\), \(\mathrm{wt}_{2}(d|_{X})=\prod_{j\in[n]}\mathrm{wt}_{2}(p_{i_{j}})\), and \(D_{kh}=\{d^{\prime}|_{\mathrm{pos}(C)}\mid\exists q^{\prime}\in Q_{2}\colon F_{2}(q^{\prime})\neq 0,\,d^{\prime}\in D_{G_{2}}^{q^{\prime}}(C(t_{k},t_{h},t_{h},\ldots,t_{h}))\}\). We now employ Ramsey's theorem in the following way. For \(k,h\in\mathbb{N}\) with \(k<h\), we consider the mapping \(\{k,h\}\mapsto D_{kh}\). This mapping has a finite range as every \(D_{kh}\) is a set of finite words over the alphabet \(P_{2}\times\mathrm{pos}(C)\) of length at most \(|\mathrm{pos}(C)|\). Thus, by Ramsey's theorem, we obtain a subsequence \((t_{i_{j}})_{j\in\mathbb{N}}\) with \(D_{i_{k}i_{h}}=D_{<}\) for all \(k,h\in\mathbb{N}\) and some set \(D_{<}\). For simplicity, we assume \(D_{kh}=D_{<}\) for all \(k,h\in\mathbb{N}\) with \(k<h\). Similarly, we select a further subsequence and assume \(D_{kh}=D_{>}\) for all \(k,h\in\mathbb{N}\) with \(k>h\). Finally, the mapping \(k\mapsto D_{kk}\) also has a finite range, so by the pigeonhole principle, we may select a further subsequence and assume that \(D_{kk}=D_{=}\) for all \(k\in\mathbb{N}\) and some set \(D_{=}\). In the following, we show that \(D_{<}=D_{>}\subseteq D_{=}\).
For now, we assume \(D_{<}\neq\emptyset\), let \((p_{1},w_{1})\cdots(p_{m},w_{m})\in D_{<}\), and let \(p_{i}=\ell_{i}\stackrel{{ E_{i}}}{{\longrightarrow}}q_{i}\) for every \(i\in[m]\). Also, we define \(C_{kh}=C(t_{k},t_{h},t_{h},\ldots,t_{h})\), \(C_{k\square}=C(t_{k},\square,\ldots,\square)\), and \(C_{\square h}=C(\square,t_{h},t_{h},\ldots,t_{h})\) for \(k,h\in\mathbb{N}\). We show that every constraint from every \(E_{i}\) is satisfied on all \(C_{kh}\) with \(k,h\geq 1\), not just for \(k<h\). More precisely, let \(i\in[m]\), \((u^{\prime},v^{\prime})\in E_{i}\), and \((u,v)=(w_{i}u^{\prime},w_{i}v^{\prime})\). We show \(C_{kh}|_{u}=C_{kh}|_{v}\) for all \(k,h\geq 1\). Note that by assumption, \(C_{kh}|_{u}=C_{kh}|_{v}\) is true for all \(k,h\in\mathbb{N}\) with \(k<h\). We show our statement by a case distinction depending on the position of \(u\) and \(v\) in relation to the positions \(w_{1},\ldots,w_{r}\). 1. If both \(u\) and \(v\) are parallel to \(w_{1}\), then \(C_{ij}|_{u}\) and \(C_{ij}|_{v}\) do not depend on \(i\). Thus, \(C_{0j}|_{u}=C_{0j}|_{v}\) for all \(j\geq 1\) implies the statement. 2. If \(u\) is in prefix-relation with \(w_{1}\) and \(v\) is parallel to \(w_{1}\), then \(C_{ij}|_{v}\) does not depend on \(i\). If \(u\leq w_{1}\), then by our assumption that \((t_{i})_{i\in\mathbb{N}}\) are pairwise distinct, we obtain the contradiction \(C_{02}|_{v}=C_{02}|_{u}\neq C_{12}|_{u}=C_{12}|_{v}\), where \(C_{02}|_{v}=C_{12}|_{v}\) should hold. Thus, we have \(w_{1}\leq u\) and in particular, \(C_{ij}|_{u}\) does not depend on \(j\). Thus, for all \(i,j\geq 1\) we obtain \(C_{ij}|_{u}=C_{i,i+1}|_{u}=C_{i,i+1}|_{v}=C_{0,i+1}|_{v}=C_{0,i+1}|_{u}=C_{0j}| _{u}=C_{0j}|_{v}=C_{ij}|_{v}\). If \(v\) is in prefix-relation with \(w_{1}\) and \(u\) is parallel to \(w_{1}\), then we come to the same conclusion by formally exchanging \(u\) and \(v\) in this argumentation. 3. If \(u\) and \(v\) are both in prefix-relation with \(w_{1}\), then \(u\) and \(v\) being parallel to each other implies \(w_{1}\leq u\) and \(w_{1}\leq v\). In particular, both \(u\) and \(v\) are parallel to all \(w_{2},\ldots,w_{m}\). Thus, we obtain, as in the first case, that \(C_{ij}|_{u}\) and \(C_{ij}|_{v}\) do not depend on \(j\) and the statement follows from \(C_{i,i+1}|_{u}=C_{i,i+1}|_{v}\) for all \(i\in\mathbb{N}\). Let \(k,h\geq 1\) and \(d_{C}\in D_{<}\), and let \(q\in Q_{2}\), \(d_{k,k+1}\in D_{G_{2}}^{q}(C_{k,k+1})\), and \(d_{h-1,h}\in D_{G_{2}}^{q}(C_{h-1,h})\) such that \(d_{C}=d_{k,k+1}|_{\mathrm{pos}(C)}=d_{h-1,h}|_{\mathrm{pos}(C_{h})}\). Then for \(d_{k}=d_{k,k+1}|_{\mathrm{pos}(C_{k,k+1})\setminus\mathrm{pos}(C_{\square,k+1})}\) and \(d_{h}=d_{h-1,h}|_{\mathrm{pos}(C_{h-1,h})\setminus\mathrm{pos}(C_{h-1,\square})}\), we can reorder \(d=d_{k}d_{h}d_{C}\) to a complete left-most derivation of \(G_{2}\) for \(C_{kh}\), as all equality constraints from \(d_{k}\) are satisfied by the assumption on \(d_{k,k+1}\), all equality constraints from \(d_{h}\) are satisfied by the assumption on \(d_{h-1,h}\), and all equality constraints from \(d_{C}\) are satisfied by our case distinction. Considering the special cases \(k=2\), \(h=1\), and \(k=h=1\), and the definitions of \(D_{>}\) and \(D_{=}\), we obtain \(d_{C}\in D_{21}=D_{>}\) and \(d_{C}\in D_{11}=D_{=}\), and hence, \(D_{<}\subseteq D_{>}\) and \(D_{<}\subseteq D_{=}\). The converse inclusion \(D_{>}\subseteq D_{<}\) follows with an analogous reasoning. In conclusion, we obtain \(D_{<}=D_{>}\subseteq D_{=}\). 
By the reasoning above, the case \(D_{<}=\emptyset\) we excluded earlier is only possible if also \(D_{>}=\emptyset\), in which case we again have \(D_{<}=D_{>}\subseteq D_{=}\). Let \(d_{1},\ldots,d_{n}\) be an enumeration of \(D_{<}\), \(i\in[n]\), and \(k\in\mathbb{N}\). We define the sets \[D_{i,k}^{(1)} =\left\{d|_{\mathrm{pos}(C_{k,k+1})\setminus\mathrm{pos}(C_{ \square,k+1})}\mid d\in D_{G_{2}}^{q}(C_{k,k+1}),\,d_{i}=d|_{\mathrm{pos}(C)},\, q\in Q_{2}\right\}\] \[D_{i,k}^{(2)} =\left\{d|_{\mathrm{pos}(C_{k+1,k})\setminus\mathrm{pos}(C_{k+1, \square})}\mid d\in D_{G_{2}}^{q}(C_{k+1,k}),\,d_{i}=d|_{\mathrm{pos}(C)},\,q\in Q _{2}\right\}\] and the corresponding weights \(\nu_{i,k}^{(1)}=\sum_{d\in D_{i,k}^{(1)}}\mathrm{wt}_{2}(d)\) and \(\nu_{i,k}^{(2)}=\sum_{d\in D_{i,k}^{(2)}}\mathrm{wt}_{2}(d)\). Finally, we let \(q_{i}\) be the target state of the last production in \(d_{i}\) and define \(\nu_{i}=F_{2}(q_{i})\cdot\mathrm{wt}_{2}(d_{i})\). Then for all \(k,h\in\mathbb{N}\) we have \(\llbracket G_{2}\rrbracket(C_{kh})=\sum_{i\in[n]}(\nu_{i,k}^{(1)}\cdot\nu_{i} \cdot\nu_{i,h}^{(2)})+\delta_{kh}\mu_{k}\) for nonnegative integer weights \((\mu_{j})_{j\in\mathbb{N}}\), which stem from the fact that \(D_{=}\setminus D_{<}\neq\emptyset\) may hold. We arrange the weights \(\nu_{i,k}^{(1)}\) into a row vector \(\nu_{k}^{(1)}\), and the weights \(\nu_{i,h}^{(2)}\) into a column vector \(\nu_{h}^{(2)}\), and the weights \(\nu_{i}\) into a diagonal matrix \(N\) such that \(\llbracket G_{2}\rrbracket(C_{kh})=\nu_{k}^{(1)}N\nu_{h}^{(2)}+\delta_{kh}\mu_{k}\). Recall that \(\llbracket G_{1}\rrbracket(C_{kh})>0\) if and only if \(k=h\) for all \(k,h\in\mathbb{N}\). We can thus modify the weights \(\mu_{k}\) to obtain \(\llbracket G\rrbracket(C_{kh})=\llbracket G_{2}\rrbracket(C_{kh})+\llbracket G _{1}\rrbracket(C_{kh})=\nu_{k}^{(1)}N\nu_{h}^{(2)}+\delta_{kh}\mu_{k}\) with \(\mu_{k}>0\) for all \(k\in\mathbb{N}\). If \(\llbracket G\rrbracket\) is regular, we can assume a representation \(\llbracket G\rrbracket(C_{kh})=g(\kappa_{k},\kappa_{h},\kappa_{h},\ldots, \kappa_{h})\) for all \(k,h\in\mathbb{N}\), where \(\kappa_{h}\) is a finite vector of weights over \(\mathbb{N}\) where each entry corresponds to the sum of all derivations for \(t_{h}\) to a specific state of a weighted tree automaton, and \(g\) is a multilinear map encoding the weights of the derivations for \(C(\square,\square,\ldots,\square)\) depending on the specific input states at the \(\square\)-nodes and the target state at the root \(\varepsilon\). We choose \(K\) such that the concatenated vectors \(\langle\kappa_{1},\nu_{1}^{(1)}\rangle,\ldots,\langle\kappa_{K},\nu_{K}^{(1)}\rangle\) form a generating set of the \(\mathbb{Q}\)-vector space spanned by \((\langle\kappa_{i},\nu_{i}^{(1)}\rangle)_{i\in\mathbb{N}}\). Then there are coefficients \(a_{1},\ldots,a_{K}\in\mathbb{Q}\) with \(\kappa_{K+1}=\sum_{i\in[K]}a_{i}\kappa_{i}\) and \(\nu_{K+1}^{(1)}=\sum_{i\in[K]}a_{i}\nu_{i}^{(1)}\). Thus, we have \[\nu_{K+1}^{(1)}N\nu_{K+1}^{(2)}+\mu_{K+1}=g(\kappa_{K+1},\kappa_{ K+1},\ldots,\kappa_{K+1}) =\sum_{i\in[K]}a_{i}g(\kappa_{i},\kappa_{K+1},\ldots,\kappa_{K+1})\] \[=\sum_{i\in[K]}a_{i}\nu_{i}^{(1)}N\nu_{K+1}^{(2)}=\nu_{K+1}^{(1)}N \nu_{K+1}^{(2)}\] which implies \(\mu_{K+1}=0\) and thus our desired contradiction.
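The contradiction reached at the end of this proof is purely linear-algebraic: values of the form \(\nu_{k}^{(1)}N\nu_{h}^{(2)}\) are determined by finitely many coordinates of the vectors involved, so they cannot in addition carry a strictly positive diagonal term \(\delta_{kh}\mu_{k}\) along an infinite index set. The following small numerical sketch (illustrative only, not part of the proof; the dimensions, the random integer vectors and all names are ours) replays this step for the special case treated first above, where the extra diagonal term equals \(1\).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3                    # dimension available to the vectors nu_i
K = d                    # K+1 > d vectors force a linear dependence
V = rng.integers(0, 5, size=(K + 1, d)).astype(float)   # rows play the role of nu_1, ..., nu_{K+1}
N = np.diag(rng.integers(1, 5, size=d)).astype(float)

# make nu_{K+1} an exact linear combination of nu_1, ..., nu_K
a, *_ = np.linalg.lstsq(V[:K].T, V[K], rcond=None)
V[K] = V[:K].T @ a

def series(i, j):
    # hypothetical series value: nu_i^T N nu_j plus 1 on the diagonal (the role of 1_L)
    return V[i] @ N @ V[j] + (1.0 if i == j else 0.0)

# A matrix representation would force series(K, K) = sum_i a_i * series(i, K),
# where i < K on the right-hand side, so no diagonal term enters there;
# the diagonal contribution therefore leaves an irreducible difference of 1:
lhs = series(K, K)
rhs = sum(a[i] * series(i, K) for i in range(K))
print(lhs - rhs)         # ~ 1.0 up to rounding: the desired contradiction
```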
2306.12225
Lifshitz transitions and angular conductivity diagrams in metals with complex Fermi surfaces
We consider the Lifshitz topological transitions and the corresponding changes in the galvanomagnetic properties of a metal from the point of view of the general classification of open electron trajectories arising on Fermi surfaces of arbitrary complexity in the presence of magnetic field. The construction of such a classification is the content of the Novikov problem and is based on the division of non-closed electron trajectories into topologically regular and chaotic trajectories. The description of stable topologically regular trajectories gives a basis for a complete classification of non-closed trajectories on arbitrary Fermi surfaces and is connected with special topological structures on these surfaces. Using this description, we describe here the distinctive features of possible changes in the picture of electron trajectories during the Lifshitz transitions, as well as changes in the conductivity behavior in the presence of a strong magnetic field. As it turns out, the use of such an approach makes it possible to describe not only the changes associated with stable electron trajectories, but also the most general changes of the conductivity diagram in strong magnetic fields.
A. Ya. Maltsev
2023-06-21T12:35:14Z
http://arxiv.org/abs/2306.12225v3
# Lifshitz transitions and angular conductivity diagrams in metals with complex Fermi surfaces ###### Abstract We consider the Lifshitz topological transitions and the corresponding changes in the galvanomagnetic properties of a metal from the point of view of the general classification of open electron trajectories arising on Fermi surfaces of arbitrary complexity in the presence of magnetic field. The construction of such a classification is the content of the Novikov problem and is based on the division of non-closed electron trajectories into topologically regular and chaotic trajectories. The description of stable topologically regular trajectories gives a basis for a complete classification of non-closed trajectories on arbitrary Fermi surfaces and is connected with special topological structures on these surfaces. Using this description, we describe here the distinctive features of possible changes in the picture of electron trajectories during the Lifshitz transitions, as well as changes in the conductivity behavior in the presence of a strong magnetic field. As it turns out, the use of such an approach makes it possible to describe not only the changes associated with stable electron trajectories, but also the most general changes of the conductivity diagram in strong magnetic fields. ## I Introduction In this paper, we will try to describe the most general relationship between the Lifshitz transitions (see [1; 2]), leading to a change in the topology of the Fermi surface, and angular diagrams that describe the behavior of the magnetic conductivity of a metal in strong magnetic fields. It must be said that at present topological Lifshitz transitions are actually represented by a very wide range of phenomena associated with topological properties of a Fermi surface and their changes, and the study of the variety of such phenomena is an interesting and rapidly developing area of condensed matter physics (see, for example, [3; 4]). Here, however, we will consider the most classical definition of the Lifshitz transitions ([1]), namely, a change in the topology of the Fermi surface when passing the critical points of the dispersion relation \(\epsilon({\bf p})\) (see Fig. 1). As is well known, the dispersion relation \(\epsilon({\bf p})\) can be considered either as a periodic function in the quasi-momentum space \(\mathbb{R}^{3}\), or as a smooth function on the three-dimensional torus \(\mathbb{T}^{3}\) obtained from \(\mathbb{R}^{3}\) by factorization with respect to the reciprocal lattice vectors. The singular points of the function \(\epsilon({\bf p})\) are defined by the condition \(\,\nabla\epsilon({\bf p})=0\,\), and the corresponding energy levels, as is known, correspond to arising of the Van Hove singularities in the density of electron states. The singularities of the function \(\epsilon({\bf p})\) include the points of its local minima and maxima, as well as saddle singular points (assuming that all singular points of \(\epsilon({\bf p})\) are non-degenerate). Saddle points of a function in three-dimensional space, as is well known, can have index 1 or 2, depending on whether the increment of the function near this point can be represented in the form \[d\epsilon({\bf p})\,=\,a^{2}dp_{1}^{2}+b^{2}dp_{2}^{2}-c^{2}dp_{3}^{2}\] or \[d\epsilon({\bf p})\,=\,a^{2}dp_{1}^{2}-b^{2}dp_{2}^{2}-c^{2}dp_{3}^{2}\] in some local Euclidean coordinate system. 
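Since all singular points are assumed non-degenerate, the index of a saddle is simply the number of negative eigenvalues of the Hessian of \(\epsilon({\bf p})\) at that point, as the two normal forms above show. As a purely illustrative sketch (ours; the tight-binding-like relation used here is just a convenient example and is not one discussed in the paper), the critical points of a model dispersion relation can be classified numerically as follows.

```python
import numpy as np
from itertools import product

def epsilon(p):
    # illustrative model dispersion, 2*pi-periodic in every component of p
    return -(np.cos(p[0]) + np.cos(p[1]) + np.cos(p[2]))

def hessian(f, p, h=1e-4):
    """Central-difference Hessian of a scalar function of three variables."""
    H = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            def g(si, sj):
                q = np.array(p, dtype=float)
                q[i] += si * h
                q[j] += sj * h
                return f(q)
            H[i, j] = (g(1, 1) - g(1, -1) - g(-1, 1) + g(-1, -1)) / (4.0 * h * h)
    return H

def index(f, p):
    """Number of negative Hessian eigenvalues: 0 = minimum, 1 or 2 = saddle, 3 = maximum."""
    return int(np.sum(np.linalg.eigvalsh(hessian(f, p)) < 0.0))

# for this model the critical points sit at p_i in {0, pi}
for point in product([0.0, np.pi], repeat=3):
    print(point, "->", index(epsilon, point))
# in total: one minimum, three saddles of index 1, three saddles of index 2, one maximum
```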
As is well known from the Morse theory, the number of saddle singular points of both types for a function on a three-dimensional torus is always at least three. In fact, for real dispersion laws, it is often larger, in particular, whenever Fermi surfaces of genus greater than 3 arise. Here, we are interested precisely in the saddle singularities of the relation \(\epsilon({\bf p})\). As was shown in [1], the passage of the Fermi level through critical points of \(\epsilon({\bf p})\) (for example, under strong external pressure) leads to singularities in the thermodynamic quantities of the electron gas in the crystal (the Lifshitz transitions), as well as possible abrupt changes in the behavior of the magnetic conductivity in strong magnetic fields.

Figure 1: Reconstruction of the Fermi surface and arising of new components when passing through the critical points of the relation \(\epsilon({\bf p})\) ([1])

The latter circumstance is associated with a possible significant change in the geometry of the trajectories of the system \[\mathbf{\dot{p}} = \frac{e}{c}\,\left[\mathbf{v}_{\mathrm{gr}}(\mathbf{p})\,\times\,\mathbf{B}\right]\;\;=\;\;\frac{e}{c}\,\left[\mathbf{\nabla}\epsilon(\mathbf{p})\,\times\,\mathbf{B}\right]\;\;,\] (I.1) describing the semiclassical dynamics of electrons in an external magnetic field, on the Fermi surface. The main effect here is a sharp change in the behavior of the magnetic conductivity due to the arising (or disappearance) of open trajectories of system (I.1) during topological reconstructions of the Fermi surface. The important role of open trajectories of system (I.1) in the description of the conductivity of metals in strong magnetic fields was also first revealed by the school of I.M. Lifshitz (see [2; 5; 6; 7]). Since the trajectories of system (I.1) are defined by the intersections of the surfaces \(\epsilon(\mathbf{p})=\mathrm{const}\) by planes orthogonal to the magnetic field, the geometry of such trajectories is essentially determined by the geometry and the topology of the Fermi surface. In particular, the question of whether the Fermi surface is bounded or unbounded in the \(\mathbf{p}\) - space is of great importance. As was shown in [5], the contributions of closed and open periodic trajectories to the conductivity tensor differ significantly in the limit \(\,\omega_{B}\tau\rightarrow\infty\,\) (i.e., in the limit of strong magnetic fields). In particular, if there are only closed trajectories on the Fermi surface, the conductivity decreases in all directions in the plane orthogonal to \(\mathbf{B}\) in the specified limit. The asymptotic behavior of the total conductivity tensor can then be represented in the form \[\sigma^{kl} \simeq \frac{ne^{2}\tau}{m^{*}}\,\left(\begin{array}{ccc}(\omega_{B}\tau)^{-2}&(\omega_{B}\tau)^{-1}&(\omega_{B}\tau)^{-1}\\ (\omega_{B}\tau)^{-1}&(\omega_{B}\tau)^{-2}&(\omega_{B}\tau)^{-1}\\ (\omega_{B}\tau)^{-1}&(\omega_{B}\tau)^{-1}&*\end{array}\right)\quad,\] (I.2) (\(\omega_{B}\tau\rightarrow\infty\)). At the same time, the contribution of open periodic trajectories to the conductivity tensor is strongly anisotropic in the plane orthogonal to \(\mathbf{B}\) and can be represented in the leading order as \[\sigma^{kl} \simeq \frac{ne^{2}\tau}{m^{*}}\,\left(\begin{array}{ccc}(\omega_{B}\tau)^{-2}&(\omega_{B}\tau)^{-1}&(\omega_{B}\tau)^{-1}\\ (\omega_{B}\tau)^{-1}&*&*\\ (\omega_{B}\tau)^{-1}&*&*\end{array}\right)\quad,\] (I.3) (\(\omega_{B}\tau\,\rightarrow\,\infty\)). 
In the formulas (I.2) - (I.3), as everywhere further, it is assumed that the \(z\) axis is directed along the magnetic field. In the relation (I.3), it is also assumed that the direction of the axis \(x\) coincides with the mean direction of the open trajectories in the \(\mathbf{p}\) -space. The sign \(\simeq\) in both formulas means asymptotic behavior, i.e. each of the components actually contains some dimensionless factor of order 1. The quantity \(\omega_{B}\) plays the role of the electron cyclotron frequency in the metal, while the quantity \(\tau\) represents the mean free time of electrons. The quantity \(m^{*}\) has the meaning of the effective mass of an electron in a crystal. The relation \(\,\omega_{B}\tau\gg 1\,\), as is also well known, requires the use of sufficiently pure single-crystal samples at very low temperatures (\(T\leq 1\) K) and sufficiently strong magnetic fields (\(B\geq 1\) T). The quantity \(n\) usually plays the role of the concentration of current carriers in the metal. In the formulas (I.2) - (I.3), however, it is also proportional to the measure of the corresponding trajectories (closed or periodic) on the Fermi surface. The latter circumstance is especially important in the situation we are considering, since the measure of open trajectories can be determined by the proximity to the Lifshitz transition point \(\epsilon_{0}\). It is this situation that occurs, for example, in [1], where the arising and disappearance of periodic open trajectories on a "warped cylinder" surface are considered. In this situation, the leading term of the conductivity tensor in the presence of open trajectories on the Fermi surface was represented in [1] in the form \[\sigma^{kl}=\left(\begin{array}{ccc}\gamma^{2}a_{xx}&\gamma a_{xy}&\gamma a_{xz}\\ \gamma a_{yx}&\gamma^{2}a_{yy}+\beta^{1/2}b_{yy}&\gamma a_{yz}+\beta^{1/2}b_{yz}\\ \gamma a_{zx}&\gamma a_{zy}+\beta^{1/2}b_{zy}&a_{zz}\end{array}\right)\] (I.4) (\(\omega_{B}\tau\,\rightarrow\,\infty\)), where \(\,\gamma=(\omega_{B}\tau)^{-1}\,\), \(\,\beta=|(\epsilon_{F}-\epsilon_{0})/\epsilon_{F}|\,\), the quantities \(a_{kl}\) represent some constants, and the quantities \(b_{kl}\) have a weak (logarithmic) dependence on \(\beta\). As we can see, the formula (I.4) makes it possible not only to observe the Lifshitz transition in the described situation, but also to determine the proximity to this transition when changing the parameters of influence on a sample. In the general situation, the Fermi surface is an arbitrary 3-periodic surface in \(\mathbf{p}\) - space (Fig. 2), and the problem of describing the trajectories of the system (I.1) is quite difficult. For the first time, the problem of complete classification of open trajectories of system (I.1) was posed by S.P. Novikov in [8] and then intensively studied in his topological school (see [9; 10; 11; 12; 13; 14; 15; 16]). As a result of studying the Novikov problem, a number of deep topological results have been obtained, and by now a complete classification of the open trajectories of system (I.1) for arbitrary periodic dispersion relations \(\epsilon(\mathbf{p})\) has been obtained. Consequences of the topological theorems obtained in the study of the Novikov problem also led to a description of a number of physical effects associated with the behavior of open trajectories of (I.1) and, in addition, made it possible to give a complete classification of possible asymptotic behavior of conductivity in strong magnetic fields for metals with arbitrarily complex Fermi surfaces (see e.g. 
[17; 18; 19; 20; 21; 22]). Here we are interested in changes in the open trajectories of system (I.1) under the Lifshitz transitions, i.e. changes in the topology of the Fermi surface when passing singular points of the dispersion relation \(\epsilon(\mathbf{p})\). We will assume here that the Fermi surface has the most general form, and in describing the trajectories we will use the general classification obtained in the study of the Novikov problem. To describe the situation, we will use angular diagrams that specify the type of trajectories of system (I.1) on the Fermi surface depending on the direction of the magnetic field. The angular diagram is thus the unit sphere \(\mathbb{S}^{2}\), which parametrizes the directions of \(\mathbf{B}\) and determines the type of trajectories on the Fermi surface for each direction. Since the type of trajectories of system (I.1) determines the asymptotic behavior of the conductivity tensor in the limit of strong magnetic fields, it is natural to also call such diagrams conductivity (magnetic conductivity) diagrams for a given Fermi surface. We are mainly interested here in the changes in such diagrams that accompany the Lifshitz transitions. Experimental observation of changes (sharp jumps) in such diagrams can generally serve as one of the tools for studying the Lifshitz transitions in metals with complex Fermi surfaces. Here we present the general picture of changes in the conductivity diagrams when passing the singularities of the relation \(\epsilon(\mathbf{p})\), based on the general theory of such diagrams, constructed in the study of the Novikov problem. In the next section, we present the general classification of the diagrams corresponding to various Fermi surfaces and describe their connection with the angular diagrams for the entire dispersion relation \(\epsilon(\mathbf{p})\). In section 3, we will describe typical changes in angular diagrams corresponding to topological transitions of various types on Fermi surfaces of arbitrary complexity. ## II General facts about angular conductivity diagrams in metals The basis for classifying the open trajectories of system (I.1) is the description of its stable open trajectories. Here we call open trajectories of (I.1) stable if they do not vanish and retain their global geometry under small variations of all problem parameters, in particular, small variations of the level \(\epsilon_{F}\) and rotations of the direction of \(\mathbf{B}\). As follows from the results of [9; 10; 12], stable open trajectories of system (I.1) have the following remarkable properties. 1) Each stable open trajectory of system (I.1) lies in a straight strip of finite width in a plane orthogonal to \(\mathbf{B}\) and passes through it (Fig. 3). 2) The mean direction of all stable open trajectories of (I.1) for a given direction of the magnetic field is given by the intersection of the plane orthogonal to \(\mathbf{B}\) and some integral (generated by two reciprocal lattice vectors) plane \(\Gamma\), the direction of which is invariable for small variations of the problem parameters. Property (1) of stable open trajectories manifests itself directly in the behavior of the magnetic conductivity in strong magnetic fields. Namely, here, as in the case of periodic open trajectories, there is a strong anisotropy of the conductivity in the plane orthogonal to \(\mathbf{B}\), and the main term in the asymptotics of the conductivity tensor is also given by the formula (I.3). 
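Under system (I.1) both \(\epsilon({\bf p})\) and the component of \({\bf p}\) along \({\bf B}\) are conserved, which is what confines every trajectory to a plane section of a constant-energy surface; this is easy to verify by direct numerical integration, which is also the simplest practical way to see whether a particular trajectory is closed or open. The sketch below is ours and purely illustrative: the model dispersion relation, the field direction, the convention \(e/c=1\) and the openness threshold are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def epsilon(p):
    return -(np.cos(p[0]) + np.cos(p[1]) + np.cos(p[2]))      # illustrative dispersion

def grad_epsilon(p):
    return np.array([np.sin(p[0]), np.sin(p[1]), np.sin(p[2])])

def rhs(t, p, B):
    # system (I.1) with e/c set to 1:  dp/dt = grad eps(p) x B
    return np.cross(grad_epsilon(p), B)

def classify(p0, B, t_max=2000.0, open_threshold=6 * 2 * np.pi):
    sol = solve_ivp(rhs, (0.0, t_max), p0, args=(B,), max_step=0.05,
                    rtol=1e-8, atol=1e-10)
    p = sol.y                                    # shape (3, number of steps)
    drift_energy = np.ptp(epsilon(p))            # should remain ~ 0
    drift_p_along_B = np.ptp(p.T @ B)            # should remain ~ 0
    excursion = np.max(np.linalg.norm(p - p[:, :1], axis=0))
    label = "open-like" if excursion > open_threshold else "closed-like"
    return drift_energy, drift_p_along_B, label

B = np.array([0.0, 0.1, 1.0])
B /= np.linalg.norm(B)
p0 = np.array([0.5, 0.5, 0.5])    # low-energy start: a small closed electron pocket
print(classify(p0, B))            # tiny drifts and, for this choice, a "closed-like" verdict
```

Repeating such a classification over a grid of directions of \({\bf B}\) (and over Fermi levels) amounts to a brute-force sampling of the angular diagrams discussed below.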
For special directions of \(\mathbf{B}\), stable open trajectories of system (I.1) can be periodic. In the generic case, however, they are quasi-periodic and have no periods in the \(\mathbf{p}\) -space. The direction of maximum suppression of conductivity belongs to the corresponding plane \(\Gamma\), which makes it experimentally observable ([17; 19]). An integral plane in \(\mathbf{p}\) - space can also be defined as a plane orthogonal to some integer direction of the original crystal lattice. The plane \(\Gamma\) can thus be defined by some irreducible integer triple \((m^{1},m^{2},m^{3})\). The numbers \((m^{1},m^{2},m^{3})\) were defined in [17] as topological numbers observed in the conductivity of normal metals. Each family of stable open trajectories is defined by some region (stability zone) \(\Omega\) on the angular diagram corresponding to the same values \((m^{1},m^{2},m^{3})\). In the general case, an angular diagram may contain some (finite or infinite) number of stability zones \(\Omega_{\alpha}\) corresponding to different values of \((m^{1}_{\alpha},m^{2}_{\alpha},m^{3}_{\alpha})\). The presence of stability zones and their location on the unit sphere \(\mathbb{S}^{2}\) is an important component of the diagram of conductivity of a metal in strong magnetic fields. In addition to stable open trajectories of system (I.1), there are also unstable open trajectories. First of all, they may include periodic trajectories, for example, those considered above. Periodic trajectories are, in a sense, semi-stable, namely, they are preserved under rotations of \(\mathbf{B}\) orthogonal to their mean direction, and collapse under all other rotations. These trajectories correspond to one-dimensional arcs on the angular diagram, which mark the presence of such trajectories on the Fermi surface for the corresponding directions of \({\bf B}\).

Figure 2: Trajectories of system (I.1) on a general periodic Fermi surface

Figure 3: Form of a stable open trajectory of system (I.1) in a plane orthogonal to \(\mathbf{B}\) (schematically)

The set of corresponding arcs on the sphere \({\mathbb{S}}^{2}\) is also an important part of a metal magnetic conductivity diagram. Periodic open trajectories, however, are not the only type of unstable open trajectories of (I.1). Namely, there are open trajectories of system (I.1) of much more complex geometry, which are unstable both with respect to small rotations of \({\bf B}\) and small variations of the Fermi level \(\epsilon_{F}\) ([11; 14; 15]). Such trajectories can be conditionally divided into two main types, namely, Tsarev-type trajectories and Dynnikov-type trajectories. Tsarev-type trajectories can only be observed for partially irrational directions of \({\bf B}\), when the plane orthogonal to \({\bf B}\) contains one (up to proportionality) reciprocal lattice vector. On the contrary, Dynnikov-type trajectories can arise only for directions of \({\bf B}\) of complete irrationality (the plane orthogonal to \({\bf B}\) does not contain reciprocal lattice vectors). Unstable trajectories of both types have rather complex behavior on the Fermi surface, which in this case should itself have sufficient complexity. However, the behavior of Tsarev-type trajectories in planes orthogonal to \({\bf B}\) is much simpler than that of Dynnikov-type trajectories. Namely, Tsarev-type trajectories have an asymptotic direction that is the same in all planes orthogonal to \({\bf B}\) for a given direction of \({\bf B}\). 
As a consequence, the contribution of Tsarev-type trajectories to the conductivity tensor also has strong anisotropy in the plane orthogonal to \({\bf B}\) and is close in form to the contribution (I.3), although it differs from it in some details. Dynnikov-type trajectories have much more complex behavior in planes orthogonal to \({\bf B}\), wandering along them in a rather chaotic manner (Fig. 4). Among the features of the contribution of such trajectories to the magnetic conductivity tensor, one can distinguish the suppression of conductivity along the direction of the magnetic field (see [18]), as well as the arising of fractional powers of the parameter \(\omega_{B}\tau\) in the asymptotics of the tensor components in the limit \(\,\omega_{B}\tau\rightarrow\infty\,\) ([18; 21]). We note here that the study of arising and geometric properties of Dynnikov-type trajectories is an actively developing area at the present time (see, for example, [11; 13; 14; 15; 18; 20; 21; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41]). In view of the particular complexity of the geometry of trajectories of the Tsarev or Dynnikov type, such trajectories are usually called chaotic. Stable open trajectories of the system (I.1), as well as periodic trajectories, are called topologically regular. The arising of unstable trajectories of the Tsarev or Dynnikov type on the Fermi surface is associated with particularly complex angular diagrams, which we describe below. The location of the corresponding directions of \({\bf B}\) in such diagrams is perhaps the most interesting information about the Fermi surface. Before describing the types of diagrams corresponding to fixed Fermi surfaces, it is convenient to describe the angular diagrams corresponding to the entire dispersion relation \(\epsilon({\bf p})\). Such diagrams were introduced in [16] and are based on an important property of open trajectories of (I.1), namely, the type of open trajectories of system (I.1) for a given direction of \({\bf B}\) is the same for all energy levels \(\,\epsilon({\bf p})={\rm const}\,\) at which they appear. Moreover, the situation in the general case can be described as follows ([16]). Consider a smooth 3-periodic function \(\epsilon({\bf p})\) whose values lie in the interval \([\epsilon_{\rm min},\epsilon_{\rm max}]\). Consider some fixed direction of \({\bf B}\) and the corresponding system (I.1). Let us assume for simplicity that the direction of \({\bf B}\) is not rational. Then the following assertions can be formulated. 1) Open trajectories of system (I.1) appear either in some closed energy interval \[\epsilon_{\rm min}\,\,\,<\,\,\,\epsilon_{1}({\bf B})\,\,\,\leq\,\,\,\epsilon ({\bf p})\,\,\,\leq\,\,\,\epsilon_{2}({\bf B})\,\,\,<\,\,\epsilon_{\rm max}\] or only at one energy level \(\,\epsilon_{0}\,=\,\epsilon_{1}({\bf B})\,=\,\epsilon_{2}({\bf B})\,\). 2) In the case \(\epsilon_{1}({\bf B})<\epsilon_{2}({\bf B})\), all nonsingular open trajectories in the interval \([\epsilon_{1}({\bf B}),\epsilon_{2}({\bf B})]\) lie in straight strips of finite width in planes orthogonal to \({\bf B}\) and pass through them (Fig. 3). All of them have the same mean direction given by the intersection of the plane orthogonal to \({\bf B}\) with some integral plane \(\Gamma\) in the \({\bf p}\)-space. 
3) For generic directions of \({\bf B}\) the values \(\epsilon_{1}({\bf B})\) and \(\epsilon_{2}({\bf B})\) coincide with the values of some continuous functions \(\tilde{\epsilon}_{1}({\bf B})\) and \(\tilde{\epsilon}_{2}({\bf B})\) defined everywhere on \({\mathbb{S}}^{2}\). However, for directions of \({\bf B}\) corresponding to arising of periodic open trajectories, the values of \(\epsilon_{1}({\bf B})\) and \(\epsilon_{2}({\bf B})\) have "jumps" with the following inequalities \[\epsilon_{1}({\bf B})\,\leq\,\tilde{\epsilon}_{1}({\bf B})\,\leq\,\tilde{\epsilon}_{2}({\bf B})\,\leq\,\epsilon_{2}({\bf B})\] 4) The property \(\tilde{\epsilon}_{1}({\bf B})<\tilde{\epsilon}_{2}({\bf B})\) and the integral plane \(\Gamma\) are stable with respect to small rotations of \({\bf B}\), so that each of the planes \(\Gamma_{\alpha}\) corresponds to a certain "stability zone" \(\widehat{\Omega}_{\alpha}\) in the space of directions of \({\bf B}\).

Figure 4: Form of the Dynnikov chaotic trajectory in a plane orthogonal to \({\bf B}\) (schematically)

According to [16], the picture of stability zones for an arbitrary dispersion relation \(\epsilon({\bf p})\) can correspond to only one of the following situations. 1) The entire unit sphere is the only stability zone \(\widehat{\Omega}\) corresponding to some integral plane \(\Gamma\). 2) The angular diagram contains an infinite number of stability zones whose union is everywhere dense on the sphere \(\mathbb{S}^{2}\) (see, for example, Fig. 5). Case (1), as a rule, is observed for dispersion relations of a rather special form, close to the dispersion relations in quasi-one-dimensional conductors. For the vast majority of real dispersion relations, however, case (2) takes place. We will call here the dispersion relations corresponding to case (1) dispersion relations with a simple angular diagram. Similarly, the dispersion relations corresponding to case (2) will be called relations with a complex angular diagram. Here we are primarily interested in dispersion relations with complex angular diagrams. The complement \(\mathcal{M}\) to the union of stability zones is a rather complex set of fractal type on the sphere \(\mathbb{S}^{2}\). According to the conjecture of S.P. Novikov ([28]), this set has measure zero and the fractal dimension strictly less than 2. The first part of the Novikov conjecture was recently proved for dispersion relations satisfying the additional condition \(\epsilon(-{\bf p})=\epsilon({\bf p})\) (I.A. Dynnikov, P. Hubert, P. Mercat, and A.S. Skripchenko, in the process of publication). The second part of the conjecture is confirmed by serious numerical studies, but has not yet been proved rigorously. The points of the set \(\mathcal{M}\) represent accumulation points of decreasing stability zones. Moreover, the set \(\mathcal{M}\) can contain special rational directions of \({\bf B}\) (see [42]), as well as directions of \({\bf B}\) corresponding to arising of trajectories of the Tsarev or Dynnikov type. The set of special rational directions of \({\bf B}\), however, is only a countable subset of the set \(\mathcal{M}\), so that "almost all" points of the set \(\mathcal{M}\) represent, in fact, "chaotic" directions of these two types. The values of the functions \(\tilde{\epsilon}_{1}({\bf B})\) and \(\tilde{\epsilon}_{2}({\bf B})\) coincide on the set \(\mathcal{M}\), as well as on the boundaries of all the zones \(\widehat{\Omega}_{\alpha}\). 
It is easy to see that open trajectories appear on a fixed Fermi surface for a given direction of \({\bf B}\) only if the Fermi level falls within the corresponding interval \([\epsilon_{1}({\bf B}),\epsilon_{2}({\bf B})]\). In particular, each stability zone \(\Omega_{\alpha}\) at the conductivity diagram is a subdomain of some zone \(\widehat{\Omega}_{\alpha}\) defined for the entire dispersion relation. As a rule, most of the conductivity diagram of a metal is in fact the region corresponding to the presence of only closed trajectories on the Fermi surface. It can also be seen that when determining the zones \(\Omega_{\alpha}\), as well as the Tsarev and Dynnikov directions for a fixed Fermi surface, one can use the functions \(\tilde{\epsilon}_{1}({\bf B})\) and \(\tilde{\epsilon}_{2}({\bf B})\), while to determine the directions corresponding to arising of unstable periodic trajectories, it is necessary to know the functions \(\epsilon_{1}({\bf B})\) and \(\epsilon_{2}({\bf B})\). The latter circumstance manifests itself, in particular, in a certain difference in the shape of zones \(\Omega_{\alpha}\) from the zones \(\widehat{\Omega}_{\alpha}\). Namely, the set of directions of \({\bf B}\) corresponding to arising of open trajectories associated with the zone \(\Omega_{\alpha}\) is somewhat larger than the zone itself and also contains an infinite set of segments adjacent to the boundary of \(\Omega_{\alpha}\) and corresponding to arising of periodic trajectories on the Fermi surface (Fig. 6). The periodic trajectories can then be considered stable for directions of \({\bf B}\) inside \(\Omega_{\alpha}\) and unstable on additional segments. Such an arrangement of the zones \(\Omega_{\alpha}\) actually leads to a rather complicated analytical behavior of the conductivity tensor near their boundaries, which makes it difficult to determine the shape of these zones in direct measurements of the conductivity (see, for example, [43]). At the same time, however, there are methods for experimentally determining the exact mathematical boundaries of the zones \(\Omega_{\alpha}\), which makes it possible to experimentally determine their exact shape (see [44]). The zones \(\Omega_{\alpha}\) represent regions with piecewise smooth boundaries on the sphere (see e.g. [16]). Other than that, we are not aware of any restrictions on their shape. In particular, there may be unconnected stability regions corresponding to the same values of \((m^{1},m^{2},m^{3})\). For simplicity, we agree here to consider the union of such domains as one disconnected stability zone on \(\mathbb{S}^{2}\). In this sense, each region \(\Omega_{\alpha}\) and its diametrically opposite one form the same stability zone. In addition, stability zones can also be non-simply connected (see e.g. [42]). The latter, however, takes place for very specific Fermi surfaces, which are special mathematical examples. For real dispersion laws, we will assume here that all the zones \(\Omega_{\alpha}\) are simply connected domains with piecewise smooth boundaries on \(\mathbb{S}^{2}\). Below we give a brief description of various types of angular conductivity diagrams in metals, as well as their changes with a change in the value of \(\epsilon_{F}\) in the interval \([\epsilon_{\rm min},\epsilon_{\rm max}]\) ([45]), which we will need later. 
Here, we will be primarily interested in conductivity diagrams that correspond to dispersion laws with complex angular diagrams, i.e., diagrams containing an infinite number of zones \(\widehat{\Omega}_{\alpha}\). It is easy to see that if the value of \(\epsilon_{F}\) is sufficiently close to the value \(\epsilon_{\rm min}\) or \(\epsilon_{\rm max}\), the Fermi surfaces are small ellipsoids, and open trajectories of system (I.1) are absent on them for any direction of \({\bf B}\). It can also be noted that the Hall conductivity is of the electronic type in the first case and of the hole type in the second one. The corresponding conductivity diagrams can be called zero-type diagrams and denoted by \(0_{-}\) or \(0_{+}\), depending on the sign of the Hall conductivity. In the general case, for a fixed dispersion relation \(\epsilon({\bf p})\), we can single out some values \(\epsilon_{1}^{A}{}^{\prime}\) and \(\epsilon_{2}^{A}{}^{\prime}\) such that the angular diagrams of the types \(0_{-}\) and \(0_{+}\) correspond to the situations \[\epsilon_{F}\,\in\,(\epsilon_{\rm min},\epsilon_{1}^{A}{}^{\prime})\quad{\rm and}\quad\epsilon_{F}\,\in\,(\epsilon_{2}^{A}{}^{\prime},\epsilon_{\rm max})\] respectively. For generic dispersion relations, we can also single out the values \(\epsilon_{1}^{A}\) and \(\epsilon_{2}^{A}\), such that the situations \[\epsilon_{F}\,\in\,(\epsilon_{1}^{A}{}^{\prime},\epsilon_{1}^{A})\quad{\rm and}\quad\epsilon_{F}\,\in\,(\epsilon_{2}^{A},\epsilon_{2}^{A}{}^{\prime})\] correspond to conductivity diagrams containing only one-dimensional arcs corresponding to arising of unstable periodic trajectories on the Fermi surface. Diagrams of this type can be denoted by the symbols \(1_{-}\) and \(1_{+}\) depending on the type of the Hall conductivity observed for directions of \({\bf B}\) corresponding to the presence of only closed trajectories on the Fermi surface. The interval \((\epsilon_{1}^{A},\epsilon_{2}^{A})\) corresponds to conductivity diagrams containing stability zones \(\Omega_{\alpha}\). We can say that such diagrams have a sufficient level of complexity, and it is they that will be mainly of interest to us here. For generic dispersion relations with complex angular diagrams (with an infinite number of zones \(\widehat{\Omega}_{\alpha}\)), however, it is natural to divide this interval into three intervals (see [45]) \[\epsilon_{1}^{A}\ <\ \epsilon_{1}^{B}\ <\ \epsilon_{2}^{B}\ <\ \epsilon_{2}^{A}\] Conductivity diagrams corresponding to the situations \[\epsilon_{F}\,\in\,(\epsilon_{1}^{A},\epsilon_{1}^{B})\quad{\rm and}\quad\epsilon_{F}\,\in\,(\epsilon_{2}^{B},\epsilon_{2}^{A})\,\] can be called diagrams of the \(A_{-}\) and \(A_{+}\) types, respectively. For diagrams of this type, in all regions of directions of \({\bf B}\) corresponding to the presence of only closed trajectories on the Fermi surface, the Hall conductivity has the same type (electronic and hole, respectively). Conductivity diagrams corresponding to the situation \[\epsilon_{F}\,\in\,(\epsilon_{1}^{B},\epsilon_{2}^{B})\,\] can be called diagrams of type \(B\). These diagrams are distinguished by the fact that in the space of directions of \({\bf B}\) (on the unit sphere \(\mathbb{S}^{2}\)), among the regions corresponding to the presence of only closed trajectories on the Fermi surface, there are both regions of the electronic Hall conductivity and regions of the hole Hall conductivity. 
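Purely as bookkeeping, the correspondence between the position of \(\epsilon_{F}\) and the diagram type introduced above can be written as a simple lookup. In the sketch below (ours) the reference energies are placeholders that would have to be determined separately for every dispersion relation, and the treatment of the boundary values themselves, which correspond to degenerate cases, is arbitrary.

```python
def diagram_type(eps_F, e_min, e1A_prime, e1A, e1B, e2B, e2A, e2A_prime, e_max):
    """Diagram type as a function of the Fermi level, following the classification in the text.

    Assumes e_min < e1A_prime < e1A < e1B < e2B < e2A < e2A_prime < e_max.
    """
    if eps_F <= e_min or eps_F >= e_max:
        return "no Fermi surface"
    if eps_F < e1A_prime:
        return "0-"   # small electron pockets, no open trajectories
    if eps_F < e1A:
        return "1-"   # only arcs of unstable periodic trajectories
    if eps_F < e1B:
        return "A-"   # stability zones; closed-trajectory regions all of electron type
    if eps_F <= e2B:
        return "B"    # both electron- and hole-type closed-trajectory regions
    if eps_F <= e2A:
        return "A+"
    if eps_F <= e2A_prime:
        return "1+"
    return "0+"

# example with made-up reference energies (arbitrary units)
print(diagram_type(0.1, -3.0, -2.4, -1.6, -0.5, 0.5, 1.6, 2.4, 3.0))   # -> "B"
```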
In fact, there are also two additional important differences between diagrams of type \(A\) and diagrams of type \(B\) (see [45]). 1) Generic diagrams of type \(A\) contain a finite number of zones \(\Omega_{\alpha}\), while generic diagrams of type \(B\) contain an infinite number of stability zones. 2) Generic diagrams of type \(A\) do not contain directions of \({\bf B}\) corresponding to arising of Tsarev or Dynnikov trajectories, while generic diagrams of type \(B\) contain such directions. Fig. 7 (schematically) shows a possible evolution of the conductivity diagram in the situation we describe when \(\epsilon_{F}\) changes from \(\epsilon_{1}^{A}\) to \(\epsilon_{2}^{A}\). Diagrams in the interval \((\epsilon_{1}^{B},\epsilon_{2}^{B})\) also contain stability zones with a more complex boundary than in the intervals \((\epsilon_{1}^{A},\epsilon_{1}^{B})\) and \((\epsilon_{2}^{B},\epsilon_{2}^{A})\). Namely, here we have zones, part of the boundary of which is adjacent to the regions of the electronic Hall conductivity, and part to the regions of the hole Hall conductivity. We also note here that the parts of the boundaries of \(\Omega_{\alpha}\) adjacent to the electronic Hall conductivity regions are determined by the relation \(\tilde{\epsilon}_{1}({\bf B})=\epsilon_{F}\), and the parts of the boundaries adjacent to the hole Hall conductivity regions are determined by the relation \(\tilde{\epsilon}_{2}({\bf B})=\epsilon_{F}\).

Figure 6: The zones \(\widehat{\Omega}_{\alpha}\) (top) and the zones \(\Omega_{\alpha}\) (bottom) with adjoining segments corresponding to arising of unstable periodic trajectories on the Fermi surface (schematically, nets of directions of \({\bf B}\) inside the zones corresponding to stable periodic trajectories are also indicated).

The above picture corresponds to generic dispersion relations with angular diagrams containing an infinite number of zones \(\widehat{\Omega}_{\alpha}\). This picture, in fact, may have the following degenerations. 1) \(\epsilon_{1}^{A\,\prime}=\epsilon_{1}^{A}\) or \(\epsilon_{2}^{A}=\epsilon_{2}^{A\,\prime}\), so that the corresponding interval \((\epsilon_{1}^{A\,\prime},\epsilon_{1}^{A})\) or \((\epsilon_{2}^{A},\epsilon_{2}^{A\,\prime})\) shrinks to a point. In the above picture, in this case, there are no diagrams of the type \(1_{-}\) or \(1_{+}\), so that diagrams of the types \(0_{-}\) and \(A_{-}\) or \(A_{+}\) and \(0_{+}\) (or both) immediately pass into each other. Such degeneracies often arise for dispersion relations with sufficiently high symmetry (for example, cubic). 2) Degeneration \(\epsilon_{1}^{B}=\epsilon_{2}^{B}\). In this case, there are no diagrams of type \(B\) in the above picture, and the diagram arising at the level \(\epsilon_{1}^{B}=\epsilon_{2}^{B}\) coincides with the angular diagram for the entire dispersion relation. The corresponding dispersion relations form a very special class (of infinite codimension in the space of periodic \(\epsilon(\mathbf{p})\)) and we will not consider them here. We emphasize here only that in this case we have in mind dispersion relations whose diagrams remain complex (contain an infinite number of stability zones), despite the presence of degeneracy. 
In addition to such cases, there are also deformations of the dispersion relations, under which the interval \((\epsilon_{1}^{\mathcal{B}},\epsilon_{2}^{\mathcal{B}})\) is infinitely narrowed, and the corresponding angular diagrams are simplified and become diagrams with one stability zone at the degeneracy point. Such degenerations, in a sense, correspond to the boundary between the dispersion relations of the two types described above and are observed in a much more general situation. Although we are primarily interested here in dispersion relations with an infinite number of stability zones, let us also present here a typical picture of the change in the conductivity diagram for relations corresponding to the presence of only one zone \(\widehat{\Omega}\). As before, we will assume here that all stability zones are simply connected, which corresponds to realistic dispersion relations that arise in real conductors. As in the previous case, for generic dispersion relations, here we can also introduce a set of reference points \[\epsilon_{\text{min}}<\hat{\epsilon}_{1}^{A\,\prime}<\hat{\epsilon}_{1}^{A}< \hat{\epsilon}_{1}^{\mathcal{B}}<\hat{\epsilon}_{2}^{\mathcal{B}}<\hat{ \epsilon}_{2}^{\mathcal{A}}<\hat{\epsilon}_{2}^{A\,\prime}<\epsilon_{\text{ max}}\,\] separating intervals with diagrams of different types. The intervals \([\epsilon_{\text{min}},\hat{\epsilon}_{1}^{A\,\prime})\) and \((\hat{\epsilon}_{2}^{A\,\prime},\epsilon_{\text{max}}]\), and also \((\hat{\epsilon}_{1}^{\mathcal{A}\prime},\hat{\epsilon}_{1}^{\mathcal{A}})\) and \((\hat{\epsilon}_{2}^{\mathcal{A}},\hat{\epsilon}_{2}^{\mathcal{A}\prime})\), as before, correspond here to diagrams of the types \(0_{-}\), \(0_{+}\), \(1_{-}\), and \(1_{+}\), respectively. The intervals \((\hat{\epsilon}_{1}^{\mathcal{A}},\hat{\epsilon}_{1}^{\mathcal{B}})\) and \((\hat{\epsilon}_{2}^{\mathcal{B}},\hat{\epsilon}_{2}^{\mathcal{A}})\) correspond to the diagrams \(A_{-}\) and \(A_{+}\) respectively. The only peculiarity here is that on such diagrams there is only one stability zone corresponding to a single set \((m^{1},m^{2},m^{3})\). The region that does not belong to the stability zone corresponds to the electronic Hall conductivity for diagrams of the type \(A_{-}\), and to the hole conductivity for diagrams of the type \(A_{+}\). The diagram appearing in the interval \((\hat{\epsilon}_{1}^{\mathcal{B}},\hat{\epsilon}_{2}^{\mathcal{B}})\) will be called here a diagram of type \(\widehat{B}\). It contains a single stability zone covering the entire unit sphere \(\mathbb{S}^{2}\). As in the previous case, the above picture admits degenerations. In particular, the situations \(\hat{\epsilon}_{1}^{\mathcal{A}\prime}=\hat{\epsilon}_{1}^{\mathcal{A}}\) and \(\hat{\epsilon}_{2}^{\mathcal{A}}=\hat{\epsilon}_{2}^{\mathcal{A}\prime}\) correspond here to the same types of degeneracy as in the case of complex angular diagrams. The degeneration \(\hat{\epsilon}_{1}^{\mathcal{B}}=\hat{\epsilon}_{2}^{\mathcal{B}}\) corresponds to the "boundary" between dispersion relations with complex angular diagrams and those with simple angular diagrams. ## III Lifshitz transitions and general principles of changing of conductivity diagrams Changing of the picture of open trajectories of system (I.1) can be quite simple and visual for fairly simple Fermi surfaces. An illustrative example is the classical reconstruction considered in [1] (Fig. 1), where the compact Fermi surface takes the form of a warped cylinder. 
It is easy to see that open trajectories arise in this case only for directions of \(\mathbf{B}\) orthogonal to the cylinder axis and are periodic. In the general case, however, the description of open trajectories on the Fermi surface is a rather complicated problem and often requires serious numerical studies (see e.g. [14; 26; 40]). In this paper, we will try to describe the most general features of the changes in angular diagrams during the Lifshitz transitions, based on the general topological results obtained in the study of the Novikov problem. As we will see below, such features can lead in this case to a number of very special regimes in the conductivity behavior, which are observed experimentally and are inherent precisely in situations close to topological transitions. A natural indicator of the topological complexity of a Fermi surface is its rank, namely, the number of independent directions in which the surface extends in \(\mathbf{p}\) - space. It is easy to see that the rank of the Fermi surface can take on the values \(0\), \(1\), \(2\) and \(3\) (Fig. 8). Moreover, since the Fermi surface can also be considered as a compact surface in a three-dimensional torus, it also has topological genus \(g\), which can take the values \(0\), \(1\), \(2\), \(3\), \(4\), and so on (Fig. 9). For topological reasons, the rank of a Fermi surface cannot exceed its genus. It is also important that, in addition to the topological complexity, the Fermi surface can have a very complex geometric shape in the \(\mathbf{p}\) - space, which also has a significant effect on the shape of the trajectories of system (I.1). The passage of singular points of the relation \(\epsilon(\mathbf{p})\) with increasing energy \(\epsilon_{F}\) changes the topology of the Fermi surface. It is easy to see that the passage of the minima and maxima of \(\epsilon(\mathbf{p})\) leads to arising and disappearance of (small) components of the Fermi surface, while the passage of saddle singular points leads to the merging or decay of individual components, or to a change in their genus. If we talk about a reconstruction of a connected component of the Fermi surface, then passing a saddle singular point of index \(1\) increases its genus by one, while passing a singular point of index \(2\) decreases its genus by one. More generally, passing a singular point of index \(1\) can also lead to a merging of individual components, while passing a point of index \(2\) can lead to a splitting of one component into two. According to the Morse theory, a smooth function \(\epsilon(\mathbf{p})\) on the torus \(\mathbb{T}^{3}\) has at least three saddle singular points of both index \(1\) and index \(2\) (and, of course, at least one minimum and one maximum). Quite often, however, the number of saddle singular points of \(\epsilon(\mathbf{p})\) exceeds the lower estimates given, and the genus of the Fermi surface is \(4\) or more. Certainly, a change in the topology of the Fermi surface often gives obvious indications of a possibility of arising of non-closed trajectories of system (I.1) on it. This is especially true for changes in the rank of the Fermi surface, as well as arising of periodic open trajectories. Usually, in these cases, the Fermi surfaces have a fairly simple shape and correspond to fairly simple angular conductivity diagrams.

Figure 8: Examples of Fermi surfaces of rank \(0\), \(1\), \(2\) and \(3\)

Figure 9: Topological surfaces of genus \(0\), \(1\), \(2\) and \(3\) 
It will be of interest to us here to consider the situation when the Lifshitz transitions occur on fairly complex Fermi surfaces, which also correspond to fairly complex conductivity diagrams. Since the structure of such diagrams is formed mainly by the pattern of stability zones on them, it will be of interest to us, first of all, to trace the changes in this pattern during topological transitions. Above, we described the evolution of the picture of stability zones on a complex conductivity diagram, starting from the moment they appear on the diagram until they completely disappear (Fig. 7). As the value of \(\epsilon_{F}\) increases, the diagram changes monotonically, such that the region corresponding to the presence of only closed trajectories on the Fermi surface and the electron Hall conductivity decreases monotonically (until it disappears), and the analogous region corresponding to the hole Hall conductivity increases monotonically (since its emerging). In particular, sections of the boundaries of \(\Omega_{\alpha}\) adjacent to the first region move outside the stability zones, and sections adjacent to the second region move inside the zones. As we said above, segments of the first type are defined by the relation \(\tilde{\epsilon}_{1}({\bf B})=\epsilon_{F}\), and segments of the second type are determined by the relation \(\tilde{\epsilon}_{2}({\bf B})=\epsilon_{F}\). In energy intervals that do not contain reconstructions of the Fermi surface, the evolution of the conductivity diagram is continuous. At the same time, as was pointed out in [16], the functions \(\tilde{\epsilon}_{1}({\bf B})\) and \(\tilde{\epsilon}_{2}({\bf B})\) can be locally constant in some domains on the unit sphere. This phenomenon is associated precisely with topological reconstructions of the surface \(S_{F}\), and the values of these functions on such "plateaus" coincide with the energies at which the corresponding reconstructions (the Lifshitz transitions) are observed. As can be seen, the picture of stability zones can in this case "jump" in some region of \(\mathbb{S}^{2}\) corresponding to a "plateau" of the function \(\tilde{\epsilon}_{1}({\bf B})\) or \(\tilde{\epsilon}_{2}({\bf B})\). We will try to consider here in most detail possible changes in the conductivity diagram during the Lifshitz transitions, including a description of the regimes of behavior of the tensor \(\sigma^{kl}({\bf B})\) corresponding to such changes. As we said above, we will start with the cases corresponding to arising or disappearance of stable open trajectories on the Fermi surface. In connection with the study of open trajectories of system (I.1), we will be interested in the Lifshitz transitions associated with the passage of saddle singular points of the relation \(\epsilon({\bf p})\) as \(\epsilon_{F}\) changes. Fig. 10 shows the reconstructions of the Fermi surface when passing singular points of index 1 and 2 with increasing Fermi energy. Reconstructions in Fig. 10 look like mutually inverse, we must remember, however, that in both cases, as \(\epsilon_{F}\) increases, the region \(\epsilon({\bf p})<\epsilon_{F}\) increases and the region \(\epsilon({\bf p})>\epsilon_{F}\) decreases. All changes in the picture of open trajectories on the Fermi surface with increasing \(\epsilon_{F}\) can be associated with two processes in planes orthogonal to \({\bf B}\), namely, the formation of open trajectories from closed electron-type trajectories and the decay of open trajectories into closed hole-type trajectories (Fig. 
11). Similarly, as the value of \(\epsilon_{F}\) decreases, these processes go in the opposite direction. It is easy to see that for directions of \({\bf B}\) close to the axis of the cone \[a^{2}dp_{1}^{2}+b^{2}dp_{2}^{2}-c^{2}dp_{3}^{2}\,=\,0\] (III.1) (in the coordinate system corresponding to the given saddle singular point) or, respectively, \[a^{2}dp_{1}^{2}-b^{2}dp_{2}^{2}-c^{2}dp_{3}^{2}\,=\,0\] (III.2) no changes in the picture of open trajectories of system (I.1) can occur. The equation (III.1) or (III.2) thus selects two ellipsoidal regions on the unit sphere, in which the conductivity diagram (in our sense) certainly does not change when passing through the corresponding singular point. A change in the picture of open trajectories on the Fermi surface can thus occur only in the circular region separating opposite ellipsoidal regions on \(\mathbb{S}^{2}\) (Fig. 12). It can also be noted that in the case of observation of sharp changes along the boundary of this region (or part of it), it is not difficult to determine the parameters of the corresponding singular point (more precisely, the quantities \(b/a\) and \(c/a\)).

Figure 10: Reconstructions of the Fermi surface when passing saddle singular points of \(\epsilon({\bf p})\) of index 1 (top) and index 2 (bottom)

Figure 11: Formation of open trajectories and their decay with increasing value of \(\epsilon_{F}\)

For further consideration, we need a brief description of the structure of system (I.1) on the Fermi surface in the presence of stable open trajectories on it (see [9; 12; 16]). We give here this description using a model Fermi surface. Consider in \(\mathbf{p}\) - space a periodic family of integral planes connected by cylinders (Fig. 13). As before, we call a plane in the \(\mathbf{p}\) - space integral if it is generated by two reciprocal lattice vectors. We assume that the surface under consideration is periodic with periods equal to the reciprocal lattice vectors. In addition, we assume that all planes are divided into even and odd ones, so that the even planes remain even, and the odd planes remain odd, when shifted by any period. It is easy to see that for directions of \(\mathbf{B}\) almost orthogonal to the direction of the planes, our cylinders contain closed trajectories, separating the planes from each other (Fig. 13). In this case, our planes contain stable open trajectories of system (I.1) with the mean direction given by the intersection of the plane orthogonal to \(\mathbf{B}\) and the integral direction of the planes. The carriers of open trajectories are in this case (periodically deformed) planes with holes formed after removing the closed trajectories (Fig. 14). It is also easy to see that the directions of open trajectories are opposite to each other on even and odd planes. The above picture is stable with respect to small rotations of \(\mathbf{B}\) and is preserved as long as there are closed trajectories separating integral planes on the cylinders connecting these planes. The corresponding stability zone \(\Omega\) is obviously larger the higher and narrower the cylinders connecting the planes are and, vice versa, is small for wide cylinders of low height. 
It can also be noted that the disappearance of a cylinder of closed electron-type trajectories corresponds to sections of the boundary of \(\Omega\) adjacent to the hole Hall conductivity regions, while the disappearance of a cylinder of closed hole-type trajectories corresponds to sections of the boundary adjacent to the regions of the electronic Hall conductivity. The presented picture is topological and geometrically it can look much more complicated. In particular, carriers of open trajectories can be deformed much more strongly, and the cylinders connecting them can have a very small height and a rather complex shape. Nevertheless, the described topological representation of the Fermi surface always arises when it contains stable open trajectories of system (I.1) ([9; 12; 16]). This representation is not unique for a given Fermi surface; in particular, different such representations for the same surface arise in different stability zones.

Figure 12: Changes in trajectories when passing through a singular point for different directions of \(\mathbf{B}\) and the area of possible changes in the conductivity diagram (shaded)

Figure 13: Model Fermi surface carrying stable open trajectories of system (I.1)

Figure 14: Carrier of stable open trajectories of system (I.1) on the Fermi surface

Let us now describe the possible changes in the picture of stability zones when passing a saddle point of index 1 (see Fig. 10), using the above structure. Let us first consider the case when the reconstruction of the Fermi surface leads to the formation of stable open trajectories (Fig. 11) for some direction of \({\bf B}\). Using the structure described above, we will show here that the stability zones \(\Omega_{\alpha}\) arising as a result of such a reconstruction have, in a certain sense, a special shape, and also a specific contribution to the conductivity tensor in the limit \(\,B\rightarrow\infty\,\). Since the arising of open trajectories occurs due to the reconstruction of the Fermi surface, all such trajectories must pass through a narrow neck that appears after the passage of the saddle singular point (Fig. 15). This means, in particular, that the cycle \(c\) shown in Fig. 15 must pass both through the carrier of open trajectories running in one direction and through the carrier of open trajectories running in the opposite direction. From this it follows then that it also passes from one base of a cylinder of closed trajectories separating these carriers to its other base. It can be seen, therefore, that the height of at least one cylinder of closed trajectories connecting two carriers of open trajectories is very small and tends to zero when approaching the topological transition point. In addition, this cylinder has a saddle singular point at each of its bases, which are adjacent to two different carriers of open trajectories. It is not difficult to show then that such points must lie on different necks in the \({\bf p}\) - space, and the cylinder itself, thus, passes through both these necks. It is easy to see then that quite small rotations of \({\bf B}\), except for those orthogonal to the vector connecting the indicated necks, will lead to the disappearance of the indicated cylinder of closed trajectories and, thus, to leaving the zone \(\Omega_{\alpha}\). It can be seen, therefore, that the stability zone formed as a result of the reconstruction must be a very narrow region on the sphere \(\mathbb{S}^{2}\) (Fig. 16). 
As the transition point is approached, the width of the region \(\Omega_{\alpha}\) tends to zero, so that \(\Omega_{\alpha}\) tends to a one-dimensional arc on the unit sphere (Fig. 16). It is easy to see that this arc is a segment of a great circle orthogonal to some integer direction in the \({\bf p}\) - space (namely, to the vector connecting the two necks considered above). At the points of this segment, therefore, the plane orthogonal to \({\bf B}\) always contains some fixed reciprocal lattice vector. It is also not difficult to show that the width of the region \(\Omega_{\alpha}\) tends to zero according to the law \(\sim\sqrt{(\epsilon_{F}-\epsilon_{0})/\epsilon_{F}}\) when approaching the transition point. The passage of a saddle point of index 1 can lead to arising of both a finite and an infinite number of narrow stability zones on the angular diagram. In the latter case, as can be seen, we should expect the arising of an angular diagram of type \(B\) described above. It can also be noted that the total area of the stability zones arising as a result of the topological reconstruction tends to zero when approaching the transition point. As a consequence, the passage of a singular point of index 1 does not cause an abrupt decrease in the area of the regions corresponding to the Hall conductivity of the electronic type (and the presence of only closed trajectories on the Fermi surface). The formation of stability zones when passing through a singular point of index 1 can occur on diagrams of the \(0_{-}\), \(1_{-}\), \(A_{-}\), and \(B\) types. In the very first case, as is easy to see, this leads to a change of the type \(0_{-}\) immediately to the type \(A_{-}\). As we have already said, such degeneracies are typical for dispersion relations of sufficiently high symmetry and arising of several singular points of the same type at one energy level.

Figure 15: A neck formed after a topological reconstruction and a cycle intersecting the resulting open trajectories

Figure 16: A stability zone formed as a result of passing a singular point of index 1 with an increase in the value of \(\epsilon_{F}\) (schematically)

To describe the behavior of the conductivity tensor in our situation, we also need to discuss the measure of open trajectories that appear on the Fermi surface. In fact, for generic directions of \({\bf B}\) (maximal irrationality), the measure of such trajectories is small in the parameter of proximity to the transition point. To show this, consider the Fermi surface immediately before passing through (one or several) singular points of index \(1\). For generic directions of \({\bf B}\), there are only closed trajectories on it in this case, and the Fermi surface itself represents a set of a finite number of (non-equivalent) cylinders of closed trajectories separated by singular trajectories (Fig. 17). Near the transition point at \(\,\epsilon_{F}>\epsilon_{0}\,\), thin necks appear on the Fermi surface, connecting its various parts. Considering such necks on each of the cylinders of closed trajectories, we can see that with a strong decrease in their diameter, most of the closed trajectories do not undergo any changes (Fig. 18). As a result, in the limit \(\,\epsilon_{F}\rightarrow\epsilon_{0}\,\) almost all trajectories for such directions of \({\bf B}\) remain closed. It is also easy to see that the fraction of trajectories changed on each of the cylinders is proportional to the ratio of the neck width to the height of the cylinder, i.e.
\(\sqrt{(\epsilon_{F}-\epsilon_{0})/\epsilon_{F}}\). The corresponding factor also arises for the contribution (I.3) of open trajectories to the conductivity tensor in the limit \(\,\omega_{B}\tau\rightarrow\infty\,\) (as well as a factor containing a weak logarithmic dependence on \((\epsilon_{F}-\epsilon_{0})/\epsilon_{F}\) due to the proximity of the trajectories to singular points of system (I.1) in the narrow necks). The above discussion, however, needs one important addition. Namely, in addition to the proximity to a topological transition point, in the situation under consideration, the proximity of the direction of \({\bf B}\) to the directions corresponding to arising of periodic open trajectories on the Fermi surface (even before the transition point) can also play an important role. Such trajectories can exist on both sides of the topological transition and occupy a finite area on the Fermi surface. For directions of \({\bf B}\) close to such directions, at least some of the cylinders of closed trajectories described above (Fig. 17) have small heights and large "transverse" sizes in \({\bf p}\) - space. In this case, the ratio of the neck diameter to the cylinder height can remain finite. The proximity of generic directions of \({\bf B}\) to the directions described above may be caused by the specific geometry of the stability zones (Fig. 16). This applies, first of all, to the limit segment inside the zone which can be a set of directions corresponding to arising of periodic trajectories even before the transition point. As we have seen above, the width of the region \(\Omega_{\alpha}\) is also proportional to \(\sqrt{(\epsilon_{F}-\epsilon_{0})/\epsilon_{F}}\) and the same we can also say about the height of a part of the cylinders of closed trajectories that arise for our directions of \(\,{\bf B}\in\Omega_{\alpha}\,\) before the topological transition (when there are no open trajectories for these directions yet). As a consequence, the measure of open trajectories on the Fermi surface in the zone \(\Omega_{\alpha}\) can remain finite near the transition. In this case, however, the emerging stable open trajectories have a specific geometry. Namely, they are limited by straight strips of rather large width and repeat the geometry of periodic trajectories on small scales (see Fig. 19). For the described directions of \({\bf B}\), we can introduce a function \(\mu({\bf B})\), which determines the ratio of the minimal width of strips containing open trajectories to the size of the Brillouin zone. The contribution of open trajectories to the conductivity at \(\,\omega_{B}\tau\gg 1\,\) is different in the intervals \(\,1\ll\omega_{B}\tau\leq\mu({\bf B})\,\) and \(\,\omega_{B}\tau\gg\mu({\bf B})\,\). In the first case, this contribution can be approximated by the formula (I.3), provided that the direction of the \(x\) axis coincides with the direction of periodic open trajectories. 
Figure 17: An example of a Fermi surface consisting of cylinders of closed trajectories of the system (I.1)

Figure 18: Thin necks adjacent to a cylinder of closed trajectories of finite height and reconstruction of trajectories on the Fermi surface (cylinders of conserved trajectories are shaded)

In the second case, the direction of the \(x\) axis must coincide with the mean direction of stable open trajectories in \({\bf p}\) - space, and the total contribution of such trajectories to the conductivity tensor can be represented as \[\sigma^{kl}\,\simeq\,\frac{ne^{2}\tau}{m^{*}}\left(\begin{array}{ccc}\mu^{2}(\omega_{B}\tau)^{-2}&(\omega_{B}\tau)^{-1}&\mu(\omega_{B}\tau)^{-1}\\ (\omega_{B}\tau)^{-1}&\mu^{-2}&\mu^{-1}\\ \mu(\omega_{B}\tau)^{-1}&\mu^{-1}&*\end{array}\right)\] (III.3) (\(\omega_{B}\tau\,\rightarrow\,\infty\)). The function \(\mu({\bf B})\) has a strong dependence on the direction of \({\bf B}\) and goes to infinity for directions corresponding to the arising of periodic trajectories that exist on both sides of the transition. The zones \(\Omega_{\alpha}\), which have additional (rotational) symmetry and appear as a result of emerging of several singular points of index \(1\) at the same level \(\epsilon_{0}\), also deserve special mention. The sizes of such zones tend to zero in all directions as the transition point is approached, and the measure of open trajectories on the Fermi surface at \({\bf B}\in\Omega_{\alpha}\) is proportional to \(\sqrt{(\epsilon_{F}-\epsilon_{0})/\epsilon_{F}}\). Symmetric stability zones, however, have one more peculiarity. Namely, the vector \((m^{1},m^{2},m^{3})\) for such zones coincides with the direction passing through the center of the zone (see [16]), and for this direction of \({\bf B}\) there are no open trajectories on the Fermi surface. For directions of \({\bf B}\) lying in a symmetric zone of small sizes, stable open trajectories lie in straight strips of large width and have rather complex behavior on small scales (see Fig. 20). To describe the contribution to the conductivity given by open trajectories in symmetric stability zones arising during the Lifshitz transitions, it is also natural to consider the intervals \(1\ll\omega_{B}\tau\leq\mu({\bf B})\) and \(\omega_{B}\tau\gg\mu({\bf B})\) for the function \(\mu({\bf B})\), which has the same meaning as above. In the first interval, the behavior of the conductivity is more complicated (the arising of intermediate fractional powers of \(\omega_{B}\tau\) is possible), while the total contribution of such trajectories to the conductivity tensor also contains a small factor of the order of \(\sqrt{(\epsilon_{F}-\epsilon_{0})/\epsilon_{F}}\). In the second interval, the contribution of open trajectories to the conductivity is similar to the contribution (III.3), and also multiplied by a factor of the order of \(\sqrt{(\epsilon_{F}-\epsilon_{0})/\epsilon_{F}}\). It is likely that the observation also of a weak logarithmic dependence on \((\epsilon_{F}-\epsilon_{0})/\epsilon_{F}\), due to the proximity to singular points of \(\epsilon({\bf p})\), is somewhat complicated here from the experimental point of view. It must be said that the conductivity in the region of the small stability zones, arising as a result of the Lifshitz transitions, as a whole, has a rather complex behavior, and it is probably more convenient to study the geometry of such zones using methods that differ from methods of direct measurements of conductivity (see, for example, [44]).
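As a rough numerical illustration (not part of the original text), the asymptotic form (III.3) can be encoded directly; the entry marked \(*\), which is of order unity, is represented below by a placeholder argument, and the example values of \(\mu\) and \(\omega_{B}\tau\) are assumptions.

```python
import numpy as np

def sigma_open_contribution(n, e, tau, m_star, omega_b_tau, mu, star=1.0):
    """Order-of-magnitude contribution (III.3) of stable open trajectories to
    the conductivity tensor for omega_B*tau >> mu(B), with the x axis along
    the mean direction of the open trajectories.  `star` stands for the O(1)
    entry denoted by '*' in (III.3)."""
    w = omega_b_tau
    prefactor = n * e**2 * tau / m_star
    return prefactor * np.array([
        [mu**2 / w**2, 1.0 / w,      mu / w],
        [1.0 / w,      1.0 / mu**2,  1.0 / mu],
        [mu / w,       1.0 / mu,     star],
    ])

# Example with assumed values: conduction across the strips containing the
# open trajectories (the xx component) is suppressed most strongly.
print(sigma_open_contribution(n=1.0, e=1.0, tau=1.0, m_star=1.0,
                              omega_b_tau=1.0e3, mu=10.0))
```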
At the same time, the arising of such zones plays a very important role in changing the structure of a general conductivity diagram, especially in the case of arising of complex diagrams of the type \(B\).

Figure 20: Stable open trajectories arising for directions of \({\bf B}\) lying in a symmetric stability zone of small sizes

Let us now consider the second possible situation, namely, the disappearance of stable open trajectories of system (I.1) when passing through a singular point of index \(1\) (Fig. 11). In this situation, therefore, we will talk about the disappearance of a stability zone or a part of it. The corresponding changes, obviously, can occur only on diagrams of the types \(A_{-}\), \(B\), and \(A_{+}\). As we have already seen, the decay of stable open trajectories of system (I.1) becomes possible when the carriers of such trajectories are no longer separated from each other and the possibility of "jumping" between such carriers appears. Thus, a topological reconstruction of the Fermi surface leads to the decay of stable open trajectories if, as a result of the reconstruction, a new cylinder is formed that connects two carriers and makes it possible to "jump" between them for a given direction of \({\bf B}\). This is exactly the situation that leads to the formation of plateaus in the values of the functions \(\tilde{\epsilon}_{1}({\bf B})\) and \(\tilde{\epsilon}_{2}({\bf B})\) (in our case, of \(\tilde{\epsilon}_{2}({\bf B})\)) (see [16]), which, in turn, should lead to jumps in the conductivity diagrams, considered by us here. As we have already noted, a change in the conductivity diagram during our reconstructions can occur only in a special circular region (Fig. 12). With a change of considered type, we can observe an instantaneous disappearance of zones of finite sizes (or their parts) at the moment of the topological transition. In addition, the measure of open trajectories on the Fermi surface here also remains finite up to the moment of transition and instantly vanishes (for generic directions of \({\bf B}\)) when the surface is reconstructed. At the same time, the proximity to the Lifshitz transition affects the specifics of the trajectories of system (I.1), as well as the behavior of the conductivity tensor, also in the described situation. In this case, it is expressed in arising of very long closed trajectories of system (I.1) on the Fermi surface for generic directions of \({\bf B}\) lying in the disappeared stability zones or their disappeared parts (see Fig. 21). The average length of such trajectories tends to infinity near the transition and decreases away from it. In addition, as we noted above, the values of the functions \(\epsilon_{1}({\bf B})\) and \(\epsilon_{2}({\bf B})\) differ from the values of \(\tilde{\epsilon}_{1}({\bf B})\) and \(\tilde{\epsilon}_{2}({\bf B})\) for directions of \({\bf B}\) corresponding to arising of periodic open trajectories. As a consequence of this, the periodic trajectories of system (I.1) on the Fermi surface do not disappear immediately at the moment of transition, but persist for some time. As a result, the region of a vanished stability zone (or its vanished part) is covered by a net of one-dimensional arcs corresponding to the presence of periodic trajectories on the Fermi surface. The net of corresponding arcs becomes denser when approaching a topological transition and becomes infinitely dense at the moment of transition.
The described features of the trajectories of (I.1) near the Lifshitz transition also lead to a rather complicated behavior of the conductivity tensor in strong magnetic fields. The corresponding behavior of \(\sigma^{kl}({\bf B})\) was described in [43], where it appeared in very narrow regions near the boundaries of the zones \(\Omega_{\alpha}\) in the conductivity diagrams. Here, however, such behavior occurs in rather large areas, namely, in the place of the disappeared zones \(\Omega_{\alpha}\) or their parts. In such domains, for generic directions of \({\bf B}\), it is natural to introduce an (approximate) function \(\lambda({\bf B})\), which determines the ratio of the average size of long closed trajectories to the size of the Brillouin zone. In addition, when considering the conductivity in these regions, it is natural to keep the coordinate system corresponding to the disappeared open trajectories, namely, choosing the mean direction of the former open trajectories in the \({\bf p}\) - space as the \(x\) axis. A natural consequence of the geometry of trajectories in the regions under consideration is that their contribution to the conductivity manifests itself as the contribution of closed trajectories under the much stronger condition \(\,\omega_{B}\tau\gg\lambda({\bf B})\,\), while in the interval \(\,1\ll\omega_{B}\tau\leq\lambda({\bf B})\,\) their contribution rather corresponds to that of open trajectories. However, even in the limit \(\,\omega_{B}\tau\gg\lambda({\bf B})\,\), the contribution of the resulting closed trajectories preserves anisotropy in the plane orthogonal to \({\bf B}\). In addition, it can be seen that the long closed trajectories are formed from open trajectories located on two different carriers. For dispersion relations satisfying the condition \(\,\epsilon({\bf p})=\epsilon(-{\bf p})\,\) (and Fermi surfaces of not too large genus), this actually implies the relation \(\,\langle v_{gr}^{z}\rangle\to 0\,\) at \(\,\lambda({\bf B})\to\infty\,\) for the trajectory-averaged electron group velocity along the direction of magnetic field. As a consequence, the contribution of such trajectories to the conductivity along the magnetic field actually tends to zero in the limit \(\,\omega_{B}\tau\gg\lambda({\bf B})\gg 1\,\). In the latter, such a contribution is similar to the contribution to the conductivity given by unstable Dynnikov's trajectories in the limit \(\,\omega_{B}\tau\gg 1\,\). 
In general, the total contribution of the long closed trajectories to the symmetric \(s^{kl}\) and antisymmetric \(a^{kl}\) parts of the conductivity tensor in the limit \(\,\omega_{B}\tau\gg\lambda({\bf B})\,\) can be represented as ([43]) \[s^{kl}(B)\,\simeq\,\left(\begin{array}{ccc}0&0&0\\ 0&0&0\\ 0&0&\sigma^{zz}(\lambda)\end{array}\right)\,+\,\frac{ne^{2}\tau}{m^{*}}\left(\begin{array}{ccc}(\omega_{B}\tau)^{-2}&\lambda(\omega_{B}\tau)^{-2}&\lambda(\omega_{B}\tau)^{-2}\\ \lambda(\omega_{B}\tau)^{-2}&\lambda^{2}(\omega_{B}\tau)^{-2}&\lambda^{2}(\omega_{B}\tau)^{-2}\\ \lambda(\omega_{B}\tau)^{-2}&\lambda^{2}(\omega_{B}\tau)^{-2}&\lambda^{2}(\omega_{B}\tau)^{-2}\end{array}\right)\] (where \(\,\sigma^{zz}(\lambda)\to 0\,\) at \(\,\lambda\to\infty\)), \[a^{kl}(B)\,\simeq\,\frac{ne^{2}\tau}{m^{*}}\left(\begin{array}{ccc}0&(\omega_{B}\tau)^{-1}&(\omega_{B}\tau)^{-1}\\ (\omega_{B}\tau)^{-1}&0&(\omega_{B}\tau)^{-1}\\ (\omega_{B}\tau)^{-1}&(\omega_{B}\tau)^{-1}&0\end{array}\right)\] It must be said that the condition \(\,\omega_{B}\tau\gg\lambda({\bf B})\,\) can be quite strong, and in many cases some intermediate regime between the regime (I.3) and the dependency described above can be observed. We also note that, in the general case, in addition to the contribution described above, we must also add the contribution (I.2) from "ordinary" closed trajectories, which are also present on the Fermi surface in the described situation. The value \(\lambda({\bf B})\) goes to infinity at the topological transition point. For a fixed generic direction of \({\bf B}\), its behavior near \(\epsilon_{0}\) can be (approximately) described by the dependence \[\lambda\,\sim\,\sqrt{\epsilon_{F}/|\epsilon_{F}-\epsilon_{0}|}\] At the same time, the dependence of \(\lambda\) on the direction of \({\bf B}\) is quite complicated, in particular, \(\lambda\) goes to infinity on the (preserved) arcs corresponding to arising of periodic trajectories of (I.1). In general, vanishing stability zones (or parts of them) can be called regions of complex conductivity behavior in strong magnetic fields.

Figure 21: Long closed trajectories arising on the Fermi surface near a topological transition (schematically)

In fact, both described effects (arising and disappearance of stability zones) can be observed simultaneously (in different parts of an angular diagram) when passing through a saddle singular point of index 1. Fig. 22 shows an example of one of such reconstructions of the Fermi surface. It is easy to see that before the reconstruction the angular diagram is rather simple (of the type \(A_{-}\)) and contains one stability zone (with a diametrically opposite one). After the reconstruction, a part of the stability zone disappears, being replaced by a zone of complex conductivity behavior. In addition, many small zones arise that separate the region of the electron Hall conductivity from the region of the hole Hall conductivity that appears in the diagram. It can be shown that, in the immediate vicinity of the topological transition, chains of small stability zones are located very close to zones of complex conductivity behavior, while further away from the transition they shift towards the "equator". In general, the conductivity diagram after the transition is of type \(B\) and, in the generic case, must also contain directions of \(\mathbf{B}\) corresponding to arising of unstable trajectories of the Tsarev or Dynnikov type.
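The following small sketch (illustrative only; all parameter values are assumptions) encodes the order-of-magnitude expressions just quoted for \(s^{kl}\) and \(a^{kl}\), together with the growth law \(\lambda\sim\sqrt{\epsilon_{F}/|\epsilon_{F}-\epsilon_{0}|}\) near the transition.

```python
import numpy as np

def lambda_near_transition(eps_f, eps_0):
    """Approximate growth of the relative size of the long closed trajectories
    near the transition: lambda ~ sqrt(eps_F / |eps_F - eps_0|)."""
    return np.sqrt(eps_f / abs(eps_f - eps_0))

def long_closed_contribution(n, e, tau, m_star, omega_b_tau, lam, sigma_zz=0.0):
    """Order-of-magnitude symmetric (s) and antisymmetric (a) parts of the
    contribution of the long closed trajectories for omega_B*tau >> lambda(B),
    following the expressions quoted above from [43]; sigma_zz -> 0 as
    lambda -> infinity."""
    w = omega_b_tau
    pref = n * e**2 * tau / m_star
    s = np.diag([0.0, 0.0, sigma_zz]) + pref * np.array([
        [1.0 / w**2,  lam / w**2,     lam / w**2],
        [lam / w**2,  lam**2 / w**2,  lam**2 / w**2],
        [lam / w**2,  lam**2 / w**2,  lam**2 / w**2],
    ])
    a = pref * (1.0 / w) * np.array([
        [0.0, 1.0, 1.0],
        [1.0, 0.0, 1.0],
        [1.0, 1.0, 0.0],
    ])
    return s, a

# Example with assumed values near the transition point.
lam = lambda_near_transition(eps_f=1.0, eps_0=0.999)
s, a = long_closed_contribution(1.0, 1.0, 1.0, 1.0, omega_b_tau=1.0e5, lam=lam)
```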
We note here that the passage of singular points with an increase in the value of \(\epsilon_{F}\) can cause abrupt changes in the conductivity diagrams, however, preserving the general direction of their evolution, shown in Fig. 7. As a consequence, the disappearance of a stability zone or part of it leads in this situation to an abrupt increase in the area corresponding to the hole Hall conductivity (and presence of only closed trajectories on the Fermi surface). At the same time, the area of the region corresponding to the electronic Hall conductivity (and presence of only closed trajectories on the Fermi surface) immediately near the transition remains unchanged. It can be seen, therefore, that if the described effect (disappearance of stable open trajectories when passing through a singular point of index 1) takes place for a diagram of type \(A_{-}\), the type of the diagram changes to type \(B\). From the same considerations, we can conclude that the observation of the described effect on diagrams of type \(B\) does not change the type of a diagram. Observation of the described effect on a diagram of type \(A_{+}\) preserves the type of the diagram or changes it to type \(1_{+}\). In connection with what has been said above, we would like to note here another important circumstance. Namely, diagrams of type \(B\) are generic diagrams and should, generally speaking, be observed in the study of sufficiently rich families of Fermi surfaces of sufficiently complex shape. However, they have not yet been experimentally discovered. The same is also true for unstable trajectories of the Tsarev or Dynnikov type, which must accompany such diagrams. One of the reasons for this, in our opinion, may be a rather small width of the interval \((\epsilon_{1}^{\mathcal{B}},\epsilon_{2}^{\mathcal{B}})\) and, accordingly, a low probability of the value \(\epsilon_{F}\) falling into it for real dispersion relations. In this regard, it can be expected that the use of the Lifshitz transitions can provide good opportunities for observing such diagrams, as well as nontrivial regimes of conductivity behavior corresponding to arising of Tsarev or Dynnikov trajectories on the Fermi surface. In general, passing a saddle point of index 1 with increasing Fermi energy can produce changes in the conductivity diagrams of the types listed below, with the following possible changes in the type of a diagram \[\begin{array}{c}0_{-}\to A_{-}\\ 1_{-}\to A_{-}\\ A_{-}\rightarrow\{A_{-},B\}\\ B\to B\\ A_{+}\rightarrow\{1_{+},A_{+}\}\end{array}\] The effects described above correspond to the passage of a saddle singular point of index 1 in the "forward" direction, namely, as the Fermi energy \(\epsilon_{F}\) increases. Certainly, as a result of an external influence, the Lifshitz transitions can be performed both in the "forward" and "backward" directions. Moreover, an external action does not necessarily simply change the Fermi level, but generally changes the entire dispersion relation \(\epsilon(\mathbf{p})\). Obviously, in the general case, the topology of a reconstruction of the Fermi surface is actually determined by changes in the relation \(\epsilon(\mathbf{p})\) near the corresponding singular point. It is natural, however, to keep the terms "the passage of a singular point in the forward or backward direction", based on the coincidence of the topology of the corresponding transition with the topology of the transition with increasing or decreasing \(\epsilon_{F}\). 
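The allowed changes of diagram type listed above for the forward passage of an index-1 saddle point can also be stored compactly, for instance when tabulating experimental observations. A minimal sketch (the string labels for the diagram types are ours):

```python
# Possible diagram types after passing a saddle singular point of index 1
# in the "forward" direction (increasing Fermi energy), as listed above.
INDEX1_FORWARD_TRANSITIONS = {
    "0-": {"A-"},
    "1-": {"A-"},
    "A-": {"A-", "B"},
    "B":  {"B"},
    "A+": {"1+", "A+"},
}

def allowed_after_index1_forward(diagram_type):
    """Return the set of diagram types that may result from the transition."""
    return INDEX1_FORWARD_TRANSITIONS[diagram_type]
```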
Considering the change in the relation \(\epsilon(\mathbf{p})\) to be continuous, in a rather narrow neighborhood of a topological transition, the influence of the general change in \(\epsilon(\mathbf{p})\) can be neglected and it can be assumed that the main changes in the picture of stability zones are caused precisely by the reconstruction of the Fermi surface. In order to describe the corresponding changes in the angular diagram, we can use the picture obtained in the consideration given above. As we have already said, the above picture refers to the passage of a singular point of index 1 in the forward direction. In the general case, we are interested here in a similar description for passing the point of index 1 in the backward direction, as well as passing the point of index 2 in the forward and backward directions. The passage of a singular point of index 1 in the backward direction naturally leads to effects opposite to those described above. Namely, passing a saddle point of index 1 in the backward direction can produce changes in the conductivity diagrams of the types listed below, with the following possible changes in the type of a diagram \[\begin{array}{c}A_{-}\rightarrow\{0_{-},1_{-},A_{-}\}\\ B\rightarrow\{A_{-},B\}\\ A_{+}\to A_{+}\\ 1_{+}\to A_{+}\end{array}\] In this case, the picture of stability zones in the conductivity diagram can undergo the following specific changes 1) Reducing the size of a certain (finite or infinite) number of stability zones and their disappearance directly at the topological transition point. 2) Formation of regions of complex conductivity behavior of finite size in the region of the hole Hall conductivity when approaching a topological transition and their transformation into stability zones at the transition point (or their attachment to already existing zones). The passage of a saddle singular point of index 2 in the forward direction is in fact similar to the passage of a singular point of index 1 in the backward direction, however, with the replacement of the electronic Hall conductivity regions by the hole Hall conductivity regions, and vice versa. Thus, passing the saddle singular point of index 2 in the forward direction can produce changes in the conductivity diagrams of the types listed below, with the following possible changes in the type of a diagram \[\begin{array}{c}1_{-}\to A_{-}\\ A_{-}\to A_{-}\\ B\rightarrow\{A_{+},B\}\\ A_{+}\rightarrow\{0_{+},1_{+},A_{+}\}\end{array}\] In this case, the picture of stability zones in the conductivity diagram can undergo the following specific changes. 1) Reducing the size of a certain (finite or infinite) number of stability zones and their disappearance directly at the topological transition point. 2) Formation of regions of complex conductivity behavior of finite size in the region of the electronic Hall conductivity when approaching a topological transition and their transformation into stability zones at the transition point (or their attachment to already existing zones). Similarly, passing a saddle singular point of index 2 in the backward direction can produce changes in the conductivity diagrams of the types listed below, with the following possible changes in the type of a diagram \[\begin{array}{c}A_{-}\rightarrow\{1_{-},A_{-}\}\\ B\to B\\ A_{+}\rightarrow\{A_{+},B\}\\ 1_{+}\to A_{+}\\ 0_{+}\to A_{+}\end{array}\] In this case, the picture of stability zones in the conductivity diagram can undergo the following specific changes. 
1) Arising (of a finite or infinite number) of new stability zones of zero size at the transition point and a gradual increase in their size with further distance from it.

2) Disappearance of some stability zones of finite size (or their parts) with the formation of regions of complex conductivity behavior with the electronic type of Hall conductivity. If this effect is observed on an \(A_{+}\) diagram, it turns into a \(B\) type diagram, with the arising of an infinite number of stability zones, as well as directions of \({\bf B}\) corresponding to the occurrence of Tsarev's or Dynnikov's trajectories on the Fermi surface.

Figure 22: An example of a passage of a saddle singular point of index 1 and the corresponding change in the angular conductivity diagram (schematically, the change in the picture of stability zones and the formation of a region of complex conductivity behavior are shown)

It can also be noted here that each of the above descriptions can also be used in the case of passing several singular points of the same type at once. Such a situation, in fact, can arise quite often for dispersion relations with additional (rotational) symmetry. As for the simultaneous passage of singular points of different types, such a situation is nongeneric and is observed only for special dispersion relations. In particular, it may refer to relations separating relations with simple angular diagrams (with a single stability zone) and relations with complex angular diagrams. Each of the above descriptions of the changes in conductivity diagrams, certainly, can also be used for dispersion relations with simple angular diagrams, taking into account the peculiarities of the evolution of conductivity diagrams for such relations. In conclusion, we would like to note that although the above picture refers primarily to changes in the structure of stability zones in conductivity diagrams, it also, in fact, describes many features of arising and disappearance of unstable open trajectories of system (I.1) on the Fermi surface under the described reconstructions. Moreover, if we talk about trajectories of the Tsarev or Dynnikov type, their arising is uniquely related to diagrams of the type \(B\) and, thus, is completely determined by the picture of stability zones on the unit sphere. If we consider unstable periodic open trajectories of system (I.1), then, as can be seen, most of them are also associated with stability zones and appear either near their edges or in regions of complex conductivity behavior. As can also be seen, most of these trajectories do not change directly at the topological transition point, but undergo changes at some distance from it, following the changes in the corresponding stability zone. Among the unstable periodic trajectories of system (I.1), however, we should also especially note the trajectories that are not tied to any specific stability zone on the conductivity diagram. In fact, the corresponding directions of \({\bf B}\) almost always belong to some zones \(\widehat{\Omega}_{\alpha}\) for the entire dispersion relation, and the corresponding trajectories are consistent with stable open trajectories occurring in these zones. However, when they appear far from the corresponding interval \((\tilde{\epsilon}_{1}({\bf B}),\tilde{\epsilon}_{2}({\bf B}))\), they are usually not very interesting from this point of view. Instead, however, they are usually closely related to the topology of a given Fermi surface and exhibit its geometric properties well.
A representative example of such trajectories is provided by the periodic trajectories considered in [1].

## IV Conclusion

The paper considers the topological Lifshitz transitions in metals and related changes in galvanomagnetic phenomena from the point of view of the general Novikov problem. Namely, the picture of possible changes in electron trajectories in a metal during topological reconstructions of the Fermi surface and the corresponding changes in the behavior of electrical conductivity in the presence of strong magnetic fields is considered. The consideration is based on the classification of non-closed electron trajectories arising on Fermi surfaces of arbitrary complexity, and of the corresponding behavior of conductivity in strong magnetic fields. The main analysis is based on the description of possible changes in the picture of stable open trajectories on the Fermi surface during topological transitions. As shown in the paper, the Lifshitz transitions are accompanied by a certain number of such changes, which make it possible to determine the features of the transition topology based on the observation of the conductivity in strong magnetic fields. The results obtained in the work can serve as one of the tools for studying topological Lifshitz transitions in metals with complex Fermi surfaces. The study was supported by the grant of the Russian Science Foundation No. 21-11-00331, "Geometric methods in the Hamiltonian theory of integrable and almost integrable systems".
2303.17583
TiDy-PSFs: Computational Imaging with Time-Averaged Dynamic Point-Spread-Functions
Point-spread-function (PSF) engineering is a powerful computational imaging technique wherein a custom phase mask is integrated into an optical system to encode additional information into captured images. Used in combination with deep learning, such systems now offer state-of-the-art performance at monocular depth estimation, extended depth-of-field imaging, lensless imaging, and other tasks. Inspired by recent advances in spatial light modulator (SLM) technology, this paper answers a natural question: Can one encode additional information and achieve superior performance by changing a phase mask dynamically over time? We first prove that the set of PSFs described by static phase masks is non-convex and that, as a result, time-averaged PSFs generated by dynamic phase masks are fundamentally more expressive. We then demonstrate, in simulation, that time-averaged dynamic (TiDy) phase masks can offer substantially improved monocular depth estimation and extended depth-of-field imaging performance.
Sachin Shah, Sakshum Kulshrestha, Christopher A. Metzler
2023-03-30T17:51:07Z
http://arxiv.org/abs/2303.17583v1
# TiDy-PSFs: Computational Imaging with Time-Averaged Dynamic Point-Spread-Functions ###### Abstract Point-spread-function (PSF) engineering is a powerful computational imaging technique wherein a custom phase mask is integrated into an optical system to encode additional information into captured images. Used in combination with deep learning, such systems now offer state-of-the-art performance at monocular depth estimation, extended depth-of-field imaging, lensless imaging, and other tasks. Inspired by recent advances in spatial light modulator (SLM) technology, this paper answers a natural question: Can one encode additional information and achieve superior performance by changing a phase mask dynamically over time? We first prove that the set of PSFs described by static phase masks is non-convex and that, as a result, time-averaged PSFs generated by dynamic phase masks are fundamentally more expressive. We then demonstrate, in simulation, that time-averaged dynamic (TiDy) phase masks can offer substantially improved monocular depth estimation and extended depth-of-field imaging performance. ## 1 Introduction Extracting depth information from an image is a critical task across a range of applications including autonomous driving [26, 30], robotics [21, 31], microscopy [7, 18], and augmented reality [28, 14]. To this end, researchers have developed engineered phase masks and apertures which serve to encode depth information into an image [12, 23]. To optimize these phase masks, recent works have exploited deep learning: By simultaneously optimizing a phase mask and a reconstruction algorithm, "end-to-end learning" is able to dramatically improve system performance [29, 24]. Most existing works have focused on learning or optimizing a single phase mask for passive depth perception. We conjecture that this restriction leaves much room for improvement. Perhaps by using an SLM to introduce a sequence of phase masks over time, one could do much better. Supporting this idea is the fact, which we prove in Theorem 2, that the set of PSFs described by a single phase mask is non-convex. This implies that time-averaged PSFs, which span the convex hull of this set, can be significantly more expressive. In this work, we exploit the PSF non-convexity by developing a multi-phase mask end-to-end optimization approach for learning a sequence of phase masks whose PSFs are averaged over time. This work's central contributions are as follows: * We prove that the set of PSFs generated by a single phase mask is non-convex. Thus, dynamic phase masks offer a fundamentally larger design space. * We extend the end-to-end learning optics and algorithm design framework to design a dynamic set of phase masks. * We demonstrate, in simulation, that time-averaged PSFs can achieve superior monocular depth estimation and extended depth-of-field imaging performance.

Figure 1: **Time-averaged Dynamic PSFs** Top: Phase mask sequence that was optimized to perform simultaneous extended depth-of-field imaging and monocular depth estimation. Middle: Proposed TiDy PSFs at specific depths. Bottom left: Depth estimation and all-in-focus imaging performance improve as one averages over more phase masks. Bottom right: Depth-encoded image and reconstructed depth map.

## 2 Background **Image Formation Model.** One can simulate the formation of an image in a camera by discretizing an RGB image by depth, convolving each depth with its corresponding PSF, and compositing the outputs to form the signal on the sensor.
This process can be represented by the equation \[I=\sum_{d=1}^{D}O_{d}\left(L*h_{d}\right), \tag{1}\] where \(L\) represents the all-in-focus image, \(\{1,\cdots,D\}\) represent a set of discrete depth layers, \(O_{d}\) is the occlusion mask at depth \(d\), and the set \(\{h_{1},\cdots,h_{D}\}\) represents the depth-dependent PSF, i.e., the camera's response to point sources at various depths [9]. Other works assume no depth discontinuities [24] or add additional computation to improve blurring at depth boundaries [10]. Our model is similar to those used in [29, 3]. **PSF Formation Model.** A PSF \(h_{d}\) can be formed as a function of distance \(d\) and phase modulation \(\phi^{M}\) caused by height variation on a phase mask: \[h_{d}=|\mathcal{F}[A\exp(i\phi^{DF}(d)+i\phi^{M})]|^{2} \tag{2}\] where \(\phi^{DF}(d)\) is the defocus aberration due to the distance \(d\) between the focus point and the depth plane. Note that because this PSF depends on depth, it can be used to encode depth information into \(I\) [8]. The key idea behind PSF-engineering and end-to-end learning is that one can use the aforementioned relationships to encode additional information into a captured image \(I\) by selecting a particularly effective mask \(\phi^{M}\). ## 3 Related Work ### Computational Optics for Depth Tasks Optics-based approaches for depth estimation use sensors and optical setups to encode and recover depth information. Modern methods have used the depth-dependent blur caused by an aperture to estimate the depth of pixels in an image. These approaches compare the blur at different ranges to the expected blur caused by an aperture focused at a fixed distance [25]. Groups improved on this idea by implementing coded apertures, retaining more high-frequency information about the scene to disambiguate depths [12]. Similar to depth estimation tasks, static phase masks have been used to produce tailored PSFs more invariant to depth, allowing for extended depth-of-field imaging [6]. However, these optically driven approaches have been surpassed in performance by modern deep-learning methods, which allow for joint optimization of optical elements and neural reconstruction networks. ### Deep Optics Many methods have engineered phase masks with specific depth qualities. By maximizing Fisher information for depth, the coded image will theoretically retain as many depth cues as possible [22], and by minimizing Fisher information, one may achieve an extended depth-of-field image [6]. Deep learning techniques can be used to jointly train the optical parameters and neural network based estimation methods. The idea is that one can "code" an image to retain additional information about a scene, and then use a deep neural network to produce reconstructions. By using a differentiable model for light propagation, back-propagation can be used to update phase mask values simultaneously with neural network parameters. This approach was demonstrated for extended depth-of-field imaging [24, 10, 13], depth estimation [29, 3, 10], and holography [5, 4]. While these previous approaches successfully improved performance, they focused on enhancing a single phase mask. We build on these works by simultaneously optimizing multiple phase masks, which allows us to search over a larger space of PSFs. ## 4 Theory Micro-ElectroMechanical SLMs offer high framerates but have limited phase precision due to heavy quantization [1].
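Before turning to the convexity argument, a minimal numerical sketch (not the authors' code) of the background forward model may be useful. It illustrates Eqs. (1) and (2) and the time averaging over a phase mask sequence under simplifying assumptions: a single wavelength channel, a noiseless sensor, and per-depth defocus phases \(\phi^{DF}(d)\) assumed to be given; all function and variable names are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def psf_from_phase_mask(aperture, phi_defocus, phi_mask):
    """Eq. (2): depth-dependent PSF as the squared magnitude of the Fourier
    transform of the pupil function A * exp(i*phi_DF(d) + i*phi_M),
    normalized to unit energy.  All inputs are 2-D arrays of the same shape."""
    pupil = aperture * np.exp(1j * (phi_defocus + phi_mask))
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
    return psf / psf.sum()

def render_coded_image(all_in_focus, occlusion_masks, psfs):
    """Eq. (1): blur the all-in-focus image L with the PSF of each depth layer
    and composite with the occlusion masks O_d.
    all_in_focus: (H, W); occlusion_masks: (D, H, W); psfs: list of (h, w)."""
    image = np.zeros_like(all_in_focus)
    for o_d, h_d in zip(occlusion_masks, psfs):
        image += o_d * fftconvolve(all_in_focus, h_d, mode="same")
    return image

def time_averaged_coded_image(all_in_focus, occlusion_masks, aperture,
                              phi_defocus_per_depth, phase_mask_sequence):
    """Average the coded images produced by each phase mask in the sequence,
    i.e. the time-averaged measurement considered in this paper."""
    images = []
    for phi_mask in phase_mask_sequence:
        psfs = [psf_from_phase_mask(aperture, phi_d, phi_mask)
                for phi_d in phi_defocus_per_depth]
        images.append(render_coded_image(all_in_focus, occlusion_masks, psfs))
    return np.mean(images, axis=0)
```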
As [4] noted, intensity averaging of multiple frames can improve quality by increasing effective precision to overcome quantization. Our key insight is that even as SLM technology improves, intensity averaging yields a more expressive design space than a single phase mask. This is supported by the claim that the set of PSFs that can be generated by a single phase mask is non-convex. We provide a rigorous proof for the claim as follows. **Definition 1**.: \(A\in\{0,1\}^{N\times N}\) _is some valid aperture with a non-zero region \(S\) such that there exists lines \(L_{1}\) and \(L_{2}\) where \(S\) can be contained between them, and \(L_{1}\parallel L_{2}\) and \(u=S\cap L_{1}\) and \(v=S\cap L_{2}\) are single points (Figure 2)._ This definition of \(A\) supports most commonly used apertures including but not limited to circles, squares, and \(n\)-sided regular polygons. See supplement for proof for all shapes. **Definition 2**.: _Let \(T_{A}(N)\) be the set of \(N\times N\) matrices in \(\mathbb{T}^{N\times N}\) with non-zero support \(A\), i.e. the matrix is supported only where \(A=1\), where \(\mathbb{T}\) is the complex unit circle._ The PSF induced by a phase mask \(M\) can be modeled as the squared magnitude of the Fourier transform of the pupil function \(f\)[29]. **Definition 3**.: _Let \(f:\mathbb{R}^{N\times N}\to T_{A}(N)\) be defined by_ \[f(M)=A\odot\exp(iD+icM) \tag{3}\] _where \(\odot\) denotes entry-wise multiplication, and \(D\in\mathbb{R}^{N\times N}\) and \(c\in\mathbb{R}-\{0\}\) (the reals except for \(0\)) are fixed constants._ **Definition 4**.: _Let \(g:T_{A}(N)\rightarrow\mathbb{R}^{N\times N}\) be defined by_ \[g(X)=\frac{|\mathcal{F}(X)|\odot|\mathcal{F}(X)|}{\|\mathcal{F}(X)\|_{F}^{2}} \tag{4}\] _where \(\mathcal{F}\) denotes the discrete Fourier Transform with sufficient zero-padding, \(|\cdot|\) denotes entry-wise absolute value, and \(\|\cdot\|_{F}\) denotes the Frobenius norm._ **Lemma 1**.: _From fourier optics theory [8], any single phase mask's PSF at a specific depth can be written as_ \[PSF=g\circ f.\] **Theorem 2**.: _The range of PSF is not a convex set._ Proof.: \(f\) is clearly surjective, so it suffices to argue the range of \(g\) is not convex. Assume by way of contradiction that the range of \(g\) is convex. Then, for all \(X^{(1)},\ldots,X^{(k)}\in T_{A}(N)\) there exists \(Y\in T_{A}(N)\) such that \(g(Y)=\frac{1}{k}\sum_{i=1}^{k}g(X^{(i)})\). By Parseval's Theorem, \[\|\mathcal{F}(X)\|_{F}^{2}=N^{2}\|X\|_{F}^{2}=N^{2}\sum_{i=0}^{N}\sum_{j=0}^{ N}A_{i,j} \tag{5}\] so the condition is \[|\mathcal{F}(Y)|\odot|\mathcal{F}(Y)|=\frac{1}{k}\sum_{i=1}^{k}|\mathcal{F}(X ^{(i)})|\odot|\mathcal{F}(X^{(i)})| \tag{6}\] or equivalently \[\mathcal{F}(Y)\odot\overline{\mathcal{F}(Y)}=\frac{1}{k}\sum_{i=1}^{k} \mathcal{F}(X^{(i)})\odot\overline{\mathcal{F}(X^{(i)})}. \tag{7}\] Then the cross-correlation theorem reduces it to \[\mathcal{F}(Y\star Y)=\frac{1}{k}\sum_{i=1}^{k}\mathcal{F}(X^{(i)}\star X^{(i )}) \tag{8}\] where \(\star\) denotes cross-correlation. Because the Fourier Transform is linear we finally have \[Y\star Y=\frac{1}{k}\sum_{i=1}^{k}X^{(i)}\star X^{(i)}. \tag{9}\] Therefore, the convexity of the range of \(g\) is equivalent to the convexity of the set \(\{X\star X:X\in T_{A}(N)\}\). We will show the set's projection onto a particular coordinate is not convex. \[(X\star X)_{s,r}=\sum_{i=0}^{N}\sum_{j=0}^{N}X_{i,j}\overline{X_{i+s,j+r}} \tag{10}\] where we adopt the convention that \(X_{s,r}=0\) when \(s,r>N\) or \(s,r<0\). 
Take the points \(u\) and \(v\) from the definition of \(A\) (1). Also observe that correlation can be represented geometrically as shifting \(\overline{X}\) over \(X\). In this representation, notice that as the shift \((s,r)\) approaches \(v-u\), the non-zero overlap between \(X\) and \(\overline{X}\) shifted by \((s,r)\) approaches \(1\) by construction. That is, when \(L_{1}\) is shifted to overlap \(L_{2}\), \(u\) and \(v\) will be the only non-zero overlaps between the shifted and original non-zero points (Figure 3). No other non-zero points can overlap above or below \(L_{2}\) by definition of \(S\). Therefore, \((X\star X)_{v-u}\) becomes \[X_{u}\overline{X_{v}}+\sum_{i=1}^{N^{2}-1}0. \tag{11}\] Figure 2: **Example aperture that satisfies constraints on A.** The aperture is fitted between parallel lines \(L1\) and \(L2\), which only intersect the aperture at one point each. Common aperture shapes fit into these constraints. Because \(X_{u}\overline{X_{v}}\in\mathbb{T}\), \((X\star X)_{v-u}\in\mathbb{T}\) which is a non-convex set. Therefore, the set of correlation's of values on the complex unit circle masked by \(A\) is also not convex, and so is \(PSF\). Time-averaged PSFs span the convex hull of the set of static-mask PSFs, meaning there exists some PSFs achievable only through intensity averaging PSFs from a sequence of phase masks. This implies multi-phase mask learning may reach a better minimum. ## 5 Multi-Phase Mask Optimization ### Optical Forward Model Similar to PhaseCam3D [29], we model light propagation using Fourier optics theory [8]. In contrast to previous work, we compute the forward model (1) for multiple phase masks, producing a stack of output images, which when averaged form our coded image. This coded image simulates the recorded signal from imaging a scene using a sequence of phase masks in a single exposure (Figure 4). ### Specialized Networks For the monocular depth estimation task, we use the MiDaS Small network [20]. This is a well known convolutional monocular depth estimation network designed to take in natural images and output relative depth maps. The network is trained end-to-end with the phase masks. A mean-squared error (MSE) loss term is defined in terms of the depth reconstruction prediction, \(\hat{D}\) and the ground truth depth map \(D\), \[L_{Depth}=\frac{1}{N}\|D-\hat{D}\|_{2}^{2} \tag{12}\] where \(N\) is the number of pixels. This process allows for the simultaneous optimization of the phase masks as well as fine tuning MiDaS to reconstruct from our coded images. For the extended depth-of-field task, we use an Attention U-Net [17] to reconstruct all-in-focus images. The network is optimized jointly with the phase mask sequence. To learn a reconstruction \(\hat{I}\) to be similar to the all-in-focus ground truth image \(I\), we define the loss term using MSE error \[L_{AiF}=\frac{1}{N}\|I-\hat{I}\|_{2}^{2} \tag{13}\] where \(N\) is the number of pixels. ### Joint Task Optimization We also present an alternative to the specialized networks: a single network jointly trained for monocular depth estimation and extended depth-of-field using a sequence of phase masks. This network has a basic Attention U-Net architecture outputting \(4\) channels representing depth maps as well as all-in-focus images. Similar to prior works, we use a combined loss function, adding a coefficient to weight the losses for each individual task: \[L_{total}=\lambda_{Depth}L_{Depth}+\lambda_{AiF}L_{AiF}. 
\tag{14}\] ## 6 Experiments ### Training Details We use the FlyingThings3D set from the Scene Flow Datasets [15], which uses synthetic data generation to obtain all-in-focus RGB images and disparity maps. We use the cropped \(278\times 278\) all-in-focus images from [29]. In total, we use \(5077\) training patches and \(419\) test patches. Both the optical layer and reconstruction networks are differentiable, so the phase mask sequence and neural network can be optimized through back-propagation. Each part is implemented in PyTorch. During training, we use the Adam [11] optimizer with parameters \(\beta_{1}=0.99\) and \(\beta_{2}=0.999\). The learning rate for the phase masks is \(10^{-8}\) and for the reconstruction network it is \(10^{-4}\), and the batch size was \(32\). Finally, training and testing were performed on NVIDIA Quadro P6000 GPUs. We parameterize \(23\times 23\) phase masks pixel-wise, as [13] found pixel-wise parameterization to produce the best overall performance. The monocular depth estimation task uses the MiDaS Small architecture with pretrained weights for monocular depth estimation, downloadable from PyTorch [20]. The extended depth-of-field task pretrains an Attention U-Net with a fixed Fresnel lens for \(300\) epochs. For the joint task, we set \(\lambda_{Depth}=\lambda_{AiF}=1\) to balance overall performance, and we pretrain the Attention U-Net for \(300\) epochs with a fixed Fresnel lens. In simulation, the red, blue, and green channels are approximated by discretized wavelengths, \(610\) nm, \(530\) nm, and \(470\) nm respectively. Additionally, the depth range is discretized into \(21\) bins on the interval \([-20,20]\), which is larger than previous works.

Figure 3: **Geometric interpretation of correlation \((\mathbf{X}\star\mathbf{X})_{\mathbf{v}-\mathbf{u}}\). The figure represents the correlation step when the shift is \(v-u\). Notice that only \(u\) and \(v\) overlap once the shift is applied.**

### Evaluation Details For ablation studies on our method, we used the testing split of the FlyingThings3D set for both monocular depth estimation and extended depth-of-field imaging [15]. For comparisons to existing work, we also tested our monocular depth estimation network on the labeled NYU Depth v2 set [16]. The ground truth depth maps were translated to layered masks for the clean images by bucketing the depth values into \(21\) bins, allowing us to convolve each depth in an image with the required PSF. We use root mean squared error (RMSE) between ground truth and estimated depth maps for depth estimation and RMSE between ground truth and reconstructed all-in-focus images for extended depth-of-field imaging. We also use peak signal-to-noise ratio (PSNR) and structural similarity index [27] (SSIM) for extended depth-of-field imaging. ### Ablation Studies #### 6.3.1 Effect of Phase Mask Sequence Length For both all-in-focus imaging and depth estimation, we vary the phase mask count that the end-to-end system is trained with to gauge the benefits of using multiple phase masks. The forward model and initial phase masks were held standard while the phase mask count was varied. The resulting networks were evaluated at convergence. For the extended depth-of-field task, the masks were all initialized with random noise uniform from \(0\) to \(1.2\times 10^{-6}\). For the depth estimation task, the masks were initialized with the Fisher mask with added Gaussian noise parameterized by a \(5.35\times 10^{-7}\) mean and \(3.05\times 10^{-7}\) standard deviation.
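For concreteness, the optimization setup described in the Training Details above can be sketched as follows. This is not the authors' released code: the module placeholders (`phase_masks` and a stand-in `network`) are hypothetical, and only the hyperparameters quoted above (learning rates \(10^{-8}\) and \(10^{-4}\), Adam with \(\beta_{1}=0.99\), \(\beta_{2}=0.999\), and the combined loss of Eqs. (12)-(14) with \(\lambda_{Depth}=\lambda_{AiF}=1\)) are taken from the text.

```python
import torch

# Hypothetical placeholders: `phase_masks` is the sequence of learnable
# 23x23 height maps; `network` stands in for the reconstruction model
# (MiDaS Small or an Attention U-Net in the paper).
phase_masks = torch.nn.Parameter(torch.rand(5, 23, 23) * 1.2e-6)
network = torch.nn.Conv2d(3, 4, 3, padding=1)   # stand-in for the real model

# Separate learning rates for the optical and network parameters.
optimizer = torch.optim.Adam(
    [{"params": [phase_masks], "lr": 1e-8},
     {"params": network.parameters(), "lr": 1e-4}],
    betas=(0.99, 0.999),
)

def joint_loss(pred_depth, gt_depth, pred_aif, gt_aif,
               lam_depth=1.0, lam_aif=1.0):
    """Combined objective of Eqs. (12)-(14): MSE on the depth map plus MSE on
    the all-in-focus reconstruction, weighted by the lambda coefficients."""
    l_depth = torch.mean((gt_depth - pred_depth) ** 2)
    l_aif = torch.mean((gt_aif - pred_aif) ** 2)
    return lam_depth * l_depth + lam_aif * l_aif
```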
End-to-end optimization on each task with a specialized network yielded improved performance as the phase mask count increased, visualized in Figure 5. This result implies that sequences of phase masks are successful in making the PSF space more expressive. Additionally, even for the more complex joint task, learning a system that can produce both all-in-focus images and depth maps, error decreases with phase mask count until a plateau, visualized in Figure 6. #### 6.3.2 All-in-focus without Reconstruction Networks A phase mask generating a PSF of the unit impulse function at every depth would be ideal for extended depth-of-field as each depth is in focus. If possible, this phase mask would not require any digital processing. We optimize phase mask sequences of varying lengths to produce an averaged PSF close to the unit impulse function for all depths. For each sequence length, phase masks are optimized using MSE loss between the unit impulse function and the averaged PSF at each depth until convergence. We ran \(1000\) trials of random phase mask initialization for each length. Observe that a side-effect of longer phase mask sequences is training stability. The range of RMSE between the simulated capture image and ground truth all-in-focus image decreases as the sequence length increases (Figure 7). This indicates that training longer sequences is more resilient to initialization.

Figure 4: **Multi-phase mask forward model overview.** A sequence of phase masks is used to generate a sequence of depth-dependent PSFs. These PSFs are convolved with depth-masked clean images to simulate depth-dependent convolution. The images produced by each phase mask are averaged to create a coded image which is fed into an attention U-Net. The reconstruction loss is back-propagated end-to-end through the network and the optical model to design phase masks and algorithms capable of performing monocular depth estimation and extended depth-of-field simultaneously.

Figure 5: **RMSE for specialized tasks** for each phase mask sequence length. RMSE decreases with respect to phase mask sequence length for both specialized extended depth-of-field imaging and monocular depth estimation tasks. \(0\) phase masks refers to a reconstruction neural network with a fixed Fresnel lens.

#### 6.3.3 Phase Mask Initialization for Depth Perception Deep optics for depth perception can be very dependent on the initialization of optical parameters before training [29]. To find the extent of the effect of mask initialization on performance, we varied the initial phase masks while keeping the number of masks, the optical model, and the duration of training fixed. We trained for \(200\) epochs. We tested four initializations of sequences of \(5\) phase masks as shown in Figure 8. The first was uniformly distributed noise from \(0\) to \(1.2\times 10^{-6}\). The second was the first mask in the sequence set to a Fisher mask while the rest are uniform noise. The third is setting each mask to a rotation of the Fisher mask and adding Gaussian noise parameterized by a \(5.35\times 10^{-7}\) mean and \(3.05\times 10^{-7}\) standard deviation to \(4\) masks. Lastly, we set each mask to a rotation of the Fisher mask and added noise to only the last two masks in the sequence. Of the four initializations, it is clear that the \(3\) Fisher masks and \(2\) Fisher masks with noise performed the best (Table 1). #### 6.3.4 Modeling State Switching in SLMs Our optical forward model assumes an SLM can swap between two phase patterns instantly.
In practice, however, some light will be captured during the intermediate states between phase patterns. These phase patterns, in the worst case, could be random phase patterns, effectively adding noise to our coded images. We model these intermediate states by averaging the output images produced by the phase masks and the randomized phase patterns, weighted by the time that each is displayed for. We model the total exposure time as \(100\) ms, with various durations of switching times from \(1\) to \(16\) ms per swap. We evaluate our jointly optimized network on these new, noisier coded images without any additional training (Figure 12). Observe that because the \(5\) phase mask system includes more swaps, its performance degrades faster than that of systems with fewer phase masks. However, for short switching times, the \(5\) phase mask system still outperforms the others without needing any fine tuning.

\begin{table} \begin{tabular}{|l|l|} \hline Initialization & RMSE\(\downarrow\) \\ \hline \hline \(1\) Fisher + All noise & 0.0329 \\ \(1\) Fisher + Fisher w/ Noise & 0.0271 \\ All noise & 0.0254 \\ \(3\) Fisher + Fisher w/ Noise & **0.0207** \\ \hline \end{tabular} \end{table} Table 1: **Quantitative evaluation of phase mask initializations.** Four sequence initializations are evaluated on the monocular depth estimation task. Ultimately, \(3\) Fisher masks and \(2\) noisy Fisher masks have the best performance after training.

Figure 8: **Visualization of phase mask initializations.** Each row represents a different initial phase mask sequence.

Figure 6: **RMSE for joint optimization of monocular depth estimation and extended depth-of-field imaging** for each phase mask sequence length. RMSE decreases with respect to phase mask sequence length for this complex joint task, demonstrating the benefit of multi-phase mask learning. \(0\) phase masks refers to a reconstruction neural network with a fixed Fresnel lens.

Figure 7: **All-in-focus imaging RMSE distribution** for each phase mask length without a reconstruction network. The best RMSE for each phase mask count has low correlation with respect to phase mask sequence length, but the variance of RMSE decreases.

## 7 Results We compare our time-averaged dynamic PSF method to the state-of-the-art methods for both extended depth-of-field imaging and monocular depth estimation. The relevant works we compare to are as follows: 1. PhaseCam3D [29] used a \(23\times 23\) phase mask based on \(55\) Zernike coefficients. The phase mask parameters were then end-to-end optimized with a U-Net reconstruction network to perform depth estimation. 2. Chang et al. [3] used a singlet lens introducing chromatic aberrations with radially symmetric PSFs. Similar to [29], the lens parameters were also then end-to-end optimized. 3. Ikoma et al. [10] used a radially symmetric diffractive optical element (DOE). The blurred image was preconditioned with an approximate inverse of the PSF depth-dependent blur. The RGB image stack was fed into a U-Net to produce both an all-in-focus image and a depth map. The DOE and U-Net parameters were optimized in an end-to-end fashion. 4. Liu et al. [13] used various phase mask parameterizations with the same U-Net architecture as [10]. One method used pixel-wise height maps (PW) and the other introduced orbital angular momentum (OAM). 5. Sitzmann et al. [24] implements a single DOE based on Zernike coefficients, and solves the Tikhonov-regularized least-squares problem to reconstruct an all-in-focus image. 6. MiDaS [19] and ZoeDepth [2] are state-of-the-art single-shot monocular depth estimation methods with all-in-focus images as inputs.

Figure 9: **Qualitative results of a specialized network on extended depth-of-field imaging.** Both \(1\) and \(5\) phase mask systems are evaluated on FlyingThings3D. Error is computed pixel-wise between the ground truth all-in-focus image and the reconstructed output and is boosted by a factor of \(3\). Notice that the \(5\) phase mask system introduces minimal error.

Figure 10: **Qualitative results of specialized networks on monocular depth estimation.** Performance using the five phase mask method outperforms one phase mask on both datasets.

Figure 11: **Qualitative results of a jointly optimized system for extended depth-of-field imaging and monocular depth estimation.** Both one and five phase mask networks are evaluated on the FlyingThings3D datasets. Notice that five masks have fewer artifacts than a single mask.

Because both [10] and [13] simultaneously learn all-in-focus images and depth maps, when comparing against our specialized methods, we take their best performing weighting of each task. **Individual Tasks.** For monocular depth estimation, our specialized method using a sequence of \(5\) phase masks trained for \(300\) epochs outperforms prior work on FlyingThings3D (Table 2). Additionally, our approach performs significantly better and achieves lower error than previous methods on NYUv2 without any additional fine tuning. For extended depth-of-field, our specialized method using a sequence of \(5\) phase masks outperforms prior work on FlyingThings3D (Table 3). This demonstrates the benefit of multi-phase mask learning on computational imaging tasks. **Multi-Objective Optimization.** We also evaluate our method against other joint all-in-focus and depth map learning approaches. This problem is challenging because retaining good depth cues to produce depth maps is antithetical to producing an all-in-focus image. Our combined approach, with \(5\) phase masks trained for \(300\) epochs, outperforms prior jointly trained approaches (Table 4). ## 8 Limitations Our model also simulates depths as layered masks over an image, which does not account for blending at depth boundaries. Additionally, our method assumes that scenes are static for the duration of a single exposure. Lastly, though their prices are falling, SLMs are still quite expensive and bulky. ## 9 Conclusion This work is founded upon the insight that the set of PSFs that can be described by a single phase mask is non-convex and that, as a result, time-averaged PSFs are fundamentally more expressive. We demonstrate that one can learn a sequence of phase masks that, when one dynamically switches between them over time, can substantially improve computational imaging performance across a range of tasks, including depth estimation and all-in-focus imaging. Our work unlocks an exciting new direction for PSF engineering and computational imaging system design. ## Acknowledgements C.M. was supported in part by the AFOSR Young Investigator Program Award FA9550-22-1-0208.
2305.14304
A Classical Architecture For Digital Quantum Computers
Scaling bottlenecks the making of digital quantum computers, posing challenges from both the quantum and the classical components. We present a classical architecture to cope with a comprehensive list of the latter challenges {\em all at once}, and implement it fully in an end-to-end system by integrating a multi-core RISC-V CPU with our in-house control electronics. Our architecture enables scalable, high-precision control of large quantum processors and accommodates evolving requirements of quantum hardware. A central feature is a microarchitecture executing quantum operations in parallel on arbitrary predefined qubit groups. Another key feature is a reconfigurable quantum instruction set that supports easy qubit re-grouping and instruction extensions. As a demonstration, we implement the widely-studied surface code quantum computing workflow, which is instructive for being demanding on both the controllers and the integrated classical computation. Our design, for the first time, reduces instruction issuing and transmission costs to constants, which do not scale with the number of qubits, without adding any overheads in decoding or dispatching. Rather than relying on specialized hardware for syndrome decoding, our system uses a dedicated multi-core CPU for both qubit control and classical computation, including syndrome decoding. This simplifies the system design and facilitates load-balancing between the quantum and classical components. We implement recent proposals as decoding firmware on a RISC-V system-on-chip (SoC) that parallelizes general inner decoders. By using our in-house Union-Find and PyMatching 2 implementations, we can achieve unprecedented decoding capabilities of up to distances 47 and 67 with the currently available SoCs, under realistic and optimistic assumptions of physical error rate $p=0.001$ and $p=0.0001$, respectively, all in just 1 $\mu$s.
Fang Zhang, Xing Zhu, Rui Chao, Cupjin Huang, Linghang Kong, Guoyang Chen, Dawei Ding, Haishan Feng, Yihuai Gao, Xiaotong Ni, Liwei Qiu, Zhe Wei, Yueming Yang, Yang Zhao, Yaoyun Shi, Weifeng Zhang, Peng Zhou, Jianxin Chen
2023-05-23T17:44:06Z
http://arxiv.org/abs/2305.14304v1
# A Classical Architecture For Digital Quantum Computers ###### Abstract Scaling bottlenecks the making of digital quantum computers, posing challenges from both the quantum and the classical components. We present a classical architecture to cope with a comprehensive list of the latter challenges _all at once_, and implement it fully in an end-to-end system by integrating a multi-core RISC-V CPU with our in-house control electronics. Our architecture enables scalable, high-precision control of large quantum processors and accommodates evolving requirements of quantum hardware. A central feature is a microarchitecture executing quantum operations in parallel on arbitrary predefined qubit groups. Another key feature is a reconfigurable quantum instruction set that supports easy qubit re-grouping and instructions extensions. As a demonstration, we implement the widely-studied surface code quantum computing workflow, which is instructive for being demanding on both the controllers and the integrated classical computation. Our design, for the first time, reduces instruction issuing and transmission costs to constants, which do not scale with the number of qubits, without adding any overheads in decoding or dispatching. Rather than relying on specialized hardware for syndrome decoding, our system uses a dedicated general-purpose multi-core CPU for both qubit control and classical computation, including syndrome decoding. This simplifies the system design and facilitates load-balancing between the quantum and classical components. We implement recent theoretical proposals as decoding firmware on a RISC-V system-on-chip that parallelizes general inner decoders. By using various inner decoders, including our in-house Union-Find and PyMatching 2 implementations, we can achieve unprecedented decoding capabilities of up to distances 47 and 67 with the currently available systems-on-chips (SoCs), under realistic and optimistic assumptions of physical error rate \(p=0.001\) and \(p=0.0001\), respectively, all in just 1 \(\mu\)s. ## I Motivations and Summary of Results As quantum computers become more sophisticated [1, 2, 4, 17], their demands on the classical control multiply accordingly. In this section, we analyze those challenges, then summarize our solutions. We confine this work to the superconducting-circuit platform, the focus of our team. We first review the setup as the starting point for our discussion. **Superconducting system setup.** Figure 1 illustrates a standard setup for a superconducting quantum computing system. Quantum information is stored physically on superconducting qubits on a quantum chip. To enable superconductivity and to suppress thermal noise, the quantum chip is cooled cryogenically inside a dilution refrigerator. To enable state evolution and measurement, the superconducting circuits are coupled to drive lines connecting to room-temperature control electronics, which in turn comprise of arbitrary waveform generators (AWGs), digitizers, IQ mixers, etc. The control electronics are further driven by a general-purpose processing unit such as a PC. **Quantum computing workflows.** Applications are the end goals of quantum computers, thus the origins of their design requirements. Most applications belong to one of the two main paradigms: noisy intermediate-scale quantum (NISQ) applications and fault-tolerant quantum computations (FTQC). NISQ applications operate on noisy, unprotected physical qubits, limited in scale and in prec Fig. 
1: An experimental setup for qubit driving and measurement. The dilution refrigerator is depicted as the cyan box, with different temperature zones separated by dashed lines. the PC driving the control electronics is omitted. encoded logical qubits, each consisting of (likely) thousands of physical qubits. The logical qubits have drastically reduced sensitivity to physical-level noises, allowing computations of an arbitrary length and scale, thus consequently the ultimate quantum advantages. In NISQ, the PC sends the quantum circuit to the control electronics. The latter parse the circuit into microwave waveforms, play them synchronously on the drive lines to the qubits, process the measurement responses from the quantum chip, and finally, return the measurement results to the PC. The PC can then perform a classical post-processing, before possibly starting the next round of quantum circuit execution. FTQC differs from NISQ in several key aspects. First, it requires constant extraction and decoding of the classical error syndromes, which are constantly churned out by the faulty quantum circuits. The decoding in turn requires real-time and intense classical computation. Second, while NISQ executes a static circuit, FTQC requires dynamic quantum circuit generation according to the decoding results. Both NISQ and FTQC demand seamless coordination and collaboration between classical and quantum computational resources, which in turn require a co-design of classical and quantum architecture. We focus on the design and implementation of classical architectures. We analyze the challenges from two perspectives: scaling-up and actual implementation of a complete system. **Challenges in scaling up the classical architecture.** Maintaining a high precision in the control of quantum hardware is the primary requirement here as it would directly affect fidelities of the quantum operations involved. Failing it would result in performance loss that eventually needs to be compensated by the quantum hardware, compounding the difficulty for the latter. Specifically, for superconducting qubits, microwave pulses played on different AWG channels and the sampling window of digitizer channels need to be synchronized at the picosecond level to ensure high-fidelity physical operations [41]. A second set of challenges are caused by the large number of instructions -- the efficiency of their issuance, transmission, and execution as the number of qubits grows. These problems have been recognized by several authors [7, 11, 19], and we refer to them together as "instruction stresses". In FTQC, dynamic quantum instructions need to be issued and transmitted fast enough to keep in pace with the rapid quantum execution, posing a hard constraint on the classical architecture. This may not be required for NISQ, but is still desirable as it would decrease the total running time. Syndrome decoding is yet another major bottleneck to FTQC classical architecture [36]. For surface code schemes on present-day superconducting qubits, one round of syndrome extraction takes roughly 1\(\upmu\)s [3], and generates \(O(d^{2})\) bits of syndrome information in parallel, for \(d\) being the code distance. Against this increasing syndrome size, the decoding algorithm needs to keep up with the constant syndrome extraction time and in order to avoid exponential syndrome backlog. 
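To make this backlog constraint concrete, the following back-of-envelope sketch estimates the sustained syndrome rate a decoder must absorb. The ancilla count \(d^{2}-1\) is an assumption used only for illustration; the text above states only that \(O(d^{2})\) bits are produced per round.

```python
def syndrome_load(d, round_time_us=1.0):
    """Per-round syndrome volume for a distance-d surface code patch.

    Assumes d*d - 1 ancilla (syndrome) qubits per patch, an illustrative
    choice consistent with the O(d^2) scaling quoted above. The decoder must
    sustain at least this average rate to avoid an exponential backlog.
    """
    bits_per_round = d * d - 1
    return bits_per_round, bits_per_round / round_time_us   # bits, bits per microsecond

for d in (11, 31, 67):
    bits, rate = syndrome_load(d)
    print(f"d = {d}: {bits} syndrome bits per ~1 us round (>= {rate:.0f} bits/us sustained)")
```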
Multiple decoding schemes were proposed to tackle this problem [12, 13, 18, 22, 37], but can only handle code distances no more than \(11\), even with specialized hardware. Recently, a new parallel decoding scheme was proposed independently in [33, 34]. An implementation of the scheme achieved a code distance of \(11\) for physical error rate \(p=0.4\%\)[31]. A fourth set of challenges originate from a desirable feature that we call "permissiveness", which means the ability to accommodate evolving requirements by other components of a quantum computer. Our field experiences indicate that implementing a complete classical architecture is time-consuming and labor-intensive. On the other hand, in this early stage of quantum computing, changes are rapid in applications, hardware characteristics, and error-correction schemes. Thus a stable yet permissive classical architecture would be cost-effective in the classical-quantum co-design process. **Challenges for implementing a complete system.** Many researchers have proposed innovative solutions addressing one or a few of the above problems. Ultimately, a single system needs to be built for a real quantum computer. Building such a complete system has the additional challenge of balancing competing objectives with currently available and compatible technologies. To our knowledge, there has not been a system implementation addressing all the aforementioned challenges in scalability. **Our contributions.** We present and implement a classical architecture to address all the scalability challenges mentioned above in one single system. 1. Our system provides _high-fidelity qubit control_ by interconnecting one-chassis PXIe systems through a star-like hierarchy with high-density connectors. This design synchronizes, with high accuracy, pulses from different control electronics, enabling precise qubit control even as the system size increases, thereby maintaining high-fidelity. This conclusion is supported by the extensive testing of the channel-to-channel and phase jitter of the AWG outputs. 2. To address _instruction stresses_, we develop an efficient "quantum instruction pipeline" that combines Single-Instruction-Multiple-Data (SIMD) with a broadcasting mechanism. This pipeline enables parallel application of the same type of gate on arbitrarily-sized qubit groups. The operation types and the qubit groups of application-specific instructions can be easily configured either prior to the execution of the quantum program or during runtime. Additionally, the costs across instruction issuing, transmission, dispatching and execution does not scale with the size of each qubit group. 3. Our system's _permissiveness_ is achieved through a combination of features, including a reconfigurable quantum instruction set, Memory-Mapped IO (MMIO) in the microarchitecture, and a portable general-purpose CPU. The instruction set and the underlying MMIO-based microarchitecture facilitate the incorporation of new quantum instructions. 4. We achieve unprecedented performances on _decoding throughput_ for surface codes, a mainstream approach that our architecture and the implemented system are nevertheless not restricted to. More specifically, we implement the surface code and a parallel decoding firmware based on the recent theoretical proposals [31, 34] in a dedicated CPU and benchmark its performance on a development board. 
By leveraging our in-house Union-Find and PyMatching 2 [21] as inner decoders, we can decode up to distances \(13\) and \(31\) on SiFive P650 [32] or T-head C910 [10], or \(41\) and \(67\) with ET-SoC-1 [16], all in just 1 microsecond for physical error rate \(p=0.0001\). The proposed classical architecture is implemented fully in an end-to-end quantum computer system by integrating a multi-core, vectorized RISC-V CPU with our in-house control electronics. Our system also features an MLIR-based compiler to support the proposed reconfigurable quantum instruction set and enables optimization possibilities on various abstraction layers. We highlight the following features among the many of our implemented system. 1. _Low communication latency_ A key metric for FTQC is the "decoding latency", i.e. the time between the completions of syndrome generation and decoding. Such latency consists of the decoding algorithm latency and the communication latency between the control system and the quantum device. In our design, we use on-board communication to reduce the latter. This design also enables other capabilities where latency plays a critical role, such as on-the-fly calibration [20, 27, 29], and just-in-time compilation [39, 40]. 2. _Load balancing through multi-core CPU_ The bottleneck in classical computation is not always syndrome decoding, and can vary during the computational process. To accommodate different scenarios, we use a dedicated multi-core CPU in our system, allowing dynamic allocation of cores to syndrome decoding, qubit control or other computation-heavy tasks. This design allows us to achieve optimal performance while avoiding the unnecessary complexities and cost of using specialized hardware for syndrome decoding. **Comparison with previous work.** To the best of our knowledge, our architecture proposal and the resulting actual implementation represent the first attempt to address, in a single system, the above comprehensive list of scaling challenges for the classical architecture. Instruction stresses have been known for long, with various mitigating approaches proposed [7, 11, 19]. Those include using Single-Instruction-Multiple-Data (SIMD) and Very-Long-Instruction-Word (VLIW) to reduce the instruction issuance rate [19], and multiprocessors to increase quantum operation and circuit-level parallelism [42]. However, those methods provide only constant-factor improvements and are insufficient to cope with the increasing overhead that scales with the code distance in surface code quantum computing. The QuEST proposal [35] addresses the instruction bandwidth problem by employing dedicated programmable micro-code engines. While it shows promises for enabling real-time instruction issuing, it crucially relies on an assumption from the underlying primeline microarchitecture [23]: that all qubits driven at a given time must share the same frequency. This may hold for some quantum computing platforms, such as cold atoms or trapped ions, but not for superconducting qubits, where frequency differences are likely inevitable and sometimes a design preference. Furthermore, the absence of scaling analysis makes it unclear how the frequency requirement would affect the performance in an actual implementation. Syndrome decoding has been another well-known concern in the quantum computing community for over a decade [36], with proposals ranging from efficient algorithms to specific microarchitectures [12, 14, 15, 21]. 
Before our work, it remained an open problem if a general-purpose CPU with on-board communication to the control electronics would be sufficient to provide the required decoding throughput. We answer this question affirmatively for the first time by combining the recent parallel decoding schemes [31, 34] with an efficient in-house implementation for the Union-Find decoder and a recent implementation of the Minimum Weight Perfect Matching (MWPM) algorithm [21]. ## II Architecture Design and System Implementation ### _Architecture Design_ See Figure 2 for a block diagram of our system design. The MCU contains a dedicated CPU. Besides controlling the qubits via the electronics driver, the CPU can also execute classical tasks offloaded from the host PC, using dedicated cores labeled the classical computing unit (CCU). Such tasks naturally arise from logical quantum program execution, dynamic calibration, and just-in-time compilation, etc. The offloading greatly shortens the communication latency with the QPU. A quantum program generally comprises both quantum and classical components that collaborate to solve a problem. In Section IV, we will introduce our front-end language and the corresponding compilation support. However, the design and workflow of our system are not restricted to specific quantum programming languages. When a quantum program is executed on a host PC, the quantum subroutines and, depending on the implementation, potentially some classical subroutines will be sent to the MCU. The MCU then processes these quantum or quantum-classical hybrid tasks by issuing both classical and quantum instructions. Classical instructions are carried out on the dedicated CPU for classical control and computations, while quantum instructions are executed through requests to our in-house quantum electronics (IQE) driver. The IQE driver dispatches corresponding "IQE instructions", or "commands", to IQE, which in turn drives the quantum processor. At the CPU level, the "quantum instructions" are implemented as pseudo-instructions that expand to MMIO load/store instructions. These MMIO instructions interact with a special memory region, and the electronics driver decodes them and dispatches "electronics-level instructions", which will be explained shortly, through broadcasting for communication with the control electronics. The electronics-level instructions specify the pulse sequences and their corresponding timing information to the control electronics. The latter parse the instructions and feed the pulse sequence information into a local queue. The pulse sequence is not played until a special "trigger" signal arrives at the control electronics, which then plays the pulse sequence through its ports and empties the queue, waiting for the next round of pulse instructions. We use various "instruction" terminologies. For clarity, Figure 4 exhibits a taxonomy, with more details in the main text. In addition to the aforementioned general setup, a key feature of our architecture is a _quantum instruction pipeline_ that naturally supports a large number of parallel repetition of a same gate, and allows for easy reconfiguration. This is enabled jointly by several components, which we elaborate below. **Reconfigurable quantum instruction set.** Exploiting MMIO's flexibility, our modular quantum instruction set comprises of "pulse-level instructions" for qubit control and calibration, and "gate-level instructions" for quantum circuit execution. 
By having both levels available, it allows for flexibility in implementing quantum algorithms and calibrating quantum devices, similar to other systems [7, 19, 42]. We distinctly exploit what we call the _brickwork structure_ found in typical quantum circuits: there is a small number of single-layer sub-circuit of the form \(\bigotimes_{i}G^{S_{i}}\), for some partition \(\{S_{i}\}_{i}\) of either the whole set or a large subset of the qubits into equal-sized subsets, and an identical gate \(G\) acting on each subset. We allocate different MMIO addresses for the partition identifiers, and specify the gate type via the message written to the address. Decoding and dispatching of the instruction are left to the underlying microarchitecture. This allows a lightweight specification of application-specific instructions on user-defined qubit partitions, which in turn significantly alleviates the cost of instruction issuing and data transmission. **Instruction dispatching via broadcasting.** Some designs may prioritize certain aspects of instruction processing at the expense of others. For instance, adding complex instructions to reduce the instruction issuing rate can lead to more complex decoding and dispatching. However, our microarchitecture support does not come with any hidden costs. This means that we have successfully reduced costs at every stage of the instruction processing pipeline. When dispatching a single gate instruction to multiple control electronics, one-to-one communication would scale the cost linearly with the number of control electronics, impeding scalability. We avoid this problem by exploiting the few-distinct-partition property of the brickwork structure through the built-in broadcasting Fig. 2: Block diagram of the proposed classical architecture. The architecture consists of a host PC, a main control unit (MCU), control electronics (In-house Quantum Electronics). The quantum chip is connected with the control electronics via drive lines, while the host PC, the MCU and the electronics are jointly connected via PXIe. Additionally, the MCU connects with all electronics via a star-like connection. A dedicated unit in the MCU is responsible for driving the control electronics. The MCU is equipped with a portable CPU, and a portion of it, called the classical computing unit, is allocated for run-time computation-heavy tasks such as syndrome decoding. In our implementation, the digital-analog and analog-digital units are made in-house and are called in-house quantum electronics (IQE), and their corresponding driver unit in the MCU is called the IQE driver. Please note that all other modules can be configured via the command parser; however, we have omitted the corresponding arrows in the diagram for the sake of simplicity. mechanism of the star-like connection. Each signal transmitted from the electronics driver broadcasts automatically through the star-like connection, thus each IQE instruction is sent to a collection of control electronics simultaneously, regardless if a control electronic is meant to be involved in the instruction. To utilize this, each electronic device holds a "partition mask" specifying which partitions it is in. Each IQE instruction broadcast from the electronics driver comes with a partition identifier. Upon receiving an instruction, each electronic device checks whether the partition identifier of the broadcast instruction matches one of the partition identifiers in its partition mask. 
If so, it proceeds to process the instruction, and ignores it otherwise. The MCU thus can issue at once a same instruction to all devices with a common partition identifier, realizing microarchitecture-level single-instruction-multiple-destination (SIMD). The partition masks are stored in local registration entry (REG) files on each electronic device. They are easily reconfigurable, either using PXIe between runs or in real-time via the same star-like connection. **Instruction decoding.** To decode a quantum instruction received from the CPU, the electronics driver extracts the electronics-level instruction type determined by the value written through MMIO, and appends it with the partition identifier determined by the MMIO address. Both mappings are stored in a local REG file that can be reconfigured if necessary. The assembled electronics-level instruction is then dispatched through the broadcasting system mentioned before. This microarchitecture does not incur extra overhead on either decoding or dispatching when the partition size increases (as in the case of more qubits). Apart from the above quantum instruction pipeline problems, we highlight some design choices that address scaling. **Pulse synchronization via triggers.** All of the ADCs and the DACs are driven in the same clock domain through a phase-locked loop and a star-like connection, with one rubidium oscillator used as the system root clock. Our design further synchronizes pulses on different control electronics via a dedicated trigger mechanism. The pulses are not played through DACs immediately upon processing of the electronics-level instructions, but rather are stored in a local queue. When a trigger instruction is issued from the MCU, the trigger signal arrives at each control electronic device at the same time, guaranteeing pulse-level synchronization. For further information on IQE, please refer to [41]. **Portable, tightly integrated but loosely coupled dedicated CPU.** Unlike previous schemes [7, 11, 19, 42] that handle the communication of the control electronics and the MCU by new and dedicated CPU instructions, ours aims to avoid substantial CPU modifications thus works solely with the unmodified classical instruction set instead. The MCU and the electronics driver are coupled only through MMIO instructions. This loose coupling provides portability and extensibility, as the same communication scheme can in principle be used with all CPUs supporting the same underlying ISA, or even other classical ISAs, with little to no modification. On the other hand, the dedicated CPU is tightly integrated to the control electronics through onboard communication which significantly reduce the communication cost. ### _System implementation_ Our design can in principle be implemented over various classical and quantum hardware. For our particular implementation, we assume room-temperature, as opposed to cryogenic, electronics as they are more widely deployed today. While there is no fundamental reason to prefer RISC-V, ARM, or other instruction set architectures, we choose RISC-V for its potential in future system evolution. For instance, we anticipate that integrating the required quantum instruction pipelines into the RISC-V IP core would be less limited due to its open license business model. 
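The decode-and-broadcast path can be pictured with a small behavioral model. In the sketch below, the two dictionaries stand in for the REG files that map an MMIO address to a partition identifier and a written value to an operation type, and each object stands in for one control electronic device; all names and values are illustrative rather than taken from the actual firmware.

```python
class ControlElectronic:
    """Behavioral stand-in for one device (AWG or digitizer) on the star-like link."""

    def __init__(self, partition_mask):
        self.partition_mask = set(partition_mask)  # partition ids held in the local REG file
        self.queue = []                            # pulses wait here until a "Trigger" arrives

    def on_broadcast(self, command):
        if command["partition"] in self.partition_mask:
            self.queue.append(command)             # otherwise the broadcast is silently ignored

def driver_decode(mmio_addr, mmio_value, addr_to_partition, value_to_op):
    """Electronics-driver side: the written value selects the operation type and
    the MMIO address selects the partition identifier."""
    return {"op": value_to_op[mmio_value], "partition": addr_to_partition[mmio_addr]}

# Toy configuration: three devices, two of which belong to partition 2.
devices = [ControlElectronic({0, 2}), ControlElectronic({1}), ControlElectronic({1, 2})]
command = driver_decode(mmio_addr=0x4001_0004, mmio_value=3,
                        addr_to_partition={0x4001_0004: 2}, value_to_op={3: "X90"})
for device in devices:                             # a single broadcast reaches every device
    device.on_broadcast(command)
print([len(d.queue) for d in devices])             # -> [1, 0, 1]
```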
**Hardware setup.** We implement a prototype by integrating a RISC-V IP core with our room-temperature electronics, which include a timing control module (TCM), four-channel AWGs, four-channel data acquisition modules, a local oscillator, amplifiers, mixers, and a high-precision voltage source. As mentioned above, the in-house AWGs and the digitiazers are collectively referred to as IQE. We implement a real-time digital signal processing system on built-in FPGAs of the IQE, featuring precise timing control, arbitrary waveform generation, and parallel IQ demodulation for qubit state discrimination. The FPGA in TCM serves as the master FPGA running the MCU and the IQE driver. The master FPGA communicates with the AWGs and the digitizers through high-speed digital backplane transmissions. In the aforementioned configuration depicted in Figure 2, we have assumed an unrestricted number of connections in the star-like topology. Now, we will explain how we can scale up from chassis-based systems that have a limited number of connections. A standard chassis with \(18\) slots meet the requirements of \(10\) qubits' control and readout. In such a one-chassis PXIe system, the master FPGA with a soft RISC-V IP core is used to provide triggers and instructions to other AWGs and digitizers. In order to control more qubits, the master FPGA in each one-chassis PXIe system can be interconnected through high-density connectors via a star-like expansion, as illustrated in Figure 3. Only one master FPGA needs to implement the soft RISC-V IP core as the MCU of the whole system. The MCU broadcasts the instructions to the master FPGAs of all those one-chassis PXIe subsystems by a daisy chain interface or star-like interface, and then each master FPGA broadcasts to AWGs and digitizers in the same chassis. We will now provide a comprehensive overview of various types of instructions present in our architecture, including the IQE instructions as well as the RISC-V quantum instructions. Together they facilitate seamless control over the quantum processor. **IQE instructions.** We specify the electronic-level instructions, or commands, broadcast by the IQE driver via the star-like connection, hereafter referred to as the "IQE instructions" for convenience (note that those "instructions" are not directly related to any CPU-level instructions). Currently, there are three types of IQE instructions, as summarized in Table I: * _"Trigger"_: As mentioned above, the "Trigger" instruction tells the IQE driver to actually start executing all quantum operations in the queue. To facilitate repeated measurements frequently occurring in qubit calibration, we ensure that the IQE driver has built-in functionality to _repeat_ all quantum operations in the queue, with a specified number of repetitions and time intervals. * "_Wait_": The "Wait" instruction controls the relative timing between quantum operations in the same trigger, giving the user program full control on scheduling. * "_Play_": The "Play" instruction is the most basic quantum instruction. It plays a predefined waveform or a predefined combination of waveforms on one or more channels, which corresponds to quantum operations like qubit reset, 1- or 2-qubit gates, or qubit measurement under the computational bases. 
Quantum measurement instructions differ from other operations in that they yield a result; these two cases are differentiated by the corresponding waveform indices, where indices \(i\geq 128\) correspond to measurements, while indices \(<128\) are for no return values. The digitizer decodes the measurement instructions and sets the sampling window according to the parameters specified in the instruction. After IQ channel demodulation and data processing, the digitizer transmits the result to the CPU's system RAM. **Quantum instruction set.** We here present an instantiation of a modular set of pseudo-instructions at the CPU level, consisting of pulse-level instructions for device calibration and gate-level instructions for algorithm implementations. These pseudo-instructions are not implemented directly, but are subsequently expanded to RISC-V MMIO operations via built-in load/store instructions. We also provide an example MMIO layout compatible with existing RISC-V architectures. Table II illustrates the current design of the quantum instruction set and its corresponding expansion into MMIO load/store instructions. The instruction set features a hierarchical design, consisting of pulse-level, gate-level, and application-specific instructions. Each higher-level instruction can be decomposed into lower-level instructions with the same functionality, but using higher-level instructions reduced the decoding and dispatching overhead. * "Pulse-level Instructions" play, qwait and trig: specify pulses, their relative timing, and the issuance of the trigger signal, respectively. More precisely, play specifies the actual control pulse sequence, qwait specifies the scheduling of the corresponding pulses, and trig triggers the actual execution of previously issued instructions. Additionally, fmr loads the qubit measurement results from previous runs from the predetermined addresses. * "Gate-level Instructions" sq and tq: correspond to single-qubit and two-qubit operations, respectively. They share a similar expansion as the pulse-level 'play' instructions, with different address offsets. As a single-qubit operation (e.g., a gate, a measurement, the qubit reset) usually consists of pulse plays on different physical channels with different timing constraints, a gate-level instruction is usually decoded into multiple IQE instructions by the IQE driver. This enables a relatively decoupled design of the MCU and the quantum architecture, as the exact interpretation of gate operations is only defined at the IQE driver level. * "Application-specific Instructions" app: shares the same instruction format as the pulse-level play, but its decoding into IQE instructions is entirely left to the user. The user can design the decoding of app instructions as different combinations of IQE instructions, as long as it does not incur too much overhead on the IQE driver side. As the use cases of quantum processors in the near and far future remains largely uncertain, such customizable instruction design provides freedom of exploration with different potential use cases, including NISQ and fault-tolerant quantum computation. We show in Table III an example MMIO layout supporting 0x4000, or \(16,384\), physical qubits, compatible with existing RISC-V architecture designs, including Si-Five, T-head, etc. The allocated address space supports individual qubit control over quantum memory experiments on a surface code patch with a code distance of \(90\), or lattice surgery on two qubits with a code distance of up to \(64\). 
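The quoted capacities can be sanity-checked with a short calculation. The sketch below assumes a rotated surface code patch uses \(2d^{2}-1\) physical qubits (\(d^{2}\) data qubits plus \(d^{2}-1\) ancillas); this per-patch count is our assumption rather than a statement in the text, but with it the 16,384-qubit address space reproduces both the distance-90 memory patch and the distance-64 lattice-surgery pair (routing-space qubits ignored).

```python
def patch_qubits(d):
    """Assumed physical qubits per rotated surface code patch of distance d."""
    return 2 * d * d - 1          # d*d data qubits + (d*d - 1) ancilla qubits

def max_distance(address_space=0x4000, patches=1):
    """Largest distance whose patch(es) fit into the allocated qubit addresses."""
    d = 1
    while patches * patch_qubits(d + 1) <= address_space:
        d += 1
    return d

print(max_distance(patches=1))    # -> 90, single memory-experiment patch
print(max_distance(patches=2))    # -> 64, two patches for lattice surgery (no routing space)
```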
Note that neither distance is a hard constraint as the MMIO address space is easily extendible to support a larger-scale computation. ## III Evaluation methodology A full demonstration of scalability requires a large-scale quantum processor yet to be built. We thus focus on implementing essential features over surface code quantum computing using our architecture, to argue that known scalability challenges can be resolved through our design. More specifically, we reach our conclusion by focusing on components involved in large-scale computation or communication, and analyzing how the incurred costs scale with the code distance and the quality of the quantum device. Our architecture design is not specific to surface-code-based quantum computing, thus can be readily generalized to other quantum error correcting codes or fault-tolerant schemes. \begin{table} \begin{tabular}{l|l} Type & Operands \\ \hline Trigger & **Repeat count**, _repeat interval_ \\ Wait & **Time** \\ Play & Channel, **waveform index**, _parameters_ \\ \end{tabular} \end{table} TABLE I: Summary of instructions to the IQE driver. In the “Operands” column, underlined text indicates the “address operand” (an operand that is determined by the memory address rather than a value written); bold text indicates the “main operand” (i.e., writing this operand issues the instruction to the IQE driver); all other operands are in italic text, indicating that writing to them does not issue the instruction and that their values are preserved in the IQE driver memory. Fig. 3: Expansion scheme via star-like connection. ### _Surface code quantum computation_ Surface code encodes the logical information of one qubit into a patch of \(d\times d\) physical qubits, such that any error happening on at most \(\lfloor(d-1)/2\rfloor\) physical qubits can be detected through intermediate measurements and be corrected accordingly. A popular approach to realize logical Clifford operations for the surface code is "lattice surgery" [24]. Specifically, patches of logical qubits are arranged on a large grid, with additional physical qubits positioned in the "routing space" [8] between them. Then, lattice surgery allows measuring logical Pauli jointly over multiple patches, using interactions only between pairs of nearest-neighbor physical qubits. There are alternatives to lattice surgery for realizing logical operations on surface codes (see [6] and references therein).
In \begin{table} \begin{tabular}{c|l|l|l} Pseudo-instruction & Base instruction(s) & Meaning of parameters\({}^{1}\) & \\ \hline \multirow{6}{*}{Pulse-level} & trig rd, rs1, rs2 & sw rd, ADDR\_TRIGGR+8 & rd — bit mask of channels \\ & trig rd, rs1, AS2 & sw rs1, ADDR\_TRIGGR+4 & rs1 — repeat count \\ \cline{2-3} & sw rs2, ADDR\_TRIGGR & rs2 — repeat interval \\ \cline{2-3} & qwait rs1 & sw rs1, ADDR\_WAIT & rs — time \\ \cline{2-3} & play rd, imm12(rs1) & sb rd, ADDR\_PLAY + imm12(rs1) & rd — waveform index \\ & & imm12(rs1) — memory offset for the channel \\ & & fmr rd, imm12(rs1) & rd — destination register \\ & & imm12(rs1) — result storage address \\ \hline \hline \multirow{2}{*}{Gate-level} & sq rd, imm12(rs1) & sb rd, ADDR\_GATE1Q + imm12(rs1) & rd — gate index \\ & & imm12(rs1) — memory offset for the qubit \\ & & tq rd, imm12(rs1) & sb rd, ADDR\_GATE2Q + imm12(rs1) & rd — gate index \\ & & imm12(rs1) — memory offset for the qubit pair \\ \hline \hline \multirow{2}{*}{Application-specific} & app rd, imm12(rs1) & sb rd, ADDR\_APP + imm12(rs1) & rd — operation index \\ & & imm12(rs1) — memory offset for \\ & & the operation grouping \\ \end{tabular} \end{table} TABLE II: Summary of RISC-V pseudo-instructions designed for communicating with the AQE driver. ADDR_* are memory addresses that are determined at design time and thus are constant in the assembler. Fig. 4: Taxonomy of instructions. \begin{table} \begin{tabular}{l|l|l} Name & Address & Type \\ \hline ADDR\_TRIGGR & 0x40001000 & im32 \\ ADDR\_WAIT & 0x40002000 & int32 \\ ADDR\_FNM & 0x40003000 & int32[0x1400] \\ ADDR\_SQ & 0x40010000 & uint[0x4000] \\ ADDR\_TO & 0x40014000 & uint[0x8000] \\ ADDR\_PLAY & 0x4001c000 & uint[0x8000] \\ ADDR\_APP & 0x40024000 & uint[0x4000] \\ \end{tabular} \end{table} TABLE III: An example of MMIO address layout. this work, by "surface code quantum computation" (SCQC), we refer to the approach through lattice surgery. Figure 5 shows a schematic workflow of SCQC from the perspective of classical control. Upon receiving a pre-compiled quantum program, the MCU issues quantum instructions to an "instruction decoding and dispatching unit" (IDDU) through MMIO when needed. The IDDU then decodes the quantum instructions into pulse-level instructions readily executable on each of the electronics and dispatches them accordingly. The control electronics interact with the quantum hardware and return the raw measurement results to a dedicated memory region. The incoming syndrome information is fed to and decoded by a classical firmware, called the "syndrome decoding unit" (SDU), that runs on the dedicated CPU. Once decoded, the logical measurement results are fed to the MCU for adaptive real-time generation of the future instructions required by fault-tolerant quantum computing. In our implementation, the IDDU, the electronics and the SDU correspond respectively to the IQE driver, the IQE and part of the CCU. Two essential subroutines of the SCQC are the "quantum memory experiment" and the "Bell-state experiment". Their quantum circuits are illustrated in Figure 6. The quantum memory experiment benchmarks the capability of the classical architecture for preserving quantum information, and the Bell-state experiment benchmarks that for essential steps in lattice surgery. As SCQC comprises mostly these two components (in addition to the preparation of a physical magic state and a single-patch logical measurement), we use them to validate our architecture. 
Besides real-time execution of quantum circuits with large-scale parallel gates, these SCQC subroutines also require fast processing of classical information in "syndrome decoding". "Syndromes" are mid-circuit measurement results indicating errors occurring during the FTQC process. To identify the actual errors and correct them, a dedicated syndrome decoder is needed to deduce the most likely error given the syndrome information. Ideally, the syndrome decoder needs to have a low error rate of inference, and be fast enough in order not to cause exponential syndrome backlog [36]. Developing and implementing such a low-error, low-latency and high-throughput syndrome decoder is vital to experimental realization of fault-tolerant quantum computation. ### _Validation of scalability_ We first establish the feasibility of our design by implementing an end-to-end prototype quantum computing system, and validating that it functions properly with test qubit calibration programs. In addition, we examine the time variation ("jitter") of pulse control with increasing size of the star-like connection, confirming that our design admits scalable pulse synchronization. A low jitter ensures high-precision synchronization of pulses played on different AWG ports, ensuring high-fidelity controls. To verify that our design resolves the instruction stress, we execute the aforementioned SCQC subroutines. We profile the running time of the classical controller against the running time of the quantum processor. The classical running time is estimated based on the instruction counts of an in-house CPU profiling tool over the QEMU RISC-V simulator. The running time of the quantum processor is estimated based on previously reported running time of each operation on a comparable Fig. 5: Schematic workflow supporting surface code quantum computation (SCQC). The shaded area illustrates the blurred boundary of “classical” and “quantum” architecture, the former being our main focus. superconducting platform [30]. We also quantitatively analyze the cost of instruction decoding and dispatching. Although neither is a scaling-up matter under our architecture design, we quantitatively show that the bandwidth of the differential pairs [25] can easily afford parallel gate instruction dispatching even under our proof-of-concept ISA implementation. Real-time classical decoding was previously a hard problem, and even dedicated hardware struggled to achieve real-time decoding for code distance \(d\) larger than \(11\)[14, 38, 5]. However, recent advances [21, 31, 24] have made real-time decoding much more realistic even on general-purpose CPU. In particular, the sliding-window decoding schemes, introduced independently in [34] and [31], parallelize in scale: they split the decoding task evenly into an arbitrary number of parallel threads, with only a small constant overhead factor independent of the number of threads. We implement such a parallelized SDU on a RISC-V development board, and benchmark its throughput on increasing code distances. We also give a rough estimation of the bandwidth required for syndrome transmission from the IQE digitizers to the SDU, finding it unlikely to become a bottleneck for our architecture. ## IV System Evaluation ### _Real system demonstration_ We implement a prototype system by integrating a RISC-V IP core with our room-temperature electronics, and a demo program to validate the end-to-end quantum computing system consisting of the prototype system, a quantum chip and compilation toolchain. 
The demo program characterizes a qubit's relaxation time, i.e., the so-called T1 experiment. We compile OpenQASM 3.0 front-end code to a RISC-V executable using our in-house compilation toolchain, and test its correctness both on a pulse-level quantum simulator, and on our in-house superconducting quantum processor. The result of the physical experiment is shown in Figure 7, demonstrating a successful run of the calibration routine. ### _Scalability of maintaining high-fidelity quantum operation_ We now evaluate the feasibility of high-fidelity quantum operations when the chassis-based system is scaled up using a star-like connection. Skew and jitter, which are crucial for system synchronization, can directly affect the accuracy of quantum operations. Skew, caused by variations in electrical connection length, can usually be compensated for as it remains constant. Jitter, on the other hand, is a greater concern as its effects cannot be calibrated. To verify the fidelity of quantum operations in a larger system, we set up a 5-layer IQE platform with one MCU, one AWG, and one digitizer in each layer. The main trigger and the root system clock were generated by the MCU in the first layer and transmitted to the MCU in the second layer, and so on for the subsequent layers. In each experiment, we pick the first layer and one other layer to test the jitter. The two AWG output channels from the chosen layers were then connected to a digitizer. We then use fixed-point phase analysis to calculate the jitter between these two signals, as a proxy to evaluate pulse synchronization in larger systems. The critical aspect to consider is whether the jitter varies with the layer distance. As depicted in Figure 8, the histograms display the jitter performance at different layer distances. Our measurements of layer-to-layer jitter show that the standard deviation is approximately 6ps, and jitter does not increase with layer distance, indicating effective pulse synchronization within the system. Based on the \(5\)-layer results, we conclude that synchronization imprecision of microwave pulses from control electronics across different layers due to phase jitters is minuscule and will not become a major bottleneck for quantum computation. Fig. 6: Illustration of the quantum memory and the Bell-state experiments. A quantum memory experiment on a \(d\times d\) lattice initiates \(n\) rounds of syndrome extractions. A Bell measurement experiment on two patches of \(d\times d\) lattices with routing space length \(m\) initiates \(n_{1}\) rounds of syndrome extractions on each patch, then initiates \(n_{2}\) rounds of syndrome extractions on the joint patch by merging the two patches with the routing space, and finally initiates \(n_{3}\) rounds of syndrome extractions on the two split patches. All data qubits are measured under the \(Z\)-basis before and after their corresponding syndrome extraction cycles. Fig. 7: T1 measurement on the prototype system. With a standard chassis that has \(18\) slots, the star-like expansion scheme is capable of supporting up to \(10^{4}+10^{3}+10^{2}+10+1=11111\) chassis and \(111110\) qubits based on the reasonable assumption that a single chassis can drive \(10\) additional chassis. ### _Scalability of the instruction pipeline_ With the MMIO-based custom instruction design, we can test custom CPU-level instructions with different levels of abstraction against the quantum hardware execution time.
In particular, we consider the following hierarchy of custom instruction abstraction, illustrated in Figure 9. In addition to pulse- and gate-level instructions, the abstraction includes the following instructions. * _Parallel-gate instructions_ encode a layer of identical gates acting on a disjoint collection of qubits in a single instruction. Circuits involved in surface code quantum computing have the clear signature of the brickwork structure. Consequently, each round of syndrome extraction only costs a constant number of parallel-gate instructions. In this case, the number of parallel-gate instructions required per unit time no longer scales with the code distance \(d\). * _Logical-level instructions_ further compresses all operations within a "logical cycle" [26] into a single instruction. A "logical cycle" refers to a repeated structure with \(d\) copies of identical syndrome extraction sub-routines, each consisting of constant layers of parallel operations with fixed patterns, with optional single-layer parallel operations before or after the repeats. Such a repeated structure is necessary for fault-tolerance against measurement errors, and serves as building blocks for the SCQC. In this case, the total number of instructions throughout a quantum application stays a constant, regardless of the code distance, leaving more room for improvement when dealing with other scaling factors, such as the number of logical qubits. We estimate the execution time of custom instructions at each abstraction level, scaling in code distance, in Figure 10. It can be seen that the logical-level instructions stay constant with respect to code distance, and the parallel-gate level instructions scale linearly albeit with a smaller coefficient compared to the quantum running time. The pulse-level instructions scale with \(\Theta(d^{3})\) and quickly grow into the millisecond region, thus making them infeasible for surface code with reasonable sizes beyond a proof-of-principle demonstration. For both the memory experiment and the Bell-state experiment, the majority of the quantum execution time is spent on syndrome extraction. Each syndrome cycles takes about 1\(\upmu\)s and takes 15 parallel-gate level instructions. This requires a throughput of 0.96 Gbps on the differential pair. This is well below the theoretical limit of the bandwidth of differential pairs [25]. As this estimation does not scale with respect to the code distance, the transmission of IQE instructions to the control electronics would not become a bottleneck for SCQC. ### _Scalability of the syndrome decoder_ We implement an SDU with a parallel decoding firmware based on the "Sandwich Decoder" algorithm [31, 34]. This firmware splits the decoding task evenly into an arbitrary number of parallel threads, with a small constant overhead factor independent of the thread number, as long as the number of surface code rounds is sufficiently large. We benchmark its throughput on the aforementioned SCQC subroutines. As a sanity check, we test the SDU implementation on a RISC-V development board, and observe an agreement in results with our QEMU simulator. Benchmarking results are also used to extrapolate the expected throughput if implementing the SDU on other RISC-V IPs. In deploying the Sandwich Decoder over the integrated multi-core CPU, the underlying inner decoders are an in-house Union-Find implementation and a recent PyMatching v2 [21]. We make several implementation-level improvements on the efficiency for our Union-Find decoder. 
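The parallel decoding strategy above can be sketched as a window decomposition over the stream of syndrome rounds. The core and buffer sizes below are placeholders chosen for illustration (the cited schemes use \(O(d)\)-sized windows), and the commit rule is a simplification of the actual sandwich algorithm.

```python
def make_windows(n_rounds, core_rounds, buffer_rounds):
    """Split n_rounds of syndrome data into overlapping decoding windows.

    Each window "owns" core_rounds rounds and additionally sees buffer_rounds
    rounds on each side; the windows can then be handed to independent inner
    decoders (e.g. Union-Find or MWPM) running concurrently on separate cores.
    """
    windows, start = [], 0
    while start < n_rounds:
        lo = max(0, start - buffer_rounds)
        hi = min(n_rounds, start + core_rounds + buffer_rounds)
        windows.append((lo, hi, start, min(n_rounds, start + core_rounds)))
        start += core_rounds
    return windows

for lo, hi, c_lo, c_hi in make_windows(n_rounds=30, core_rounds=10, buffer_rounds=3):
    print(f"decode rounds [{lo}, {hi}) on one core; commit corrections for [{c_lo}, {c_hi})")
```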
To benchmark the performance of our SDU, we apply the Sandwich Decoder to the Bell state experiment; this goes a step further than the memory experiment as in [34]. For simplicity, we assume that the routing space between the two Fig. 8: Histogram of the channel-to-channel jitter of two AWGs across chassis in different layers. Fig. 9: A hierarchical instruction set design for SCQC. The applicability of this hierarchical design is made possible by the MMIO-based infrastructure and the brickwork structure of the syndrome extraction circuits. logical qubits is small compared to the code distance \(d\) and use a single large window to cover the two-qubit measurement part in the overall decoder graph (see Figure 11). All other windows are the same windows used in memory experiments. In our simulation experiments, we input the description of the large window as well as the weight of each edge in that window to the CCU, which randomly generates error syndromes before invoking the decoding module. We use the benchmarking results on the development board, shown in Figure 12, to estimate the decoding time when implementing our SDU on various RISC-V SoCs. With Union-Find and PyMatching 2 as inner decoders, the implemented SDU can decode up to distances \(13\) and \(31\) on SiFive P650 [32], T-head C910, or comparable alternatives [10] (16 cores at 2.5GHz), or \(67\) and \(57\) with ET-SoC-1 [16](1088 cores at 1GHz), all within just 1 microsecond for physical error rate \(p=0.0001\). Our evaluation of PyMatching 2 shows that its performance is constrained by the limited 1GB memory available on the tested development board. We expect that PyMatching 2 could achieve even better results on a higher-end development board with a larger memory. Besides the decoding throughput, another constraint for the decoder architecture is that the large amount of syndrome information generated throughout the SCQC process must not saturate the communication bandwidth. It is known that raw syndrome measurement results can quickly become a bandwidth bottleneck [14], thus must be compressed. One approach is to record the "detection events", i.e., changes in a sequence of syndrome bits, rather than all the syndrome bits. For a quantum memory experiment with a code distance \(d\) and a syndrome extraction cycles \(n\), each ancilla qubit needs to generate \(p_{\text{detect}}\cdot n\log_{2}n\) bits on average, assuming that each detection event happens with a probability \(p_{\text{detect}}\). Roughly, the bandwidth requirement would become 100 Mbps for \(p_{\text{detect}}=0.02\) and \(n=d=33\). This compression can be done on each digitizer separately before transmission to the IQE driver. More advanced compression algorithms may achieve a better compression rate, but may require conjoined processing from different digitizers. Such an algorithm can be placed in the IQE driver should there be a bottleneck in the MMIO bandwidth. ## V Discussion and outlook We present a scalable design for the classical architecture of quantum computing. Our design aims towards easy scaling with no significant overhead. We evaluate its scalability on two basic subroutines over a prominent fault-tolerant scheme, and validate its practical feasibility with a prototype implementation. Fig. 11: Illustration of a division of the overall three-dimensional decoder graph of the Bell-state experiment (Figure 6) into windows. 
Under the assumption of a small routing space, even though the center window is large, its size is still \(O(d)\times O(d)\times O(d)\). Note that the center window covers more layers than other windows; an alternative scheme (not depicted) is to divide the center window further so that every window covers the same number of layers, which gives rise to more complicated windows. Fig. 10: Comparison of the estimated classical execution time on the MCU against the quantum hardware execution time. The latter is estimated based on \(20\)ns for each single-qubit gate, \(40\)ns for each two-qubit gate, and \(600\)ns for each measurement and reset. The classical run time consists of two parts: 1) For classical instructions, the run time is estimated with the master frequency of the MCU CPU is 1 GHz, and cycle counts from our in-house CPU profiling tools; 2) quantum instructions are executed via expansion into RISC-V base instructions for MMIO, and it takes up to \(17\) cycles for each MMIO communication between the MCU and the IQE driver through the system bus. The run time is scaled piece-wise to reflect different running time scaling. The quantum execution time is marked separately with white hatches; note that the proportion of the quantum execution time versus the total classical execution time is distorted owing to the scaling distortion. A natural next step is to implement the real system with quantum processors of a much larger scale than that in our study. The current design is estimated to scale up easily over thousands of qubits. Such an estimation is based on the size of the allocated MMIO addresses, the picosecond accuracy in the synchronization of the trigger signal across different electronics, and the physical size of the electronics stacks. Although most of the limiting factors can be lifted through a more careful design, it remains uncertain if unforeseen problems may arise with a larger-scale quantum processor. A possible further scaling-up through modularization is to let each MCU control one or a few logical components, such as a single logical qubit or a patch of the routing space, and let an upper-level control unit issue logical instructions to these logical components while maintaining synchronization. In this study, we assume room-temperature devices for their wide adoption at the time of writing. However, our design in principle is not limited to such, and may in particular work well for cryogenic electronics, such as cryo-CMOS [9] or single-flux-quantum [28], as long as the component functionalities can be implemented. Such demonstrations would be an interesting future direction. Another important direction is to demonstrate through more sophisticated tasks than our two "toy-model" subroutines. Such experiments may lead to the discovery of currently unknown limiting factors for classical architecture in SCQC. Classical architecture is just half of the story, as numerous challenges still to be addressed in quantum architecture. Beyond the quantum processor's scale, hurdles such as input/output (I/O) management, interconnection, packaging, and heat and power dissipation must be overcome. Previous research in quantum architecture has often more focused on the feasibility of qubit control than the potential demands of intensive classical computation. 
Conversely, studies on classical architecture have primarily examined the viability of specific classical computation tasks such as syndrome decoding, either in-fridge or out-of-fridge, under the bold assumption that high-fidelity qubit control can be realistically achieved. A holistic evaluation of the FTQC workflow, encompassing both classical and quantum architectures, will aid in identifying potential bottlenecks and determining the most effective steps to move forward.

## Acknowledgements

We would like to thank all the members of DAMO Quantum Laboratory who contributed to the development of the quantum hardware used to demonstrate the end-to-end workflow in this study. This work was supported by Alibaba Group through the Alibaba Research Intern Program, and conducted when Yihuai Gao and Liwei Qiu were research interns at Alibaba Group.
2304.03551
Femtosecond laser induced creation of G and W-centers in silicon-on-insulator substrates
The creation of fluorescent defects in silicon is a key stepping stone towards assuring the integration perspectives of quantum photonic devices into existing technologies. Here we demonstrate the creation, by femtosecond laser annealing, of W and G-centers in commercial silicon on insulator (SOI) previously implanted with 12C+ ions. Their quality is comparable to that found for the same emitters obtained with conventional implant processes; as quantified by the photoluminescence radiative lifetime, the broadening of their zero-phonon line (ZPL) and the evolution of these quantities with temperature. In addition to this, we show that both defects can be created without carbon implantation and that we can erase the G-centers by annealing while enhancing the W-centers' emission. These demonstrations are relevant to the deterministic and operando generation of quantum emitters in silicon.
Hugo Quard, Mario Khoury, Andong Wang, Tobias Herzig, Jan Meijer, Sebastian Pezzagna, Sébastien Cueff, David Grojo, Marco Abbarchi, Hai Son Nguyen, Nicolas Chauvin, Thomas Wood
2023-04-07T09:16:09Z
http://arxiv.org/abs/2304.03551v1
# Femtosecond laser induced creation of G and W-centers in silicon-on-insulator substrates ###### Abstract The creation of fluorescent defects in silicon is a key stepping stone towards assuring the integration perspectives of quantum photonic devices into existing technologies. Here we demonstrate the creation, by femtosecond laser annealing, of W and G-centers in commercial silicon on insulator (SOI) previously implanted with \({}^{12}\)C\({}^{+}\) ions. Their quality is comparable to that found for the same emitters obtained with conventional implant processes; as quantified by the photoluminescence radiative lifetime, the broadening of their zero-phonon line (ZPL) and the evolution of these quantities with temperature. In addition to this, we show that both defects can be created without carbon implantation and that we can erase the G-centers by annealing while enhancing the W-centers' emission. These demonstrations are relevant to the deterministic and operando generation of quantum emitters in silicon. ## I Introduction Point defects in silicon have been intensively studied over the past few years for the creation of silicon-based quantum devices[1]. Thanks to the device-friendly environment, single photon sources or spin-photon interfaces could be readily integrated within existing electronic and photonic devices. Such components are key devices for quantum computing, quantum networks or for the implementation of quantum cryptography protocols. Among all the different optically-active defects in silicon [2; 3; 4; 5; 6; 7; 8; 9; 10], two often studied are W [11; 12; 13; 14; 15; 16] and G-centers [1; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35]. W-centers are composed of three interstitial Si atoms. Recent theoretical works revealed that these defects probably have a \(I_{3}\)-V configuration [36] and the optically active form of a G-center is made of two C atoms in substitutional sites linked to an interstitial Si atom [17] as represented in Fig 1a). Numerous procedures can be used to fabricate W or G-centers such as silicon [12] or carbon implantation followed by a proton irradiation [37; 22], electron irradiation [3], Focused Ion Beam (FIB) implantation [8], pulsed ion beams [38], and reactive ion etching [39; 40]. All these methods involve the use of ion implanters or accelerators. They are relatively bulky and expensive approaches that require several steps (e.g. for G-centers: carbon implant, annealing, and proton implant) in order to create a light emitter. Furthermore, the most advanced methods using focused ion beams[8] are intrinsically stochastic and the number of quantum emitters per ion impact is not precisely controlled. A further method, that can be used to create point defects, is laser annealing also called laser doping. The use of a laser has been shown to be a viable approach for integrating quantum emitters [41] in \(\delta\)-doped Si [41] or in p-doped Si [42]. Both W and G-centers can be obtained in p-doped Si [16] as well as in n-type Si implanted with \({}^{29}\)Si\({}^{+}\) ions [2]. However, the creation of these defects was usually unintentional, induced with laser pulses longer than 30 ns and in Si substrates. In this paper we address the creation of W and G-centers by femtosecond laser annealing, with the process being investigated for the first time on SOI substrates, taking both C-doped and pristine SOI wafers as starting points. 
The quality of these defects is confirmed by CW and time-resolved photoluminescence measurements. The temperature dependence of the emitters, their broadening and lifetimes are comparable to those reported with standard fabrication methods, accounting for the high quality of our approach based on fs laser pulses. We also demonstrate that the C-implant step is not necessary to create the light emitters. Finally, by low-temperature annealing, we can selectively erase the G-centers while improving the quality of the W-centers' emission.

## II Sample description and experimental setup

The samples are SOI wafers featuring a 220 nm top silicon layer and a 2 um buried oxide layer, some of which were implanted with \({}^{12}\)C\({}^{+}\) ions. The beam energy was set to 34 keV in order to implant the carbon ions halfway into the top silicon layer and two different doses have been explored, namely \(1\times 10^{12}\) and \(1\times 10^{13}\) ions/cm\({}^{2}\). The implantation was followed by a flash annealing under N\({}_{2}\) atmosphere for 20 s at 1000\({}^{\circ}\)C to remove lattice damage [37]. Then the samples were irradiated by laser pulses focused with a 750 mm plano-convex lens under ambient conditions, differing from location to location by the number of pulses used (from 1 to 5) and by the energy of the pulses (95, 143, 175 and 218 \(\mu\)J), to supply the energy necessary to reorganise the crystal lattice and create emitting defects in silicon. This step is schematically represented in Fig. 1a). The Gaussian laser beam used was centered at 1030 nm with pulses of a duration below 200 fs. Its waist was \(w_{0}=178\)\(\mu\)m and the repetition frequency of the pulses was 1 kHz. The samples have been created in pairs, sharing the same carbon implantation parameters, of which only one of the two underwent a second annealing under N\({}_{2}\) atmosphere for 5 min at 125 \({}^{\circ}\)C after the implantation. PL measurements were performed at low temperature, the samples being cooled down to 12 K with a closed-cycle liquid-helium cryostat. The optical pumping was performed with a continuous-wave laser diode at 405 nm focused onto the sample with a spot diameter of \(\approx\)75 \(\mu\)m. A Cassegrain objective was used to collect the PL emission with a numerical aperture between 0.15 and 0.4. The collected signal was focused onto an optical fiber connected to a spectrometer coupled with a liquid nitrogen-cooled InGaAs detector enabling spectral detection from 900 to 1600 nm. For time-resolved PL, a 515 nm laser emitting 200 fs pulses at a 54 MHz repetition rate was used for the optical pumping and the detection was performed with an InGaAs photodiode.

## III Results

### Creation of G and W-centers

For the highest pulse energy investigated (218 \(\mu\)J), the laser pulses created a ring visible under an optical microscope at the surface of the irradiated parts of the sample around the point of impact of the laser beam, as shown in Fig. 1b). The creation of this circle is specific to ultrafast quenching conditions accessible with femtosecond laser irradiation. It results from the melting of the top layer of Si which resolidifies in different states due to the temperature gradient which spatially varies the solidification kinetics. The ring corresponds to amorphous Si whereas the central part consists of recrystallized Si [42; 43].
The outer contour of the ring shown in Fig. 1(a) corresponds to a local fluence of about 330 mJ/cm\({}^{2}\) that matches well with the measured threshold for Si surface amorphization in a previous work using the same laser [44]. The PL signal from these areas reveals the creation of both G and W-centers, as shown by the orange curve in Fig. 1c), exhibiting the zero-phonon lines (ZPLs) of the two emitters, at 1.019 eV for W-centers and 0.97 eV for G-centers, as well as their typical phonon sidebands [22; 23; 45]. On the same sample we also collected the PL emission in areas that were not targeted by the laser (blue curve in Fig. 1c)). The corresponding spectrum exhibits only the typical Si signal at 1.1 eV accompanied by the associated phonon sidebands.

Figure 1: **(a)** Schematic representation of the laser irradiation process used to create G and W-centers. **(b)** Optical microscope image of an area irradiated with 3 pulses having an energy of 218 \(\mu\)J. **(c)** Comparison of the PL spectra at 12 K of an area not irradiated by laser pulses and an area irradiated by 3 pulses.

For lower laser fluences, the ring is not observed (Supplementary Information [46]) along with a lack of PL emission from W- and G-centers, suggesting the crucial role of the melting and recrystallization steps in rearranging the atoms to form the emitters. We also studied the influence of the number of pulses on the intensity, the position and the width of the ZPLs (Supplementary Information [46]). No trend can be seen from one sample to another, which suggests that at each pulse there is melting of the silicon and therefore the destruction of pre-existing defects. New defects are then created during recrystallization.

### Influence of the carbon dose

In this section we focus on the influence of the dose of implanted carbon on the PL of the centers. Fig. 2a) presents the PL spectra of three samples which differ by the carbon dose used during the implantation. For each spectrum the Si signal is of the same order of magnitude which allows a direct comparison between spectra. The ZPL intensity of the G-centers increases when raising the carbon dose, which is consistent with previous findings [37] for emitters created after proton irradiation. For pristine samples (i.e. no carbon implantation), the signals of the G- and W-centers appear with an amplitude that is about an order of magnitude lower with respect to their implanted counterparts. We interpret the creation of G-centers in carbon-free samples as an effect of incorporation of residual C present in small quantities in the upper layers of Si (e.g. deposited during the manufacturing process of the SOI wafer). We verified that, by oxidation of the top Si layer in a rapid thermal processor, the signal from G-centers disappears, confirming that the C contamination comes from the sample surface (not shown). As for the W-centers, this result shows that the implantation step disrupts the crystalline organisation and creates more interstitial Si. However, even if defects are created in smaller numbers in the pristine sample, they have the same optical properties as those obtained with the implantation step; namely a ZPL with the same full width at half maximum (FWHM) (see Supplementary Information [46]). This is not the case for G-centers, for which the ZPL of the non-implanted sample is broader and slightly blue-shifted (see Fig. 2a)).

### Effect of annealing

We now investigate the effect of a flash annealing on the PL signal of these laser-created emitting centers.
Non-implanted samples after flash annealing display only W-center emission, which is enhanced by a factor of four with respect to non-annealed samples, whereas that of G-centers disappears (Fig. 2b)). The annihilation of G-centers after a flash annealing is also observed for carbon-implanted samples and the resulting spectrum is comparable to the one obtained for pristine samples when normalised by the maximum of the ZPL of W-centers. Therefore, we demonstrate that we can create and isolate W-centers without any implantation step, with optical properties comparable to those obtained with implantation.

### Recombination dynamics of G-centers

We performed time-resolved PL measurements on the brighter sample, namely the sample with the highest dose of C implanted, filtering the PL signal with a short-pass filter at 0.99 eV to eliminate the major contribution of W-centers (Fig. 3). The PL decay is well fitted by a mono-exponential function (plus a constant) providing a characteristic lifetime of 5.9 ns, consistent with conventional G-centers obtained by co-implant of carbon and proton irradiation, for which the lifetime is about 5-6 ns [22; 33]. Therefore, G-centers created with both protocols have the same excited-state lifetime. Note that the radiative yield of G-centers is less than 10% at 30 K [33]. Therefore, the measured decay rate is strongly related to non-radiative channels even at cryogenic temperatures. The constant used to adjust the fit is ascribed to the contribution of a longer decay time corresponding to the phonon-sideband of the W-centers: these defects have lifetimes ranging from 3 ns to 30 ns [36]. Owing to our excitation pulses being repeated every 18 ns, an overlap of the PL decays coming from the phonon-sideband of W-centers can contribute to the time-resolved spectra. We cannot measure the lifetime of W-centers because of the overlap issue previously described.

Figure 2: All the spectra were measured at 12 K with excitation by a laser diode at 405 nm. **(a)** Macro-PL spectra obtained for samples irradiated by 3 laser pulses with an energy of 218 \(\mu\)J. Each sample is differentiated by the dose of implanted carbon \(D_{C}\). The inset represents the normalized ZPL of G-centers. **(b)** Comparison between macro-PL spectra obtained for two samples irradiated by 3 laser pulses with an energy of 218 \(\mu\)J. The two samples have not undergone carbon implantation and one of them underwent a second annealing under N\({}_{2}\) atmosphere for 5 min at 125\({}^{\circ}\)C.

### Temperature dependence of the ZPL

The ZPL energy as a function of temperature is represented in Fig. 4b). The experimental data are fitted with the expression proposed by Passler [47]:

\[\Delta E(T)=E_{ZPL}(T)-E_{0}=-\frac{\alpha\Theta_{p}}{2}\left[\sqrt[p]{1+\left(\frac{2T}{\Theta_{p}}\right)^{p}}-1\right] \tag{1}\]

where \(E_{0}\) (meV) is the limit of the ZPL energy when \(T\to 0\) K, \(\alpha\) (meV/K) is the slope of the curve, namely the entropy, for \(T\rightarrow\infty\), \(\Theta_{p}\) (K) is the average temperature of the phonons and \(p\) is a dimensionless parameter. The thermal redshift for W-centers evolves proportionally to \(T^{4}\) [48] whereas for G-centers to \(T^{3}\), consistent with previous observations [22]. These fits provide the average temperature of the phonons coupled with the defects. If we convert the obtained values to energies we find \(\mathrm{E}_{phW}=7\pm 1\) meV for W-centers and \(\mathrm{E}_{phG}=16\pm 2\) meV for G-centers.
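For readers who want to reproduce the conversion quoted above, the following is a minimal numerical sketch of Eq. (1). The G-center parameters are the fit values reported later in the caption of Fig. 4, and the only ingredient added here is the Boltzmann constant used to turn \(\Theta_{p}\) into a phonon energy.

```python
import numpy as np

def passler_shift(T, alpha, theta_p, p):
    """Eq. (1): redshift of the ZPL relative to its T -> 0 limit, in meV.
    alpha in meV/K, theta_p in K, T in K, p dimensionless."""
    return -(alpha * theta_p / 2.0) * ((1.0 + (2.0 * T / theta_p) ** p) ** (1.0 / p) - 1.0)

# G-center fit parameters reported in the Fig. 4 caption
alpha_G, theta_G, p_G = 0.07, 186.0, 3.0
T = np.array([12.0, 40.0, 80.0])
print(passler_shift(T, alpha_G, theta_G, p_G))  # small redshift, growing roughly as T^3

# Converting the fitted phonon temperatures to energies: E_ph = k_B * Theta_p
k_B = 0.08617  # meV/K
print(k_B * 76.0, k_B * 186.0)  # ~7 meV (W) and ~16 meV (G), as quoted in the text
```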
For the G-centers, the obtained energy is close to that of TA phonons at the X point of the Brillouin zone [9], namely 20 meV. Therefore we conclude that the defect couples preferentially with TA(X) phonons. As for W-centers, the obtained value does not correspond to a typical phonon energy in silicon and it is lower than the TA(X) energy. A possible explanation for this observation will be given in the following paragraphs. Fig. 4c) represents the FWHM of the ZPLs as a function of temperature. The experimental data are well fitted by the following model that describes the broadening of a ZPL with heating [49]:

\[\Gamma=\Gamma_{0}+a\left[\exp\left(\frac{-\Omega}{k_{B}T}\right)-1\right]^{-1} \tag{2}\]

where \(\Gamma_{0}\) is the zero-temperature limit of the FWHM and the second term accounts for the coupling between phonons and the emitters, \(a\) represents the intensity and \(\Omega\) the typical energy of this coupling. The zero-temperature limit obtained for G-centers with our procedure, namely \(\Gamma_{0G}=0.54\pm 0.03\) meV, is of the same order of magnitude as that obtained for an ensemble of G-centers created by proton irradiation (\(0.3\pm 0.03\) meV) [50]. The slightly larger value obtained in our case can be explained by the simultaneous presence of G-centers and Si self-interstitials which could slightly broaden the signal obtained for the ensemble of defects. For ensembles of W-centers, to date, no study has been conducted on ZPL broadening. The study of individual W-centers [32] showed a zero-temperature limit below 0.1 meV. However, the energy differences between the ZPLs from one defect to another are within 1 meV owing to local variations in the emitter environment. Therefore, the value of \(\Gamma_{0W}=0.78\pm 0.03\) meV we obtained is consistent with the single-defect investigation when taking into account the ZPLs' dispersion in energy. For G and W-centers, the typical energies \(E_{ph}\) and \(\Omega\) we obtained with the fits of the broadening and the redshift of the ZPLs overlap, taking into account uncertainties. This allows us to give a probable explanation for the low value obtained for \(E_{phW}\). Indeed, the typical energy obtained with the fit of the broadening (\(\Omega_{W}\)) could also be interpreted as the activation energy of the excitonic transition to the first excitonic state from the ground state [51], which could explain why we obtain a value lower than the typical energy of phonons in Si.

## IV Discussion

The relevance of our work lies in the novel possibility to deterministically create quantum emitters in Si with near-unity yield: in analogy with SiC and diamond, fs laser pulses can, in principle, be used to form the emitters _in situ_ and _operando_, while monitoring the emission from newly formed defects on a pulse-by-pulse basis [52; 53].

Figure 3: Time-resolved PL signal obtained at 12 K with a pulsed laser at 515 nm with a shortpass filter in energy (0.99 eV) to eliminate the contribution of the ZPL of W-centers and a large part of their phonon sideband. The experimental data (blue points) are fitted by a mono-exponential function adjusted by a constant \(b\) (red curve).

This method also allows us to reduce the area of creation of the emitters down to the size of the laser spot, as shown in Fig. 1, which demonstrates that the emitting centers are created only in the irradiated areas. Even if we do not reach the precision of FIB, this method is cheaper and easier to implement, which is promising for the large-scale creation of emitters.
Moreover, we demonstrate that W and G-centers created in C-implanted Si have optical properties and recombination dynamics in agreement with the literature, which proves that we obtained emitters of the same quality as those obtained by other fabrication methods. We also demonstrate that the implantation step is not necessary to create W and G-centers, as we collect the PL signature of both centers from pristine samples. Such samples contain only residual carbon incorporated in the Si during the manufacturing of the wafer, which implies a low concentration of carbon. This leads to the creation of low-density G-centers, which is confirmed by the low intensity of the ZPL of G-centers but also by its broadening and blue-shift compared to the implanted samples (see inset of Fig. 2). Indeed, Zhiyenbayev _et al._ demonstrated that the increase of internal strain due to a high dose of implanted carbon leads to a redshift of the ZPL of G-centers and reduces their inhomogeneous broadening [35]. As for W-centers, they have the same optical properties in implanted and non-implanted samples. We demonstrate a way to annihilate G-centers while slightly enhancing the PL of W-centers by carrying out an annealing at \(125^{\circ}\)C for 5 min. This phenomenon is opposite to what was expected, since it has been demonstrated that the intensity of the ZPL of G-centers created with carbon implantation followed by proton irradiation can be increased by a factor of 8 with such a thermal treatment [21]. It is worth noting that in the sample of Berhanuddin _et al._ only G-centers were created, whereas in our case we have both W and G-centers, which could explain the difference in behavior of the G-centers.

Figure 4: **(a)** Temperature-dependent PL spectra centred around the ZPLs of G and W-centers in the range of temperature from 12 K to 80 K. **(b)** Variation of the energies at the center of the ZPLs of G-centers (blue triangles) and W-centers (red squares) as a function of temperature. The symbols represent the experimental data and the lines the fits given by equation (1). The fitting parameters are \(E_{0W}=1018.69\pm 0.02\) meV, \(\alpha_{W}=0.06\pm 0.01\) meV/K, \(\Theta_{W}=76\pm 15\) K, \(P_{W}=4.2\pm 1.1\) for W-centers and \(E_{0G}=970.194\pm 0.003\) meV, \(\alpha_{G}=0.07\pm 0.01\) meV/K, \(\Theta_{G}=186\pm 29\) K, \(P_{G}=3.0\pm 0.1\) for G-centers. **(c)** Evolution of the FWHM of the ZPLs of G-centers (blue triangles) and W-centers (red squares) as a function of temperature. The symbols represent the experimental data and the lines the fits given by equation (2). The fitting parameters are \(\Gamma_{0W}=0.78\pm 0.03\) meV, \(a_{W}=5\pm 1\) meV, \(\Omega_{W}=9\pm 1\) meV for W-centers and \(\Gamma_{0G}=0.54\pm 0.03\) meV, \(a_{G}=20\pm 5\) meV, \(\Omega_{G}=17\pm 1\) meV for G-centers.

## V Conclusion

In this paper, we demonstrate for the first time the creation of G and W-centers simultaneously in carbon-implanted SOI by femtosecond laser annealing. The quality of these defects is comparable to that obtained by the usual, well-established methods in the literature. We also demonstrated that we can create G and W-centers without any implantation step. However, G-centers created with this method have a ZPL broader than those obtained with carbon implantation. Furthermore, we demonstrated that an annealing at low temperature annihilates G-centers while slightly enhancing the PL emission of the W-centers. Therefore, we proved that we can create and purify W-centers of good quality without any implantation steps, which allows us to create W-centers at low cost, on demand
and in a restricted area of a size close to the cross-section of the laser spot. This represents a step forward for the deterministic creation of W-centers in photonic structures. Indeed, with this method it is possible to precisely position the defects in the structures, and it is also conceivable to control the density of defects created by studying in more detail the influence of the number of pulses and their energy. This last aspect could even lead to the study of single emitters, assuming that we manage to create the W-centers in sufficiently low density.

_Acknowledgement:_ This research was funded by the EU H2020 FET-OPEN project NARCISO (No. 828890), the French National Research Agency (ANR) through the projects ULYSSES (No. ANR-15-CE24-0027-01) and OCTOPUS (No. ANR-18-CE47-0013-01), and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant Agreement No. 724480). The authors thank the Nanotecmat platform of the IM2NP institute.
2308.12789
Robotic Scene Segmentation with Memory Network for Runtime Surgical Context Inference
Surgical context inference has recently garnered significant attention in robot-assisted surgery as it can facilitate workflow analysis, skill assessment, and error detection. However, runtime context inference is challenging since it requires timely and accurate detection of the interactions among the tools and objects in the surgical scene based on the segmentation of video data. On the other hand, existing state-of-the-art video segmentation methods are often biased against infrequent classes and fail to provide temporal consistency for segmented masks. This can negatively impact the context inference and accurate detection of critical states. In this study, we propose a solution to these challenges using a Space Time Correspondence Network (STCN). STCN is a memory network that performs binary segmentation and minimizes the effects of class imbalance. The use of a memory bank in STCN allows for the utilization of past image and segmentation information, thereby ensuring consistency of the masks. Our experiments using the publicly available JIGSAWS dataset demonstrate that STCN achieves superior segmentation performance for objects that are difficult to segment, such as needle and thread, and improves context inference compared to the state-of-the-art. We also demonstrate that segmentation and context inference can be performed at runtime without compromising performance.
Zongyu Li, Ian Reyes, Homa Alemzadeh
2023-08-24T13:44:55Z
http://arxiv.org/abs/2308.12789v1
# Robotic Scene Segmentation with Memory Network for Runtime Surgical Context Inference

###### Abstract

Surgical context inference has recently garnered significant attention in robot-assisted surgery as it can facilitate workflow analysis, skill assessment, and error detection. However, runtime context inference is challenging since it requires timely and accurate detection of the interactions among the tools and objects in the surgical scene based on the segmentation of video data. On the other hand, existing state-of-the-art video segmentation methods are often biased against infrequent classes and fail to provide temporal consistency for segmented masks. This can negatively impact the context inference and accurate detection of critical states. In this study, we propose a solution to these challenges using a Space-Time Correspondence Network (STCN). STCN is a memory network that performs binary segmentation and minimizes the effects of class imbalance. The use of a memory bank in STCN allows for the utilization of past image and segmentation information, thereby ensuring consistency of the masks. Our experiments using the publicly-available JIGSAWS dataset demonstrate that STCN achieves superior segmentation performance for objects that are difficult to segment, such as needle and thread, and improves context inference compared to the state-of-the-art. We also demonstrate that segmentation and context inference can be performed at runtime without compromising performance.

## I Introduction

Robot-assisted surgery has transformed the field of minimally invasive surgery by allowing surgeons to operate with greater dexterity and precision and improving patient outcomes. Characterizing the interactions among the surgical instruments and important objects and anatomical structures within the surgical scene can provide context awareness [1], which is crucial for various downstream tasks, such as cognitive assistance [2], skill evaluation [3, 4, 5, 6] and error detection [7, 8, 9, 10, 11]. However, accurate detection of surgical context from video is a challenging task. Various deep learning methods [12, 13, 14] have been proposed to infer tool-tissue interactions from surgical videos. These black-box models, however, suffer from a lack of transparency and a dependency on large labeled datasets. Recently, [15] proposed logic operations on the masks of different objects in the surgical scene to infer surgical context. This method provides interpretability and enables efficient integration of expert knowledge in a domain where data is usually limited. However, this method requires precise segmentation masks to detect interactions between objects and instruments, such as contact and hold. For surgical scene segmentation, multiclass segmentation methods which focus on classification of each pixel are commonly used. This means that each pixel is assigned the one class label with the highest probability. However, these methods have primarily focused on identifying graspers and common objects in porcine procedures [16, 17, 18, 19, 20, 21] and can have difficulty identifying small objects such as needles and rarely used instruments due to class imbalance. Other approaches focus on thread segmentation with fine-tuning [22] or performing 3-D computation through a calibrated stereo camera system [23]. Another challenge in the recent state-of-the-art models is to correctly identify segmentation masks when the image deviates from the common viewpoint (e.g. the bending of graspers) or when there are occlusions (e.g.
the interactions between needle and graspers) [24]. This problem can be potentially solved by ensuring mask consistency for an instrument through time by incorporating a temporal prior [21, 25]. The recent development of Space-Time Memory Networks (STM) [26, 27] has achieved top performance in the semi-supervised video object segmentation (VOS) tasks on the benchmark datasets DAVIS 2017 [28] and YouTubeVOS 2018 [29]. For these models, a memory bank is created for each object and the query frames are matched to these banks to retrieve information. This method effectively reduces the effect of label imbalance through binary segmentation and ensures label consistency over time. STM models perform well, in an offline manner, on videos containing objects commonly appearing in everyday life. However, these models have not been applied for segmentation of multiple objects in robotic scenes, and it has never been examined whether they can perform segmentation at runtime, limiting their potential applications in tasks such as error detection [10, 8]. In this paper, we adapt a lightweight STM, the Space-Time Correspondence Network (STCN) [27], by changing the first image/mask pair, batching the input frames and fine-tuning on the robotic surgical dataset (JIGSAWS [30]) to perform runtime surgical scene segmentation. We then use the segmentation masks to perform surgical context inference and show that improved segmentation performance can lead to more accurate context inference. Specifically, we make the following contributions.

* Adapt the STCN in video object segmentation to perform _runtime_ surgical scene segmentation.
* Show the superior performance of the STCN model in comparison to the state-of-the-art single-frame models for segmenting surgical instruments.
* Demonstrate that the STCN network achieves good segmentation performance even if the first image/mask pair does not come from a prior frame of the video.
* Show that more precise segmentation masks lead to improved context inference, in particular more accurate detection of the interactions/states of the objects/instruments that are hard to segment.
* Demonstrate that STCN segmentation and context inference can be performed within runtime constraints with minimal influence on the performance.

## II Related work

**Semantic segmentation** involves classifying each pixel in an image into a specific object or background. Various robotic scene segmentation challenges [16, 17] have focused on the task of semantic segmentation. One of the most popular segmentation frameworks to perform semantic segmentation is the UNet structure [31, 18, 19, 20, 21]. However, deep learning models for semantic segmentation can suffer from the label imbalance problem, where the model can be biased against small objects. **Instance segmentation** involves detecting the presence of objects of interest in an image and segmenting each object instance from the background. By first identifying the instrument candidates and then assigning a unique category to them, Mask R-CNN based methods focus on providing a binary mask for each specific type of instrument and could be a good solution to address the data imbalance problem. In recent works, Mask R-CNN was adapted to perform fine-grained instrument segmentation [32, 33]. However, instance segmentation models perform instance segmentation on a specific frame and do not consider the evolution of the masks through time, resulting in inconsistent labels.
**Semi-supervised video object segmentation** tasks [28] focus on estimating object masks in all video frames given the ground truth mask of the target object in the first frame. Space-Time Memory Networks (STM) are the top-performing models on challenge datasets such as DAVIS 2017 [28] and YouTubeVOS 2018 [29]. The STM performs binary segmentation by decoding the memory readout and integrates prior image and mask information to segment objects of the current frame. This effectively eliminates the need to perform multi-class classification on the pixel and improves temporal consistency of the masks. However, STM has been primarily used for offline video segmentation and has not been applied for _runtime surgical scene segmentation_. **Surgical context inference** focuses on detecting the values of a set of state variables that describe the surgical task status and interactions among the surgical instruments, objects, and anatomical structures in the physical environment [7, 1]. The definition of surgical context is similar to the tool-tissue interactions (TTI) in action triplets, defined as fine-grained activities in surgical process modeling, which consist of an action verb, a surgical instrument, and the target anatomy [12]. In the CholecTriplet2021 benchmark challenge for action triplet recognition from surgical videos [34], several competing deep-learning methods were developed, including transformer-based with self-attention approaches (e.g., Rendezvous [13] and SIRNet [14]), convolutional LSTMs, and multi-task learning. In this work, we use context definitions for dry-lab surgical tasks from [1] to perform rule-based context inference based on the masks generated by an STCN model. Similar to previous works mentioned above, we only use video data for context inference because labeled video data is more accessible than robot kinematic data. ## III Methods Figure 1 shows our overall pipeline for runtime surgical scene segmentation and context inference. ### _Problem Statement_ We have a sequence of input frames \(I\) to segment and the first image/mask pair \(I_{init}\&M_{init}\). For each frame \(I_{i}\) of size \(H\times W\), the memory network's task is to assign a label \(m_{hw}\in\{0,1\}\) to each pixel in the image to indicate if pixel \((h,w)\) belongs to an object mask \(O\). In our case, we identify the object classes that are important for context inference, \(O\)=\(\{\) left grasper, right grasper, needle, thread, ring\(\}\). With the same model but different masks of different objects in the first image/mask pair, multiple networks can be initialized to run in parallel to perform binary segmentation for each object. Through aggregating the binary outputs of all object models, we obtain the segmentation \(M\) for each frame in our input images. Then we use the segmented frames to generate surgical context \(T\), which is defined as a set of state variables \(S_{1},S_{2},S_{3},S_{4},S_{5}\), each describing the status of a task and interactions among the surgical tools and objects in the physical environment [1]. As shown in Figure 1, the first four state variables are used to describe the objects that are being held or are in contact with surgical instruments and are applicable to all tasks. The fifth state variable is specific to the task, such as the position of the needle relative to the fabric or ring in the Suturing and Needle Passing tasks, or the status of the knot in the Knot Tying task. 
### _Space-Time Correspondence Network_

The Space-Time Correspondence Network (STCN) [27] takes a set of frames \(I\) from a video and the first image/mask pair \(I_{init}\&M_{init}\), then proceeds to process the frames one by one while keeping a collection of keys and values in memory. For every image to segment, a query key \(k^{Q}\) is first generated with a ResNet50 key encoder, \(E_{k}(I)\). Using the memory keys \(k^{M}\) encoded from prior frames, which are reused from the previous query keys, an affinity matrix can be obtained that describes the similarities between the current query key \(k^{Q}\) and the memory keys \(k^{M}\). The affinity function, defined as the negative \(L2\) similarity, and the normalized affinity matrix \(W\) are shown in the following equations:

\[S_{ij}=-||k_{i}^{Q}-k_{j}^{M}||_{2}^{2} \tag{1}\]

\[W_{ij}=\frac{\exp(S_{ij})}{\sum_{n}\exp(S_{nj})} \tag{2}\]

Value features are generated with an encoder (ResNet18) that takes in both an image and a mask, \(E_{v}(I,M)\). The memory network can retrieve the corresponding value features \(v^{Q}\) from the previous frames' value features \(v^{M}\) in the memory bank by matrix multiplication, as shown in Equation 3:

\[v^{Q}=v^{M}W \tag{3}\]

Then we can obtain the mask with a decoder \(D\) that takes in the value features from the matrix multiplication. We follow the same setting as the original paper [27], where every fifth frame's value is stored in the memory bank. However, we adapt the model post-processing to aggregate the binary segmentation masks of different objects. This is unlike the STCN in [27], which passes all the binary masks through an aggregation network module to assign an individual class per pixel, which could introduce unnecessary operation time. We also change the input so that the model can process a non-overlapping moving window of frames to enable runtime inference. Since the same model can be used to segment different objects given different first image/mask pairs, there is no temporal dependency between segmenting the masks of different objects. We can have multiple models running in parallel to segment different objects.
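The memory readout in Equations 1-3 is essentially soft attention with a negative squared-L2 score. Below is a minimal NumPy sketch of that readout, together with a simple stand-in for the binary-mask aggregation described above; the array shapes, the softmax-over-memory index convention, and the argmax aggregation rule are illustrative assumptions rather than the exact STCN implementation.

```python
import numpy as np

def memory_readout(k_q, k_m, v_m):
    """Eqs. (1)-(3): negative squared-L2 affinity, softmax over memory
    locations, then retrieval of value features for each query location.

    k_q: (Nq, Ck) query keys; k_m: (Nm, Ck) memory keys; v_m: (Nm, Cv) memory values.
    """
    # S = -||k^Q - k^M||_2^2 for every (query, memory) pair, shape (Nq, Nm)
    S = -(((k_q[:, None, :] - k_m[None, :, :]) ** 2).sum(-1))
    # Normalized affinity: softmax over the memory index for each query location
    W = np.exp(S - S.max(axis=1, keepdims=True))
    W /= W.sum(axis=1, keepdims=True)
    # Readout v^Q, passed on to the mask decoder
    return W @ v_m

def aggregate_binary_masks(prob_maps, threshold=0.5):
    """Combine per-object binary probabilities (n_objects, H, W) into one
    label map: 0 = background, i+1 = object i. A simple stand-in for the
    post-processing described in the text, not the exact rule used."""
    fg = prob_maps.max(axis=0) > threshold
    labels = prob_maps.argmax(axis=0) + 1
    return np.where(fg, labels, 0)
```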
### _Context Inference_

The tool and object interactions, such as "Left Grasper holding the Needle" as depicted in Figure 1, can be detected by analyzing intersections and distances between object masks within a given frame. In this paper, we specifically focus on detecting the five state variables that characterize the surgical context in the dry-lab surgical tasks of Suturing, Needle Passing, and Knot Tying [1, 15]. As shown in Figure 1, the first four states describe what the Left Grasper is holding (S1) or in contact with (S2) and what the Right Grasper is holding (S3) or contacting (S4). These variables can take on values representing interactions with nothing (0), the needle (2), the thread (3), or other objects in the surgical scene. The last variable (S5) is task-specific and describes the progress within a particular trial. For example, in Needle Passing and Suturing, the Needle can be "not touching", "touching" or "in" with respect to the canvas or ring. We use our previously proposed rule-based method from [15] for context inference. In this method, first a pre-processing step removes noise around needle and thread masks. Then the contour extraction step removes rough edges and reduces \(M\) to a list of points \(p\) as polygons for each object class. We use these simplified polygons to calculate intersections and distances between objects for each frame. We drop polygons with areas under 15 pixels to remove segmentation artifacts and smooth the polygons using the Ramer-Douglas-Peucker (RDP) algorithm [35, 36].

Fig. 1: Pipeline for Gesture Segmentation and Context Inference with STCN (Train-FF) Setup.

\[\text{Left Hold}=\begin{cases}2&\text{if }D(LG,N)<1\wedge\neg\alpha\\ 3&\text{if }Inter(LG,T)>0\wedge\neg\alpha\\ 0&\text{otherwise}\end{cases} \tag{4}\]

\[\text{Left Contact}=\begin{cases}2&\text{if }D(LG,N)<1\wedge\alpha\\ 3&\text{if }Inter(LG,T)>0\wedge\alpha\\ 0&\text{otherwise}\end{cases} \tag{5}\]

\[\text{Needle}=\begin{cases}2&\text{if }(Inter(Ts,N)>0\wedge N.x<Ts.x)\\ 1&\text{if }(Inter(Ts,N)=0\lor N.x\geq Ts.x)\wedge\\ &(D(RG,T)>1\lor D(LG,N)>1)\\ 0&\text{otherwise}\end{cases} \tag{6}\]

Overlap between masks is detected by calculating a feature vector \(v\) of distances and intersection areas between pairs of input masks, including Left Grasper (\(LG\)), Right Grasper (\(RG\)), Thread (\(T\)), Needle (\(N\)), Tissue Points (\(Ts\)), and Rings (\(R\)). The distance and intersection functions \(D(I,J)\) and \(Inter(I,J)\) are defined as the pixel distance and area of intersection between two object masks \(I\) and \(J\). Specifically, for any object polygon \(I\), which is comprised of several polygon segments \(i_{1},i_{2},...,i_{n}\), the distance to any other object \(J\) can be calculated as \(D(I,J)=\text{average}([d(i,j)\text{ for }i\in I\text{ and }j\in J])\). The \(Inter(I,J)\) function uses a geometric intersection algorithm from the Shapely library [37] to calculate the intersection between two object masks. We use \(I.x,I.y\) for an object \(I\) as the horizontal and vertical coordinates of the midpoint of its polygon, calculated as the average of every point in \(I\). The Tissue Points (\(Ts\)) represent the markings on the tissue where the needle makes contact in the Suturing task. To determine the Boolean variable (\(\alpha\)), representing the open or closed status of the grasper, we use an experimentally found threshold of 18 pixel units on the distance between the grasper jaw ends. Logic vectors are constructed with Equations 4-6, which can then be used to estimate the values of the state variables. These equations are for the left hand in the Suturing task. Similar sets of equations are used for the right hand and for the Needle Passing and Knot Tying tasks.
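To make the geometry behind \(D\), \(Inter\), and Equation 4 concrete, here is a minimal sketch using the Shapely library cited above. The simplification tolerance, the helper names, and the use of Shapely's minimum distance (instead of the segment-averaged distance defined in the text) are simplifying assumptions for illustration only.

```python
from shapely.geometry import Polygon

def prep(points, min_area=15, tol=2.0):
    """Drop tiny artifact polygons (< 15 px^2, as in the text) and smooth the
    contour; Shapely's simplify() is a Douglas-Peucker variant standing in
    for the RDP step."""
    poly = Polygon(points)
    if poly.area < min_area:
        return None
    return poly.simplify(tol)

def D(a, b):
    """Pixel distance between two object polygons (0 if they touch).
    Note: minimum distance, a stand-in for the segment-averaged distance."""
    return a.distance(b)

def Inter(a, b):
    """Area of intersection between two object polygons."""
    return a.intersection(b).area

def left_hold(lg, needle, thread, grasper_open):
    """Eq. (4): 2 = holding needle, 3 = holding thread, 0 = nothing.
    grasper_open plays the role of the Boolean variable alpha."""
    if D(lg, needle) < 1 and not grasper_open:
        return 2
    if Inter(lg, thread) > 0 and not grasper_open:
        return 3
    return 0
```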
## IV Experimental Evaluation

### _Experimental Setup_

We evaluate our proposed approach on an 80/20 train/test split of the JIGSAWS dataset [30] in comparison to the state-of-the-art surgical scene segmentation models and a baseline Deeplab V3 model [15]. The Deeplab model performs binary segmentation by classifying each pixel as the background vs. an object class. It is the most recent model evaluated for all the objects and tasks in the JIGSAWS dataset in [15]. Binary masks for the tools and objects were obtained through manual labeling at 2 Hz from the original 30 Hz videos (640x480 pixels per frame) and were used to train and test the segmentation models. We use the context labels (obtained at 3 Hz) from [1] to evaluate the context inference performance. In our experiments, we used pretrained weights from the STCN model in [27], which was trained on static image datasets, DAVIS [28], and YouTubeVOS [29]. We first trained the model with all the data from the Suturing, Needle Passing, and Knot Tying tasks for 500 epochs. Then we fine-tuned individual models for each task (Suturing, Needle Passing and Knot Tying) and evaluated the respective segmentation performance for each task after 200 epochs. The training process follows the main training process in [27]. Specifically, a set of three frames is arranged in chronological order, with the first frame paired with its ground-truth mask. The second frame is predicted using the first frame as a memory reference. The information of the first and second frames, along with their prediction, is stored in the memory bank. The third frame is subsequently predicted by utilizing the combination of the first and second frames. During runtime inference, unlike the main training process, every fifth frame is stored in the memory bank. The model training was done on a 64-bit PC with an Intel Core i9 CPU @ 3.70 GHz and 32 GB RAM running Linux Ubuntu 20.04 and an NVIDIA RTX 2080 Ti 11 GB GPU. The standard metric, mean Intersection over Union (IOU), is used to evaluate segmentation and context inference performance [38]. Each predicted segment or context is matched to a corresponding segment in the ground truth. Then, the average IOU for each class or context state is calculated. We assume that during deployment the model will segment video data with the same object classes and a similar physical context as the ones in the first image/mask pair from training. Several STCN models corresponding to different object classes can run in parallel to segment different objects.

### _Different First Image/Mask Pairs_

To understand the effect of the first image/mask pairs on the network performance, we experimented with various ways of generating first image/mask pairs. Specifically, we examined the STCN model performance in the following setups for the first frame/mask (FF) pair: DeepLab-FF, Train-FF and GT-FF. In the Deeplab-FF setup, we used the image and the corresponding mask generated by the baseline Deeplab model from the frame in which an object first appears. In the Train-FF setup, image/ground-truth mask pairs are taken from the training set. To create the sets of image/mask pairs for the Train-FF setup, we visually inspected the image to ensure that the appearance of the object is representative of the object in the training set. In the GT-FF setup, similar to the setup in [27], the first image/mask pairs are taken from the first ground-truth mask in the test set. Note that in real deployment settings for runtime segmentation, ground-truth masks for test data will not be available. We only used the GT-FF setup for performance comparison. These preliminary experiments showed that the Train-FF setup consistently achieves the same (for graspers) or slightly better (for Needle and Thread) performance than the other setups and is more suitable for real-world deployment, so we used this setup for our STCN model in the rest of the experiments. The Deeplab-FF setup could also be suitable for real-world deployment, in scenarios where the test images might be significantly different from the training images.

### _Context Inference Performance_

To evaluate the effect of the segmentation performance on the performance of the context inference, we fed the ground truth segmentation masks as well as the masks generated by the STCN and the baseline Deeplab models from Section IV-B to the context inference component. The context inference performance from the ground truth segmentation masks can serve as a baseline for the maximum possible performance we can get from the rule-based approach.
We calculated the average IOU for each context state and compared the results with respect to the ground truth context labels from [1].

### _Batched Performance & Time Analysis_

The original memory network was designed to operate offline and process the whole video at once. However, to perform runtime inference, the memory network should be able to process a sequence of frames within the video timing constraints without significant performance degradation. In this experiment, we evaluated the runtime performance of the network by analyzing the computation time for segmentation and context inference given different batch sizes. Based on our analysis, the smallest interval between context changes in the JIGSAWS videos is about 333 milliseconds, which corresponds to a batch size of 10 given a 30 Hz video. We explored different batch sizes, ranging from 5 to 25 frames per batch, and examined the time taken to perform the segmentation along with the context inference for the last frame in the batch. The context changes in between the frames are ignored because within each batch the duration between consecutive images can be within milliseconds, which is too short for context changes to happen. We kept the duration between each batch within 1 second to ensure we can capture the network behavior when the STCN memory bank only stores the key/value pairs of a few prior frames; in this case, the information of 1, 2, 3, 4, and 5 prior frames is stored in the memory bank for batch sizes of 5, 10, 15, 20, and 25, respectively.

## V Results and Discussion

### _Comparison to the State of the Art_

Our results in Table I show that STCN with the image/ground-truth mask pairs from the training set (Train-FF) has better performance than the baseline Deeplab. We can see that STCN has over 20% performance improvement for the left grasper, right grasper, needle, thread and ring across all three tasks of Suturing, Needle Passing and Knot Tying. There is, in particular, significant improvement for segmentation of more difficult object classes, including needle, thread and ring. We observe over 200% IOU improvements for the needle class in the Suturing and Needle Passing tasks. There are similar improvements for the thread and ring classes. The needle class has more performance improvement than the thread class, but the overall IOU for the needle class in the Suturing and Needle Passing tasks (0.57 and 0.32) is still lower than that of the thread class (0.83 and 0.58). This is because the needle masks are generally smaller than the thread masks, so any inaccuracies in the masks for the needle result in a lower IOU than for the thread. We also compare the performance of our method with the state-of-the-art surgical scene segmentation models developed using the MICCAI Endovis 2018 dataset [17] and the JIGSAWS dataset [30]. The Endovis 2018 dataset does not have the left and right grasper class, so we use the clasper class that has the closest resemblance to the JIGSAWS graspers. Although the Endovis 2018 dataset does not differentiate left and right graspers, we compare the performance of our left and right grasper classes with the single clasper class in the Endovis 2018 dataset. The grasper, needle, and thread performance is better in our model in the Suturing, Needle Passing, and Knot Tying tasks in comparison to the Deeplab v3 as well as U-net in the Endovis 2018. For the JIGSAWS dataset, there are no publicly available segmentation labels for the objects in the video.
The Mobile-U-net in [39] uses the Suturing segmentation labels annotated by the authors and does not differentiate between left and right grasper classes. Our network achieves better performance in the grasper class (0.88/0.84 vs. 0.69) and comparable performance for the needle class in the Suturing task (0.57 vs. 0.56). However, the Mobile-U-net has not been evaluated on the thread class in the Suturing task and has not been evaluated on the Needle Passing and Knot Tying tasks. We should note that the Mobile-U-net and our network are not evaluated on the same set of images from the Suturing videos, and there could be discrepancies in the labels. One recent work [6] trained different networks to perform segmentation on the left/right graspers along with the shaft using labels generated from an optical flow method for the Suturing task. Here we show two of their top performing networks, including the UNet (0.66) and the LinkNet (0.80). Our method also has better performance than these two networks. ### _Context Inference_ In this section, we compare the results of context inference given the segmentation masks in Table II and the ground-truth masks. In task Suturing, we observe that for Left Contact and Right Contact states, we achieve slightly better performance with the STCN model than the baseline Deeplab. The contact states involve logic to calculate the distance and intersection between the grasper masks and the needle and thread masks. Since our model generates segmentation masks that are closer to the ground truth, we see an improvement in detecting these states. The most obvious improvement is in detecting the needle state (0.083). This is expected because we have the needle's performance improved by \(\sim\)200% in comparison to the baseline Deeplab. In other tasks, we see a similar trend as Suturing. In Needle Passing, we see that our model has comparable performance in detecting Left Hold, Left Contact, and Right Contact states as the baseline Deeplab and the ground truth and achieves better performance for the Needle State. In task Knot Tying, our model achieves comparable or better performance than the baseline Deeplab across all five states. To provide an illustrative example, Figure 2 shows the segmentation masks and inferred context from Deeplab, STCN, and ground-truth. We see that the STCN segmentation mask in 2b can segment the lower part of the needle which the baseline Deeplab in 2a misses. Therefore, the STCN mask helps to infer the Left Hold - Needle state correctly (\(D(LG,N)<1\land\neg\alpha=True\)), which is not the case for the baseline Deeplab mask. Rather, the lower part of the grasper for the baseline Deeplab mask is falsely segmented to be the thread. As a result, the context is generated incorrectly (\(Inter(LG,T)>0\land\neg\alpha=True\)) as Left Hold - Thread. The needle state is also inferred incorrectly for the baseline Deeplab mask, which the STCN segmentation mask corrects. 
TABLE I: Tool and Object Segmentation Performance (Mean IOU per Object Class) for the MICCAI Endovis 18 (M) and JIGSAWS Suturing (S), Needle Passing (NP), and Knot Tying (KT) tasks.

\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{Tasks} & \multirow{2}{*}{Setup} & Left & Left & Right & Right & Needle/ & \multirow{2}{*}{Avg.} \\ & & Hold & Contact & Hold & Contact & Knot & \\ \hline \multirow{3}{*}{Suturing} & STCN & 0.417 & **0.774** & 0.561 & **0.869** & **0.383** & **0.601** \\ & Baseline & **0.478** & 0.751 & **0.603** & 0.866 & 0.3 & 0.6 \\ & Ground Truth & 0.524 & 0.765 & 0.605 & 0.869 & 0.388 & 0.63 \\ \hline \multirow{3}{*}{Needle Passing} & STCN & 0.374 & **0.967** & 0.259 & 0.939 & **0.416** & **0.577** \\ & Baseline & **0.398** & **0.967** & **0.658** & **0.946** & 0.393 & **0.577** \\ & Ground Truth & 0.415 & 0.968 & 0.648 & 0.942 & 0.411 & 0.586 \\ \hline \multirow{3}{*}{Knot Tying} & STCN & **0.778** & **0.782** & **0.597** & **0.801** & 0.582 & **0.708** \\ & Baseline & 0.746 & 0.724 & 0.571 & 0.783 & **0.588** & 0.682 \\ & Ground Truth & 0.825 & 0.766 & 0.606 & 0.791 & 0.619 & 0.721 \\ \hline \end{tabular} \end{table} TABLE II: Context Inference Performance (Mean IOU per Context State) for the JIGSAWS Suturing, Needle Passing, and Knot Tying tasks.

For the Hold states in Suturing and Needle Passing, the baseline Deeplab achieves better context inference performance, even though we have more accurate masks for the left and right graspers, needle, and thread. In the rules for detecting these states, the distance and intersection are evaluated based on whether the distance is smaller than a specific threshold, \(D(LG,N)<1\), and whether there is an intersection between the masks, \(Inter(LG,T)>0\). The thresholds were selected based on the ground truth in the training set, which could be biased against mask outputs from the model, so having hard thresholds may not be appropriate.

### _Batched Performance & Time Analysis_

Table III presents the 90% confidence intervals for the IOU performance with the batched inputs of 5 to 25 frames.
We see that the batched performance is approximately the same as the IOU performance of the offline model that processes the video of the whole trial all at once. However, the Suturing needle class, the Knot Tying right grasper class, and the Needle Passing needle class have slightly lower performance than the offline model. Because batched input reduces the number of image and mask pairs encoded in the memory bank, the performance can be reduced, especially when the object already has less presence in the training set, such as the needle class for the Suturing and Needle Passing tasks. The right grasper class of the Needle Passing task also has lower performance than the offline model. This could be due to the Needle Passing task having lower-quality videos. However, despite having a smaller memory bank, our batched performance is still better than the baseline Deeplab model in Table I. We also observe that using batched input results in a small variance in performance between different input batch sizes. This means that our method's performance can be robust when processing batched inputs. In Table IV, we show the time to perform segmentation for different batch sizes and the time to perform context inference for the last frame. We observe an approximately linear increase in the total time to perform both segmentation and context inference. In the JIGSAWS dataset, the video is captured at 30 Hz, so the batch sizes of 5, 10, 15, 20, and 25 correspond to 167, 333, 500, 667, and 833 milliseconds of data, respectively. A batch size of 10 is the best for runtime context inference because it has more frames stored in the memory and can capture the smallest change in context. We observe that the segmentation and context inference can be efficiently completed within the runtime constraints. For example, for a batch size of 10, we have 333 ms to process the whole batch before the next batch arrives. We see that the segmentation and context inference take a total of 205, 207, and 201 ms to complete for the Suturing, Needle Passing and Knot Tying tasks, respectively, which is well within the total time window of 333 ms.

## VI Conclusion

In this work, we improve the current state-of-the-art surgical scene segmentation with an STCN memory network to better segment difficult objects (e.g., needle and thread) and provide temporal consistency for the masks. Our experiments using data from dry-lab simulation tasks demonstrate that the STCN model can achieve superior performance compared to several baselines and can process smaller batches of data at runtime with minimal impact on performance. We also show that the STCN model does not necessarily require an image/mask pair from the first frame of the video. Instead, selecting a frame that represents the object's appearance in the training set can lead to similar or better performance. Further, the improved segmentation performance has a positive influence on context inference, particularly the detection of needle and thread states. Our time analysis confirms that both the segmentation and context inference can be performed within the runtime constraints, opening up possibilities for runtime applications like surgical workflow analysis, skill assessment, and error detection. Future work will focus on evaluating this method using data from real surgical procedures.

## Acknowledgment

This work was supported in part by the National Science Foundation grant CNS-2146295.
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Batch Size & All Tasks & \multicolumn{2}{c|}{Suturing} & \multicolumn{2}{c|}{Needle Passing} & \multicolumn{2}{c|}{Knot Tying} \\ \hline (Time ms) & Segmentation & Context & Total & Context & Total & Context & Total \\ \hline 5 (167 ms) & 82 & 29 & 111 & 46 & 128 & 47 & 119 \\ \hline 10 (333 ms) & 180 & 26 & 206 & 27 & 207 & 21 & 201 \\ \hline 15 (500 ms) & 286 & 22 & 308 & 26 & 312 & 19 & 305 \\ \hline 20 (667 ms) & 424 & 22 & 446 & 25 & 448 & 17 & 441 \\ \hline 25 (833 ms) & 576 & 21 & 597 & 24 & 600 & 18 & 594 \\ \hline \end{tabular} \end{table} TABLE IV: Segmentation and Context Inference Time per Batch \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{3}{c|}{Left Grasper} & \multicolumn{3}{c|}{Right Grasper} & \multicolumn{3}{c|}{Needle} & \multicolumn{3}{c|}{Thread} & \multicolumn{3}{c|}{Ring} \\ \hline IOU & CI (\(-\)) & CI (\(+\)) & std & CI (\(-\)) & CI (\(+\)) & std & CI (\(-\)) & CI (\(+\)) & std & CI (\(-\)) & CI (\(+\)) & std & CI (\(-\)) & CI (\(+\)) & std \\ \hline Suturing & 0.874 & 0.879 & 0.0015 & 0.840 & 0.846 & 0.0017 & 0.532 & 0.552 & 0.0061 & 0.812 & 0.825 & 0.0037 & & & \\ \hline Needle Passing & 0.825 & 0.828 & 0.001 & 0.712 & 0.742 & 0.0093 & 0.228 & 0.292 & 0.0197 & 0.560 & 0.585 & 0.0077 & 0.648 & 0.665 & 0.005 \\ \hline Knot Tying & 0.834 & 0.842 & 0.0024 & 0.815 & 0.823 & 0.0025 & & & & 0.787 & 0.799 & 0.0038 & & & \\ \hline \end{tabular} \end{table} TABLE III: 95% Confidence Interval (CI) and Standard Deviation for IOU for Batched Input Fig. 2: Comparison of Deeplab Baseline to STCN Outputs
2303.16845
Collisionally Stable Gas of Bosonic Dipolar Ground State Molecules
Stable ultracold ensembles of dipolar molecules hold great promise for many-body quantum physics, but high inelastic loss rates have been a long-standing challenge. Recently, it was shown that gases of fermionic molecules can be effectively stabilized through external fields. However, many quantum applications will benefit from molecular ensembles with bosonic statistics. Here, we stabilize a bosonic gas of strongly dipolar NaCs molecules against inelastic losses via microwave shielding, decreasing losses by more than a factor of 200 and reaching lifetimes on the scale of 1 second. We also measure high elastic scattering rates, a result of strong dipolar interactions, and observe the anisotropic nature of dipolar collisions. Finally, we demonstrate evaporative cooling of a bosonic molecular gas to a temperature of 36(5) nK, increasing its phase-space density by a factor of 20. This work is a critical step towards the creation of a Bose-Einstein condensate of dipolar molecules.
Niccolò Bigagli, Claire Warner, Weijun Yuan, Siwei Zhang, Ian Stevenson, Tijs Karman, Sebastian Will
2023-03-29T16:51:07Z
http://arxiv.org/abs/2303.16845v2
# Collisionally Stable Gas of Bosonic Dipolar Ground State Molecules ###### Abstract Stable ultracold ensembles of dipolar molecules hold great promise for many-body quantum physics, but high inelastic loss rates have been a long-standing challenge. Recently, it was shown that gases of fermionic molecules can be effectively stabilized through external fields. However, many quantum applications will benefit from molecular ensembles with bosonic statistics. Here, we stabilize a bosonic gas of strongly dipolar NaCs molecules against inelastic losses via microwave shielding, decreasing losses by more than a factor of 200 and reaching lifetimes on the scale of 1 second. We also measure high elastic scattering rates, a result of strong dipolar interactions, and observe the anisotropic nature of dipolar collisions. Finally, we demonstrate evaporative cooling of a bosonic molecular gas to a temperature of 36(5) nK, increasing its phase-space density by a factor of 20. This work is a critical step towards the creation of a Bose-Einstein condensate of dipolar molecules. ## I Introduction Ultracold gases of atoms and molecules have revolutionized the experimental exploration of many-body quantum systems [1; 2; 3; 4; 5]. In recent years, systems with dipolar long-range interactions have gained rapid traction, enabling the creation of new types of strongly correlated and highly entangled quantum matter. Magnetic atoms [6], operating in a regime of relatively weak dipole-dipole interactions, have allowed the realization of quantum ferrofluids [7; 8] and the creation of droplet [9; 10] and supersolid phases [11; 12]. Rydberg atoms [13], operating in a regime of extremely strong interactions, have given rise to phases with crystalline order [14; 15], simulation of quantum magnetic models [16; 17; 18], and the controlled creation of entanglement [19; 20]. Dipolar molecules [21] are expected to operate in an intermediate regime where kinetic energy and interaction energy are on a similar scale, giving rise to nontrivial correlations and self-organization. Predicted phases include strongly interacting superfluids [22; 23; 24], supersolids [25; 26; 27], dipolar crystals [28; 29], and Mott insulators with fractional filling [30]. However, molecules have been found to be prone to strong inelastic loss [31; 32; 33]. Suppression of losses has emerged as a critical prerequisite to realize stable many-body quantum systems of dipolar molecules. Quantum statistics plays an important role in molecular loss dynamics [34]. Fermionic molecules are intrinsically less prone to inelastic loss than bosonic ones. For indistinguishable fermions, the probability of reaching short range in a two-body collision is suppressed by the \(p\)-wave centrifugal barrier [35; 36; 37]. For bosons, such a barrier is absent and the rate of two-body loss is typically one to two orders of magnitude larger than for fermions [32]. To reduce loss below the natural rate, shielding techniques have been proposed that utilize external electric fields to engineer a repulsive barrier for intermolecular collisions, leveraging the rich internal state structure of molecules [38; 39; 40; 41; 42]. Microwave shielding was demonstrated in a proof-of-principle experiment for two bosonic CaF molecules in an optical tweezer trap [43]. For bulk gases of fermionic molecules, shielding with d.c. 
electric fields was shown for KRb [44; 45; 46] and microwave shielding for NaK [47; 48], suppressing inelastic loss by about an order of magnitude, sufficient to demonstrate evaporative cooling. Whether loss in bosonic molecular gases can be sufficiently suppressed to enable evaporative cooling has remained an open question. Here, we demonstrate the stabilization of a gas of bosonic sodium-cesium (NaCs) molecules against inelastic loss via microwave shielding. NaCs is strongly dipolar, with a permanent dipole moment of \(d_{0}=4.75(20)\) D [49], which enhances losses of unshielded molecules [32], but in turn makes shielding more effective. We suppress two-body loss by more than a factor of 200, increasing the lifetime of dense ensembles, with an interparticle spacing of about 1 \(\mu\)m, from 16(2) ms to 1.0(1) s. The microwave field also induces a dipole moment of up to 1.3 D, leading to dipolar interactions that enhance elastic collisions. In a cross-thermalization experiment, we measure strong elastic interactions and obtain a ratio of elastic-to-inelastic collisions of up to \(\gamma=4(1)\times 10^{3}\). Under these conditions, we demonstrate the evaporative cooling of the bosonic molecular gas, increasing its phase-space density (PSD) by a factor of 20 and reaching a temperature of 36(5) nK. ## II Shielding Our experiment begins with a gas of \(3.0(5)\times 10^{4}\) NaCs molecules prepared in their rovibrational ground state at a temperature of 750(50) nK. The molecules are held in an optical dipole trap (ODT) with trap frequencies \((\omega_{x},\omega_{y},\omega_{z})/(2\pi)=(60,65,140)\) Hz at a magnetic field of about 864 G (Fig. 1A) which sets the quantization axis. Details on the sample preparation can be found in [50; 51; 52]. Initially, the molecules are in the rotational ground state \(|J,\ m_{J}\rangle=|0,\ 0\rangle\), where \(J\) denotes the total angular momentum excluding nuclear spin and \(m_{J}\) its projection onto the quantization axis. Then, a circularly polarized microwave field is applied at a frequency that is blue-detuned by an amount \(\Delta\) from the resonance \(\omega_{\text{res}}\) with the excited state \(|J,\ m_{J}\rangle=|1,\ 1\rangle\) (Fig. 1B). The field is created by a phased-array antenna [53], ensuring a high purity of polarization. The intensity of the microwave field is adiabatically increased within 40 \(\mu\)s, transferring each molecule into the state \(|+\rangle=\cos(\phi)|0,\ 0\rangle+\sin(\phi)|1,\ 1\rangle\), where the mixing angle \(\phi\) is defined by \(\sin(2\phi)=1/\sqrt{1+(\Delta/\Omega)^{2}}\) and \(\Omega\) denotes the Rabi frequency. The orthogonal dressed state, \(|-\rangle=\sin(\phi)|0,\ 0\rangle-\cos(\phi)|1,\ 1\rangle\), remains unpopulated. Fig. 1C shows the energy splitting between the dressed states as a function of Rabi coupling. In a semiclassical picture, the dressed states represent dipoles rotating in the \(xy\)-plane at a frequency \(\omega_{\text{res}}+\Delta\). Due to the superposition of opposite parity states \(|0,0\rangle\) and \(|1,1\rangle\), the dressed states feature an induced dipole moment. The effective dipole moment, \(d_{\text{eff}}\), as a function of \(\Delta/\Omega\) is shown in Fig. 1D. 
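For a rough numerical feel for the dressed-state picture described above, the short sketch below evaluates the dressed-state splitting, the mixing angle, and the induced dipole moment using the explicit relation \(d_{\text{eff}}=d_{0}/\sqrt{12(1+(\Delta/\Omega)^{2})}\) quoted later in the text. This is a minimal single-molecule illustration, not the coupled-channel machinery used for the actual scattering calculations.

```python
import numpy as np

D0_NACS = 4.75  # permanent dipole moment of NaCs in Debye (from the text)

def dressed_state_quantities(omega_mhz, delta_mhz, d0=D0_NACS):
    """Single-molecule dressed-state quantities for blue-detuned microwave dressing.
    omega_mhz and delta_mhz are Omega/(2*pi) and Delta/(2*pi) in MHz."""
    ratio = delta_mhz / omega_mhz
    omega_eff = np.hypot(omega_mhz, delta_mhz)            # Omega_eff/(2*pi) in MHz
    phi = 0.5 * np.arcsin(1.0 / np.sqrt(1.0 + ratio**2))  # mixing angle, sin(2*phi) = 1/sqrt(1+(Delta/Omega)^2)
    d_eff = d0 / np.sqrt(12.0 * (1.0 + ratio**2))         # induced lab-frame dipole moment in Debye
    return omega_eff, phi, d_eff

# Shielding conditions used in the experiment: Omega/(2*pi) = 4 MHz, Delta/Omega = 1 and 1.5
for delta_mhz in (4.0, 6.0):
    print(dressed_state_quantities(4.0, delta_mhz))
```

At small \(\Delta/\Omega\) this approaches \(d_{0}/\sqrt{12}\), the scale of the roughly 1.3 D induced dipole moment mentioned in the introduction.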
At large intermolecular distances, shielded molecules interact through the long-range dipole-dipole interaction \(V_{\text{dd}}=d_{\text{eff}}^{2}(3\cos^{2}\theta-1)/(4\pi\epsilon_{0}R^{3})\)[54; 55], where \(\theta\) denotes the angle between the rotation axis of the dipole and the intermolecular axis, \(\epsilon_{0}\) the vacuum permittivity, and \(R\) the intermolecular distance. When approaching each other, molecules in the \(|+\rangle\) state mutually align the orientation of their dipoles and repel each other [41]. This is illustrated by the dressed intermolecular potentials shown in Fig. 1E. The repulsion prevents the molecules from reaching short range and suppresses loss from inelastic collisions [41]. The shielding efficiency is limited by residual loss channels, such as tunneling of the molecule pair through the microwave barrier, reaching short range, as well as non-adiabatic transitions to other scattering channels between \(|+\rangle\) and \(|-\rangle\) and the spectator states \(|J,\ m_{J}\rangle=|1,\ 0\rangle\) and \(|1,\ -1\rangle\), collectively labeled \(|0\rangle\). For optimal parameters of microwave shielding, we observe an enhancement of the lifetime from 16(2) ms to 1.0(1) s for ensembles with an initial peak density of \(1.0(2)\times 10^{12}\) cm\({}^{-3}\). Fig. 2A displays lifetime data of unshielded and shielded molecules, illustrating this dramatic change. As a function of hold time in the dipole trap, \(t_{\text{hold}}\), we track both the molecule number and temperature, as shown in Fig. 2A and B, and fit the data with a kinetic model that includes one-body, two-body and evaporative losses [52]. ## III Inelastic collisions We record lifetime data under different microwave parameters, which allows us to extract the two-body loss rate coefficient, \(\beta_{\text{2B}}\), as a function of \(\Delta/\Omega\). Our study is conducted at \(\Omega/(2\pi)=4.0(4)\) MHz. This value was chosen after we measured a plateau in the shielding quality as a function of Rabi frequency between \(\Omega/(2\pi)=4\) MHz and \(\Omega/(2\pi)=10\) MHz, with loss rates increasing on either side of this range [52]. The measured loss rate coefficients are shown in Fig. 2C. For \(\Delta/\Omega>1\), the trend of the data agrees well with the results of a coupled-channel calculation that takes into account our measured microwave ellipticity of \(\xi=3(2)^{\circ}\)[52; 53]. At \(\Delta/\Omega=1\), the data shows a 225-fold reduction in the loss Figure 1: Microwave shielding of NaCs molecules. (**A**) Illustration of the trapped molecular gas. Molecular dipoles are set into rotation by the electric field of a \(\sigma^{+}\)-polarized microwave field generated by an antenna array. Vertical beams for stimulated Raman adiabatic passage (STIRAP) allow for time-of-flight expansion of the microwave shielded gas up to 50 \(\mu\)m cloud waist, enabling precise measurement of temperature. (**B**) Rotational levels of NaCs at 864 G. The states \(|J,m_{J}\rangle=|0,0\rangle\) and \(|1,1\rangle\) are split by an energy \(\hbar\omega_{\text{res}}\) with \(\omega_{\text{res}}=2\pi\times 3.471323(2)\) GHz. The microwave field is blue-detuned with respect to the resonance by an amount \(\Delta\). (**C**) Energy of the single-molecule dressed states, \(|+\rangle\) and \(|-\rangle\), as a function of Rabi frequency. The states are split by an energy \(\hbar\Omega_{\text{eff}}=\hbar\sqrt{\Omega^{2}+\Delta^{2}}\). 
(**D**) Effective dipole moment in the lab frame of the \(|+\rangle\) state as a function of \(\Delta/\Omega\). (**E**) Potential energy curves of a pair of microwave dressed molecules approaching in the \(s\)-wave channel for \(\Omega/(2\pi)=4\) MHz and \(\Delta/(2\pi)=6\) MHz. The adiabatic potentials for \(|++\rangle\) (blue solid line), \(|+0\rangle\) (grey solid line), and \(|+-\rangle\) (red solid line) are shown. Molecules are either (1) reflected by the repulsive potential, (2) lost to non-shielded states, or (3) reach short range. The inset shows the difference between bosonic NaCs (\(s\)-wave scattering, solid line) and a hypothetical fermionic NaCs molecule (\(p\)-wave scattering with \(m_{l}=\pm 1\), dashed line) for which the \(p\)-wave barrier provides further shielding. rate coefficient compared to the unshielded case, going from \(\beta_{\rm 2B}=450(50)\times 10^{-12}\) cm\({}^{3}\)/s to \(\beta_{\rm 2B}=2.0(5)\times 10^{-12}\) cm\({}^{3}\)/s. For \(\Delta/\Omega<1\), the data deviates from the theoretical expectations. We compare the data taken at 750(50) nK to a second run at 160(10) nK. While for \(\Delta/\Omega>2\) the two runs are practically indistinguishable, there is a marked difference for \(\Delta/\Omega<2\) with a significant uptick of the inelastic rate coefficient for colder temperatures. Based on the coupled-channels calculation, which only takes into account two-body physics, such a temperature dependence is not expected. Three-body effects, not accounted for in the calculation, may be a driver of this physics, potentially in conjunction with heating caused by microwave-induced loss. A detailed understanding of these effects in future work may allow reaching even lower loss rates, as predicted by theory. ## III Elastic collisions In addition to the suppression of inelastic collisions, the dipole moment induced by the microwave field enhances elastic collisions. The effective dipole moment depends on the microwave parameters as \(d_{\rm eff}=d_{0}/\sqrt{12(1+(\Delta/\Omega)^{2})}\) (see Fig. 1D), and the resulting dipole-dipole interactions vary as a function of the shielding parameter, \(\Delta/\Omega\). For example, for small \(\Delta/\Omega\) the dressed state approaches an equal superposition of \(|0,0\rangle\) and \(|1,1\rangle\), leading to a maximal induced dipole moment and an enhancement of the dipolar contribution to the elastic collisions. As the shielding parameter \(\Delta/\Omega\) is varied, the molecular gas probes two different regimes of dipolar scattering depending on the relative magnitude of the thermal energy, \(k_{\rm B}T\), and the dipolar energy, \(E_{\rm d}=d_{\rm eff}^{2}/(4\pi\epsilon_{0}\,a_{\rm d}^{3})\)[57], where \(a_{\rm d}=Md_{\rm eff}^{2}/(8\pi\hbar^{2}\epsilon_{0})\) is the dipolar length and \(M\) the molecular mass. The elastic scattering cross section, \(\sigma_{\rm el}\), varies in magnitude, temperature dependence, and anisotropy depending on which of these energies is dominant. For \(k_{\rm B}T\gg E_{\rm d}\), collisions are semiclassical and \(\sigma_{\rm sc}=8\pi\,a_{\rm d}/(3k)\), where \(k=\sqrt{\pi Mk_{\rm B}T}/\hbar\) is the thermally-averaged collisional \(k\)-vector. For \(k_{\rm B}T\leq E_{\rm d}\), i.e. 
as the thermal deBroglie wavelength, \(\lambda_{\rm th}=h/\sqrt{2\pi Mk_{\rm B}T}\), approaches or exceeds the length scale of dipole-dipole interactions, the collisional properties are modified and the cross section enters the threshold regime, becoming \(\sigma_{\rm th}=32\pi\,a_{\rm d}^{2}/45+8\pi a_{s}^{2}\), where \(a_{s}\) is the \(s\)-wave scattering length of the molecules [57]. Since \(d_{\rm eff}\) is a function of \(\Delta/\Omega\), our experiment accesses both regimes, with \(k_{\rm B}T\approx E_{\rm d}\) at large microwave detunings and \(k_{\rm B}T\gg E_{\rm d}\) close to resonance. We measure the elastic collision cross section via a cross-thermalization experiment (Fig. 3). For these measurements we keep the peak density below \(0.2\times 10^{12}\) cm\({}^{-3}\) to avoid entering the hydrodynamic regime, where the thermalization rate would be limited to the mean trap frequency [47]. At constant ODT depth, fast heating of the cloud to a set temperature is induced, followed by evaporative cooling along the vertical \(z\)-axis and cross-thermalization in the \(xy\)-plane. During the se Figure 2: Lifetime and inelastic collisions of microwave shielded NaCs molecules. (**A**) Lifetime of molecular ensembles with (blue) and without (grey) shielding. The dashed lines indicate the respective \(1/e\) lifetimes. Error bars show \(1\sigma\) standard-error-of-the-mean. The shielded data is taken at \(\Omega/(2\pi)=4\) MHz and \(\Delta/\Omega=1\). (**B**) Temperature evolution of shielded (blue) and unshielded (grey) samples, corresponding to the data in panel (**A**). The solid curves in (**A**) and (**B**) are fits of the solutions of a kinetic model for molecule number and temperature. (**C**) Measured inelastic rate coefficient as a function of \(\Delta/\Omega\) at \(\Omega/(2\pi)=4\) MHz. Circles represent data points taken at 750(50) nK and squares are taken at 160(10) nK. The black dotted line corresponds to the measured two-body loss rate coefficient in the absence of microwave shielding at 750(50) nK. The orange shaded area shows a coupled-channel calculation for microwave ellipticities between \(1^{\circ}\) and \(5^{\circ}\) at 750 nK. The calculation is scaled by a factor of 2 to highlight the matching trend between experiment and theory. Insets show the relevant experimental sequence. When shielding is ramped on, the ODT power is adjusted to compensate for the change in AC polarizability of the dressed state, ensuring that trap frequencies remain constant [56]. Microwave shielding is kept on during time-of-flight to prevent inelastic losses in the initial phase of time-of-flight. quence, we follow the temperature evolution of the cloud in \(x\)-direction, as shown in Fig. 3A. Via the kinetic model [52], we extract the thermalization rate \(\Gamma_{\rm th}=\sigma_{\rm el}\bar{n}\,v_{\rm th}/N_{\rm col}\)[47], where \(v_{\rm th}=4\sqrt{k_{\rm B}T/(\pi M)}\) is the molecules' mean thermal velocity, \(\bar{n}\) the mean density of the cloud, and \(N_{\rm col}\) the average number of collisions required for cross-thermalization between the \(z\)-axis and the \(xy\)-plane. Smaller \(N_{\rm col}\) means more efficient energy transfer. From \(\Gamma_{\rm th}\) we obtain the elastic scattering cross section \(\sigma_{\rm el}\). The anisotropic nature of dipolar interactions has a profound impact on the thermalization dynamics via the value of \(N_{\rm col}\). 
Close to resonance, where the molecular gas is in the semiclassical regime and \(E_{\rm d}\) is small, forward collisions that do not deflect the molecules' trajectories by a large angle are favored, thus limiting the transverse energy transfer. More efficient energy redistribution is achieved in the threshold regime at larger detunings, when the induced dipole moment is lower and \(E_{\rm d}\) becomes larger. This is shown in Fig. 3B, in which a calculation of \(N_{\rm col}\) for cross-thermalization between the \(x\) and \(z\) (or \(x\) and \(y\)) axes is provided. Unlike experiments on fermionic dipoles [58; 46] where there is only the dipole-dipole interaction, bosonic systems also have an \(s\)-wave van der Waals contribution to elastic scattering. The scattering length, \(a_{s}\), is not known for NaCs and we obtain a value of \(a_{s}=1200\ a_{0}\) from a fit to our data [52]. With \(a_{s}\) being the only free fitting parameter, we find excellent agreement for \(\sigma_{\rm el}\) between the experiment and a coupled channel calculation, as shown in Fig. 3C. From the measured elastic and inelastic collision rates, we calculate the ratio of elastic-to-inelastic collisions, \(\gamma\), as shown in Fig. 3D. We observe a peak value of \(\gamma\approx 4(1)\times 10^{3}\) at \(\Delta/\Omega=1\). The quantity \(\gamma\) is typically used as a key parameter to characterize the efficacy of forced evaporative cooling [60; 59]. However, evaporative cooling with dipolar elastic collisions in the semiclassical regime is qualitatively different from evaporation in systems with \(s\)-wave or threshold dipolar interactions, as is typically the case in atomic and molecular systems, including the recent demonstrations of evaporative cooling in fermionic dipolar molecules [44; 47]. In our case the reduced quantity \(\gamma/N_{\rm col}\), rather than \(\gamma\), sets the thermalization rate, and thus the evaporation efficiency. Our highest value of \(\gamma/N_{\rm col}\approx 250\) is still favorable for efficient evaporation [43; 47]. ## IV Evaporation We demonstrate evaporative cooling in the stabilized ultracold gas of NaCs molecules. We start with a gas at a temperature of 750(50) nK and a PSD of \(5(1)\times 10^{-3}\). Then, the depth of the ODT is continuously reduced over 1.5 s, while the molecular cloud is shielded at \(\Omega/(2\pi)=4\) MHz and \(\Delta/(2\pi)=6\) MHz. At different stages of the evaporation, we measure the molecule number and temperature, and extract the phase-space density of the cloud, as shown in Fig. 4. At the end of the cooling sequence, we reach a temperature Figure 3: Elastic collisions of microwave shielded NaCs molecules. (**A**) Cross-thermalization experiment at \(\Omega/(2\pi)=4\) MHz and \(\Delta/(2\pi)=6\) MHz. First, shielding is turned off and two-body loss in the unshielded gas is used to heat the sample to between 650 and 800 nK; then shielding is turned back on, triggering evaporation along the \(z\)-axis and cross-thermalization in the \(xy\)-plane. Grey data points show the temperature evolution of the unshielded molecules, blue data points show the temperature in the \(xy\)-plane after shielding is turned back on. The blue solid line shows a fit to the data using the kinetic model. (**B**) Theoretical calculation of \(N_{\rm col}\) as a function of \(\Delta/\Omega\). The dark grey (light grey) band is the calculated number of collisions for \(xz\) (\(xy\))-thermalization for a temperature range between 650 nK and 800 nK. 
The dashed line indicates \(N_{\rm col}=2.5\) as expected for an isotropically interacting system. (**C**) Measured elastic cross section as a function of \(\Omega/(2\pi)=4\) MHz. The orange shaded band is the integral scattering cross section from the coupled-channel calculation for a temperature range between 650 nK and 800 nK [52]. The grey shaded area corresponds to the hydrodynamic limit [47], which we do not enter at the densities used for this measurement. The dashed vertical line marks the transition between the semiclassical (\(kT_{\rm B}>E_{\rm d}\)) and threshold (\(kT_{\rm B}<E_{\rm d}\)) regimes of elastic scattering. (**D**) Ratio of elastic-to-inelastic collisions, \(\gamma\), as a function of shielding parameter \(\Delta/\Omega\). of \(36(5)\) nK and a corresponding PSD of \(0.10(3)\). For small molecule numbers, the measured phase-space density seems to show a plateau, likely the result of the limited signal-to-noise of our detection system. The extracted evaporation efficiency, \(-d\text{ln}(\text{PSD})/d\text{ln}(N)\), is \(1.0(1)\). This efficiency is similar to those found in recent work on evaporative cooling of fermionic ground state molecules [44; 46; 47]. We note that, besides the prospect of cooling the molecular gas to degeneracy, evaporative cooling allows the preparation of molecular samples at well-defined temperatures over a wide dynamic range, which will facilitate studies of quantum chemistry and collisional physics in bosonic molecular gases [33]. ## Outlook In this work, we have demonstrated that ultracold gases of bosonic ground state molecules can be effectively stabilized via microwave shielding, reaching low inelastic loss rates similar to shielded fermionic molecules despite the absence of a \(p\)-wave barrier. Our data on anisotropic cross-thermalization shows that, even above quantum degeneracy, the strongly dipolar character of our gas leads to nontrivial thermodynamic behavior. Dipolar liquids similar to our system have recently been predicted to show anisotropic thermal conductivity [61] and viscosity [62]. Thanks to the rapid tunability of microwave parameters \(\Omega\) and \(\Delta\), allowing the quasi-instantaneous tuning of inelastic and elastic scattering properties, novel non-equilibrium measurement protocols can be envisioned to probe such thermodynamics. A key question that emerges from this work is whether Bose-Einstein condensation of microwave-shielded dipolar molecules can be achieved. The measured gain in phase-space density brings us to the brink of Bose-Einstein condensation. Following the observed trend in evaporation, a BEC with 100 molecules would be expected. This is currently below the signal-to-noise level of our imaging system, but straightforward upgrades will provide the necessary improvements. In addition, further reduction of microwave noise and a better understanding of inelastic loss at small detunings may allow us to reach even lower loss levels, as predicted by theory. Field-linked resonances [38; 48], which should be accessible for NaCs at low microwave ellipticity and moderate microwave intensity [48], allow for independent tuning of \(a_{\text{d}}\) and \(a_{\text{s}}\)[42] and offer a tuning knob to further improve on the scattering properties of our molecules. Using field-linked resonances to tailor interactions may also tune \(N_{\text{col}}\) close to or below the \(s\)-wave value of 2.5 [63], increasing the thermalization rate by an order of magnitude, enhancing our evaporation. 
Starting from shielded three-dimensional bulk samples of bosonic molecules, efficient transfer with minimal loss to lower dimensional systems comes within reach. In two-dimensional systems, microwave shielding and d.c. electric fields should enable the realization of strongly correlated phases [28], such as supersolidity and self-organized crystallization of dipoles, both in single layers [27] and multilayers [23]. NaCs molecules are a highly promising platform for such explorations due their large dipole moment, which allows a characteristic range of dipolar interactions, \(a_{\text{d}}\), of tens of micrometers. Shielded loading of optical lattices, in particular from a BEC of molecules, may offer a way to reach unity filling, necessary to realize extended Hubbard models [30; 64], and enabling studies of many-body spin models in defect-free molecular arrays [65; 66; 67; 68]. ## Acknowledgements We thank Andreas Schindewolf, Goulven Quemener, and Timon Hilker for helpful discussions, Michal Lipson and Javad Shabani for the loan of equipment, and Tarik Yefsah for critical reading of the manuscript. This work was supported by an NSF CAREER Award (Award No. 1848466), an ONR DURIP Award (Award No. N00014-21-1-2721), and a Lenfest Junior Faculty Development Grant from Columbia University. C.W. acknowledges support from the Natural Sciences and Engineering Research Council of Canada (NSERC). W.Y. acknowledges support from the Croucher Foundation. I.S. was supported by the Ernest Kempton Adams Fund. S.W. acknowledges additional support from the Alfred P. Sloan Foundation. ## References Figure 4: Evaporation of microwave dressed molecules at \(\Omega/(2\pi)=4\) MHz and \(\Delta/(2\pi)=6\) MHz. (**A**) Evolution of phase-space density during evaporation. The insets show images of molecular gases after 3 ms of time-of-flight. The low temperature cloud is an average of five images; the high temperature cloud is a single image. (**B**) Temperature evolution corresponding to panel (A).
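For orientation, the length and energy scales that control the elastic scattering discussed in this work can be estimated directly from the expressions given in the Elastic collisions section. The sketch below is a rough order-of-magnitude illustration (SI unit bookkeeping only, no coupled-channel physics); the value \(a_{s}=1200\,a_{0}\) is the fit quoted in the text, and both cross-section formulas are evaluated regardless of which regime actually applies at the chosen temperature.

```python
import numpy as np
from scipy import constants as const

M = (22.989769 + 132.905452) * const.atomic_mass   # NaCs mass in kg
DEBYE = 3.33564e-30                                 # 1 Debye in C*m

def collision_scales(d_eff_debye, temperature_k, a_s_bohr=1200.0):
    d = d_eff_debye * DEBYE
    a_s = a_s_bohr * const.physical_constants["Bohr radius"][0]
    a_d = M * d**2 / (8 * np.pi * const.hbar**2 * const.epsilon_0)   # dipolar length
    e_d = d**2 / (4 * np.pi * const.epsilon_0 * a_d**3)              # dipolar energy
    k = np.sqrt(np.pi * M * const.k * temperature_k) / const.hbar    # thermal collision wave vector
    sigma_sc = 8 * np.pi * a_d / (3 * k)                             # semiclassical regime, k_B T >> E_d
    sigma_th = 32 * np.pi * a_d**2 / 45 + 8 * np.pi * a_s**2         # threshold regime, k_B T <= E_d
    return {"a_d (um)": a_d * 1e6,
            "E_d/k_B (nK)": e_d / const.k * 1e9,
            "sigma_sc (um^2)": sigma_sc * 1e12,
            "sigma_th (um^2)": sigma_th * 1e12}

# d_eff ~ 1.3 D near resonance, 750 nK sample as in the measurements
print(collision_scales(1.3, 750e-9))
```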
2303.07790
Object Detection During Newborn Resuscitation Activities
Birth asphyxia is a major newborn mortality problem in low-resource countries. International guidelines provide treatment recommendations; however, the importance and effect of the different treatments are not fully explored. The available data was collected in Tanzania, during newborn resuscitation, for analysis of the resuscitation activities and the response of the newborn. An important step in the analysis is to create activity timelines of the episodes, where activities include ventilation, suction, stimulation etc. Methods: The available recordings are noisy real-world videos with large variations. We propose a two-step process in order to detect activities possibly overlapping in time. The first step is to detect and track the relevant objects, like bag-mask resuscitator, heart rate sensors etc., and the second step is to use this information to recognize the resuscitation activities. The topic of this paper is the first step, and the object detection and tracking are based on convolutional neural networks followed by post processing. Results: The performance of the object detection during activities was 96.97 % (ventilations), 100 % (attaching/removing heart rate sensor) and 75 % (suction) on a test set of 20 videos. The system also estimates the number of health care providers present with a performance of 71.16 %. Conclusion: The proposed object detection and tracking system provides promising results in noisy newborn resuscitation videos. Significance: This is the first step in a thorough analysis of newborn resuscitation episodes, which could provide important insight about the importance and effect of different newborn resuscitation activities.
Øyvind Meinich-Bache, Kjersti Engan, Ivar Austvoll, Trygve Eftestøl, Helge Myklebust, Ladislaus Blacy Yarrot, Hussein Kidanto, Hege Ersdal
2023-03-14T11:04:50Z
http://arxiv.org/abs/2303.07790v1
# Object Detection During Newborn Resuscitation Activities ###### Abstract _Objective:_ Birth asphyxia is a major newborn mortality problem in low-resource countries. International guidelines provide treatment recommendations; however, the importance and effect of the different treatments are not fully explored. The available data was collected in Tanzania, during newborn resuscitation, for analysis of the resuscitation activities and the response of the newborn. An important step in the analysis is to create activity timelines of the episodes, where activities include ventilation, suction, stimulation etc. _Methods:_ The available recordings are noisy real-world videos with large variations. We propose a two-step process in order to detect activities possibly overlapping in time. The first step is to detect and track the relevant objects, like bag-mask resuscitator, heart rate sensors etc., and the second step is to use this information to recognize the resuscitation activities. The topic of this paper is the first step, and the object detection and tracking are based on convolutional neural networks followed by post processing. _Results:_ The performance of the object detection during activities was 96.97 % (ventilations), 100 % (attaching/removing heart rate sensor) and 75 % (suction) on a test set of 20 videos. The system also estimates the number of health care providers present with a performance of 71.16 %. _Conclusion:_ The proposed object detection and tracking system provides promising results in noisy newborn resuscitation videos. _Significance:_ This is the first step in a thorough analysis of newborn resuscitation episodes, which could provide important insight about the importance and effect of different newborn resuscitation activities. Newborn Resuscitation, Automatic Video Analysis, Object Detection, Convolutional Neural Networks ## I Introduction Globally, one million newborns die within the first 24 hours of life each year. Most of these deaths are caused by complications during birth and birth asphyxia, and the mortality rates are highest in low-income countries [1]. As many as 10-20 % of newborns require assistance to begin breathing, and recognition of birth asphyxia and initiation of newborn resuscitation are crucial for survival [1, 2, 3]. International guidelines on newborn resuscitation exist; however, the importance and effect of the different treatments and therapeutic activities are not fully explored. Safer Births1 is a research project to establish new knowledge on how to save lives at birth, and the project has, among other things, collected data during newborn resuscitation episodes at Haydom Lutheran Hospital in Tanzania since 2013. The collected data contains video recordings, ECG and accelerometer measurements from a heart rate sensor (HRS) attached to the newborn, and measurements of pressure, flow and expired CO\({}_{2}\) from a bag-mask resuscitator (BMR). A thorough analysis of the collected data could provide important insight about different effects of the resuscitation activities. To be able to study such effects it is necessary to quantify the series of performed activities, in addition to measuring the condition of the newborn during resuscitation and knowing the outcome. A timeline documenting activities like ventilation, stimulation and suction would be of immense value.
From such a timeline it would be possible to extract parameters like the amount of both total and continuous time used, the number of starts and stops for different activities etc. The generation of the timelines should preferably be done automatically by using the collected signals and/or video, thus allowing large amounts of data to be analyzed. The value of such timelines would clearly be i) for research and increased knowledge on the effects of newborn resuscitation activities. A future implementation of a complete system would also be useful on-site: ii) as a debriefing tool, summarizing the activities with no need to study video recordings and iii) as a real-time feedback system. Footnote 1: www.saferbirths.com Previously, in Huyen et al. [4], our research group proposed an activity detector based on the HRS signals, and the detector discriminated the activities _stimulation_, _chest compressions_ and _other_ with an accuracy of 78.7 %. Stimulation and chest compressions are therapeutic activities, whereas _other_ would include moving and drying the baby, touching the HRS etc. These activities would result in movement in the HRS, and thus be visible in both the ECG and the accelerometer signals, but are not considered therapeutic activities or treatment of the newborn. Using automatic video analysis of the video recordings during the resuscitation episodes could potentially improve the performance achieved using the HRS signals. Furthermore, video analysis could possibly detect activities and information that are difficult or impossible to detect from the ECG and accelerometer signals, like: is the HRS attached to the newborn or not, and how many health care providers (HCPs) are present. The importance of video analysis of newborn resuscitation episodes has been well documented for both evaluation and training purposes [5, 6, 7, 8, 9]. However, manual inspection and annotation are very time-consuming and limit the amount of data that can be analyzed. In addition, a manual inspection entails privacy issues. Thus, there is a need for automatic video analysis of these episodes. Conventional image and pattern recognition methods, e.g. segmentation and tracking, have been applied in automatic video analysis for decades [10], but in recent years Deep Neural Networks (DNNs) have shown their superior strength in the field [11, 12, 13, 14]. In the topic of object and activity detection in resuscitation in general, others have proposed the usage of passive radio-frequency identification (RFID) tags on the objects for object motion and interaction detection [15, 16, 17]. Chakraborty et al. [18] proposed an object and activity detector for trauma resuscitation video recordings based on object segmentation and a Markov Logic Network model. In the area of _newborn_ resuscitation Guo et al. [19] proposed an activity detection system for newborn resuscitation videos based on DNN and linear Support-Vector Machines (SVMs). Their dataset included 17 videos recorded with a frame rate of 25 frames per second (FPS) at a hospital in Nepal, and the group aimed to detect the activities _stimulation_, _suction_, _ventilation_ and _crying_. The pre-trained _Faster RCNN_ network and the object class _People_ were used to propose areas involving the newborn, and motion salient areas were further used as input to two pre-trained Convolutional Neural Networks (CNN) from [11] designed to extract motion and spatial features.
Further, the features were combined and used as input to linear SVMs, trained on their own dataset, to detect the activities. Although there are similarities between the dataset from [19] and our dataset, as both consist of noisy real-world videos with large variations, there are some specific tasks and challenges that differ between the studies. First, we aim to detect activities that are not newborn location dependent or movement dependent, like the number of HCPs present and whether the HRS is attached or not. Second, in our dataset the newborns are wrapped in blankets most of the time, even before being placed at the resuscitation table, and image examples like those from [19], which show fully uncovered newborns, are more infrequent in our dataset. Thus, using a pre-trained _Person_ detection network as suggested in [19] would most likely not be the best approach. In addition, our videos are recorded with variable frame rate, which in some cases is very low and causes motion blurred images of poor quality, resulting in larger per frame motion variations than for images recorded with fixed frame rates. Considering all this, we believe that using an object detection and tracking approach to localize the relevant activity detection areas would be a more robust first step in activity detection. Further, using the areas around each object would simplify the detection problem to a binary classification problem for the specific activities: is the object being used in resuscitation or not. The topic of this paper is the first step, and the object detection and tracking is based on CNNs followed by post processing. Neural networks for object detection require a lot of training data, so in addition to using image frames from the videos, we use histogram matching [20] for augmentation and also a synthetic dataset. The object detection is performed on each video frame and here we use the well known _YOLOv3_[21] network, used in various object detection applications [22, 23, 24]. Post processing is used to fill in missing detections and track the area around the objects during the episodes. ## II Data material The dataset is collected using _Laerdal Newborn Resuscitation Monitors_ (LNRM) [25] and with cameras mounted over the resuscitation tables. The dataset contains almost 500 videos with corresponding LNRM data. The LNRM records the signals measured by the green HRS and the BMR, both shown at the top of Figure 3 C. The video recordings were initiated to provide additional support in cases and research objectives where the other collected signal or observed data were difficult to interpret. However, the videos are of variable quality, and camera and scene settings are not standardized for the different resuscitation tables included in the dataset. The variations are caused by different camera types, camera angles, video resolutions (1024\(\times\)1280, 720\(\times\)1280, and 1200\(\times\)1600), camera distances from resuscitation tables, variable frame rates (2-30 frames per second), unfocused cameras and light settings. All these variations, especially the variable frame rate, make automatic video analysis more challenging. In some cases the frame rate is as low as two frames per second, resulting in motion blurred image frames of poor quality. In Figure 2 some of these challenges are depicted: A) Motion blurring, B) far away camera position, C) occlusion due to camera angle and D) poor lighting conditions.
In addition, the videos also have variations like HCPs using different colored rubber gloves, HCPs that do not wear rubber gloves, different colored HCP uniforms and clothing, and colorful and patterned blankets brought by the mothers to wrap the newborn in. The activity timelines that are relevant to generate are: * 1) Bag-mask ventilations: Respiratory support. * 2) Suction: Removal of fluids from nasal and oral cavities using a device called suction penguin (SP). * 3) HRS attached to newborn or not. * 4) Stimulation: Warming, drying, and rubbing the newborn's back. * 5) Chest compressions: Keep oxygenated blood flowing to the brain and other vital organs. * 6) Number of HCPs present. * 7) Newborn wrapped in blanket or not. Activities 1), 2), 3), 4) and 5) can be detected by tracking the objects BMR, SP, HRS and HCPs' hands (HCPH), and by analyzing their surrounding areas, 6) by counting the number of detected HCPH, and 7) by analyzing an area around the newborn, found from motion analysis and the location of the detected objects. ## III Methods A block scheme of the planned activity detection system is shown in Figure 1. The steps proposed in this paper are encircled with a red dotted line. These include dataset generation using the collected videos, augmentation of images from the collected videos, generation of a synthetic dataset, object detection using YOLOv3 [21], post processing to select the areas surrounding the relevant objects and an estimation of the number of HCPs involved in the resuscitation at each moment in time. ### _Data Generation_ A dataset, _VideoD_, of 3093 images for object detection training is created by selecting evenly spread image frames from 21 randomly selected videos. The objects are manually labelled using the Image Labeler [26]. #### Iii-A1 Augmentation dataset: VideoD is further augmented to a new dataset, _HistD_, by using histogram matching [20]. One frame from each of 10 randomly selected videos is used as a histogram reference frame, and each of the images in _VideoD_ is augmented with each of the reference frames, creating in total 34 023 images. Six of the ten examples of the histogram match augmentation are shown for one of the frames in Figure 3 B. #### Iii-A2 Synthetic dataset A synthetic dataset, _SynthD_, is created in an attempt to generate example images with the variation found in the original dataset. Because of the colourful and patterned blankets used in the resuscitation videos, the objects we want to detect can appear on all kinds of backgrounds, thus over 6000 different backgrounds, both natural images and texture images, are used. First, hands with different coloured gloves and no gloves, two types of BMR that both appear in the collected resuscitation videos, the HRS and the SP were video recorded in front of a blue screen in all possible angles. Object masks are created using video frames, \(I(x,y)_{i}\), where \(x,y\) denote the pixel coordinates and \(i\) the frame number, from the recorded object videos by: \[\mathit{OM}(x,y)_{i,c}=I_{B}(x,y)_{i,c}-I_{L}(x,y)_{i,c}<T_{CK,c} \tag{1}\] where \(c\) denotes the object class, \(I_{B}\) the blue channel, \(I_{L}\) the RGB luminance value \((0.3I_{R}+0.59I_{G}+0.11I_{B})\) and \(T_{CK}\) the chroma key thresholds for each \(c\). Around 6300 masks per class are created on average. Next, a background is randomly drawn from the 6482 examples, and objects and masks are cast at random positions onto the background.
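A minimal sketch of the chroma-key masking in Eq. (1) and the compositing step just described is given below. The function names are illustrative; the per-class thresholds \(T_{CK,c}\in\{80,180\}\) are the values listed later in the Experiments section, and the random scaling, colour jitter and motion blur described in the following paragraph are omitted.

```python
import numpy as np

def chroma_key_mask(frame_rgb, t_ck):
    """Object mask in the spirit of Eq. (1): keep pixels whose blue-channel excess
    over the RGB luminance is below the chroma-key threshold t_ck (blue-screen
    background pixels have a large blue excess). frame_rgb is an H x W x 3 array."""
    frame = frame_rgb.astype(np.float32)
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    luminance = 0.3 * r + 0.59 * g + 0.11 * b
    return (b - luminance) < t_ck

def paste_object(background, obj_img, obj_mask, top_left):
    """Cast a masked object onto a background image (simple paste, no blending).
    Assumes the object patch fits inside the background at the given position."""
    out = background.copy()
    y, x = top_left
    h, w = obj_mask.shape
    region = out[y:y + h, x:x + w]
    region[obj_mask] = obj_img[obj_mask]
    return out
```

In the actual pipeline each pasted object is additionally rescaled to its typical relative size and jittered in hue, saturation and lightness before a small motion blur is applied, as described next.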
One example of each object is used, except for HCPH, where a number between one and three examples is used. The objects are randomly scaled according to the object's typical size relative to the size of the image frame (found from _VideoD_), and hue, saturation and lightness are also randomly chosen between 60-100 % of the original object images. In order to make the object appear as realistic as possible, the final synthetic images are filtered with a small motion blur where the length, \(len\), and angle, \(\theta\), of the motion are randomly chosen. The scene for recording objects, masked objects and an example of a generated synthetic image are shown in Figure 3 C. #### Iii-A3 Split image dataset In an attempt to better utilize the resolution in the video frames and to be able to predict the smallest objects, the images in \(HistD\) are split into five equally sized sub images, generating a new dataset, \(SplitD\). The first four images are generated from splitting the image into four parts, and the fifth is extracted at the center of the original image frame. This fifth sub image would typically Fig. 1: Block scheme of the activity detection system. The red dotted line encircles the steps proposed in this paper. 1: Generated dataset is input to YOLOv3 object detection network. 2: Detected objects. 3: Detected object area after post processing. 4: Sequence of images from areas are used as input to sequential neural networks. 5: Activity time lines are the final output. Fig. 2: A: Motion blurring due to low frame rate, 1024x1280. B: Camera far away, 1200 x 1600. C: Occlusion (ventilating newborn behind health care provider), 1024x1280. D: Poor lighting, 720 x 1280.
It consist of 75 convolutional layers in total and performs downsampling by using convolutional layers with a stride of two instead of using pooling layers. The network also includes residual blocks [27] and performs detection on three different scales in order to detect objects of different size. The detections on the different scales utilize feature maps from deeper layers in a similar concept to feature pyramid networks [28] and the features go through convolutional layers before outputting 3D tensors with dimension: \[N\times N\times[3\times(4+1+C)] \tag{2}\] where \(N\) is the number of grids at that scale (13, 26 and 52 if image size is \(416\times 416\)), 3 the number of bounding boxes for each grid, 4 the box coordinates and size, 1 the objectness prediction, \(oP\), and C the number of object classes. The YOLO algorithm further performs non-maximum suppression: Removing predicted object with an objectness score below a threshold, \(T_{o}\), and by removing predictions of same class where the bounding box overlap more than threshold \(T_{IoU}\). #### Iii-B2 Post processing object detection Post processing is performed on the detection of _BMR_, _SP_ and _HRS_ to fill in missing detections in frames and to create areas surrounding the object throughout the video. Since we can have multiple true occurrences of HCPH in the same frame, HCPH do not undergo these steps. Denote \(obj\in 1:4\) to be the object classes where \(1=\textit{BMR}\), \(2=\textit{SP}\), \(3=\textit{HRS}\) and \(4=\textit{HCPH}\), and \(N_{E,i}\) to represent the number of detections in image, i, of episode, E. For \(obj_{p}\in\{1,2,3\}\subset obj\) we estimate the most likely object position in each \(i\) by; first, creating blank images, \(IB(x,y,obj_{p})_{E,i}\). Second, for each pixel areas, \(pA_{E,i,obj_{p,n}}=\{x_{n}^{E,i,obj_{p}},y_{n}^{E,i,obj_{p}}\}\), representing all pixel coordinates of a detected object, \(obj(n)_{E,i}\), in an image we add the detection's \(oP\) score, \(oP(n)_{E,i,obj_{p}}\), to the matching coordinates in \(IB(x,y,obj_{p})_{E,i}\). #### Iii-BFor \(n=1:N_{E,i}\) do \[\textit{IB}(x,y,obj_{p})_{E,i}=\left\{\begin{array}{c}\textit{IB}(\cdot)+oP (n)_{E,i,obj_{p}},\\ \forall\{x,y\}\in pA_{E,i,obj_{p,n}(n)}\\ \text{if}\quad obj(n)_{E,i}=obj_{p}\\ \textit{IB}(\cdot),\quad\text{otherwise}\end{array}\right. \tag{3}\] Further the centroid coordinates, \((x_{c}^{E,i,obj_{p}},y_{c}^{E,i,obj_{p}})\), of the most likely object position is found from: \[(x_{c}^{(\cdot)},y_{c}^{(\cdot)})=cent(max(IB(x,y,obj_{p})_{E,i}>T_{obj_{p}}) \tag{4}\] Fig. 3: A: Example of a frame used in _VideoD_. B: Examples of histogram match augmented images from _HistD_. C: Scene for recording objects to be used in the generation of synthetic dataset, masked objects and an example of a generated frame in _SynthD_. where \(T_{obj_{p}}\) defines thresholds for the different object classes. Denote \(d\in X,Y\). Each \(x_{c}^{(\cdot)}\) and \(y_{c}^{(\cdot)}\) are stored in location vectors, \(L(i)_{E,d,obj_{p}}\), representing timelines of the center position of each object as a function of the video frames. \(L(i)_{E,d,obj_{p}}\) further undergoes the three post processing steps illustrated with an example in Figure 4, listed as follows: 1) Filling detection gaps by choosing the previous detected value \(\rightarrow\quad Lf(i)_{E,d,obj_{p}}\). 2) Short peak removal. 
If \(||Lf(i)_{(\cdot)}-Lf(i-1)_{(\cdot)}||>T_{peak}\), we check if it is an actual large change in object position, or if it returns to a value where \(||Lf(i+1:i+10)_{(\cdot)}-Lf(i-1)_{(\cdot)}||<T_{stable}\). This step filters out short false detections of the objects, and outputs the peak removed signal, \(Lpr(i)_{E,d,obj_{p}}\). 3) Signal smoothing by applying a moving average filter of length \(N_{f1}\): \[Ls(i)_{E,d,obj_{p}}=\frac{1}{N_{f1}}\sum_{l=-N_{f1}/2}^{N_{f1}/2}Lpr(l)_{E,d,obj_{p}} \tag{5}\] Finally, object area tracking throughout sequences is performed by adding a \(500\times 500\) bounding box, \(BB_{track,E,obj_{p}}\), around each \(Ls(i)_{E,d,obj_{p}}\) onto the original videos. The size of \(BB_{track,E,obj_{p}}\) ensures that it is possible to detect what activities are performed in the area, and thus discriminate the activities from movement and noise. An example of the tracking results is shown in step 3 of Figure 1. ### _Estimation of number of health care providers present_ Timelines of the number of HCPs present in the resuscitation videos are generated from the number of detected hands in the image frames, \(nH(i)_{E}\). _For \(n=1:N_{E,i}\) do_: \[nH(i)_{E}=\left\{\begin{array}{cc}nH(i)_{E}+1,&\text{if }obj(n)_{E,i}=4\\ \text{and }oP(n)_{E,i}>T_{HCPH}\\ nH(i)_{E},&\text{otherwise}\end{array}\right. \tag{6}\] where \(T_{HCPH}\) is a threshold for detection of HCPHs. To remove noise, \(nH(i)_{E}\) is further smoothed by a moving average filter: \[\overline{nH}(i)_{E}=\frac{1}{N_{f2}}\sum_{l=-N_{f2}/2}^{N_{f2}/2}nH(l)_{E} \tag{7}\] where \(N_{f2}\) is the filter size. Finally, \(\overline{nH}(i)_{E}\) is converted to the detected number of HCPs, \(nHCP(i)_{E}\), by: \[nHCP(i)_{E}=\begin{cases}0&\text{if}\quad\overline{nH}(i)_{E}\leq T_{zero}\\ 1&\text{if}\quad T_{zero}<\overline{nH}(i)_{E}\leq T_{one}\\ 2&\text{if}\quad T_{one}<\overline{nH}(i)_{E}\leq T_{two}\\ 3&\text{if}\quad\overline{nH}(i)_{E}>T_{two}\end{cases} \tag{8}\] ## IV Experiments We used the original pretrained weights for YOLOv3, _darknet53_, and trained different models by further training the weights with four different sets of training data, \(VideoD\), \(HistD\), \(HistD+SynthD\) and \(SplitD+SynthD\). An initialization stage is used to get a stable loss by first freezing all layers except the top 3 layers. In the next and final stage all layers are further trained with learning rate decay and early stopping. The batch size was set to 16. The mean Average Precision (mAP) criterion defined in the PASCAL VOC 2012 competition2 was used to compare single-image object detection results from the models trained on the four different mixtures of the datasets. mAP is a function of _precision_, _recall_ and the Intersection over Union (IoU), the overlap between predicted and true bounding box. The threshold for IoU was set to 0.5. Footnote 2: [http://host.robots.ox.ac.uk/pascal/VOC/voc2012/](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/) The best models were further used in detection of the objects and the post processing steps to evaluate the performance of the proposed object regions. The proposed regions were added to the original video and the detection results were manually evaluated by annotating timelines using the video annotation tool ELAN3.
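To make the track post-processing and the HCP-count conversion of Section III concrete, here is a minimal NumPy sketch operating on a 1-D centroid track with NaN marking frames without a detection. The function names and the simplified peak test are illustrative; the default parameter values follow those listed above.

```python
import numpy as np

def fill_gaps(track):
    """Step 1: replace missing detections (NaN) with the previous detected value."""
    filled = track.copy()
    for i in range(1, len(filled)):
        if np.isnan(filled[i]):
            filled[i] = filled[i - 1]
    return filled

def remove_short_peaks(track, t_peak=200, t_stable=50, lookahead=10):
    """Step 2 (simplified): undo jumps larger than t_peak when the following
    frames return to within t_stable of the pre-jump value."""
    out = track.copy()
    for i in range(1, len(out) - lookahead):
        if abs(out[i] - out[i - 1]) > t_peak:
            future = out[i + 1:i + 1 + lookahead]
            if np.all(np.abs(future - out[i - 1]) < t_stable):
                out[i] = out[i - 1]
    return out

def smooth(track, n_f1=5):
    """Step 3: moving average filter of length N_f1 (Eq. 5)."""
    return np.convolve(track, np.ones(n_f1) / n_f1, mode="same")

def hands_to_hcp_count(hand_counts, n_f2=40, t_zero=0.2, t_one=2, t_two=4):
    """Eqs. 6-8: smooth the per-frame hand count and threshold it into 0-3 HCPs."""
    smoothed = np.convolve(hand_counts, np.ones(n_f2) / n_f2, mode="same")
    return np.digitize(smoothed, [t_zero, t_one, t_two], right=True)
```

A \(500\times 500\) box centered on the smoothed track then defines the per-frame region that is cropped out for the subsequent activity recognition step.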
The annotated timelines for each \(E\) are: Footnote 3: [https://lta.mpi.nl/tools/lta-tools/elan/](https://lta.mpi.nl/tools/lta-tools/elan/) * The number of HCPs: \(nHCP_{ref,E}(i)\), * ventilations, attaching or removing HRS, and suction, \(A_{obj_{p},E}(i)\), * is the object visible: \(V_{obj_{p},E}(i)\) and * is the object detected: \(D_{obj_{p},E}(i)\) (\(>\) half the object is included in \(BB_{track,E,obj_{p}}\)) The main task of the object detection and tracking is to find approximate regions around the objects that can be used for further activity recognition. The aim is not to propose very accurate regions that centers the object perfectly, but more importantly to propose smoothly updated regions that surround the object over time. Thus, we classify a tracking result as correct if the object is at least 50 % included in the proposed region. Since our aim is to track a single object of each of the classes _SP_, _HRS_ and _BMR_ throughout the whole video, we Fig. 4: Example of post processing the centroid X-coordinate of the detected bag-mask resuscitator (BMR). Horizontal axis is the image frame in the video and vertical axis the pixel position in the frame. can evaluate the objects individually. The established metric Multiple Object Tracking Accuracy (MOTA) can be seen in the context of single-object short-term tracking and be simplified to the percentage of correctly tracked frames [29]. Thus, the performance, P, is evaluated for each object class and each episode, E, by the general equation \[P=(\frac{1}{N_{s}}\sum_{i=1}^{N_{s}}I_{f}(i))*100 \tag{9}\] where \(N_{s}\) is the number of frames in the episode and \(I_{f}(i)\) an indicator function defined as 1 if \(|detection(i)_{E}-reference(i)_{E}|=0\) and 0 otherwise. The average performance, \(\overline{P}\), of the post processed object detection are estimated using Eq. 9 with \(D_{obj_{p},E}(i)\) as _detection_\(V_{obj_{p},E}(i)\) as _reference_, and by averaging over the episodes. Further, we evaluate the performance of the object detection during the relevant resuscitation activities, ventilation (BMR), Attaching or removing HRS and suction (SP). From \(A_{obj_{p},E}(i)\) we locate the activity sequences and use them as _reference_ in Eq. 9. Their corresponding sequences in time in \(D_{obj_{p},E}(i)\) is here used as _detection_ and an activity is classified as detected if the detection overlap with the reference data \(>\) 80 % of the time. The timelines \(nHCP(i)_{E}\) is found as explained in Section III-C and the average performance, \(\overline{P}\), of the prediction of number of HCPs is estimated using Eq. 9 with \(nHCP_{ref,E}(i)\) as _reference_ and \(nHCP(i)_{E}\) as _detection_. In addition, the average prediction error, \(\overline{E}\), of \(||nHCP_{ref,E}(i)-nHCP(i)_{E}||\) is estimated over the episodes. The total performance, \(P\), of the classes _no HCP_, _one HCP_, _two HCP_ and _three (or more) HCP_ is also estimated using Eq. 9, where the class-relevant sequences in \(nHCP_{ref,E}(i)\) is the _reference_ and the corresponding sequences in time in \(nHCP(i)_{E}\) is the _detection_. When the results are averaged over results from individual episodes, quartile measurements, \(Q\), are also provided. The experiments are done using Python4 and a Keras5 implementation of YOLOv3 developed by user _qqwwee6_ with minor modifications. Since the objects often are occluded in the videos and the camera distance varies, the objects's size and form have large variations. 
Therefore, we have chosen to use the YOLOv3 anchor boxes determined using k-means clustering on the large COCO dataset [21] instead of estimating anchor boxes from our limited truth data. Footnote 4: [https://www.python.org/](https://www.python.org/) Footnote 5: [https://keras.io/](https://keras.io/) Footnote 6: [https://github.com/qqwwee/keras-yolo3](https://github.com/qqwwee/keras-yolo3) The threshold and parameter values used in the experiments are: \(T_{CK,c}\in\{80,180\}\), \(len=3-7\), \(\theta=3-10\), \(T_{o}=0.05\), \(T_{IoU}=0.45\), \(T_{obj}=[0.1,0.05,0.1]\) for BMR, SP and HRS, \(T_{HCPH}=0.1\)\(T_{peak}=200\), \(T_{stable}=50\), \(T_{zero}=0.2\), \(T_{one}=2\), \(T_{two}=4\), \(N_{f1}=5\) and \(N_{f2}=40\). ## V Results The mean average precision, mAP, results are listed in Table I for the object detection using models trained on the datasets \(VideoD\), \(HistD\), \(HistD+SynthD\) and \(SplitD+SynthD\). For the objects HCPH, BMR and HRS using a combination of \(HistD\) and \(SynthD\) and image size \(416\times 416\) provided the best results. There was no significant improvement by increasing the image input size to \(608\times 608\). For detection of SP we achieved the best result by using a model trained on \(SplitD\) and \(SynthD\), and an image size of \(608\times 608\). This model also provided the best overall mAP. episodes, and the last two results are estimated per episode. The performance of the detection of number of HCPs is above 90 % when there are zero or one HCP present. However, for two and more than two HCPs the performance is 53 and 6 % respectively. The mean prediction error is here 0.32, in other words, when the number of estimated HCP is incorrect, it is usually underestimated by one. Figure 5 shows the distribution of the sub groups \(\text{FPS}\leq 8\) and \(\text{FPS}>8\) in the groups _detected_ and _undetected_ SP during suction. For the group _undetected_ we list the most likely reason for why the SP were undetected. The group _others_ represent the sequences where no large challenges was observed during the activity. ## VI Discussion The proposed system shows promising results for object detection and tracking in noisy real-world videos of a newborn resuscitation scene. As proposed in Figure 1 the areas around the objects will be used as input to sequential neural networks trained to recognize the different activities by analyzing the areas for short time sequences. Other relevant areas like the area around the newborn, which could be found from the detected hand movements, and around the detected HCPHs can also be used as inputs to the sequential analysis. Due to the suction penguins transparency and small size, the system struggles with detecting it in some of the episodes. Especially in videos with low frame rate and motion blurred images it could be very difficult to detect a SP held in the hand of a health care provider. In additon, the system also has problem detecting the SP in unfocused video sequences and in activity sequences with large occlusions. Using the sub-image approach and the _SplitD_ model improved the detections of the SP. This suggests that it could be possible to further improve the results by experimenting with the size and cropping of training examples. In addition, we could experiment with the generation of the synthetic data to see if it is possible to generate more realistic examples. 
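One simple way to create such synthetic examples is to composite object cutouts onto background frames and emit a corresponding bounding-box label. The sketch below is purely illustrative and is not necessarily how \(SynthD\) was generated; paths, the scale range, and the normalised label format are assumptions, and the output would need to be adapted to whatever annotation format the training code expects:

```python
import random
from PIL import Image

def make_synthetic_example(background_path, cutout_path, out_image, out_label,
                           class_id=1):
    """Paste an RGBA object cutout at a random position and scale onto a
    background frame, and write a normalised (class, cx, cy, w, h) label.
    Illustrative only; the actual SynthD pipeline may differ."""
    bg = Image.open(background_path).convert("RGB")
    obj = Image.open(cutout_path).convert("RGBA")

    scale = random.uniform(0.5, 1.5)
    obj = obj.resize((max(1, int(obj.width * scale)),
                      max(1, int(obj.height * scale))))
    x = random.randint(0, max(0, bg.width - obj.width))
    y = random.randint(0, max(0, bg.height - obj.height))
    bg.paste(obj, (x, y), mask=obj)   # the alpha channel keeps the cutout shape

    cx, cy = (x + obj.width / 2) / bg.width, (y + obj.height / 2) / bg.height
    w, h = obj.width / bg.width, obj.height / bg.height
    bg.save(out_image)
    with open(out_label, "w") as f:
        f.write(f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}\n")
```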
In future recordings the problem with detection of SP could be solved by using fixed camera settings, focus, frame rate and distance from resuscitation tables and by using two camera angles to avoid occlusion. The performance of detected number of HCPs present in the video is very good for zero and one HCP present, but the system struggles to detect the number of HCPs when there are more than one HCPs present. Instead, in cases of false detection, these are mostly being mislabeled as one HCPs less than the reference data shows. The cause for this is a mixture of variations in the dataset and of camera angles. The system performs worse when the HCPs are not wearing rubber gloves, suggesting the need for more training examples from similar episodes. The cameras are also often placed in a side-position where the HCPs occludes other HCPs and hands. Training the network to discriminate between left and right hands could also improve the performance of the detected number of HCPs present in the videos. ## VII Conclusion and future work The proposed system shows promising object detection and tracking results in noisy real-world videos. The object detection performance during activities was 97 % on ventilation, 100 % on attaching or removing heart rate sensor and 75 % on suction. The system also estimate the number of health care providers (HCP) present with an accuracy of 71 %. In future work we will investigate the possibility of discriminating between left and right HCP hands and implementing hand tracking to improve the performance of the estimated number of HCP. We will also experiment with different network structures and training data to try to improve the detection of the suction device, in addition to increasing the amount of training data in general to get a better overall detection performance. Further, we will continue with step two of the planned system: inputting the proposed object areas to sequential neural networks to detect the resuscitation activities. This will produce timelines useful for quantifying the use of different resuscitation activities, which could further provide new knowledge on the effects of activities on newborn resuscitation outcome. In the future, such a system could also be implemented on-site as a post-resuscitation debriefing tool, and/or for real-time feedback and decision support during newborn resuscitation. The latter would require a very high-performance system. ## VIII Acknowledgement ### _Funding_ Our research is part of the Safer Births project which has received funding from: Laerdal Global Health, Laerdal Medical, University of Stavanger, Helse Stavanger HF, Haydom Lutheran Hospital, Laerdal Foundation for Acute Medicine, University in Oslo, University in Bergen, University of Dublin - Trinity College, Weill Cornell Medicine and Muhimbili National Hospital. The work was partly supported by the Research Council of Norway through the Global Health and Vaccination Programme (GLOBVAC) project number 228203. For the specific study of this paper; Laerdal Medical provided the video equipment. Laerdal Global Health funded data collection in Tanzania and IT infrastructure. The University of Stavanger funded the interpretation of the data. Fig. 5: Object detection during suction. Detected and undetected sequences with the subgroups low and medium frames per second (FPS) rate. ### _Ethical approval_ This study was approved by the National Institute of Medical Research (NIMR) in Tanzania (NIMR/HQ/R.8a/Vol. 
IX/1434) and the Regional Committee for Medical and Health Research Ethics (REK), Norway (2013/110/REK vest). Parental informed verbal consent was obtained for all resuscitated newborns. ### _Conflict of interests_ Myklebust is employed by Laerdal Medical. He contributed to study design and critical revision of the manuscript, but not to the analysis and interpretation of the data.
2306.02036
On the Empirical Evidence of Microservice Logical Coupling. A Registered Report
[Context] Coupling is a widely discussed metric by software engineers while developing complex software systems, often referred to as a crucial factor and symptom of a poor or good design. Nevertheless, measuring the logical coupling among microservices and analyzing the interactions between services is non-trivial because it demands runtime information in the form of log files, which are not always accessible. [Objective and Method] In this work, we propose the design of a study aimed at empirically validating the Microservice Logical Coupling (MLC) metric presented in our previous study. In particular, we plan to empirically study Open Source Systems (OSS) built using a microservice architecture. [Results] The result of this work aims at corroborating the effectiveness and validity of the MLC metric. Thus, we will gather empirical evidence and develop a methodology to analyze and support the claims regarding the MLC metric. Furthermore, we establish its usefulness in evaluating and understanding the logical coupling among microservices.
Dario Amoroso d Aragona, Luca Pascarella, Andrea Janes, Valentina Lenarduzzi, Rafael Penaloza, Davide Taibi
2023-06-03T07:29:54Z
http://arxiv.org/abs/2306.02036v1
# On the Empirical Evidence of Microservice Logical Coupling ###### Abstract. [Context] Coupling is a widely discussed metric by software engineers while developing complex software systems, often referred to as a crucial factor and symptom of a poor or good design. Nevertheless, measuring the logical coupling among microservices and analyzing the interactions between services is non-trivial because it demands runtime information in the form of log files, which are not always accessible. [Objective and Method] In this work, we propose the design of a study aimed at empirically validating the Microservice Logical Coupling (MLC) metric presented in our previous study. In particular, we plan to empirically study Open Source Systems (OSS) built using a microservice architecture. [Results] The result of this work aims at corroborating the effectiveness and validity of the MLC metric. Thus, we will gather empirical evidence and develop a methodology to analyze and support the claims regarding the MLC metric. Furthermore, we establish its usefulness in evaluating and understanding the logical coupling among microservices. Microservices, Logical Coupling, Empirical Software Engineering
## 1. Introduction
the unavailability of data. As an example, (Hanan et al., 2012; Wang et al., 2013) assume the availability of log files describing calls to software components, used to extract usage processes, which are then used to propose microservices. Such logs are not always available or might require major changes in the system under development. Moreover, the measurement of coupling proposed in the literature is based on the static (Beng et al., 2015) or dynamic analysis (Wang et al., 2013) of source code. Coupling between teams, such as the need to wait or to synchronize with other service teams before committing, is not captured by such metrics. Previous works addressed the problem of coupling in monolithic systems thoroughly. Also, the concept of _logical_ coupling was introduced, i.e., coupling which is not based on a dependency in source code, but on the implicit dependency between artifacts that are often changed together. In particular, (Beng et al., 2015) extended the logical coupling metric, originally proposed by (Wang et al., 2013), to capture whether changes made in a predefined time window are logically coupled. In this research, our objective is to propose a study design aimed at empirically validating the Microservice Logical Coupling (MLC) metric we introduced in our previous work (Beng et al., 2015). To this aim, we plan to conduct an empirical study by focusing our analysis on Open Source Systems (OSS) whose software design is based on microservices architecture. The primary focus of this study is to provide empirical evidence supporting the effectiveness and validity of the Microservice Logical Coupling (MLC) metric. The expected outcome from this work is two-fold: * a new validated context-aware metric to calculate the logical coupling between microservices; * a detailed replication package reporting all the data and scripts to open further research studies. Practitioners can benefit from these results from both research and practical perspectives by assessing one aspect of the coupling between microservices. **Paper structure:** Section 2 describes the background, Section 3 depicts related work, Section 4 introduces our study design, Section 5 reports the study execution, Section 6 discusses the possible threats, Section 7 shows the risk management, and Section 8 draws conclusions and future work. ## 2. Background Complex software systems are not always created by ardent practitioners for various reasons. Developers are continuously dealing with project demand and restrictions. As a consequence, developers incur the potential risk of performing changes to the software which aren't related to their expertise, assignments, or component. Logical coupling uses a software system's development history to detect change patterns among code units that are modified together in order to spot entangled changes in versioning systems. Robbes et al.'s (Robbes et al., 2013) metric, adopted by D'Ambros et al. (Ambroos et al., 2015), measures whether changes performed within a specified time range are logically coupled. 
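As an illustration of how such co-changes can be mined in practice, the following Python sketch groups the microservices touched on each commit day of a git repository. The mapping from file paths to services (for example, via the top-level directory) and the use of `git log` output are assumptions of the sketch, not part of the metric definition:

```python
import subprocess
from collections import defaultdict

def services_changed_per_day(repo_path, service_of_path):
    """Return a mapping day -> set of microservices touched on that day.

    service_of_path is a caller-supplied function mapping a changed file path
    to a service name (e.g. its top-level directory); this mapping is an
    assumption of the sketch."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--name-only",
         "--pretty=format:@%ad", "--date=short"],
        capture_output=True, text=True, check=True).stdout

    changed = defaultdict(set)
    day = None
    for line in log.splitlines():
        if line.startswith("@"):
            day = line[1:]                      # commit date, YYYY-MM-DD
        elif line.strip() and day is not None:
            service = service_of_path(line.strip())
            if service:
                changed[day].add(service)
    return changed

def co_change_days(changed, mu, nu):
    """Days on which both services mu and nu were updated."""
    return sorted(d for d, services in changed.items()
                  if mu in services and nu in services)
```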
In our previous work (Beng et al., 2015), we investigated changes involving multiple microservices committed atomically in the same working unit. In their work, they consider two microservices logically coupled if these have been co-changed more than 5 times during the history of the project. However, the proposed metric (Beng et al., 2015) considers neither the decoupling of two microservices - thus when due to some refactoring in the code the two microservices are not logically coupled anymore - nor the evolution of the coupling during the development process nor the context of the project - commit frequency, number of developers, number of microservices and so on - that can influence the metric leading in an inaccurate result. For this reason, we proposed a new metric that can be used to analyze the evolution of the coupling during the history of the project, allowing us to study when two microservices start to be logically coupled and when they could be considered decoupled. We will perform different analyses to understand how we can define a threshold to consider two microservice logically coupled based on a single specific project, in this way, we can develop a proposal to identify the best threshold depending on the specified project. Furthermore, we will perform a study to understand which factors can influence the metrics and how we can mitigate this influence by setting differently the metrics parameters. ## 3. Related Work Several metrics have been suggested for monolithic systems, some of which have been adapted for service-based systems, particularly for microservices, as demonstrated in previous research (see e.g., (Bogner et al., 2015)). Bogner et al. (Bogner et al., 2015) conducted a systematic literature review on maintenance metrics for microservices, with a focus on service-based systems rather than metrics developed for object-oriented systems. Their findings indicate that most of the metrics that were originally designed for monolithic systems and Service Oriented Architectures are also relevant in the context of microservices. This study is considered to be the first to focus exclusively on microservices, and subsequent research has expanded on these findings to further enhance our understanding of the topic. Apolinaro et al. (Apolinaro et al., 2016) presented a theoretical use case proposing a roadmap to apply four metrics defined by (Bogner et al., 2015): * Absolute Importance of the Service (AIS): number of consumers invoking at least one operation from a service * Absolute Dependence of the Service (ADS): number of services on which the service depends * Service Coupling Factor (SCF) as the density of a graph's connectivity. SCF=SC/N\({}^{2}\)-N where SC is the sum of calls between services, and N is the total number of services. * Average Number of Directly Connected Services (ADCS): the average ADS metric of all services. In their study, Taibi et al. (Taibi et al., 2013) suggested four measures (coupling between microservices, number of classes per microservice, number of duplicated classes, and frequency of external calls) to be considered when breaking down an object-oriented monolithic system into microservices. However, these metrics rely on the concept of classes and lack empirical validation. On the other hand, Panichella et al. (Paniclla et al., 2017) proposed a structural coupling metric, which was tested in 17 open-source microservice-based projects (Imranur et al.(Taranur et al., 2014)). 
The metric calculates the coupling of services at runtime based on the inbound and outbound calls between services. In our recent work (Beng et al., 2015), we proposed a new metric to measure the level of logical coupling between microservices, which is based on the analysis of commits to versioning systems. To validate the effectiveness of this metric, the authors collected data from 145 open-source microservices projects and performed an initial analysis. The results show that logical coupling has a significant impact on the overall system and tends to increase over time. ## 4. The Empirical Study We now describe our empirical study reporting goal, research questions, context, data collection, and data analysis. ### Goal and Research Questions Our goal is to empirically validate Microservice Logical Coupling (MLC) metric we defined in our previous work (Blei et al., 2017). Therefore, we formulated the following three Research Questions (RQs): **RQ1**.: _Does the MLC metric respect the representation condition of measurement?_ In this RQ, we will consider measurement as a mapping from the empirical world to the formal, relational world. Consequently, a _measure_ is the number or symbol assigned to an entity by this mapping in order to characterize an attribute (Blei et al., 2017). Now for us, the empirical relationship we want to measure is logical coupling in the context of microservices. That is, the representation condition asserts that a measurement mapping (a measurement) M must map entities into numbers and empirical relations into numerical relations in such a way that the empirical relations preserve and are preserved by the numerical relations. For example, if we measure height using a meter, the measurement method M that we use to determine height needs to respect the following representation condition: A is taller than B if and only if M(A)>M(Blei et al., 2017; Blei et al., 2017). In this context, "validating a software measure is the process of ensuring that the measure is a proper numerical characterization of the claimed attribute by showing that the representation condition is satisfied (Blei et al., 2017), which is what RQ1 wants to investigate. **RQ2**.: _How does the project context impact the MLC metric?_ When validating a measure, it "must be viewed in the context in which it will be used" (Blei et al., 2017). Therefore, validation must take into account the measurement's purpose since a measure may be valid for some uses but not for others (Blei et al., 2017). Therefore, with RQ2, we want to investigate how the following factors (F) influence the accuracy of the metric: * _Commit scope_ Since we assume that a commit indicates a change in a particular microservice, large refactoring or application-wide user interface changes should be excluded from this consideration to avoid them from being interpreted as that all microservices are coupled. * _Commit size (F2)_ It is necessary to study in detail which commits need to be excluded or particularly considered in the proposed metric. Moreover, the more rarely a developer commits while changing code, the more changes are included in each individual commit, and thus this increases the probability that two different microservices seem coupled. For example, in the extreme case of one commit per month, all microservices might appear coupled, so it is necessary to understand the sensitivity of our metric with respect to the commit size. 
* _Microservice size (F3)_ It is common practice to suggest small changes to facilitate the code review process, and to reduce the need for regression testing. For example, a study showed that in open source projects a change in the 80% of the time involves four files (Blei et al., 2017). So if a change in 80% of cases involves 4 to 5 files, it seems more likely that in the extreme case of having very small microservices will result in a higher probability of logical coupling. Conversely, having large microservices this consideration seems less likely. ### Data Collection To understand the coupling and decoupling of microservices, we need to know when different microservices are updated, and how often they are modified in correspondence to each other. We will collect all the commits in a project, grouped by commit day, and by microservice. That is, for each day, we will signal which microservices were updated. To avoid spurious conclusions derived from gaps in the work, we will remove from the dataset all days in which no commits were made to the project. Hence we consider only "active" days (rather than calendar days). ### Data Analysis In order to answer our RQ1, we need to measure the coupling of microservices and their evolution over time. We will use a _sliding window_ approach. Fixed a number \(n\), we look at each window of \(n\) consecutive (active) days. For each pair \(\mu\), \(v\) of microservices of interest, we will count the number of instances in which _both_ services were updated within the window. Specifically, a microservice is updated if at least one of its associated files is modified at the commit. The proportion of days with shared updates is associated with the _coupling value_ at the end of the window. For example, if \(\mu\) is updated on days \(\{1,2,4,6,10\}\) and \(v\) is updated on active days \(\{1,2,3,4,5,10\}\) (Figure 1), assuming a sliding window of 3 days, during the first window the microservices are mutually updated in two (1 and 2) of the three days (Figure 2), yielding a coupling value of \(2/3\). The second window considers days \(2\)-\(4\), and yields a coupling value of \(2/3\) again. On the third window (days \(3\)-\(5\)) the coupling value decreases to \(1/3\) and remains like that in the successive window. The days \(7\), \(8\), and \(9\) are not considered even if they are active days\(-\rho\) is committed--because they are not relevant for the couple \((\mu,v)\). We consider only the number of days (in the window) Figure 1. Selection of the relevant active days for the microservices couple \((\mu,v)\) when at least one of the services is updated. In the example above, the days 7-9 are removed since neither \(\mu\) nor \(v\) is updated on those days (they are thus irrelevant for the pair). Note that, thanks to this focus on relevant active days, the coupling value between the services remains at least \(1/3\), in contrast with the 0 that is observed if days 7-9 were included. In this way, we can model the decrease in coupling. If we considered all the active days, we would have a value of 0, of occurrences in a single day, both when none of the microservices is committed and when only one of the two is committed. But active days when neither of the two microservices of interest is committed add no information and act as noise in the data. Furthermore, if two microservices are seldom updated, their coupling value would be small, even if they are constantly updated together. 
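A minimal Python sketch of this sliding-window computation, reproducing the example above for the pair \((\mu,v)\), is given below; names and data layout are illustrative:

```python
def coupling_series(mu_days, nu_days, window=3):
    """Sliding-window coupling values for a pair of microservices.

    mu_days / nu_days are the active days on which each service was updated;
    only the relevant active days (union of the two) are kept, and the value
    of a window is the fraction of its days on which both were updated."""
    days = sorted(set(mu_days) | set(nu_days))   # relevant active days only
    both = set(mu_days) & set(nu_days)
    return [sum(d in both for d in days[i:i + window]) / window
            for i in range(len(days) - window + 1)]

# The example from the text: coupling 2/3 in the first two windows, then 1/3.
print(coupling_series([1, 2, 4, 6, 10], [1, 2, 3, 4, 5, 10]))
# -> [2/3, 2/3, 1/3, 1/3, 1/3]
```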
The coupling value increases as \(\mu\) and \(v\) co-occur more often, and decreases otherwise. Using these values, we can apply different techniques for understanding the evolution of coupling between the services. Given a threshold, we can find instances where the services are "sufficiently" coupled. But using specific numerical values allows us to apply time-series analyses--in particular, time-series segmentation--to understand the periods of time where the coupling is growing, stable, or decreasing, respectively. The size \(n\) of the sliding window cannot be fixed _a-priori_, as it depends on the specific properties of the project. Yet, it can be seen that a longer window yields a more stable behavior, where the numerical values differ less between successive active days. We will experiment with different window sizes (i.e., 10-30-100) to better understand this behavior. Finally, rather than checking the co-occurrence of microservice updates, we can verify for _logical_ dependencies: if service \(\mu\) is updated, do we expect to update \(v\) as well? This value is obtained by modifying the proportion described before to the number of co-occurrences divided by the active days where \(\mu\) is updated within the window. This can be seen as the conditional probability of modifying \(v\) given that \(\mu\) was modified. Importantly, contrary to "pure" coupling, this measure is not symmetric; that is, the likelihood of modifying \(v\) when \(\mu\) is changed is not necessarily equal to that of modifying \(\mu\) under the assumption that \(v\) is modified. This asymmetry allows us to better understand the relationships and dependencies between microservices. Regarding \(\mathrm{RQ}_{2}\), to understand the influence of the three relevant factors on our metrics, we will analyze the impact of selecting different thresholds for filtering out commits, and for filtering microservices of different sizes, in the overall time series through a correlation test. ### Verifiability and Replicability We will make the scripts and raw data available in our online appendix to allow verifiability and replicability. ## 5. Execution Plan This Section shows the execution plan we will aim to implement according to the study design defined in Section 4. The execution plan is constructed by splitting the activities into six main steps: project selection, retrieving co-changed microservices, time series analysis, conditional probability, exploratory analysis, and validation. ### Project Selection As the first step of our study, we must collect data from well-developed software projects. For this aim, we will initially focus on projects whose source code is systematically developed by using err versioning system and is hosted publicly on GitHub. Then, we will define all the characteristics needed by a project to be included in our analysis (e.g., _age, number of commits, number of developers_. _number of stars, number of watchers, supported by companies/goverments/foundation_). In addition, we will include only those projects that implement a consistent architecture based on microservices. ### Retrieving co-changed microservices As a successive step, we will need to collect all microservices whose associated source code is modified within a single time window. More precisely, we refer to co-changed microservices as cases in which at least two files belonging to two different microservices are changed simultaneously, or in other words, they are changed within the same day. 
To do so, we will first focus on relevant active days by excluding all calendar days which do not present changes on the given microservices and then will extract all the files that changed on these days. Successively, we will create a map to associate each microservice with the related changed file. ### Time series analysis In the third step, we will proceed with a time series analysis by considering all changes associated with each microservices. In particular, for each defined sliding window, we will count in how many active days, in the same sliding window, the microservices have been committed together divided by the length of the sliding window, in Figure 2 the result of this step will be: \(2/3\) for the first and second window and \(1/3\) for the last three windows. ### Conditional Probability In the fourth step, we will focus on the conditional probability. For each pair of microservices, we will extract the active days where at least one of the two microservices has been changed. It is worth noticing that the conditional probability of Figure 2 depends on the order in which the two microservices are analyzed. For example, the conditional probability \(P(\mu\mid v)\) corresponds to \(2/3\) while the conditional probability \(P(v\mid\mu)\) is \(2/2\). ### Exploratory Analysis In the fifth step, we will extract the commit scope (_e.g., refactoring, bug fix, improvement, new feature_) by parsing the commit message. Figure 2. Three days sliding window example We will extract the commit size (e.g _chunks_). We will study how the results change using different _thresholds_ to consider two microservices logically coupled, different lengths of the _sliding window_ and filtering out the commit based on the commit scope and the commit size. We will compare the different results by grouping the projects by the number of microservices. Each variation produces a different time series. These variations will be compared through a correlation analysis. ### Validation In the last step, we will validate our metric. In particular, since we are interested in understanding (i) how the proposed metric depicts the evolution of the logical coupling over time and (ii) to what extent microservices labeled as coupled are effectively united, we will: * Manually calculate the coupling of a statistically relevant sample of microservices (e.g., confidence level 95%, confidence interval 1) so as to create our ground truth. * Use accuracy measures (e.g., _precision, recall, and f-measure_), checking whether the microservices labeled as coupled are actually coupled; * Use the ground truth to evaluate the conditional probability and thus if our probability corresponds to a real update in real life; * Check if the trend caught by our metric correspond to the actual trend. ## 6. Threats to Validity In this section, we discuss the threats that might affect the validity of our empirical study. We acknowledge the potential limitations of our research. Firstly, while we have chosen a diverse and up-to-date dataset, it may not encompass the entire spectrum of open-source microservices projects. Nonetheless, we believe it represents a valuable collection in terms of microservice quantity, project age, and developer involvement. Secondly, it's important to note that industrial projects might have different perspectives on coupling, particularly due to the emphasis placed on minimizing coupling in practitioner discussions. 
Nevertheless, exploring the microservice lifecycle (MLC) of a project can shed light on ineffective team dynamics and workflow structures, enabling organizational improvements. Furthermore, our MLC validation analysis solely focuses on co-changes of microservices within the same commit. Consequently, it does not capture subsequent changes arising from the necessity to synchronize services. To address this, one potential solution could involve introducing a time window to account for co-changes. Additionally, considering the issues reported in the project's issue trackers could provide insights into whether developers are requesting synchronization with other services for their changes. ## 7. Risk Management During the execution of this study, we foresee mainly two risks that might impact the results: _Lack of OSS Projects_ and the _Unaavailability of some of the original authors during the execution of the study_. For the first aspect, there is a risk of not finding OSS projects fulfilling our criteria. However, we are aware that different works have already performed studies investigating the quality of OSS Microservices, thus minimizing this risk. For the second aspect, since not all authors have permanent contracts, therefore there is the risk that some authors might change affiliations or not be available to continue this work actively. To mitigate this risk, the authors committed to continuing this work even without some of the author's contributions. ## 8. Conclusion In this work, we describe the design of an empirical study aimed at understanding if the Microservice Logical Coupling (MLC) metric respect the representation condition of measurement and how the project context impacts the MLC metric itself. We plan to shed light on the MLC, executing this study on Open Source Projects developed with microservices. The execution of this study will allow us to understand if the MLC is a viable metric to measure logical coupling between microservices, and will allow researchers and practitioners to eventually use and extend it in future works.
2306.05981
On the distribution of powered numbers
Asymptotic formulae are established for the number of natural numbers $m$ with largest square-free divisor not exceeding $m^{\vartheta}$, for any fixed positive parameter $\vartheta$. Related counting functions are also considered.
Jörg Brüdern, Olivier Robert
2023-06-09T15:50:56Z
http://arxiv.org/abs/2306.05981v1
# On the distribution of powered numbers ###### Abstract. Asymptotic formulae are established for the number of natural numbers \(m\) with largest square-free divisor not exceeding \(m^{\vartheta}\), for any fixed positive parameter \(\vartheta\). Related counting functions are also considered. Keywords: powered numbers, nuclear numbers MSC(2020): 11N25, 11N37 ## 1. Introduction Motivated by questions about diophantine equations and the \(abc\)-conjecture, Mazur [1] proposed to smooth out the set of positive \(l\)-th powers in a multiplicative way, by what he named _powered_ numbers. To introduce the latter, let \(k(m)\) denote the largest square-free divisor of the natural number \(m\), and let \(i(m)=\log m/\log k(m)\). For all \(l\in\mathbb{N}\) one has \(i(m^{l})\geq l\). Mazur's powered numbers (relative to \(l\)) are the numbers \(m\in\mathbb{N}\) with \(i(m)\geq l\). Note here that \(l\) need not be integral in this definition, but for \(l\in\mathbb{N}\) the powered numbers (relative to \(l\)) contain the \(l\)-th powers. It is proposed in [1] to replace, within a given diophantine equation, an \(l\)-th power by the corresponding powered numbers, and to consider the resulting equation between powered numbers as the associated "rounded" diophantine equation. In this note we analyse the distribution of powered numbers. We find it is more appropriate to work with the real number \(\vartheta=1/l\). The condition \(i(m)\geq l\) is expressed equivalently as \(k(m)\leq m^{\vartheta}\), and for any \(\vartheta>0\) we define the set \[\mathscr{A}(\vartheta)=\{m\in\mathbb{N}\colon k(m)\leq m^{\vartheta}\}.\] Thus, Mazur's powered numbers (relative to \(l\)) are exactly the elements of \(\mathscr{A}(1/l)\). For analytic approaches to rounded diophantine problems it is indispensable to determine the density of the set \(\mathscr{A}(\vartheta)\). Our principal goal are asymptotic formulae for the number \(S_{\vartheta}(x)\) of elements in \(\mathscr{A}(\vartheta)\) that do not exceed \(x\), and for related counting functions. It is not difficult to see that for any \(0<\vartheta<1\) the number \(S_{\vartheta}(x)\) obeys the inequalities \[x^{\vartheta}\ll S_{\vartheta}(x)\ll x^{\vartheta+\varepsilon} \tag{1.1}\] whenever \(\varepsilon\) is a given positive real number and \(x\) is large in terms of \(\varepsilon\). It now transpires that for \(\vartheta=1/l\) the powered numbers are not much denser than the \(l\)-th powers. A weaker version of (1.1) occurs in Mazur [1] who refers to Granville, showing him an "easy" argument supposedly confirming the inequalities \(x^{\vartheta-\varepsilon}\ll S_{\vartheta}(x)\ll x^{\vartheta+\varepsilon}\). In Mazur's article there is no indication how this would go but the simplest argument we know allows one to take \(\varepsilon=0\) in the lower bound. To ## 1. Introduction Let \(\mathbb{N}\) be a finite set of real numbers. Let \(\mathbb{R}\) be a real number and let \(\mathbb{N}\) be a real number. Let \(\mathbb{N}\) be a real number and let \(\mathbb{N}\) be a real number. Let \(\mathbb{N}\) be a real number and let \(\mathbb{N}\) be a real number. Let \(\mathbb{N}\) be a real number and let \(\mathbb{N}\) be a real number. Let \(\mathbb{N}\) be a real number and let \(\mathbb{N}\) be a real number. Let \(\mathbb{N}\) be a real number and let \(\mathbb{N}\) be a real number. Let \(\mathbb{N}\) be a real number and let \(\mathbb{N}\) be a real number. Let \(\mathbb{N}\) be a real number and let \(\mathbb{N}\) be a real number. 
Let \(\mathbb{N}\) be a real number and let \(\mathbb{N}\) be a real number. Let \(\mathbb{N}\) be a real number and let \(\mathbb{N}\) be a real number. Let \(\mathbb{N}\) be a real number and let \(\mathbb{N}\) be a real number and let \(\mathbb{N}\) be a real number. Let \(\mathbb{N}\) be a real number and let \(\mathbb{N}\) be a real number and let \(\mathbb{N}\) be a real number. Let \(\mathbb{N}\) be a real number and let \(\mathbb{N}\) be a real number and let \(\mathbb{N}\) be a real number and let \(\mathbb{N}\) be a real number. Let \( are featured in the uniform asymptotic formula \[N(x,y)=\big{(}1+o(1)\big{)}yF\big{(}\log(x/y)\big{)}\qquad\big{(}y>\exp\big{(}( \log x)^{2/3}\big{)},\,x\to+\infty\big{)} \tag{1.7}\] that is contained in [2, Proposition 10.1]. As we shall see momentarily, for each pair \(\vartheta,x\) with \(0<\vartheta<1\) and \(x\geq 2\), there is exactly one real number \(\alpha=\alpha_{\vartheta}(x)>0\) with \[\sum_{p}\frac{p^{\alpha}\log p}{(p^{\alpha}-1)\big{(}1+(p+1)(p^{\alpha}-1) \big{)}}=(1-\vartheta)\log x. \tag{1.8}\] We are now in a position to state our first result. **Theorem 1**.: _Let \(0<\vartheta<1\) fixed. Then, for \(x\geq 27\), one has_ \[S_{\vartheta}(x)=x^{\vartheta}F\big{(}(1-\vartheta)\log x\big{)}\frac{\alpha_ {\vartheta}(x)}{\vartheta}\Bigg{(}1+O\Big{(}\sqrt{\frac{\log\log x}{\log x}} \Big{)}\Bigg{)}. \tag{1.9}\] _As \(x\to\infty\), one also has_ \[S_{\vartheta}(x)=(1+o(1))x^{\vartheta}F\big{(}(1-\vartheta)\log x\big{)}\frac {1}{\vartheta}\Big{(}\frac{2}{1-\vartheta}\Big{)}^{1/2}\big{(}(\log x)\log \log x\big{)}^{-1/2}. \tag{1.10}\] This result calls for several comments. First we take \(y=x^{\vartheta}\) in (1.7) and substitute the resulting equation \[N(x,x^{\vartheta})=(1+o(1))x^{\vartheta}F\big{(}(1-\vartheta)\log x\big{)} \qquad(x\to+\infty) \tag{1.11}\] within (1.10) to infer that \[S_{\vartheta}(x)=(1+o(1))N(x,x^{\vartheta})\frac{1}{\vartheta}\Big{(}\frac{2} {1-\vartheta}\Big{)}^{1/2}\big{(}(\log x)\log\log x\big{)}^{-1/2}\quad(x\to+ \infty).\] This is an analogue of (1.5), in quantitative form that is of strength comparable to [2, Theoreme 4.4]. Our second comment concerns the implicitly defined function \(\alpha_{\vartheta}(x)\). It originates in the Dirichlet series \[\mathscr{G}(s)=\frac{6}{\pi^{2}}\sum_{m\geq 1}\frac{1}{\psi(m)m^{s}}=\frac{6}{ \pi^{2}}\prod_{p}\bigg{(}1+\frac{1}{(p+1)(p^{s}-1)}\bigg{)} \tag{1.12}\] that converges absolutely in \(\operatorname{Re}s>0\), and therefore has no zeros in this half plane. For real numbers \(\sigma>0\) we have \(\mathscr{G}(\sigma)>0\). We may then define \[g(\sigma)=\log\mathscr{G}(\sigma).\] Note that \(g\) extends to a holomorphic function on the right half plane, and one computes the logarithmic derivative of \(\mathscr{G}(s)\) from the Euler product representation (1.12) to \[g^{\prime}(s)=-\sum_{p}\frac{p^{s}\log p}{(p^{s}-1)\big{(}1+(p+1)(p^{s}-1) \big{)}}.\] Note that the sum on the right hand side here coincides with the sum in (1.8). On differentiating again, it transpires that the real function \(g^{\prime}:(0,\infty)\to\mathbb{R}\) is increasing. Considering \(\sigma\to 0\) and \(\sigma\to\infty\) one finds that its range is the open interval \((-\infty,0)\). We conclude that for a given \(v>0\) there is a unique positive number \(\sigma_{v}\) with \[g^{\prime}(\sigma_{v})+v=0.\] In [2, Lemme 6.6] it is shown that \[\sigma_{v}=(1+o(1))\sqrt{\frac{2}{v\log v}}\qquad(v\to+\infty). 
\tag{1.13}\] Here we choose \(v=(1-\vartheta)\log x\) and then have \(\alpha_{\vartheta}(x)=\sigma_{v}\). In particular, we see that (1.9) and (1.13) imply (1.10). Thus, it only remains to prove (1.9). Our last comment concerns the actual size of \(S_{\vartheta}(x)\). This requires some more information on the function \(F\). From [2, (2.12)] we have the asymptotic relation \[\log F(v)=\big{(}1+o(1)\big{)}\sqrt{\frac{8v}{\log v}}\qquad(v\to+\infty). \tag{1.14}\] By inserting (1.14) into (1.10), we deduce that there exists some positive number \(\beta(x;\vartheta)>0\) with \[S_{\vartheta}(x)=x^{\vartheta+\beta(x;\vartheta)}\qquad(0<\vartheta<1,\,x \geq x_{0}(\vartheta)) \tag{1.15}\] and the property that for any fixed \(\vartheta\in(0,1)\) one has \[\beta(x;\vartheta)=(1+o(1))\sqrt{\frac{8(1-\vartheta)}{(\log x)(\log\log x)}} \qquad(x\to+\infty).\] Note that (1.15) yields another proof of (1.1). We now turn to local estimates for \(S_{\vartheta}(x)\). In our case, this amounts to comparing the respective behaviour of \(S_{\vartheta}(zx)\) and \(S_{\vartheta}(x)\) uniformly for large \(x\), when \(z\) is in some sense sufficiently close to \(1\). Such estimates are often obtained with the saddle-point method, and we follow this route here, too. In a suitable range for \(z\), the fraction \(S_{\vartheta}(zx)/S_{\vartheta}(x)\) may be approximated by a simple function of \(z\). **Theorem 2**.: _Let \(0<\vartheta<1\). Then for \(x\) large, we have_ \[S_{\vartheta}(zx)=z^{\vartheta}S_{\vartheta}(x)\Bigg{(}1+O\Big{(}\sqrt{\frac{ \log\log x}{\log x}}\Big{)}\Bigg{)}\] _uniformly for \(z>0\) with \(|\log z|\ll\log\log x\)._ Finally, we consider the counting function for a variation of the powered numbers. For given \(\vartheta\in(0,1)\) and \(\Theta\in\mathbb{R}\), we consider \[S_{\vartheta,\Theta}(x)=\#\{n\leq x\colon k(n)\leq n^{\vartheta}(\log n)^{ \Theta}\}.\] Note that \(S_{\vartheta}(x)=S_{\vartheta,0}(x)\). The set of integers such that \(k(n)\leq n^{\vartheta}(\log n)^{\Theta}\) plays a prominent role in a forthcoming paper, and therefore, we provide an estimate for \(S_{\vartheta,\Theta}(x)\). It turns out that the conditions \(k(n)\leq n^{\vartheta}\) and \(k(n)\leq n^{\vartheta}(\log n)^{\Theta}\) are relatively close, and that the ratio \(S_{\vartheta,\Theta}(x)/S_{\vartheta}(x)\) is roughly of size \((\log x)^{\Theta}\). **Theorem 3**.: _Let \(0<\vartheta<1\) and \(\Theta\in\mathbb{R}\) be fixed. Then for \(x\) large, one has_ \[S_{\vartheta,\Theta}(x)=(\log x)^{\Theta}S_{\vartheta}(x)\Bigg{(}1+O\Big{(} \sqrt{\frac{\log\log x}{\log x}}\Big{)}\Bigg{)}.\] ## 2. Proof of Theorem 1 In this section we derive Theorem 1. Before we embark on the main argument, we fix some notation and recall a pivotal result concerned with the distribution of square-free numbers. This involves the function \(\psi(m)\) as defined in (1.6), the Mobius function \(\mu(m)\), and for a parameter \(0\leq\gamma\leq\frac{1}{2}\) at our disposal, the product \[r_{\gamma}(m)=\prod_{p\mid m}\big{(}1+4\gamma p^{-1/2}\big{)}.\] One then has the estimate ([2, (10.1)]) \[\sum_{l\leq z}\mu^{2}(lk)=\frac{6kz}{\pi^{2}\psi(k)}+O\big{(}r_{\gamma}(k)z^{ 1-\gamma}\big{)} \tag{2.1}\] that holds uniformly relative to the square-free number \(k\) and the real parameters \(z,\gamma\) in the ranges \(z\geq 1\), \(0\leq\gamma\leq\frac{1}{2}\). The first steps of our argument follow the pattern laid out in [2, Sect. 10]. 
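As a purely numerical illustration of the quantities \(k(m)\) and \(\mathscr{A}(\vartheta)\), and with no role in the proof, one may tabulate them by brute force; the short Python sketch below (names illustrative) uses the integer-exact test \(k(m)^{2}\leq m\) for \(\vartheta=1/2\).

```python
def k(m):
    """k(m): the largest square-free divisor (square-free kernel) of m."""
    rad, n, p = 1, m, 2
    while p * p <= n:
        if n % p == 0:
            rad *= p
            while n % p == 0:
                n //= p
        p += 1
    return rad * n if n > 1 else rad

# Elements of A(1/2), i.e. m with k(m) <= m**(1/2), up to 100;
# comparing k(m)**2 <= m keeps the test exact in integers.
powered = [m for m in range(1, 101) if k(m) ** 2 <= m]
print(len(powered), powered[:10])   # 17 [1, 4, 8, 9, 16, 25, 27, 32, 36, 48]
```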
Unique factorisation shows that for all natural numbers \(n\) there exists exactly one pair of coprime natural numbers \(l,m\) with \(\mu(l)^{2}=1\) and \(n=lmk(m)\). Note that the two conditions \((l,m)=1\) and \(\mu(l)^{2}=1\) are equivalent to the single condition \(\mu(lk(m))^{2}=1\). Further, one has \(k(n)=lk(m)\). With \(\vartheta\in(0,1)\) now fixed, it follows that \(S_{\vartheta}(x)\) equals the number of \((l,m)\in\mathbb{N}^{2}\) satisfying the conditions \[\mu(lk(m))^{2}=1,\quad lmk(m)\leq x,\quad lk(m)\leq(lmk(m))^{\vartheta}.\] These last three conditions we recast more compactly as \[\mu(lk(m))^{2}=1,\quad lk(m)\leq\min(x/m,m^{\vartheta/(1-\vartheta)}). \tag{2.2}\] From now on, the number \(\kappa=\vartheta/(1-\vartheta)\) features prominently, and we also put \(y=x^{\vartheta}\). Note that \[\min(x/m,m^{\kappa})=m^{\kappa}\ \ \ \text{if and only if}\ \ m\leq x/y. \tag{2.3}\] Hence, we consider the ranges \(m\leq x/y\) and \(x/y<m\leq x\) separately. By (2.2) and (2.3), this leads to the decomposition \[S_{\vartheta}(x)=S_{\vartheta}^{(1)}(x)+S_{\vartheta}^{(2)}(x) \tag{2.4}\] in which \[S_{\vartheta}^{(1)}(x)=\sum_{\begin{subarray}{c}m\leq x/y\\ k(m)\leq m^{\kappa}\end{subarray}}\sum_{l\leq m^{\kappa}/k(m)}\mu(lk(m))^{2}, \quad S_{\vartheta}^{(2)}(x)=\sum_{\begin{subarray}{c}x/y<m\leq x\\ mk(m)\leq x\end{subarray}}\sum_{l\leq x/mk(m)}\mu(lk(m))^{2}.\] We apply (2.1) with \(k=k(m)\) to both inner sums and obtain \[S_{\vartheta}^{(1)}(x) =\frac{6}{\pi^{2}}\sum_{\begin{subarray}{c}m\leq x/y\\ k(m)\leq m^{\kappa}\end{subarray}}\frac{m^{\kappa}}{\psi(m)}+O(R_{1}), \tag{2.6}\] \[S_{\vartheta}^{(2)}(x) =\frac{6}{\pi^{2}}\sum_{\begin{subarray}{c}x/y<m\leq x\\ mk(m)\leq x\end{subarray}}\frac{x}{m\psi(m)}+O(R_{2}). \tag{2.5}\] where \[R_{1}=\sum_{\begin{subarray}{c}m\leq x/y\\ k(m)\leq m^{\kappa}\end{subarray}}r_{\gamma}(m)\Big{(}\frac{m^{\kappa}}{k(m)} \Big{)}^{1-\gamma},\qquad R_{2}=\sum_{\begin{subarray}{c}x/y<m\leq x\\ mk(m)\leq x\end{subarray}}r_{\gamma}(m)\Big{(}\frac{x}{mk(m)}\Big{)}^{1-\gamma}.\] It turns out that \(R_{1}\) and \(R_{2}\) are small. In order to couch their estimation, as well as the analysis of other error terms that arise later, under the umbrella of a single treatment, we choose parameters \(\gamma\) and \(\sigma\) with \(0<\gamma<\sigma\leq\frac{1}{2}\) and introduce the series \[E=\sum_{m=1}^{\infty}r_{\gamma}(m)\Big{(}\frac{y}{k(m)}\Big{)}^{1-\gamma} \Big{(}\frac{x/y}{m}\Big{)}^{\sigma}. \tag{2.7}\] The conditions \(\sigma>\gamma>0\) ensure convergence. It is routine to show that \[R_{1}+R_{2}\ll E.\] In fact, by Rankin's trick, \[R_{1}\leq\sum_{\begin{subarray}{c}m\leq x/y\\ k(m)\leq m^{\kappa}\end{subarray}}r_{\gamma}(m)\Big{(}\frac{m^{\kappa}}{k(m)} \Big{)}^{1-\gamma}\Big{(}\frac{x/y}{m}\Big{)}^{\sigma}.\] For \(m\leq x/y\) one has \(m^{\kappa}\leq y\), and it follows that \(R_{1}\leq E\). Likewise, one confirms \(R_{2}\leq E\) by observing that \(1-\gamma>\frac{1}{2}\geq\sigma\) so that \[\Big{(}\frac{x}{mk(m)}\Big{)}^{1-\gamma}=\Big{(}\frac{y}{k(m)}\Big{)}^{1- \gamma}\Big{(}\frac{x/y}{m}\Big{)}^{1-\gamma}\leq\Big{(}\frac{y}{k(m)}\Big{)} ^{1-\gamma}\Big{(}\frac{x/y}{m}\Big{)}^{\sigma}. \tag{2.8}\] The appearance of \(k(m)\) in the summation conditions on the right hand sides of (2.5) and (2.6) is a nuisance, and we proceed by removing these. 
If the condition \(k(m)\leq m^{\kappa}\) is removed from the sum in (2.5) one imports an error no larger than \[\sum_{\begin{subarray}{c}m\leq x/y\\ k(m)>m^{\kappa}\end{subarray}}\frac{m^{\kappa}}{\psi(m)}\leq\sum_{ \begin{subarray}{c}m\leq x/y\\ k(m)>m^{\kappa}\end{subarray}}\frac{m^{\kappa}}{k(m)}\leq\sum_{m\leq x/y}\Big{(} \frac{m^{\kappa}}{k(m)}\Big{)}^{1-\gamma}\Big{(}\frac{x/y}{m}\Big{)}^{\sigma} \leq E.\] Here, the last inequality is obtained by the argument that completed the estimation of \(R_{1}\). Similarly, if the condition \(mk(m)\leq x\) is removed from the summation condition in (2.6), then the resulting error does not exceed \[\sum_{\begin{subarray}{c}x/y<m\leq x\\ mk(m)>x\end{subarray}}\frac{x}{m\psi(m)}\leq\sum_{\begin{subarray}{c}x/y<m \leq x\\ mk(m)>x\end{subarray}}\frac{x}{mk(m)}\leq\sum_{\begin{subarray}{c}m>x/y\\ mk(m)>x\end{subarray}}\frac{x}{mk(m)}=R,\] say. By Rankin's trick and (2.8), \[R\leq\sum_{m>x/y}\Big{(}\frac{x}{mk(m)}\Big{)}^{1-\gamma}\leq\sum_{m>x/y} \Big{(}\frac{y}{k(m)}\Big{)}^{1-\gamma}\Big{(}\frac{x/y}{m}\Big{)}^{\sigma} \leq E.\] Collecting together, we deduce from (2.5) and (2.6) the asymptotic relations \[S_{\vartheta}^{(1)}(x)=\frac{6}{\pi^{2}}\sum_{m\leq x/y}\frac{m^{\kappa}}{ \psi(m)}+O(E),\qquad S_{\vartheta}^{(2)}(x)=\frac{6}{\pi^{2}}\sum_{x/y<m\leq x }\frac{x}{m\psi(m)}+O(E),\] and by (2.4), we infer that \[S_{\vartheta}(x)=\frac{6}{\pi^{2}}\sum_{m\leq x}\frac{m^{\kappa}}{\psi(m)} \min(1,xm^{-\kappa-1})+O(E).\] Note that the sum on the right is a partial sum of a convergent series. If one completes the sum, then it is immediate that the error thus imported is bounded by \(R\), and hence by \(E\). We have now reached the provisional expansion \[S_{\vartheta}(x)=\frac{6}{\pi^{2}}\sum_{m=1}^{\infty}\frac{m^{\kappa}}{\psi(m) }\min(1,xm^{-\kappa-1})+O(E). \tag{2.9}\] It remains to estimate \(E\). In its definition (2.7), we encounter a sum over a multiplicative function, and so \[E= y^{1-\gamma}(x/y)^{\sigma}\prod_{p}\Big{(}1+\frac{p^{\gamma}r(p; \gamma)}{p(p^{\sigma}-1)}\Big{)}\] \[\leq y^{1-\gamma}(x/y)^{\sigma}\prod_{p}\Big{(}1+\frac{p^{\gamma}}{p( p^{\sigma}-1)}\Big{)}\prod_{p}\Big{(}1+\frac{4\gamma p^{\gamma-\frac{1}{2}}}{p(p^{ \sigma}-1)}\Big{)}.\] Now, since \(p^{\gamma-\frac{1}{2}}\leq 1\), \(\gamma<\sigma\), \(p^{\sigma}-1\geq\sigma\log p\), and since the sum \(\sum_{p}\frac{1}{p\log p}\) converges, \[\prod_{p}\Big{(}1+\frac{4\gamma p^{\gamma-\frac{1}{2}}}{p(p^{\sigma}-1)}\Big{)} \leq\prod_{p}\Big{(}1+\frac{4}{p\log p}\Big{)}\ll 1.\] As on an earlier occasion, we write \(v=(1-\vartheta)\log x=\log\frac{\pi}{y}\), and then choose \(\sigma=\sigma_{v}\) and \(\gamma=\sigma_{v}-\frac{1}{\log y}\). 
By (1.12) one has \[\prod_{p}\Big{(}1+\frac{p^{\gamma}}{p(p^{\sigma}-1)}\Big{)}=\frac{\pi^{2}}{6} \mathscr{G}(\sigma)\prod_{p}\Big{(}1+\frac{(p^{\gamma}-1)(p+1)+1}{p+p(p+1)(p^ {\sigma}-1)}\Big{)}.\] Now, on recalling (1.13), \[\prod_{p\leq e^{1/\sigma}}\Big{(}1+\frac{(p^{\gamma}-1)(p+1)+1}{p+p(p+1)(p^{ \sigma}-1)}\Big{)}\leq\prod_{p\leq e^{1/\sigma}}\Big{(}1+\frac{1}{p}\Big{)} \ll\frac{1}{\sigma}\ll(v\log v)^{1/2}\] while one also has \[\prod_{p>e^{1/\sigma}}\Big{(}1+\frac{(p^{\gamma}-1)(p+1)+1}{p+p(p+1)(p^{ \sigma}-1)}\Big{)}\leq\prod_{p>e^{1/\sigma}}\bigg{(}1+\frac{1}{p^{1+\sigma- \gamma}}\bigg{)}\ll\zeta\Big{(}1+\frac{1}{\log y}\Big{)}\ll\log y.\] On collecting together, this shows that \[E\ll y^{1-\sigma_{v}}e^{v\sigma_{v}}\mathscr{G}(\sigma_{v})(v\log v)^{1/2} \log y.\] From [2, (2.11)] we deduce that \[F(v)\asymp\Big{(}\frac{\log v}{v}\Big{)}^{1/4}e^{v\sigma_{v}}\mathscr{G}( \sigma_{v})\qquad(v\geq 2),\] and hence \[E\ll y^{1-\sigma_{v}}F(v)v^{3/4}(\log v)^{1/4}\log y.\] With the choice of \(y\) and \(v\), one has \(\sigma_{v}=\alpha_{\vartheta}(x)\). Moreover, \(\log y\) and \(v\) have the order of magnitude \(\log x\) so that the last inequality now reads \[E\ll x^{\vartheta}F((1-\vartheta)\log x)x^{-\vartheta\alpha_{\vartheta}(x)}( \log x)^{7/4}(\log\log x)^{1/4}. \tag{2.10}\] Our final task is to compare our estimate for \(E\) with the size of the sum on the right of (2.9). Recall that in view of (1.1) and (1.11), \(S_{\vartheta}(x)\) and \(N(x,x^{\vartheta})\) are of comparable size. In order to mimick the estimate (1.11), we introduce the function \[H_{\vartheta}(x)=\frac{6}{\pi^{2}x^{\vartheta}}\sum_{m=1}^{\infty}\frac{m^{ \kappa}}{\psi(m)}\min(1,xm^{-\kappa-1})\] so that (2.9) now reads \[S_{\vartheta}(x)=x^{\vartheta}H_{\vartheta}(x)+O(E).\] Our aim is to give an estimate of \(H_{\vartheta}(x)\) by using the saddle-point method, and to describe more precisely \(H_{\vartheta}(x)/F((1-\vartheta)\log x)\) as \(x\to+\infty\). **Lemma 1**.: _Fix \(\vartheta\) with \(0<\vartheta<1\). Then for \(x\geq 27\) one has_ \[H_{\vartheta}(x)=F((1-\vartheta)\log x)\frac{\alpha_{\vartheta}(x)}{\vartheta }\Bigg{(}1+O\Big{(}\sqrt{\frac{\log\log x}{\log x}}\Big{)}\Bigg{)}.\] Proof.: The argument is modelled on [2, Section 8]. Recall the identity \[\frac{1}{2\pi\mathrm{i}}\int_{\sigma+\mathrm{i}\mathbb{R}}\frac{y^{s}}{s(1-s )}\,\mathrm{d}s=\min(1,y),\qquad(0<\sigma<1,\,y>0).\] We then have the integral representation \[H_{\vartheta}(x) =\frac{1}{x^{\vartheta}}\frac{6}{\pi^{2}}\sum_{m\geq 1}\frac{m^{ \kappa}}{\psi(m)}\frac{1}{2\pi\mathrm{i}}\int_{\sigma+\mathrm{i}\mathbb{R}} \frac{1}{s(1-s)}\left(\frac{x}{m^{1/(1-\vartheta)}}\right)^{s}\mathrm{d}s\] \[=\frac{1}{2\pi\mathrm{i}}\int_{\sigma+\mathrm{i}\mathbb{R}} \mathscr{G}\Big{(}\frac{s-\vartheta}{1-\vartheta}\Big{)}x^{s-\vartheta} \frac{\mathrm{d}s}{s(1-s)}\] \[=\frac{1}{2\pi}\int_{\mathbb{R}}\mathscr{G}\Big{(}\frac{\sigma+ \mathrm{i}t-\vartheta}{1-\vartheta}\Big{)}x^{\sigma+\mathrm{i}t-\vartheta} \frac{\mathrm{d}t}{(\sigma+\mathrm{i}t)(1-\sigma-\mathrm{i}t)}.\] After a linear change of variable in \(t\), we arrive at \[H_{\vartheta}(x)=\frac{1}{2\pi}\int_{\mathbb{R}}\mathscr{G}\Big{(}\frac{ \sigma-\vartheta}{1-\vartheta}+\mathrm{i}t\Big{)}x^{\sigma+\mathrm{i}(1- \vartheta)t-\vartheta}\frac{(1-\vartheta)\,\mathrm{d}t}{(\sigma+\mathrm{i}(1- \vartheta)t)(1-\sigma-\mathrm{i}(1-\vartheta)t)}.\] Recall again that \(v=(1-\vartheta)\log x\), and that \(\alpha_{\vartheta}(x)=\sigma_{v}\). We take \(\sigma=\vartheta+(1-\vartheta)\sigma_{v}\). 
For large \(x\) one then has \(0<\sigma<1\), and the previous formula for \(H_{\vartheta}(x)\) becomes \[H_{\vartheta}(x)=\frac{1}{2\pi}\int_{\mathbb{R}}\frac{\mathscr{G}(\sigma_{v}+ \mathrm{i}t)e^{(\sigma_{v}+\mathrm{i}t)v}}{\big{(}\vartheta+(1-\vartheta)( \sigma_{v}+\mathrm{i}t)\big{)}\big{(}1-\sigma_{v}-\mathrm{i}t\big{)}}\,\mathrm{ d}t.\] After truncation, we have \[H_{\vartheta}(x)=\frac{1}{2\pi}\int_{-v^{2}}^{v^{2}}\frac{\mathscr{G}(\sigma_ {v}+\mathrm{i}t)e^{(\sigma_{v}+\mathrm{i}t)v}}{\big{(}\vartheta+(1-\vartheta) (\sigma_{v}+\mathrm{i}t)\big{)}\big{(}1-\sigma_{v}-\mathrm{i}t\big{)}}\, \mathrm{d}t+O\left(\frac{\mathscr{G}(\sigma_{v})e^{v\sigma_{v}}}{v^{2}} \right).\] Moreover, following the proof of [2, Theoreme 8.6], we set \[\eta_{v}=(\log v)/\sqrt{g^{\prime\prime}(\sigma_{v})}\asymp(\log v)^{3/4}/v^{ 3/4}\] and recall [2, Lemme 8.5], asserting that for some \(c>0\) we have \[|\mathscr{G}(\sigma_{v}+\mathrm{i}t)|\ll\mathscr{G}(\sigma_{v})e^{-c(\log v)^ {2}}\qquad(\eta_{v}\leq|t|\leq\exp((\log v)^{38/37}).\] It now follows that \[H_{\vartheta}(x)=\frac{1}{2\pi}\int_{-\eta_{v}}^{\eta_{v}}\frac{\mathscr{G}( \sigma_{v}+\mathrm{i}t)e^{(\sigma_{v}+\mathrm{i}t)v}}{\big{(}\vartheta+(1- \vartheta)(\sigma_{v}+\mathrm{i}t)\big{)}\big{(}1-\sigma_{v}-\mathrm{i}t\big{)} }\,\mathrm{d}t+O\left(\frac{\mathscr{G}(\sigma_{v})e^{v\sigma_{v}}}{v^{2}} \right).\] Setting \[D_{m}=v^{(m+1)/2}(\log v)^{(m-1)/2}\qquad(m\geq 0),\] we have \[H_{\vartheta}(x)=\frac{\mathscr{G}(\sigma_{v})e^{v\sigma_{v}}}{2\pi}\int_{- \eta_{v}}^{\eta_{v}}\Upsilon(t)e^{-g^{\prime\prime}(\sigma_{v})t^{2}/2}\mathrm{ d}t+O\left(\frac{\mathscr{G}(\sigma_{v})e^{v\sigma_{v}}}{v^{2}}\right)\] where \[\Upsilon(t)=\frac{e^{-\mathrm{i}g^{\prime\prime\prime}(\sigma_{v})t^{3}/6+O(t^ {4}D_{4})}}{\big{(}\vartheta+(1-\vartheta)(\sigma_{v}+\mathrm{i}t)\big{)} \big{(}1-\sigma_{v}-\mathrm{i}t\big{)}}.\] By Taylor expansion, we infer \[\Upsilon(t)=\frac{\mathrm{Z}(t)}{(\vartheta+(1-\vartheta)\sigma_{v})(1- \sigma_{v})}\] where \[\mathrm{Z}(t)=1+\mathrm{i}t\Big{(}\frac{1}{1-\sigma_{v}}-\frac{1-\vartheta}{ \vartheta+(1-\vartheta)\sigma_{v}}\Big{)}-\mathrm{i}g^{\prime\prime\prime}( \sigma_{v})\frac{t^{3}}{6}+O(t^{2}+D_{4}t^{4}+D_{3}^{2}t^{6}).\] Now, still following the pattern of the proof of [2, Theoreme 8.6], one is lead to \[H_{\vartheta}(x)=\frac{x^{(1-\vartheta)\alpha_{\vartheta}(x)}\mathscr{G} \big{(}\alpha_{\vartheta}(x)\big{)}}{\vartheta\sqrt{2\pi g^{\prime\prime}} \big{(}\alpha_{\vartheta}(x)\big{)}}\Bigg{(}1+O\Big{(}\sqrt{\frac{\log\log x} {\log x}}\Big{)}\Bigg{)}.\] We omit the details. From [2, Theoreme 8.6] we import the relation \[F(v)=\frac{e^{v\sigma_{v}}\mathscr{G}(\sigma_{v})}{\sigma_{v}\sqrt{2\pi g^{ \prime\prime}(\sigma_{v})}}\left(1+O\left(\sqrt{\frac{\log v}{v}}\right) \right)\qquad(v\geq 2),\] and the lemma follows. We may now complete the proof of Theorem 1. Using the lemma and (1.13), we obtain \[H_{\vartheta}(x)\asymp F((1-\vartheta)\log x)\big{(}(\log x)\log\log x\big{)} ^{-1/2},\] so that the estimate (2.10) implies \[E\ll x^{\vartheta}H_{\vartheta}(x)x^{-\vartheta\alpha_{\vartheta}(x)}(\log x) ^{9/4}(\log\log x)^{3/4}.\] We then have \[S_{\vartheta}(x)=x^{\vartheta}H_{\vartheta}(x)\big{(}1+O\big{(}x^{-\vartheta \alpha_{\vartheta}(x)}(\log x)^{9/4}(\log\log x)^{3/4}\big{)}\big{)}.\] We may now replace \(H_{\vartheta}(x)\) by the estimate from Lemma 1. Since \[x^{-\vartheta\alpha_{\vartheta}(x)}(\log x)^{9/4}(\log\log x)^{3/4}\ll\sqrt{ \frac{\log\log x}{\log x}},\] this yields (1.9). 
As remarked earlier, (1.10) follows from (1.13) and (1.9). The proof of Theorem 1 is complete. ## 3. Proof of Theorem 2 Subject to the hypotheses of Theorem 2, when \(x\) is large, one has \(\log zx\asymp\log x\). Hence, Theorem 1 implies that \[S_{\vartheta}(zx)=(zx)^{\vartheta}F\big{(}(1-\vartheta)\log zx\big{)}\frac{ \alpha_{\vartheta}(zx)}{\vartheta}\Bigg{(}1+O\Big{(}\sqrt{\frac{\log\log x}{ \log x}}\Big{)}\Bigg{)} \tag{3.1}\] holds uniformly for \(|\log z|\ll\log\log x\). We recall [2, Proposition 8.7]. This asserts that uniformly for \(|t|\leq v^{3/4}(\log v)^{1/4}\), one has \[F(v+t)=e^{t\sigma_{v}}F(v)\Big{(}1+O\Big{(}\frac{\log v+t^{2}/v}{\sqrt{v\log v }}\Big{)}\Big{)}.\] Using this estimate with \(v=(1-\vartheta)\log x\) and \(t=(1-\vartheta)\log z\), one finds \[F\big{(}(1-\vartheta)\log zx\big{)}=F\big{(}(1-\vartheta)\log x\big{)}z^{(1- \vartheta)\alpha_{\vartheta}(x)}\Bigg{(}1+O\Big{(}\sqrt{\frac{\log\log x}{ \log x}}\Big{)}\Bigg{)}.\] Moreover, in the ranges for \(x\) and \(z\) considered here, one has \[z^{(1-\vartheta)\alpha_{\vartheta}(x)}=1+O\big{(}\alpha_{\vartheta}(x)\log z \big{)},\] which yields an admissible error term. Finally, [2, (8.9)] implies that uniformly for \(|t|\leq v/2\) one has \[\sigma_{v+t}=\sigma_{v}\big{(}1+O(|t|/v)\big{)}.\] Hence \[\alpha_{\vartheta}(zx)=\alpha_{\vartheta}(x)\Big{(}1+O\Big{(}\frac{\log z}{ \log x}\Big{)}\Big{)}\] which again leads to an admissible error term. Inserting these estimates in (3.1) completes the proof of Theorem 2. It may be worth pointing out that the above argument actually proves a little more. A close inspection of the proof of Theorem 2 shows that the estimate \[S_{\vartheta}(zx)=z^{\vartheta+(1-\vartheta)\alpha_{0}(x)}S_{\vartheta}(x) \Bigg{(}1+O\Big{(}\sqrt{\frac{\log\log x}{\log x}}+\frac{(\log z)^{2}}{(\log x )^{3/2}(\log\log x)^{1/2}}\Big{)}\Bigg{)}\] holds uniformly in the range \(z>0\), \(x>27\), \(|\log z|\leq(\log x)^{3/4}(\log\log x)^{1/4}\). ## 4. Proof of Theorem 3 Before proving Theorem 3, we briefly sketch the main steps. We first choose a suitable real number \(U=U(x)\) such that \(\log U=(\log x)(1+o(1))\), and count the integers \(n\) not exceeding \(x\) sucht that \(k(n)\leq n^{\vartheta}(\log U)^{\Theta}\). The first step is to show that the number of these integers is essentially \(S_{\vartheta}(x)\) multiplied by \((\log x)^{\Theta}\). The second step is to prove that the number of these integers is close to \(S_{\vartheta,\Theta}(x)\). In light of this description, for any \(x\geq 1\) and any \(z>0\), we set \[B(x,z)=\#\{n\leq x\colon k(n)\leq zn^{\vartheta}\}.\] **Theorem 4**.: _Let \(0<\vartheta<1\) be fixed. Then for \(x\) large, one has_ \[B(x,z)=zS_{\vartheta}(x)\Bigg{(}1+O\Big{(}\sqrt{\frac{\log\log x}{\log x}} \Big{)}\Bigg{)}\] uniformly for \(z>0\) with \(|\log z|\ll\log\log x\)._ _Proof._ We follow very closely the proof of Theorem 1 which corresponds to the case \(z=1\). We redefine the meaning of \(y\), now set to \(y=x^{\vartheta}z\), and keep the notation \(\kappa\) and \(E\). Note that in the sequel of this proof, the error term \(E\) is to be interpreted with the current specific choices for \(x\) and \(y\). For any \(n\geq 1\), recall that there is a unique pair \(l,m\) with \(n=lmk(m)\) and \(\mu^{2}\big{(}lk(m)\big{)}=1\). 
The conditions \(n\leq x\) and \(k(n)\leq zn^{\vartheta}\) become \[lmk(m)\leq x\quad\text{ and }\quad lk(m)\leq m^{\kappa}z^{1+\kappa}.\] By keeping the same dichotomy \(m\leq x/y\) and \(m>x/y\), it is straightforward that \[B(x,z)=\frac{6}{\pi^{2}}\sum_{m=1}^{\infty}\frac{1}{\psi(m)}\min\big(m^{\kappa}z^{1+\kappa},x/m\big)+O(E).\] Moreover, (2.9) with the parameter \(xz^{-\kappa-1}\) reads \[S_{\vartheta}(xz^{-\kappa-1})=\frac{6}{\pi^{2}}\sum_{m=1}^{\infty}\frac{1}{\psi(m)}\min\big(m^{\kappa},xz^{-\kappa-1}/m\big)+O(z^{-(1+\kappa)(1-\gamma)}E).\] Hence, one has \[B(x,z)=z^{1+\kappa}S_{\vartheta}(xz^{-\kappa-1})+O(z^{(1+\kappa)\gamma}E).\] Still choosing \(\gamma=\sigma_{v}-\frac{1}{\log y}\) and \(\sigma=\sigma_{v}\), we have \(z^{(1+\kappa)\gamma}\ll 1\). Moreover, (2.10) implies that for some \(c>0\) one has \[E\ll x^{\vartheta}F\big((1-\vartheta)\log x\big)\exp\Big(-c\sqrt{\frac{\log x}{\log\log x}}\Big).\] This is sufficient to ensure that \[E\ll zS_{\vartheta}(x)\sqrt{\frac{\log\log x}{\log x}}.\] Finally, we estimate the term \(S_{\vartheta}(xz^{-\kappa-1})\) by Theorem 2. Inserting this in the estimate for \(B(x,z)\), and noticing that in the main term the exponent \(1+\kappa-(1+\kappa)\vartheta\) of \(z\) equals \(1\), gives the expected result. \(\square\) We may now complete the proof of Theorem 3. It is sufficient to prove the result for \(\Theta\neq 0\), since in the case \(\Theta=0\) one has \(S_{\vartheta,\Theta}(x)=S_{\vartheta}(x)\). We put \[C=1+(1+|\Theta|)/\vartheta\quad\text{and}\quad U=U(x)=x(\log x)^{-C}.\] Note that we have \(C>0\), \(C\vartheta\geq 1\) and \(\Theta+C\vartheta\geq 1\). First consider the case \(\Theta>0\). Any integer \(n\) counted by \(S_{\vartheta,\Theta}(x)\) satisfies \(k(n)\leq n^{\vartheta}(\log x)^{\Theta}\), whence \(S_{\vartheta,\Theta}(x)\leq B\big(x,(\log x)^{\Theta}\big)\). Now, a lower bound is obtained by noticing that the set of integers \(U<n\leq x\) counted in \(S_{\vartheta,\Theta}(x)\) contains the integers such that \(k(n)\leq n^{\vartheta}(\log U)^{\Theta}\). These deliberations yield the inequalities \[B\big(x,(\log U)^{\Theta}\big)-B\big(U,(\log U)^{\Theta}\big)\leq S_{\vartheta,\Theta}(x)\leq B\big(x,(\log x)^{\Theta}\big).\] Now, using Theorem 4 to estimate \(B\big(x,(\log U)^{\Theta}\big)\) and \(B\big(x,(\log x)^{\Theta}\big)\), and then replacing \((\log U)^{\Theta}\) by \((\log x)^{\Theta}\) at the price of an admissible error term, one obtains the main term in the estimate of \(S_{\vartheta,\Theta}(x)\). For the remaining term, Theorem 4 and the definition of \(U\) imply that \[B\big(U,(\log U)^{\Theta}\big)\ll(\log U)^{\Theta}S_{\vartheta}(U)\ll(\log x)^{\Theta}S_{\vartheta}(U).\] Finally, Theorem 2 with the choice \(z=(\log x)^{-C}\) implies that \[S_{\vartheta}(U)\ll z^{\vartheta}S_{\vartheta}(x)\ll(\log x)^{-\vartheta C}S_{\vartheta}(x)\ll(\log x)^{-1}S_{\vartheta}(x).\] Gathering these estimates, we obtain an upper bound for \(B\big(U,(\log U)^{\Theta}\big)\) that provides an admissible error term, and therefore proves the result for \(\Theta>0\). The case \(\Theta<0\) is very similar. Any integer counted by \(S_{\vartheta,\Theta}(x)\) either satisfies \(n\leq U\) with \(k(n)\leq n^{\vartheta}\), or \(U<n\leq x\) with \(k(n)\leq n^{\vartheta}(\log U)^{\Theta}\). Moreover, any integer \(n\leq x\) such that \(k(n)\leq n^{\vartheta}(\log x)^{\Theta}\) is counted in \(S_{\vartheta,\Theta}(x)\). 
We conclude that \[B\big{(}x,(\log x)^{\Theta}\big{)}\leq S_{\vartheta,\Theta}(x)\leq B\big{(}x,(\log U)^{\Theta}\big{)}+S_{\vartheta}(U).\] As before, one uses Theorem 4 to estimate \(B\big{(}x,(\log U)^{\Theta}\big{)}\) and \(B\big{(}x,(\log x)^{\Theta}\big{)}\). This provides the main term. Finally, Theorem 2 with \(z=(\log x)^{-C}\) provides \[S_{\vartheta}(U)\ll z^{\vartheta}S_{\vartheta}(x)\ll(\log x)^{-\vartheta C}S_ {\vartheta}(x)\ll(\log x)^{\Theta-1}S_{\vartheta}(x),\] which yields an admissible error term.
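As a purely illustrative aside, not used anywhere in the arguments above, both the decomposition \(n=lmk(m)\) that opened Section 2 and the counting function \(S_{\vartheta}(x)\) itself can be checked by brute force for small \(x\). The following Python sketch (the helper names are our own choices) verifies the uniqueness of the decomposition and evaluates \(S_{\vartheta}(x)\) directly from the definition.

```python
def kernel(n):
    """Squarefree kernel k(n): the product of the distinct primes dividing n."""
    k, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            k *= d
            while n % d == 0:
                n //= d
        d += 1
    return k * n if n > 1 else k

def is_squarefree(n):
    return kernel(n) == n

def decompositions(n):
    """All pairs (l, m) with n = l*m*k(m) and mu(l*k(m))^2 = 1."""
    pairs = []
    for m in range(1, n + 1):
        mk = m * kernel(m)
        if n % mk == 0 and is_squarefree((n // mk) * kernel(m)):
            pairs.append((n // mk, m))
    return pairs

# Each n admits exactly one such decomposition (the uniqueness used in Section 2):
assert all(len(decompositions(n)) == 1 for n in range(1, 1001))

def S(x, theta):
    """S_theta(x) = #{ n <= x : k(n) <= n^theta }, directly from the definition."""
    return sum(1 for n in range(1, x + 1) if kernel(n) <= n ** theta)

print(S(10**4, 0.5))
```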
2308.13497
Ngambay-French Neural Machine Translation (sba-Fr)
In Africa, and the world at large, there is an increasing focus on developing Neural Machine Translation (NMT) systems to overcome language barriers. NMT for Low-resource language is particularly compelling as it involves learning with limited labelled data. However, obtaining a well-aligned parallel corpus for low-resource languages can be challenging. The disparity between the technological advancement of a few global languages and the lack of research on NMT for local languages in Chad is striking. End-to-end NMT trials on low-resource Chad languages have not been attempted. Additionally, there is a dearth of online and well-structured data gathering for research in Natural Language Processing, unlike some African languages. However, a guided approach for data gathering can produce bitext data for many Chadian language translation pairs with well-known languages that have ample data. In this project, we created the first sba-Fr Dataset, which is a corpus of Ngambay-to-French translations, and fine-tuned three pre-trained models using this dataset. Our experiments show that the M2M100 model outperforms other models with high BLEU scores on both original and original+synthetic data. The publicly available bitext dataset can be used for research purposes.
Sakayo Toadoum Sari, Angela Fan, Lema Logamou Seknewna
2023-08-25T17:13:20Z
http://arxiv.org/abs/2308.13497v1
# Ngambay-French Neural Machine Translation (sba-Fr) ###### Abstract In Africa, and the world at large, there is an increasing focus on developing Neural Machine Translation (NMT) systems to overcome language barriers. NMT for Low-resource language is particularly compelling as it involves learning with limited labelled data. However, obtaining a well-aligned parallel corpus for low-resource languages can be challenging. The disparity between the technological advancement of a few global languages and the lack of research on NMT for local languages in Chad is striking. End-to-end NMT trials on low-resource Chad languages have not been attempted. Additionally, there is a dearth of online and well-structured data gathering for research in Natural Language Processing, unlike some African languages. However, a guided approach for data gathering can produce bitext data for many Chadian language translation pairs with well-known languages that have ample data. In this project, we created the first sba-Fr Dataset, which is a corpus of Ngambay-to-French translations, and fine-tuned three pre-trained models using this dataset. Our experiments show that the M2M100 model outperforms other models with high BLEU scores on both original and original+synthetic data. The publicly available bitext dataset can be used for research purposes. 1 Footnote 1: [https://github.com/Toadoum/Ngambay-French-Neural-Machine-Translation-sba_fr.yl](https://github.com/Toadoum/Ngambay-French-Neural-Machine-Translation-sba_fr.yl). ## 1 Introduction Differential access to information is a pervasive issue in both developed and developing nations, reinforced by physical, social, and economic structures. The problem is especially acute in rural areas, where the lack of communication technology such as the internet can severely limit access to information. Furthermore, automated translation tools face significant challenges in dealing with low-resource language pairs and morphologically rich languages, leading to limited cultural exchange and market integration for certain nations. A major contributor to this problem is the fact that internet research is primarily conducted in languages such as English, French, Spanish, German, etc. resulting in limited data availability for other languages. As a result, Machine Translation (MT) is heavily dependent on parallel text or "bitext," leaving speakers of languages with limited data resources or parallel corpora at a disadvantage when it comes to building MT models McCarthy (2017). To make the recent successes of MT systems accessible and inclusive, research efforts should focus on identifying and closing the technological gap between these languages that lack digital or computational data resources. Addressing this gap will require innovative approaches for data collection and processing, as well as the development of new MT models that can effectively operate with limited resources. The Ngambay language is one of such marginalized and low-resource language facing the challenges of information access and automated translation. As an example of a morphologically rich language, Ngambay encounters significant difficulties in finding adequate translation resources, limiting cultural exchange and economic integration opportunities. The scarcity of internet research conducted in languages like Ngambay further exacerbates this problem, leaving speakers of such languages at a disadvantage in building MT models. 
Bridging the technological gap for languages with limited digital and computational resources, like Ngambay, is essential to ensure inclusivity and accessibility to the recent successes of MT systems. This research aims to contribute to the advancement of NMT for low-resource languages like Ngambay, making strides toward more equitable access to information and linguistic inclusion. Related Work Machine translation is a crucial subfield of Natural Language Processing (NLP) that utilizes computers to translate natural languages. Recently, end-to-end neural machine translation (NMT) has emerged as the new standard method in practical MT systems, leveraging transformer models with parallel computation and attention mechanism (Zhixing et al., 2020). Although NMT models require extensive parallel data, which is typically only available for a limited number of language pairs (Surafel et al., 2018), some research has been conducted on NMT using rare African languages such as Swahili, Hausa, Yoruba, Wolof, Amharic, Bambara, Ghomala, Ewe, Fon, Kinyarwanda, and others. (Emezue and Dossou, 2020) introduced the FFR Dataset, a corpus of Fon-to-French translations, which included the diacritical encoding process and their FFR v1.1 model, trained on the dataset. In their 2020 paper titled "Neural Machine Translation for Extremely Low-Resource African Languages: A Case Study on Bambara," (Tapo et al., 2020) introduced the pioneering parallel dataset for machine translation of Bambara to and from English and French. This dataset has served as a significant milestone as it has provided the foundation for benchmarking machine translation results involving the Bambara language. The authors extensively address the unique challenges encountered when working with low-resource languages and propose effective strategies to overcome the scarcity of data in low-resource machine translation. Their research sheds light on the potential solutions for improving machine translation in similar linguistic contexts. By tackling the data scarcity issue, (Tapo et al., 2020)'s work contributes to the advancement of machine translation for under-resourced languages. (Adelani et al., 2022) have created a new African news corpus covering 16 languages, including eight that were not part of any existing evaluation dataset. They demonstrated that fine-tuning large pre-trained models with small amounts of high-quality translation data is the most effective strategy for transferring to additional languages and domains. (Nekoto et al., 2022), in their paper "Participatory Translations of Oshiwambo", built a resource for language technology development and culture preservation, as well as providing socio-economic opportunities through language preservation. They created a diverse corpus of data spanning topics of cultural importance in the Oshindonga dialect, translated to English, which is the largest parallel corpus for Oshiwambo to-date (Nekoto et al., 2022). Other works have also been conducted on African languages, and many of them have websites for data crawling, such as JW300 and BBC. However, there is currently no research related to the Ngambay language or any other local language in Chad, and it is difficult to find websites related to these languages, such as newspapers or other sources, such as JW300. ## 3 Ngambay Lewis, Simons, and Fennig (2013) reported 896,000 Ngambay speakers in Chad and 57,000 in Cameroon (Wikipedia). According to (Ndjerassem, 2000) J.H. 
Greenberg's classification in The Languages of Africa places Ngambay in the Nilo-Saharan family, Chari-Nil subfamily, Central Sudanese group, and Bongo-Baguinnian subgroup. Tucker and Bryan classify Ngambay as Bongo-Baguinnian, Sara group. Lakka and Mouroum, closely related to Ngambay, share a fair amount of homogeneity, though they differ in vocabulary and pronunciation. (John, 2012) states that Ngambay is related to Western Saras, Kaba, and Laka. Ngambay is spoken in Eastern Logone, Tandjile, Moyen-Chari, Mayo-Kebbi, and Chari-Baguirmi prefectures. It is used as a lingua franca by other ethnic groups. In 1993, 812,003 Ngambay lived in Chad, with at least half in Logone Occidental. The Ngambay people call their language "tar Ngambar" or "ta Ngambar". Protestant priests and missionaries helped many Ngambay speakers learn to read and write. They translated the New Testament and Bible into Ngambay, titled "Testament ge cigi" and "Maktub ge to qe kemee" respectively. Ngambay hymns include "Pa kula ronduba do Mbaidombaije"g". It is worth noting that a monthly evangelical magazine called Dannasur was published for several decades until its discontinuation in 1995, or possibly more recently. However, it is regrettable that the transcription of Ngambay has not taken into account its distinctive feature of tones. Several studies have already been conducted on this language, including the work of Charles Vandame (Archbishop of N'Djamena before) titled The Ngambay-Moundou, which was published in 1963 (Ndjerassem, 2000). Problem of Education The economic difficulties of recent years have had a significant impact on the education sector of Chad, leading to stagnation or even a decline in the quality and effectiveness of the education system. School infrastructure has deteriorated rapidly, and there is a lack of motivated and qualified staff, with illiteracy remaining prevalent and gender disparities showing no signs of improvement. Although the primary school enrollment rate is relatively high at 86.85%, only 41.32% of students complete primary school. When compared with Niger, a neighbouring African country facing similar challenges, the data is disappointing, with Niger having a primary school enrollment rate of 73.43% and nearly 72% of students completing primary school. A recent sectoral analysis of the Chadian education system highlights several deficiencies, including low enrollment rates, a lack of textbooks and inadequate classroom equipment, unqualified teachers, and limited access to higher education. Therefore, several changes are necessary to improve education in Chad. The PAQEPP (Projet d'amelioration de la qualite de l'education par une gestion de proximite) project, funded by the French Development Agency, aims to address these issues, involving 50 schools in Moundou and N'Djamena. The project was scheduled to run for four years, from 2017 to 2021, and involved more than 700 teachers and nearly 55,000 students. However, due to the global health crisis (COVID-19), the project has been extended until 2023. One possible solution to address such problems is the development of efficient Machine Translation Models that can be deployed on edge devices to help overcome language barriers, as many people face difficulties in accessing education. Creating high-quality datasets for research in NMT is crucial for building these models. ## 5 Data creation In data creation, we utilized two sources. 
The first source was _The Sara Bagirmi Languages Project_ which provided us with the fifth edition (2015) of the Ngambaye to French dictionary in PDF format. However, due to the complexity of performing web scraping on a PDF, we manually created a parallel corpus of 1,176 sentences with short to medium lengths from the most commonly used sentences in daily life using a Google form. The second source was _YouVersion Bible_, an online Bible translated into multiple languages, including Ngambay. Using R programming, we performed web scraping on the website, but the Ngambay translation did not include all the verses like the French version. We extracted up to 34,647 sentences, but there were various grammatical errors, incorrect and incomplete translations, and inconsistencies. To ensure the quality of the data after crawling, we gave the dataset to native speakers of Ngambay and other linguists, including the Association of People translating the Bible from French to Ngambay in Chad, to check for problematic translations, misspellings, and duplicated sentences following Nekoto et al. (2020). After quality control, we combined the two bitext datasets, dropped inconsistent and incomplete translations, and ended up with 33,073 sentences for use in this project. The morphological characteristics of a language can have a significant impact on its sentence structure and complexity. Our analysis revealed that the Ngambay language has a relatively simple morphology compared to French, which contributes to shorter sentences and fewer words. In contrast, French has a highly inflected morphology, resulting in longer and more complex sentences with a larger vocabulary. These differences in morphology pose a challenge for Machine Translation systems, as they must be trained on parallel texts that are aligned at the sentence and word levels. Given the complexity of French and the simplicity of Ngambay, it is essential to develop effective strategies for handling the morphological variations in each language when building MT models. By understanding the unique features of each language, we can improve the accuracy and effectiveness of MT systems for languages with varying levels of complexity. ### Data Split Splitting our bitext data into training, validation, and test sets using a 20% split size is a common ML practice for creating reliable, precise, and generalizable models. After splitting, our sets had 21,166, 6,615, and 5,292 sentences respectively for train, validation and test. We used the Python package jsonlines2 to convert our CSV files to JSON format to match Hugging Face's pre-trained models. Footnote 2: [https://jsonlines.readthedocs.io/en/latest/](https://jsonlines.readthedocs.io/en/latest/) Models and Methods We have used three transformer-based language models in our experiments: MT5 Xue et al. (2021), ByT5 Xue et al. (2022), and M2M100 Fan et al. (2021). Transformers are a type of neural network architecture that has become popular in NLP since 2017 Vaswani et al. (2017). They are used in many cutting-edge NLP applications. Unlike RNNs, transformers use a self-attention mechanism to weigh input sequence importance when making predictions. The transformer architecture consists of an encoder and decoder, which can be trained for NLP tasks such as machine translation, text classification, and language modelling. The encoder produces hidden representations from the input sequence, and the decoder uses them to generate the output sequence Vaswani et al. (2017). 
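Before describing the individual models, here is a minimal sketch (our own, not released project code) of the data preparation just described: shuffling and splitting the bitext and converting the CSV file to JSON Lines in the layout commonly used for Hugging Face translation datasets. The file name `sba_fr.csv` and the column names `fr`/`sba` are assumptions.

```python
import csv
import random

import jsonlines

def load_pairs(path):
    # Assumed CSV layout: one French/Ngambay sentence pair per row.
    with open(path, newline="", encoding="utf-8") as f:
        return [{"fr": row["fr"], "sba": row["sba"]} for row in csv.DictReader(f)]

def split(pairs, seed=42):
    # Roughly the 64% / 20% / 16% train/validation/test sizes reported above.
    random.Random(seed).shuffle(pairs)
    n = len(pairs)
    n_test, n_val = int(0.16 * n), int(0.20 * n)
    return pairs[n_test + n_val:], pairs[n_test:n_test + n_val], pairs[:n_test]

def write_jsonl(pairs, path):
    # JSON Lines in the {"translation": {...}} layout expected by the trainer sketch below.
    with jsonlines.open(path, mode="w") as writer:
        for pair in pairs:
            writer.write({"translation": pair})

if __name__ == "__main__":
    train, valid, test = split(load_pairs("sba_fr.csv"))
    for name, part in (("train", train), ("valid", valid), ("test", test)):
        write_jsonl(part, f"{name}.jsonl")
```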
### M2m100 M2M100 is a large multilingual machine translation model proposed by Fan et al. (2021). It uses a shared representation space and a pivot language to enable translations between 100 languages, including low-resource and non-Indo-European languages. The model outperforms previous multilingual models and achieves state-of-the-art results on various translation benchmarksFan et al. (2021). ### ByT5 ByT5 is a byte-to-byte transformer model introduced by Xue et al. (2022). It operates at the byte level, eliminating the need for tokenization and making it suitable for languages with complex scripts or non-standard formatting. ByT5 outperforms existing token-based models on benchmark datasets, including those with low-resource languages Xue et al. (2022). ### Mt5 MT5 is a massively multilingual pre-trained text-to-text transformer proposed by Xue et al. (2021). It is trained on a large corpus of text in over 100 languages and can directly translate between any pair of languages without relying on English as an intermediate step. The text-to-text approach and diverse training tasks contribute to its versatility and performance Xue et al. (2021). Fine-tuning pre-trained models on a new low-resource language like Ngambay requires careful consideration of the available data and the best approach to utilizing it. As noted by Adelani et al. (2022), one effective way to fine-tune pre-trained models is to follow a process. It is essential to select a target language that is represented in all the pre-trained models. In this case, we chose Swahili (sw) as our target language since it is a commonly used language that is present in most pre-trained models. This allows us to leverage the existing knowledge contained in the pre-trained models and adapt it to the new African language Adelani et al. (2022). ### Hardware and Schedule Our models were trained on a single machine equipped with 2 NVIDIA T4 GPUs, 32 vCPUs, and 120 GB of RAM. During the training process, optimization steps for M2M100, ByT5, and MT5 took an average of 5 seconds, 2 seconds, and 4 seconds, respectively, based on the pre-trained models and hyperparameter described in the section 6.5. We trained our models for a total of 133,080 optimization steps. The M2M100 model was trained for 1 day, 15:02:53.55, ByT5 for 1 day, 0:56:06.98, and MT5 for 20:46:36.98. ### Performance Evaluation Metrics and Hyperparameters In this project, we utilized BLEU as a means of automatically evaluating machine translation. BLEU evaluates the adequacy of machine translation by analyzing word precision, as well as the fluency of the translation by calculating n-gram precisions. This method returns a score within a range of [0, 1] or on a [0, 100] scale. We specifically implemented SacreBLEU, which provides dataset scores instead of segment scores. A higher score indicates a translation that is closer to the reference Papineni et al. (2002): Using the HuggingFace transformer tool, we fine-tuned pre-trained models with settings that included a learning rate of 5e-5, a batch size of 5, maximum source and target lengths of 200, a beam size of 10, and a total of 60 epochs. ## 7 Results and Discussion This section will detail our training process, specifically discussing the data augmentation method we used to enhance the performance of our pre-trained models. Our source language is French (Fr), while the target language is Ngambay (sba). 
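Before turning to the results, the fine-tuning configuration of Section 6.5 can be summarized in the following sketch. It is our own reconstruction, assuming a recent version of the Hugging Face `transformers`, `datasets`, and `evaluate` libraries and the 418M-parameter M2M100 checkpoint; as explained in Section 6, the Swahili code "sw" stands in for Ngambay, which has no language code in the pre-trained models.

```python
import evaluate
from datasets import load_dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

checkpoint = "facebook/m2m100_418M"                    # assumed checkpoint size
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
tokenizer.src_lang, tokenizer.tgt_lang = "fr", "sw"    # "sw" stands in for Ngambay
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
model.config.forced_bos_token_id = tokenizer.get_lang_id("sw")  # force target-language output

raw = load_dataset("json", data_files={"train": "train.jsonl",
                                       "validation": "valid.jsonl"})

def preprocess(batch):
    src = [ex["fr"] for ex in batch["translation"]]
    tgt = [ex["sba"] for ex in batch["translation"]]
    return tokenizer(src, text_target=tgt, max_length=200, truncation=True)

tokenized = raw.map(preprocess, batched=True, remove_columns=["translation"])
bleu = evaluate.load("sacrebleu")

def compute_metrics(eval_pred):
    preds, labels = eval_pred
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    labels = [[tok if tok != -100 else tokenizer.pad_token_id for tok in seq]
              for seq in labels]
    refs = [[ref] for ref in tokenizer.batch_decode(labels, skip_special_tokens=True)]
    return {"bleu": bleu.compute(predictions=decoded_preds, references=refs)["score"]}

args = Seq2SeqTrainingArguments(
    output_dir="m2m100-sba-fr",
    learning_rate=5e-5,                 # Section 6.5 hyperparameters
    per_device_train_batch_size=5,
    per_device_eval_batch_size=5,
    num_train_epochs=60,
    predict_with_generate=True,
    generation_max_length=200,
    generation_num_beams=10,
    evaluation_strategy="epoch",
    save_strategy="epoch",
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```

Swapping the checkpoint name for an MT5 or ByT5 one, and dropping the M2M100-specific language-code lines, would give the corresponding baselines under identical hyperparameters.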
Our experiment aimed to identify and select the model that performed best among the pre-trained models when trained on the original bitext data, then use the selected model to generate synthetic data. Of the three pre-trained models we fine-tuned, M2M100 achieved the highest Evaluation BLEU score of 33.06, followed by ByT5 with a score of 28.447 when trained on a sample of 21,166, as shown in Table 1. This can be attributed to the fact that M2M100 is a multilingual model trained on a diverse set of parallel corpora from 100 languages, including news articles, subtitles, and other publicly available texts. It employs a shared encoder-decoder architecture that can be fine-tuned for specific language pairs and integrates multiple techniques to improve performance Fan et al. (2021). ### Data Augmentation using French monolingual data In their 2016 paper, Sennrich et al. (2016) proposed a method to enhance NMT models with available monolingual data for many languages. The two-step process involves training a language model on the bitext data and then using it to generate synthetic parallel sentences for the NMT model by translating the monolingual sentences into the target language Sennrich et al. (2016). Tonja et al. (2023) proposed Source-side Monolingual Data Injection (SMDI) to enhance low-resource NMT systems. A language model is trained on a parallel corpus and used to generate synthetic parallel sentences by translating the monolingual sentences into the target language. Evaluations on several low-resource language pairs showed that SMDI consistently improved NMT system quality Tonja et al. (2023). We are tackling a low-resource language with little in-domain data for Neural Machine Translation. Thus, we use a method similar to Sennrich et al. (2016). To generate synthetic parallel data for Ngambay-French translation we have used the fra_news_2022_100K-sentences.txt dataset from the Leipzig Corpora Collection/Deutscher Wortschatz, containing 100,000 sentences related to 2022 news (politics, sport, entertainment, etc.) because no monolingual Ngambay data exists, unless in hard copy, hence, input (Fr) monolingual source-side. We create synthetic bitext data from French monolingual data. We split the monolingual data into sentences, and perform noisy translation to Ngambay then combine the translated sentences to form a synthetic bitext corpus. ``` 0: * Original bitext dataset: \(sba-Fr\) * French Monolingual dataset: \(Fr_{m}\) * Target synthetic dataset: \(sba_{synth}\) * Synthetic bitext dataset: \(sba_{synth}-Fr_{m}\) * Languages: Fr, sba * Translation model: NMT Fr \(\rightarrow\) sba * Train NMT on \(sba-Fr\) * split \(Fr_{m}\) into sentences * generate synthetic \(sba_{synth}\) by translating \(Fr_{m}\) sentences using trained and saved NMT * Combine sentences from \(Fr_{m}\) and \(sba_{synth}\) to create \(sba_{synth}-Fr_{m}\) * Add \(sba-Fr\) and \(sba_{synth}-Fr_{m}\) to create new bitext data * Retrain the model using the new bitext data. ``` **Algorithm 1** Generating synthetic bitext data & training In machine translation, a model is typically trained on original bitext data, and then utilized to translate a set of monolingual source sentences into the target language. This process generates pseudo-parallel training data, also known as synthetic data. The synthetic data is subsequently combined with the authentic parallel data to train and improved the model, following the self-training concept introduced by He et al. (2020). 
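Concretely, the forward-translation step of Algorithm 1 might look like the following sketch (ours); the saved model directory, the batch size, and the tab-separated layout of the Leipzig sentences file are assumptions.

```python
import jsonlines
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
model_dir = "m2m100-sba-fr"                       # the saved fine-tuned Fr->sba model
tokenizer = AutoTokenizer.from_pretrained(model_dir)
tokenizer.src_lang = "fr"
model = AutoModelForSeq2SeqLM.from_pretrained(model_dir).to(device).eval()

with open("fra_news_2022_100K-sentences.txt", encoding="utf-8") as f:
    # Leipzig corpus files are typically "<id>\t<sentence>" per line.
    french = [line.split("\t", 1)[-1].strip() for line in f if line.strip()]

synthetic = []
for i in range(0, len(french), 32):               # noisy forward translation in batches
    batch = french[i:i + 32]
    enc = tokenizer(batch, return_tensors="pt", padding=True,
                    truncation=True, max_length=200).to(device)
    with torch.no_grad():
        out = model.generate(**enc,
                             forced_bos_token_id=tokenizer.get_lang_id("sw"),
                             num_beams=10, max_length=200)
    sba = tokenizer.batch_decode(out, skip_special_tokens=True)
    synthetic.extend({"translation": {"fr": fr, "sba": s}}
                     for fr, s in zip(batch, sba))

# Write the synthetic pairs; concatenated with the original train.jsonl they form the
# "original + synthetic" training set on which the model is retrained.
with jsonlines.open("synthetic.jsonl", mode="w") as writer:
    for example in synthetic:
        writer.write(example)
```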
This involves training a model on labelled data and using it to generate pseudo-labelled data, which is then added to the training set to enhance the model's performance He et al. (2020). We used French monolingual data to generate translations for Ngambay. We combined these to create synthetic bitext data (see section 7.1). Training our models on both the original and synthetic data increased the M2M100 and ByT5 model's Evaluation BLEU score by more than 11 points compared to the original data alone. The MT5 model's Evaluation BLEU score increased by more than 2 points compared to the original dataset. This result is consistent with Tonja et al. (2023), who used target monolingual data in self-training experiments. Table 2 shows that M2M100 outperforms the other two models with original and original + synthetic data. (Agostinho Da Silva et al., 2023) with their work "Findings from the Bambara - French Machine Translation Competition (BFMT 2023)" have used Cyclic backtranslation, aims to enhance the model's learning by utilizing both the training dataset and a monolingual dataset. At each step \(k\), they encourage the Machine Translation (MT) model for each direction to learn from a combination of the original training dataset, sentences generated synthetically, and sentences generated by the MT model of the opposite direction from the previous step. This approach allows the model to benefit from the diverse data sources, leading to improved performance and robustness. They have also used M2M100 model Fan et al. (2021) as their starting point due to its outstanding performance, achieving the highest scores. (Adelani et al., 2022) demonstrated this in their project entitled "A Few Thousand Translations Go A Long Way!", they created an African news corpus with 16 languages, including 8 not in any existing evaluation dataset. M2M100 adapts faster than ByT5, and in most cases, it outperforms the other models and this have been confirmed by Team et al. (2022)'s results. The M2M100 model is capable of translating between 100 languages in a many-to-many manner, which means it can translate any language pair among the 100 supported languages. The model is trained using a novel approach called Cyclic Backtranslation, which enables the model to learn from both the original training dataset and a synthetic dataset generated through translation of monolingual dataset. By leveraging a large amount of multilingual data, the M2M100 model demonstrates significant improvements in translation quality for various language pairs. Hence, it consistently delivers superior results in most cases. ## 8 Conclusion The primary objective of this study is to demonstrate the possibility of gathering data on Chadian languages, similar to how other African countries do, and utilizing this data to develop a Machine Translation (MT) system. Specifically, the aim is to establish an MT system for the Ngambay language as an example for other Chadian languages. By doing so, we hope to set a benchmark for the accuracy of Chadian MT systems. To achieve this goal, we constructed the first bitext dataset for Ngambay-French and fine-tuned three transformer-based models (M2M100, ByT5, and MT5). Our experimental results indicate that M2M100 outperforms the other models and that monolingual source-side can enhance the performance of all models. We believe that such MT system can be integrated into electronic devices to overcome language barriers. 
However, this work has limitations that future studies can address. ## 9 Limitations Challenges exist in developing Neural Machine Translation (NMT) systems for low-resource languages in Chad. Obtaining a well-aligned parallel corpus is difficult, leading to inadequate training of translation models. Furthermore, technological advancement in NMT focuses on global languages, leaving a research gap for local languages in Chad. Consequently, end-to-end NMT trials for low-resource Chadian languages have not been conducted. Online and structured data gathering for NLP research in Chadian languages is limited, making it hard to acquire enough data for successful NMT model training. A guided approach was used with languages having abundant data, but this may not capture the local languages' complexities, potentially affecting model performance. The M2M100 model's generalization to other low-resource Chadian languages is uncertain. Biases in the sba-Fr Dataset used in the project could affect the model's accuracy and practicality. \begin{table} \begin{tabular}{l c c c} \hline \hline **Models** & **M2M100** & **ByT5** & **MT5** \\ \hline Eval BLEU & 33.06 & 28.447 & 22.12 \\ Predict BLEU & 32.6016 & 32.6016 & 22.0481 \\ Eval loss & 1.7661 & 0.5152 & 1.0874 \\ Train sample & 1166 & 24366 & 21166 \\ Train runtime & 1 day, 15:02:53.55 & 1 day, 0:56:06.98 & 20:46:36.98 \\ \hline \hline \end{tabular} \end{table} Table 1: Results of fine-tuning M2M100, ByT5, and MT5 using the original dataset. ## 10 Future Work To address the limitations of our current study, future research can focus on several aspects. Firstly, our dataset predominantly originates from the Bible, which may introduce biased religious references. To mitigate this bias, researchers can collect more diverse and general text data for the Ngambay language. Additionally, exploring advanced techniques such as cyclic back-translation using monolingual target-side data and Meta-Learning for Few-Shot NMT Adaptation, as proposed by (Sennrich et al., 2016) and (Kim et al., 2019) respectively, could lead to enhancements in both the dataset quality and the overall performance of the machine translation (MT) system. These techniques have shown promise in improving MT systems by leveraging additional data and adapting to low-resource languages like Ngambay more effectively. ## 11 Acknowledgments We express our gratitude to the African Institute for Mathematical Sciences (AIMS) and its African Master's in Machine Intelligence (AMMI) program for providing us with high-quality machine-learning training and for supporting us throughout this project. We also extend our appreciation to Google for providing us with a Google Cloud Platform (GCP) grant that allowed us to run our experiments. Special thanks go to the AMMI staff for their assistance and support. Many thanks to Chris Emezue and Lyse Naomi Wamba for the proofreading and useful comments.
2307.13212
Analytical Insights and Universal Behavior in Fast Thermal Equilibration Protocols
When a system deviates from equilibrium, it is possible to manipulate and control it to drive it towards equilibrium within a finite time $t_f$, even reducing its natural relaxation time scale $\tau_{relax}$. Although numerous theoretical and experimental studies have explored these shortcut protocols, few have yielded analytical results for the probability distribution of work, heat and entropy production. In this paper, we propose a two-step protocol that captures the essential characteristics of more general protocols and has analytical solution for the relevant thermodynamic probability distributions. Additionally, we demonstrate that for very short protocol duration $t_f\ll \tau_{relax}$, all protocols exhibit universal behavior in their shape and for the ratio of probability distribution functions of positive and negative work, heat and the entropy production.
Diego Rengifo, Gabriel Téllez
2023-07-25T02:53:23Z
http://arxiv.org/abs/2307.13212v2
# Analytical Insights and Universal Behavior in Fast Thermal Equilibration Protocols ###### Abstract When a system deviates from equilibrium, it is possible to manipulate and control it to drive it towards equilibrium within a finite time \(t_{f}\), even reducing its natural relaxation time scale \(\tau_{\rm relax}\). Although numerous theoretical and experimental studies have explored these shortcut protocols, few have yielded analytical results for the probability distribution of work, heat and entropy production. In this paper, we propose a two-step protocol that captures the essential characteristics of more general protocols and has analytical solution for the relevant thermodynamic probability distributions. Additionally, we demonstrate that for very short protocol duration \(t_{f}\ll\tau_{\rm relax}\), all protocols exhibit universal behavior in their shape and for the ratio of probability distribution functions of positive and negative work, heat and the entropy production. ## I Introduction Almost every thermodynamic system in nature is out of equilibrium. Equilibrium states are therefore not common but desirable. When a system is out of equilibrium and left without any external intervention, it takes a finite time to reach an equilibrium state. Let us call \(\tau_{\rm relax}\) the time scale of relaxation. This time is an intrinsic characteristic of any physical system and depends on various factors, such as the underlying interactions, which are encoded in the friction, transport coefficients, the external parameters (if any), and the temperature [1]. Technology is advancing very rapidly, and one of its trends is to create and control smaller devices. This miniaturization of devices has led to an increasing interest in the development of engineered techniques that can shorten the natural time scale for relaxation between equilibrium states. These procedures are designed to connect equilibrium states through a particular protocol that is considerably shorter than the natural equilibration time. Such techniques have been inspired by the so-called shortcut to adiabaticity [2; 3; 4]. Since then, the term Engineered Swift Equilibration (ESE) has been coined to describe such protocols [5]. These protocols are also known as "shortcut to isothermality" [6] or "swift state-to-state transformations" [7]. Several protocols have been established for the ESE process, including those for frictionless atom cooling in harmonic traps [8; 9] and nanosystems as micromechanical oscillators in contact with a thermostat, both in the overdamped or underdamped regime [5; 10]. These procedures enable the creation of a non-equilibrium state that can be controlled and manipulated, thereby allowing the exploration of novel physical phenomena [7]. Overall, the ESE process has shown great promise in the field of non-equilibrium statistical mechanics and has opened up new avenues for research in the development of novel techniques for controlling and manipulating the dynamics of physical systems, especially for nanodevices. Due to the miniaturization of chips, robots, and devices, understanding controlled dynamics has become mandatory in modern technology. The suitable framework to tackle such small systems is stochastic thermodynamics. To advance our understanding of how protocols accelerate system equilibration, it is crucial to comprehend the stochastic thermodynamics of small systems. 
Therefore, it is vital to calculate the probability distribution functions (PDFs) of work, heat, and entropy production for these distinct processes. In this paper, we consider a Brownian particle, the system, immersed in a viscous medium, the environment at temperature \(T\), whose particles have a smaller size than the Brownian particle. This justifies the validity of what is commonly called the overdamped regime, which is particularly applicable to colloidal particles. The particle is subjected to a time-dependent and externally controlled harmonic-type potential, which allows for precise manipulation and control. An experimental realization of this system has been made with colloidal particles trapped with laser beams [5; 11; 12] and has been used to build microscopic heat engines [13; 14]. When the stiffness of the potential is controlled through ESE protocols, the probability distributions of the relevant thermodynamic quantities, like work and heat, cannot generally be calculated analytically. Therefore, their theoretical study is limited to numerical simulations. In this paper, we introduce the two-step protocol as a novel approach. Our protocol enables us to derive exact analytical results. Remarkably, the analytical solution of the two-step protocol captures the essential characteristics of broader protocols, exhibiting a certain level of universality. The structure of this article is as follows. In the first part, inspired by the ideas presented in [5], we develop a protocol that establishes a connection between two equilibrium states, theoretically reducing the equilibration time. The work of Ref. [5] also proposed a similar protocol and carried out experimental realizations, achieving a two-order-of-magnitude reduction in equilibration time. However, due to the complex energetics involved, a detailed study of these protocols has proven to be extremely challenging [15]. In the second part, we propose a simplified toy model called the Two-Step Protocol (TSP), which allows for an analytical solution of the probability distribution functions of the relevant thermodynamic quantities. We calculate the work, heat, and entropy production PDFs for the TSP and check the validity of the Jarzynski equality [16; 17], the Crooks relation [18; 19], and the entropy production fluctuation theorem [20] for this specific protocol. A comprehensive comparison with more general protocols reveals universal features in the shape of the protocols as well as in the ratio of PDFs \(P_{A}(A)/P_{A}(-A)\) (with \(A\) being the work, heat, or entropy production). These relations are different from the usual fluctuation theorems [16; 17; 18; 19; 20; 21; 22], as both \(P_{A}(A)\) and \(P_{A}(-A)\) refer to the PDF of a forward process (we do not consider a backward process with the direction of time reversed). ## II ESE protocol In this section, we present a review of the concepts underlying the ESE protocols [5; 7]. The central idea behind an ESE process is to construct a customized time-dependent protocol, denoted as \(\lambda(t)\), for the externally controlled parameter. This protocol is specifically designed to guide the system from the initial equilibrium state characterized by \(\lambda_{i}\) to the desired final equilibrium state characterized by \(\lambda_{f}\) within a finite time interval \(t_{f}\) that is shorter than the system's equilibration time \(\tau_{\rm relax}\). 
Let us consider a Brownian particle trapped in a harmonic potential with time-dependent stiffness, given by \[U(x,t)=\frac{1}{2}k(t)x^{2}. \tag{1}\] Here \(x\) is the position of the particle and the protocol \(\lambda(t)\) is essentially characterized by the stiffness \(k(t)\). Initially, the particle is in equilibrium with stiffness \(k_{i}\). By controlling the stiffness of the potential \(k(t)\), the system reaches a final equilibrium state with stiffness \(k_{f}\). However, there are infinitely many possible protocols that can achieve this transformation, so we can impose constraints to obtain the desired solution. Since the goal is to reduce the equilibration time through external control, the problem falls within the realm of optimization theory [23; 24]. A solution to this problem has been presented in [5], where both experimental and theoretical results have been obtained in the overdamped limit, i.e., the acceleration term in the Langevin equation has been neglected. In this limit, the corresponding Langevin equation is given by \[\dot{x}=-\frac{k(t)}{\gamma}x+\sqrt{2\mathcal{D}}\xi(t), \tag{2}\] where \(\xi(t)\) is a Gaussian white noise with zero average and auto-correlation function \(\left\langle\xi(t)\xi(t^{\prime})\right\rangle=\delta(t-t^{\prime})\). Here, \(\beta=1/(k_{B}T)\) and \(\gamma\) are the inverse temperature and friction coefficient, respectively, and \(\mathcal{D}\) is the diffusion constant \(\mathcal{D}=k_{B}T/\gamma\). From now on, we will assume that the overdamped limit is valid. From the Langevin equation, we can infer that the natural time scale is given by \(\tau_{\rm relax}=\gamma/k_{f}\). It is useful to use a set of adimensional variables associated with this time scale and the final stiffness \(k_{f}\) of the protocol: \(\tilde{t}=t/\tau_{\rm relax}\), \(\tilde{x}(\tilde{t})=x(t)/\sqrt{\mathcal{D}\tau_{\rm relax}}\), and \(\tilde{k}(\tilde{t})=k(t)/k_{f}\). The natural energy scale is \(k_{B}T\), given by \(\tilde{U}=U/(k_{B}T)=(1/2)\tilde{k}(\tilde{t})(\tilde{x}(\tilde{t}))^{2}\). With this set of units, the reduced Langevin equation reads \[\frac{d\tilde{x}}{d\tilde{t}}=-\tilde{k}(\tilde{t})\tilde{x}(\tilde{t})+\sqrt {2}\tilde{\xi}(\tilde{t}), \tag{3}\] where \(\tilde{\xi}(\tilde{t})=\sqrt{\tau_{\rm relax}}\xi(t)\), which satisfies \(\left\langle\tilde{\xi}(\tilde{t})\right\rangle=0\) and \(\left\langle\tilde{\xi}(\tilde{t})\tilde{\xi}(\tilde{t}^{\prime})\right\rangle =\delta(\tilde{t}-\tilde{t}^{\prime})\). Using these units, it is clear that there are only two parameters for our problem: the initial stiffness with respect to the final one, \(\tilde{k}_{i}=k_{i}/k_{f}\), and the target duration of the protocol compared to the relaxation time scale, \(\tilde{t}_{f}=t_{f}/\tau_{\rm relax}\). From now on, we will use these non-dimensional units and remove the tilde to lighten the notation. The system is in an equilibrium state at the beginning, \(t_{i}=0\), and the goal of the ESE protocol is that it reaches a new equilibrium state at time \(t_{f}\). As a result, the position probability distributions can be described by Gaussian distributions, which are characterized by \[\left\langle x(0)\right\rangle =0 \tag{4}\] \[\sigma_{i}^{2}=\left\langle x^{2}(0)\right\rangle =\frac{1}{k_{i}}\] (5) \[\sigma_{f}^{2}=\left\langle x^{2}(t_{f})\right\rangle =\frac{1}{k_{f}}=1. 
\tag{6}\] As is well-known [25], the Langevin equation can be mapped to a Fokker-Planck equation \[\frac{\partial P}{\partial t}=\frac{\partial}{\partial x}\left[k(t)xP\right]+ \frac{\partial^{2}P}{\partial x^{2}} \tag{7}\] for the probability density function \(P(x,t)\) of position. To solve this equation, we perform a Fourier transform \[\hat{G}(p,t)=\int_{-\infty}^{\infty}P(x,t)e^{-ipx}\,dx \tag{8}\] leading to an equation of the form \[\frac{\partial}{\partial t}(\ln\hat{G})=-k(t)p\frac{\partial}{\partial p}(\ln \hat{G})-p^{2}. \tag{9}\] The combination \(\ln\hat{G}\) is the cumulant generating function whose Taylor series is \[\ln\hat{G}(p,t)=\sum_{n=0}^{\infty}\chi_{n}(t)\frac{(-ip)^{n}}{n!}, \tag{10}\] where \(\chi_{n}(t)\) are the cumulants of \(P(x,t)\). The average value and variance correspond to \(n=1\) and \(n=2\), respectively. The insertion of this expansion leads to a set of ordinary differential equations governing the time evolution of all cumulants of the position. Notably, the only non-trivial equation corresponds to \(n=2\) (\(\chi_{2}(t)=\langle x(t)^{2}\rangle\)) \[\dot{\chi}_{2}(t)+2k(t)\chi_{2}(t)=2. \tag{11}\] The essence of the ESE protocols is to propose a particular functional form for the variance \(\chi_{2}\) such that it satisfies the initial (5) and final conditions (6): \(\chi_{2}(0)=1/k_{i}\) and \(\chi_{2}(t_{f})=1/k_{f}=1\). After that, the appropriate stiffness \(k(t)\) can be extracted from equation (11). Thus, the process of finding the appropriate stiffness is done in a reverse-engineered way. To ensure a smooth transition to equilibrium, two additional conditions can be optionally imposed on \(\chi_{2}(t)\), given by \[\dot{\chi}_{2}(0)=\dot{\chi}_{2}(t_{f})=0. \tag{12}\] We have four conditions that must be satisfied, and hence, we need four parameters to adjust. In addition, we aim to optimize the work done on the system during the process by adding an extra fifth parameter that allows us to tune it in such a way that the work is minimized. Based on this consideration, we propose a solution in the form of a fourth-degree polynomial for \(\chi_{2}\): \[\chi_{2}(t)=A_{0}+A_{1}t+A_{2}t^{2}+A_{3}t^{3}+A_{4}t^{4}. \tag{13}\] Finding the parameters \(A_{i}\) using the boundary conditions (5), (6) and (12), the variance is given by \[\chi_{2}(t)=\frac{1}{k_{i}}-\frac{\Delta k}{k_{i}k_{f}}\left(3s^{2}-2s^{3} \right)+A_{4}t_{f}^{4}\left(s^{2}-2s^{3}+s^{4}\right), \tag{14}\] where \(s=t/t_{f}\) is the reduced time and \(\Delta k=k_{f}-k_{i}\). Replacing this in (11), the function \(k(t)\) must be \[k(t)=k_{i}\frac{1+\frac{3}{t_{f}}\frac{\Delta k}{k_{i}}(s-s^{2})-\epsilon \frac{k_{f}}{k_{i}}\frac{1}{t_{f}}(s-3s^{2}+2s^{3})}{1-\frac{\Delta k}{k_{f}} (3s^{2}-2s^{3})+\epsilon(s^{2}-2s^{3}+s^{4})}, \tag{15}\] where \(\epsilon=A_{4}k_{i}t_{f}^{4}\). The stochastic work [26; 27] done on the system is given by \[W=\frac{1}{2}\int_{0}^{t_{f}}x^{2}\frac{dk}{dt}dt.
\tag{16}\] Computing the average and replacing (15), we obtain \[\langle W\rangle =\frac{1}{2}\ln\left(\frac{k_{f}}{k_{i}}\right)+\frac{1}{4t_{f}} \frac{k_{f}}{k_{i}}\eta,\] \[=\Delta F+\langle W_{\rm irr}\rangle \tag{17}\] where \(\eta\) has the expression \[\eta=\int_{0}^{1}\frac{\left[\frac{-6\Delta k}{k_{f}}(s-s^{2})+\epsilon(2s-6s ^{2}+4s^{3})\right]^{2}}{1-\frac{\Delta k}{k_{f}}(3s^{2}-2s^{3})+\epsilon(s^{ 2}-2s^{3}+s^{4})}ds, \tag{18}\] and \(\Delta F=\frac{1}{2}\ln(k_{f}/k_{i})\) is the free energy difference between the final and initial state and \(\langle W_{\rm irr}\rangle\) is the irreversible work. The average work can be interpreted as a function of \(\epsilon\), and numerical methods can be used to find the value of \(\epsilon\) that minimizes the average work. It is worth noting that the irreversible work \(\langle W_{\rm irr}\rangle\) exhibits an inverse relationship with the duration of the protocol, \(t_{f}\). This inverse correlation is consistent with our expectation for a process aimed at accelerating equilibration. The external control exerted on the system necessitates additional work to achieve equilibrium within a shorter timeframe. Since the choice for the functional form of \(\chi_{2}\) is arbitrary, there exist an infinite number of solutions to the fast equilibration problem. For example, in [5], a polynomial of degree 3 form was proposed for the inverse of \(\chi_{2}(t)\), and the corresponding experimental realization using optical tweezers has been done. The experimental realization has proved that their protocol has shortened the relaxation time by two orders of magnitude. In order to make a comparison, we use the same values reported in [5], which in adimensional units are \(k_{i}=0.5\) and \(t_{f}=1/30\), as shown in Figure 1. The average work for the protocol \(k(t)\) defined in Eq. (15), optimizing \(\epsilon\) to minimize the average work, is \(\langle W\rangle=6.52\), while the result published in [5] is \(\langle W\rangle=6.71\). It is worth noting that the optimal work protocol can be obtained using Pontryagin's principle (see Eq. (20)), with a result of \(\langle W\rangle_{\rm opt}=5.49\)[5; 23; 24; 28; 29]. Once we have proposed a form for \(\chi_{2}(t)\), it is possible to find the corresponding protocol \(k(t)\) using Eq. (11). It is clear that there are infinitely many protocols with different characteristics. For example, for a variance of the form \[\chi_{2}(t)=\frac{1}{k_{i}}(1+ct)^{2}, \tag{19}\] the protocol is \[k_{\rm op}(t)=\frac{k_{i}}{(1+ct)^{2}}-\frac{c}{1+ct}, \tag{20}\] where \(c=\frac{1}{t_{f}}(\sqrt{k_{i}/k_{f}}-1)\) and time interval \(t\in(0,t_{f})\). This protocol has the peculiarity that it minimizes the work, as it has been shown in Refs. [23; 28]. A second example is a linear relation for the variance \(\chi_{2}(t)\) that satisfy the boundary conditions Eqs.(5)-(6): \[\chi_{2}(t)=\frac{1}{k_{i}}+(\frac{1}{k_{f}}-\frac{1}{k_{i}})\frac{t}{t_{f}}, \tag{21}\] whose protocol \(k_{L}(t)\) is of the form for \(t\in(0,t_{f})\) \[k_{L}(t)=\frac{2-(\frac{1}{k_{f}}-\frac{1}{k_{i}})\frac{1}{t_{f}}}{2\left(\frac{ 1}{k_{i}}+(\frac{1}{k_{f}}-\frac{1}{k_{i}})\frac{t}{t_{f}}\right)}. \tag{22}\] Both protocols are discontinuous at \(t=0\) and \(t=t_{f}\) and they will be called optimal and linear, respectively. Studying the energetics [26] of these protocols, as well as almost all other protocols, can only be achieved through numerical simulations [15]. 
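Since the average work (17)-(18) is precisely the quantity one minimizes over \(\epsilon\) in practice, a short numerical sketch may be useful. The following is a minimal Python illustration (not the simulation code of Ref. [32]) that evaluates \(\eta(\epsilon)\) by quadrature and minimizes \(\langle W\rangle\), assuming the reduced units of the text with \(k_{i}=0.5\), \(k_{f}=1\) and \(t_{f}=1/30\); the optimum should land close to the value quoted above.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

k_i, k_f, t_f = 0.5, 1.0, 1.0 / 30.0      # reduced units of the text
dk = k_f - k_i

def eta(eps):
    """Integral of Eq. (18) for a given value of epsilon."""
    def integrand(s):
        num = (-6.0 * dk / k_f * (s - s**2) + eps * (2*s - 6*s**2 + 4*s**3)) ** 2
        den = 1.0 - dk / k_f * (3*s**2 - 2*s**3) + eps * (s**2 - 2*s**3 + s**4)
        return num / den
    return quad(integrand, 0.0, 1.0)[0]

def mean_work(eps):
    """Average work of Eq. (17): free-energy difference plus irreversible part."""
    return 0.5 * np.log(k_f / k_i) + k_f / (4.0 * t_f * k_i) * eta(eps)

res = minimize_scalar(mean_work, bounds=(-2.0, 2.0), method="bounded")
print(f"optimal epsilon = {res.x:.3f}, minimal <W> = {res.fun:.2f}")
```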
This raises the question of whether it is possible to design a protocol for which analytical solutions can be obtained for the PDF of work, heat, and entropy production. The next section addresses this question. ## III Two-step protocol In this section, we propose a new fast thermalization protocol that provides analytical solutions for the PDF of relevant thermodynamic quantities. Upon careful examination of Figure 1, it is evident that the stiffness must be significantly greater than both the initial and final values of \(k\). This is because, in order to accelerate the equilibration process, the stiffness needs to be increased to reduce the characteristic time scale evolution of the system. Therefore, it is reasonable to conclude that the large values of stiffness play a critical role in achieving a shortcut to adiabaticity. Keeping this in mind, we propose a two-step protocol (TSP) defined as follows: \[k(t)=\begin{cases}k_{i}&\text{if}\,\,\,t\leq 0,\\ k_{m}&\text{if}\,\,\,0<t<t_{f},\\ k_{f}&\text{if}\,\,\,t\geq t_{f}.\end{cases} \tag{23}\] As we will see, this toy model captures the essential features of general protocols. Using this stiffness, the variance in the time interval \(0\leq t\leq t_{f}\) is obtained by solving Eq. (11) with the initial condition (5): \[\chi_{2}(t)=\sigma_{X}(t)^{2}=\frac{1}{k_{m}}+\left(\frac{1}{k_{i}}-\frac{1} {k_{m}}\right)e^{-2k_{m}t}. \tag{24}\] Using this variance, the expression for the probability distribution of the position at any time (\(0\leq t\leq t_{f}\)) can be written as: \[P_{X}(x_{t},t)=\frac{1}{\sqrt{2\pi}\sigma_{X}(t)}\exp\left(-\frac{x_{t}^{2}}{ 2\sigma_{X}^{2}(t)}\right). \tag{25}\] Now, by forcing the system to arrive at a final equilibrium state at \(t=t_{f}\), the variance \(\chi_{2}(t_{f})\) has to be \(1/k_{f}\). This leads us to a self-consistency equation for \(k_{m}\) given by: \[\exp(-2k_{m}t_{f})=\frac{\frac{1}{k_{f}}-\frac{1}{k_{m}}}{\frac{1}{k_{i}}- \frac{1}{k_{m}}}. \tag{26}\] This equation is transcendental and can be solved numerically to determine the value of \(k_{m}\). Utilizing this value, the TSP is a control protocol that joins the equilibrium state at \(k_{i}\) with the equilibrium state at \(k_{f}\) in a finite time \(t_{f}\). This equation is not symmetric under time inversion of the protocol, i.e., if the system evolves from \(k_{f}\) to \(k_{i}\), and the equilibrium distributions are desired at initial and final times, the value of \(k_{m}\) changes because the roles of \(k_{i}\) and \(k_{f}\) need to be interchanged in Eq. (26), and therefore the system does not follow the same evolution under the reverse direction of time. This fact is related to the entropy production, as discussed below. This approach differs from that of the previous section and previous works [5; 30], as here we propose a specific form of the protocol and determine the corresponding variance. Thus, this process can be referred to as direct engineering. Although Eq. (26) cannot be solved analytically, we can derive the asymptotic behavior of \(k_{m}\) for large and small values of the target final time \(t_{f}\). If \(t_{f}\gg 1\), \(\exp(-2k_{m}t_{f})\to 0\). From the right-hand side of (26), we find \(k_{m}=k_{f}=1\). This is to be expected as if we have an infinite amount of time to let the system equilibrate (\(t_{f}\to\infty\)), we can fix \(k_{m}=k_{f}\) and wait for the system to reach equilibrium at this new value of the stiffness. More interesting and pertinent is the behavior when \(t_{f}\to 0\). 
In this limit, \(k_{m}\to\infty\), which means that the shorter the target time is, the larger the stiffness needs to be. In this limit, the right-hand side of (26) converges to a finite value \(k_{i}/k_{f}\). Therefore, we deduce that \(k_{m}t_{f}\) must remain finite, and we have: \[k_{m}\sim\frac{1}{2t_{f}}\ln\frac{k_{f}}{k_{i}}. \tag{27}\] We observe that \(k_{m}\) is inversely proportional to \(t_{f}\). This quantifies the previous observation that to achieve fast relaxation to equilibrium, it is necessary to significantly increase the stiffness. Since the typical relaxation time at a fixed stiffness \(k_{m}\) is of the order \(1/k_{m}\), the TSP matches a transient relaxation time of the order \(1/k_{m}\) to the target duration \(t_{f}\) of the accelerated protocol. The specific relation is given by (27). By expanding the right-hand side of (26) in powers of \(1/k_{m}\), one can systematically compute the corrections to (27). For example, the next-to-leading order term is given by: \[k_{m}t_{f}=\frac{1}{2}\ln\frac{k_{f}}{k_{i}}+t_{f}\frac{k_{f}-k_{i}}{\ln(k_{f} /k_{i})}+O(t_{f}^{2}). \tag{28}\] It turns out that the leading order (27) is universal for more general protocols, if we replace \(k_{m}\) by the average value of \(k(t)\) over the protocol duration \[k_{m}=\frac{1}{t_{f}}\int_{0}^{t_{f}}k(t)\,dt. \tag{29}\] Indeed, using Eq. (11) for a general protocol, we find that the average of \(k(t)\) is \[k_{m}=\frac{1}{2t_{f}}\ln\frac{\chi_{2}(0)}{\chi_{2}(t_{f})}+\int_{0}^{t_{f}} \frac{1}{\chi_{2}(t)}\frac{dt}{t_{f}}. \tag{30}\] Since \(\chi_{2}\) is a continuous function of \(t\), applying the mean value theorem and using the initial and final conditions (5)-(6) leads to \[k_{m}=\frac{1}{2t_{f}}\ln\frac{k_{f}}{k_{i}}+\frac{1}{\chi_{2}(t^{*})} \tag{31}\] for some \(t^{*}\in[0,t_{f}]\). Figure 2 illustrates this by showing a plot of \(k_{m}t_{f}\) as a function of \(t_{f}\) for different values of \(k_{i}\) and the ESE protocols discussed in the previous section and Ref. [5]. As \(t_{f}\to 0\), all the protocols converge to the same value given by (27). However, the next-to-leading corrections differ from one protocol to another.

Figure 1: Plots of the protocols (15), of Martinez et al. [5] and of the TSP (23). Note that the maximum value of \(k(t)\) occurs around \(k_{\max}=16k_{f}\); this high value is necessary to reduce the equilibration time. The central step of the TSP is \(k_{m}\), which is the solution of Eq. (26). The differences between the average stiffness of the protocol of Martinez et al. [5], the average stiffness of the protocol \(k(t)\) of Eq. (15), and \(k_{m}\) are small and not visible in the plot. The parameters used are \(k_{i}=1/2\) and \(t_{f}=1/30\).

### Work distribution for two-step protocol Since the TSP seems to be analytically tractable and exhibits some general features of more complex ESE protocols, we now proceed to use it as a basis for understanding the stochastic energetics of the fast relaxation protocols. More specifically, in this section, we want to calculate the work probability distribution. Using (16), we obtain the work done on the system \(W_{D}\) during the process \[W_{D}=\left\{\begin{array}{ll}\frac{1}{2}(k_{m}-k_{i})x_{i}^{2}&0<t<t_{f}, \\ \frac{1}{2}(k_{m}-k_{i})x_{i}^{2}+\frac{1}{2}(k_{f}-k_{m})x_{f}^{2}&t>t_{f}. \end{array}\right. \tag{32}\] Since the initial and final positions are stochastic variables with normal distributions, the work done is also a stochastic variable.
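Before working out the distribution of \(W_{D}\), it is worth checking the consistency equation (26) and its small-\(t_{f}\) expansion (27)-(28) numerically, since the value of \(k_{m}\) enters all the results below. A minimal sketch using standard root finding (not the code of Ref. [32]) is:

```python
import numpy as np
from scipy.optimize import brentq

k_i, k_f = 0.5, 1.0          # reduced units, k_f = 1

def k_m_of(t_f):
    """Root of the transcendental consistency equation (26)."""
    def f(k_m):
        return np.exp(-2.0 * k_m * t_f) - (1.0 / k_f - 1.0 / k_m) / (1.0 / k_i - 1.0 / k_m)
    # for k_i < k_f the root lies above k_f; the bracket below is generous
    return brentq(f, k_f * (1.0 + 1e-9), 1e3 / t_f)

for t_f in (1.0 / 30.0, 0.1, 0.5):
    km = k_m_of(t_f)
    leading = 0.5 * np.log(k_f / k_i)                         # Eq. (27)
    nlo = leading + t_f * (k_f - k_i) / np.log(k_f / k_i)     # Eq. (28)
    print(f"t_f = {t_f:.3f}:  k_m t_f = {km * t_f:.4f}  (leading {leading:.4f}, NLO {nlo:.4f})")
```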
To compute the work distribution function, we consider the following cases: #### iii.1.1 For \(0<t<t_{f}\) To study this time interval, we need to determine the position probabilities at two points described by \((x_{i},t_{i}=0)\) and \((x_{t},t)\). Therefore, the joint probability distribution (JPD) can be expressed as follows: \[P_{2}(x_{i},t_{i};x_{t},t)=P(x_{t},t|x_{i},t_{i})P(x_{i},t_{i}). \tag{33}\] Figure 2: The average value \(k_{m}\) of the stiffness multiplied by \(t_{f}\) is plotted for several protocols: the Two-Step Protocol (TSP) given by Eq.(23), the protocol proposed by Martinez et al.[5], and the protocol \(k(t)\) given by Eq.(15). It can be observed that at short target times \(t_{f}\), all protocols exhibit the same asymptotic behavior, as predicted by Eq.(27). Computing \(P_{2}\) is a straightforward task; \(P(x_{t},t|x_{i},t_{i})\) follows an Ornstein-Uhlenbeck process distribution [31]: \[P(x_{t},t_{t}|x_{i},t_{i})=\frac{1}{\sqrt{2\pi\sigma_{m}(t)^{2}}}\exp\biggl{\{}- \frac{(x_{t}-x_{i}e^{-k_{m}t})^{2}}{2\sigma_{m}(t)^{2}}\biggr{\}} \tag{34}\] where \[\sigma_{m}(t)^{2}=\frac{1-e^{-2k_{m}t}}{k_{m}}. \tag{35}\] The initial Gaussian probability distribution is \[P(x_{i},t_{i})=\frac{1}{\sqrt{2\pi\sigma_{i}}}\exp\biggl{\{}-\frac{x_{i}^{2}}{ 2\sigma_{i}^{2}}\biggr{\}} \tag{36}\] with \(\sigma_{i}^{2}=1/k_{i}\) (eq. (25) at \(t=0\)). To compute the probability density function of the work, we adopt the following approach: while there exist an infinite number of paths connecting the points \((x_{i},t_{i})\) and \((x_{t},t)\), we focus solely on the paths that yield a specific amount of work, denoted as \(W\), regardless of the initial and final points. Hence, the PDF of the work can be expressed as follows: \[P_{W}(W,t)=\int_{-\infty}^{\infty}dx_{i}dx_{t}\delta(W-W_{D})P_{2}(x_{i},t_{i} ;x_{t},t). \tag{37}\] Since the work \(W_{D}\) is independent of \(x_{t}\) for \(t<t_{f}\), the work probability is stationary, and the previous equation simplifies to: \[P_{W}(W,t)=\int_{-\infty}^{\infty}dx_{i}\delta\left(W-\frac{(k_{m}-k_{i})x_{i} ^{2}}{2}\right)P(x_{i},t_{i}). \tag{38}\] If \(W/(k_{m}-k_{i})<0\), the argument of the Dirac delta distribution never vanishes, resulting in \(P(W,t)=0\). However, when \(W/(k_{m}-k_{i})\geq 0\), there are two roots to the equation \(W-(k_{m}-k_{i})x_{i}^{2}/2=0\), yielding: \[P_{W}(W,t)=\sqrt{\frac{k_{i}}{\pi(k_{m}-k_{i})W}}\exp\left(-\frac{k_{i}}{k_{m} -k_{i}}W\right). \tag{39}\] This result can also be obtained using the methods described in [27; 15]. Returning to (38), one can use the Fourier representation of the Dirac distribution \[\delta(W-(k_{m}-k_{i})x_{i}^{2}/2)=\int_{-\infty}^{+\infty}e^{iz(W-(k_{m}-k_{ i})x_{i}^{2}/2)}\,\frac{dz}{2\pi}. \tag{40}\] to compute the characteristic function of \(P_{W}(W,t)\). Using (40) into (38), the resulting gaussian integral over \(x_{i}\) can be performed to obtain \[P_{W}(W,t)=\frac{1}{2\pi}\int_{-\infty}^{\infty}dz\hat{P}_{W}(z)e^{iWz} \tag{41}\] where the characteristic function is \[\hat{P}_{W}(z)=\frac{1}{\sqrt{1+ia_{1}z}} \tag{42}\] with \(a_{1}=(k_{m}-k_{i})/k_{i}\). From this we can obtain all the cumulants \(c_{n}\) of \(W\), \[\ln\hat{P}_{W}(z) =\sum_{n=1}^{\infty}\frac{(-iz)^{n}}{n!}c_{n}\] \[=\sum_{n=1}^{\infty}\frac{(-ia_{1}z)^{n}}{n}. \tag{43}\] Thus \[c_{n}=\frac{1}{2}(n-1)!a_{1}^{n}. 
\tag{44}\] In particular, the average work and variance for this time interval are \[\left\langle W\right\rangle = \frac{k_{m}-k_{i}}{2k_{i}}, \tag{45}\] \[\sigma_{W}^{2} = \frac{(k_{m}-k_{i})^{2}}{2k_{i}^{2}}. \tag{46}\] Note that \(\sigma_{W}^{2}=2\left\langle W\right\rangle^{2}\). From the characteristic function one can check that the work distribution satisfies Jarzynski equality [16] as it should: \[\left\langle e^{-W}\right\rangle=\hat{P}_{W}(i)=\sqrt{\frac{k_{i}}{k_{m}}}=e^{ -\Delta F}, \tag{47}\] where \(\Delta F=\frac{1}{2}\ln(k_{m}/k_{i})\) is the free energy difference for the oscillator with final stiffness \(k_{m}\) (in the time interval \(0<t<t_{f}\)) and initial stiffness \(k_{i}\). #### iii.2.2 For \(t\geq t_{f}\) In the temporal range \(t\geq t_{f}\), it is necessary consider the three-point joint probability distribution \[P_{3}(x_{i},t_{i};x_{f},t_{f};x,t)=P(x,t|x_{f},t_{f})P_{2}(x_{f},t_{f};x_{i},t _{i}). \tag{48}\] Again, the work distribution is stationary, hence we only need the two-points JPD \[P_{2}(x_{i},t_{i};x_{f},t_{f}) =\frac{1}{2\pi\sigma_{i}\sigma_{m}(t)}\exp\biggl{\{}-\frac{(x_{f }-x_{i}e^{-k_{m}t_{f}})^{2}}{2\sigma_{m}(t_{f})^{2}}\biggr{\}}\] \[\times\exp\biggl{\{}-\frac{x_{i}^{2}}{2\sigma_{i}^{2}}\biggr{\}} \tag{49}\] here \(\sigma_{m}(t)\) is given by (35). Inserting the two-point JPD in an expression similar to (37) and using the Fourier representation of the Dirac distribution, we obtain a Gaussian integral in two variables that can easily be computed leading us to a characteristic function of the form \[\hat{P}_{W}(z)=\frac{1}{\sqrt{1+a_{1}z+a_{2}z^{2}}} \tag{50}\] where the coefficients are given by \[a_{1} = \frac{i(k_{f}-k_{i})}{k_{i}}+\frac{i(k_{f}-k_{m})(k_{i}-k_{m})\sigma _{m}^{2}(t_{f})}{k_{i}} \tag{51}\] \[a_{2} = \frac{i(k_{f}-k_{m})(k_{i}-k_{m})\sigma_{m}^{2}(t_{f})}{k_{i}} \tag{52}\] or utilizing the consistency equation (26), these expressions can be simplified to \[a_{1} = \frac{i(k_{f}-k_{i})k_{m}}{k_{i}k_{f}} \tag{53}\] \[a_{2} = \frac{(k_{f}-k_{i})(k_{m}-k_{f})}{k_{f}k_{i}} \tag{54}\] Clearly, \(P_{W}(W)\) is normalized (\(\hat{P}_{W}(0)=1\)) and satisfies the Jarzynski equality [16] \[\left\langle e^{-W}\right\rangle=\hat{P}_{W}(i)=e^{-\Delta F}, \tag{55}\] where now \(\Delta F=\frac{1}{2}\ln(k_{f}/k_{i})\) is the Helmholtz free energy difference between the two equilibrium states with stiffness \(k_{f}\) and \(k_{i}\). From the characteristic function, we can determine any moments of the work distribution, including the average work and variance: \[\left\langle W\right\rangle = \frac{k_{m}}{2}\left(\frac{1}{k_{i}}-\frac{1}{k_{f}}\right), \tag{56}\] \[\sigma_{W}^{2} = 2\left\langle W\right\rangle^{2}+\frac{2(k_{m}-k_{f})\left\langle W \right\rangle}{k_{m}}. \tag{57}\] It is worth noting that \(\sigma_{W}^{2}>2\left\langle W\right\rangle^{2}\) holds true when \(\left(k_{m}-k_{f}\right)\left\langle W\right\rangle>0\). Fig. 6 shows the evolution of the average work and its standard deviation. The exact computation of the inverse Fourier transform in Eq. (41) can be achieved for the characteristic function given in Eq. (50). To accomplish this, it is imperative to use the vertex form of a quadratic polynomial. Subsequently, an elementary change of variable results in an integral representation of the modified Bessel function of the second kind of order zero \(K_{0}\). 
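Before quoting the closed form of the distribution, the moments (56)-(57) and the Jarzynski equality can be checked by sampling \((x_{i},x_{f})\) directly from Eqs. (34) and (36). The sketch below is illustrative only; it re-solves Eq. (26) for \(k_{m}\) and assumes the parameters \(k_{i}=0.5\), \(t_{f}=1/30\) used in the figures.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)
k_i, k_f, t_f = 0.5, 1.0, 1.0 / 30.0
k_m = brentq(lambda k: np.exp(-2*k*t_f) - (1/k_f - 1/k) / (1/k_i - 1/k),
             k_f + 1e-9, 1e4)                                # consistency Eq. (26)
sig_m2 = (1.0 - np.exp(-2.0 * k_m * t_f)) / k_m              # Eq. (35)

n = 10**6
x_i = rng.normal(0.0, np.sqrt(1.0 / k_i), n)                             # Eq. (36)
x_f = x_i * np.exp(-k_m * t_f) + rng.normal(0.0, np.sqrt(sig_m2), n)     # Eq. (34)
W = 0.5 * (k_m - k_i) * x_i**2 + 0.5 * (k_f - k_m) * x_f**2              # Eq. (32), t > t_f

W_mean = 0.5 * k_m * (1.0 / k_i - 1.0 / k_f)                             # Eq. (56)
W_var = 2.0 * W_mean**2 + 2.0 * (k_m - k_f) * W_mean / k_m               # Eq. (57)
print("mean:    ", W.mean(), "theory:", W_mean)
print("variance:", W.var(), "theory:", W_var)
# Jarzynski equality; the exponential average converges slowly, so expect some scatter
print("<exp(-W)>:", np.exp(-W).mean(), "exp(-DF):", np.sqrt(k_i / k_f))
```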
The final expression is as follows: \[P_{W}(W)=\frac{1}{\pi\sqrt{\sigma_{W}^{2}-2\left\langle W\right\rangle^{2}}} \exp\Biggl{\{}\frac{\left\langle W\right\rangle}{\sigma_{W}^{2}-2\left\langle W \right\rangle^{2}}W\Biggr{\}}K_{0}\left(\frac{\sqrt{\sigma_{W}^{2}-\left\langle W \right\rangle^{2}}}{\sigma_{W}^{2}-2\left\langle W\right\rangle^{2}}|W| \right). \tag{58}\] The tails of this work distribution can be obtained from the asymptotic behavior of the Bessel function \(K_{0}\), they are \[P_{W}(W)\sim\left\{\begin{array}{cc}\frac{1}{\sqrt{2\pi}\sigma_{r}W}\exp \Bigl{\{}\frac{-1}{\sigma_{r}+\left\langle W\right\rangle}W\Bigr{\}}&W\to \infty,\\ \frac{1}{\sqrt{-2\pi}\sigma_{r}W}\exp\Bigl{\{}\frac{1}{\sigma_{r}-\left\langle W \right\rangle}W\Bigr{\}}&W\to-\infty.\end{array}\right. \tag{59}\] where \(\sigma_{r}^{2}=\sigma_{W}^{2}-\left\langle W\right\rangle^{2}\). To test this analytical result and compare it with the PDF for more complicated protocols, we performed simulations using a code developed by one of us [32]. The comparison between this exact solution and the simulation is depicted in figure 3. The probability density function of the work exhibits an intriguing structure characterized by the product of two components. The first component is a symmetric function of \(W\), specifically the Bessel \(K_{0}\) function with an argument proportional to the absolute value of \(W\). The second component is an exponential function of \(W\), introducing an asymmetry in the probabilities of obtaining positive or negative work values. Since the overall process is a compression (\(k_{f}>k_{i}\)), the average work on the particle is positive. Nevertheless there are rare events in which \(W<0\) but those are less frequent than the ones where \(W>0\): the tail of \(P_{W}(W)\) for \(W>0\) is larger than the one for \(W<0\). Mathematically, this asymmetry factor can be expressed as follows: \[\frac{P_{W}(W)}{P_{W}(-W)}=\exp\Biggl{\{}\frac{2\left\langle W\right\rangle}{ \sigma_{W}^{2}-2\left\langle W\right\rangle^{2}}W\Biggr{\}}=\exp\Biggl{\{} \frac{k_{m}}{k_{m}-k_{f}}W\Biggr{\}}. \tag{60}\] Equivalently, this implies the symmetry relation for the characteristic function \[\hat{P}_{W}(z)=\hat{P}_{W}\left(-z-i\frac{k_{m}}{k_{m}-k_{f}}\right). \tag{61}\] The other protocols follow a similar relation, at least for \(t_{f}\ll 1\), as observed in the plot shown in Figure 4. This suggests that the relation given by Eq. (60) is universal for protocols that achieve fast thermal equilibration in a short time, where \(t_{f}\ll\tau_{\rm relax}=1\). However, this universality breaks down for longer times: when \(t_{f}\sim 1\), the protocols exhibit slight deviations from the predicted behavior of the TSP protocol, as shown in Figure 5. Nonetheless, the universality of Eq. (60) remains a prominent feature, valid within the range of interest for fast thermalization protocols (\(t_{f}\ll 1\)). #### iii.1.3 Reverse protocol In the previous section, we computed the probability distribution connecting two equilibrium states, \(k_{i}\) and \(k_{f}\), through a suitable choice of \(k_{m}\) as given by Eq. (26). However, our results are more general than this specific case. The probability density function of work retains the same expressions as before for arbitrary \(k_{m}\), except that the average work and its variance will change. 
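Before moving on to the reverse protocol, the closed form (58) and the asymmetry relation (60) can be verified directly, for instance with the following sketch; the value \(k_{m}\approx 11.12\) is the approximate solution of Eq. (26) for \(k_{i}=1/2\) and \(t_{f}=1/30\), and the exponentially scaled Bessel function is used only to avoid overflow in the tails.

```python
import numpy as np
from scipy.special import k0e
from scipy.integrate import quad

k_i, k_f, k_m = 0.5, 1.0, 11.12          # k_m: approximate root of Eq. (26) for t_f = 1/30
W_mean = 0.5 * k_m * (1.0 / k_i - 1.0 / k_f)                   # Eq. (56)
W_var = 2.0 * W_mean**2 + 2.0 * (k_m - k_f) * W_mean / k_m     # Eq. (57)
a = W_var - 2.0 * W_mean**2                                    # positive combination in Eq. (58)
c = np.sqrt(W_var - W_mean**2) / a                             # decay rate of the Bessel factor

def pdf(W):
    """Closed form (58); k0e(x) = exp(x) K0(x), so the explicit exponent stays bounded."""
    return np.exp(W_mean * W / a - c * abs(W)) * k0e(c * abs(W)) / (np.pi * np.sqrt(a))

norm = sum(quad(pdf, lo, hi)[0] for lo, hi in
           [(-np.inf, -1.0), (-1.0, 0.0), (0.0, 1.0), (1.0, np.inf)])
print("normalisation:", norm)                                  # should be close to 1

for W in (1.0, 3.0, 6.0):                                      # asymmetry relation, Eq. (60)
    print(W, pdf(W) / pdf(-W), np.exp(k_m * W / (k_m - k_f)))
```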
In this general setup, the expressions for \(t\geq t_{f}\) are as follows: \[\left\langle W\right\rangle = \frac{k_{f}-k_{i}}{2k_{i}}+\frac{(k_{m}-k_{f})(k_{m}-k_{i})}{2k_{ i}}\sigma_{m}^{2}(t_{f}), \tag{62}\] \[\sigma_{W}^{2} = 2\left\langle W\right\rangle^{2}+\frac{(k_{m}-k_{f})(k_{m}-k_{i} )}{k_{i}}\sigma_{m}^{2}(t_{f}). \tag{63}\] For \(t<t_{f}\), the expressions remain unchanged, as is shown in Eq. (45). Figure 4: The protocol given by Martinez et al. [5], the protocol described in Eq. (15), and the linear (22) and optimal protocol (20) exhibit a relation predicted by the TSP in Eq. (60). We have performed \(10^{6}\) simulations using \(k_{i}=1/2\) and \(t_{f}=1/30\). Figure 5: For \(t_{f}\sim 1\), the protocols shown in the figure, present a deviation from the behavior described by (60). However, we can appreciate a linear tendency. This plot has been made for \(10^{6}\) simulations with parameters \(t_{f}=0.7\) and \(k_{i}=1/2\). Figure 3: The curve is the theoretical prediction for \(t>t_{f}\) given by (58) and the histogram is built simulating the Langevin dynamic of the system under study. This histogram is for \(10^{6}\) simulations with parameters \(k_{i}=k_{f}/2=1/2\) and \(t_{f}=1/(30k_{f})\). Figure 6: The standard deviation and work average in all time interval for the TSP protocol whose parameters are \(k_{i}=1/2\) and \(t_{f}=1/30\). Let us consider a time-reversed protocol defined as \[k^{R}(t)=\begin{cases}k_{f}&\text{if}\;\;t\leq 0,\\ k_{m}&\text{if}\;\;0<t<t_{f},\\ k_{i}&\text{if}\;\;t\geq t_{f}.\end{cases} \tag{64}\] In this time-reversed protocol, we do not change the value of \(k_{m}\), it is the same as in the forward protocol. As a result, the system will _not_ be at thermal equilibrium at \(t_{f}\), since the value of \(k_{m}\) has not been adjusted properly (to obtain thermal equilibrium at \(t_{f}\) in a reversed protocol, the roles of \(k_{i}\) and \(k_{f}\) had to be interchanged in Eq. (26)). The Crooks relation can be verified by utilizing the general expression for \(\left\langle W\right\rangle\) and \(\sigma_{W}^{2}\)[19]: \[\frac{P_{W}(W)}{P_{W}^{R}(-W)}=\exp\{W-\Delta F\},\quad\text{for any }t. \tag{65}\] Here, \(P_{W}^{R}(-W)\) represents the work probability distribution of the reverse protocol. The cancellation of Bessel functions occurs due to the symmetry of the argument under the interchange of \(k_{i}\) and \(k_{f}\). The factor \(\Delta F\) arises from the ratio of normalization factors, while the appearance of \(W\) stems from the exponential function (refer to Appendix A for more details). ### Heat distribution function for TSP Now we move to the study of another relevant thermodynamics quantity: heat. In the realm of stochastic thermodynamics, the concept of heat, as defined in [26], is expressed as: \[Q=\int_{0}^{t}k(t)x\dot{x}dt. \tag{66}\] Using the first law of thermodynamics, the heat can also be computed from \(Q_{D}(t)=U(t)-U(0)-W_{D}(t)\). For the two-step protocol (TSP), the stochastic heat is: \[Q_{D}(t)=\left\{\begin{array}{ll}\frac{1}{2}k_{m}(x_{t}^{2}-x_{i}^{2})&t\leq t _{f},\\ \\ \frac{1}{2}k_{m}(x_{f}^{2}-x_{i}^{2})+\frac{1}{2}k_{f}(x_{t}^{2}-x_{f}^{2})&t> t_{f}.\end{array}\right. \tag{67}\] It is worth noting that the heat distribution is time-dependent because it involves \(x_{t}\). This complication leads to a non-stationary heat probability density function (PDF). 
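The non-stationarity of the heat statistics for \(t\leq t_{f}\) is easy to visualise by exact sampling of the Ornstein-Uhlenbeck propagator. In the minimal sketch below, the analytic mean quoted for comparison is the one derived in the next subsection (Eq. (68)), and \(k_{m}\) is again the approximate solution of Eq. (26) for the parameters of the figures.

```python
import numpy as np

rng = np.random.default_rng(0)
k_i, k_f, t_f = 0.5, 1.0, 1.0 / 30.0
k_m = 11.12                              # approximate root of Eq. (26) for these parameters

n = 200_000
x_i = rng.normal(0.0, np.sqrt(1.0 / k_i), n)
for t in np.linspace(t_f / 5.0, t_f, 5):
    sig2 = (1.0 - np.exp(-2.0 * k_m * t)) / k_m                         # Eq. (35)
    x_t = x_i * np.exp(-k_m * t) + rng.normal(0.0, np.sqrt(sig2), n)
    Q = 0.5 * k_m * (x_t**2 - x_i**2)                                   # Eq. (67), t <= t_f
    Q_th = -(k_m - k_i) / (2.0 * k_i) * (1.0 - np.exp(-2.0 * k_m * t))  # Eq. (68) below
    print(f"t/t_f = {t / t_f:.1f}   <Q> = {Q.mean():+.3f}   theory = {Q_th:+.3f}")
```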
Similar to the work calculation, the analysis of heat needs to be performed in two distinct cases: #### iv.2.1 For \(0\leq t\leq t_{f}\) By following a similar methodology as with the work distribution, we can obtain the characteristic function for the heat \(\hat{P}_{Q}(z)=(1+b_{1}z+b_{2}z^{2})^{-1/2}\) with the coefficients \[b_{1} = \frac{-i(k_{m}-k_{i})k_{m}}{k_{i}}\sigma_{m}^{2}(t),\] \[b_{2} = \frac{k_{m}^{2}}{k_{i}}\sigma_{m}^{2}(t),\] where \(\sigma_{m}(t)\) is given at 35. From this characteristic function, we can derive the average \(\left\langle Q\right\rangle_{t}\): \[\left\langle Q\right\rangle_{t}=-\frac{k_{m}-k_{i}}{2k_{i}}\left(1-e^{-2k_{m}t }\right), \tag{68}\] and the variance of the heat \[\sigma_{Q}(t)^{2}=2\left\langle Q\right\rangle_{t}^{2}-\frac{2k_{m}\left\langle Q \right\rangle_{t}}{k_{m}-k_{i}}. \tag{69}\] It is important to observe that the average heat and variance exhibit a characteristic time \(\tau_{m}=1/k_{m}\). The characteristic function has a similar structure to that of the work probability density function for \(t>t_{f}\). Therefore, the inverse Fourier transform can be computed explicitly, leading to the heat probability distribution for \(0\leq t\leq t_{f}\), \[P_{Q}(Q,t)=\frac{1}{\pi\sqrt{\sigma_{Q}(t)^{2}-2\left\langle Q\right\rangle_{ t}^{2}}}\exp\Biggl{\{}\frac{\left\langle Q\right\rangle_{t}}{\sigma_{Q}(t)^{2}-2 \left\langle Q\right\rangle_{t}^{2}}Q\Biggr{\}}K_{0}\left(\frac{\sqrt{\sigma_{ Q}(t)^{2}-\left\langle Q\right\rangle_{t}^{2}}}{\sigma_{Q}(t)^{2}-2 \left\langle Q\right\rangle_{t}^{2}}|Q|\right). \tag{70}\] This result can also be obtained using the methods described in [33]. It is worth noting that the exponential term in the heat distribution is time-independent since \(\frac{\left\langle Q\right\rangle_{t}}{\sigma_{Q}^{2}-2\left\langle Q\right\rangle _{t}^{2}}=\frac{k_{i}-k_{m}}{2k_{m}}\) does not depend on \(t\). On the other hand, the argument of the Bessel function \(K_{0}\) is time-dependent. The ratio of the heat probability distributions \(P(Q,t)/P(-Q,t)\) satisfies a similar relation to the one found for the work in Eq. (60) \[\frac{P_{Q}(Q,t)}{P_{Q}(-Q,t)}=\exp\Biggl{\{}\frac{2\left\langle Q\right\rangle_{ t}}{\sigma_{Q\,t}^{2}-2\left\langle Q\right\rangle_{t}^{2}}Q\Biggr{\}}=\exp \Biggl{\{}\frac{k_{i}-k_{m}}{k_{m}}Q\Biggr{\}}. \tag{71}\] A numerical test of this relation for other protocols in the range \(t_{f}\ll\tau_{\rm relax}=1\) is shown in Fig. (7) revealing again its universality. For longer protocols \(t_{f}\sim 1\), the relation given by Eq. (71), which has been proven for the TSP protocol, remains valid at large values of \(|Q|\) but shows some deviations at small values as illustrated in Fig. 8 for several protocols. A similar relation to (71) where \(P_{Q}(Q,t)/P_{Q}(-Q,t)\) is a simple exponential function of \(Q\) appears also in a different context when one considers the relaxation from the microcanonical to the canonical ensembles of this system [34]. #### iv.2.2 For \(t>t_{f}\) After the second step of the TSP, the average heat reaches its equilibrium value given by Eq. (68) evaluated at \(t=t_{f}\). However, surprisingly, moments of order \(n\geq 2\) of the heat distribution take longer to reach their equilibrium values. To investigate this phenomenon, we attempt to calculate the heat distribution for \(t>t_{f}\), which necessitates the joint probability distribution of the three-points: \[P_{Q}(Q,t)=\int dx_{i}dx_{f}dx,\delta(Q-Q_{D})P_{3}(x_{i},t_{i};x_{f},t_{f};x _{t},t). 
\tag{72}\] Here, \(P_{3}\) is a product of two Ornstein-Uhlenbeck processes with stiffnesses \(k_{m}\) and \(k_{f}\) and the initial probability distribution, \[P_{3}(x_{i},t_{i};x_{f},t_{f};x_{t},t)= P(x_{i},t_{i})P(x_{f},t_{f}|x_{i},t_{i})\] \[\times P(x_{t},t|x_{f},t_{f}), \tag{73}\] with \[P(x_{t},t|x_{f},t_{f})=\frac{1}{\sqrt{2\pi\sigma_{f}(t)^{2}}}\exp\biggl{\{}- \frac{(x_{t}-x_{f}e^{-k_{f}(t-t_{f})})^{2}}{2\sigma_{f}(t)^{2}}\biggr{\}}, \tag{74}\] where \[\sigma_{f}(t)^{2}=\frac{1-e^{-2k_{f}(t-t_{f})}}{k_{f}}, \tag{75}\] and \(P(x_{f},t_{f}|x_{i},t_{i})\) is given by (34).

Figure 7: The protocol described by Martinez et al. [5], the protocol \(k(t)\) described in Eq. (15), the linear (22) and the optimal protocols (20) exhibit the same relation as the TSP given in Eq. (71). This observation is based on \(10^{6}\) simulations performed with \(k_{i}=1/2\) and \(t_{f}=1/30\).

Figure 8: The relation predicted by (71) continues to be valid even for \(t_{f}\sim 1\), as shown in the plot. The figure is based on \(10^{6}\) simulations with parameters \(t_{f}=0.7\) and \(k_{i}=1/2\).

Figure 9: Comparison of the probability density function of the heat at \(t=t_{f}\): the theoretical result (70) obtained for the TSP and the results of \(10^{6}\) simulations with \(k_{i}=k_{f}/2\). The agreement between theory and simulation is remarkable.

By combining these expressions, utilizing the Fourier representation of the delta distribution, and performing the resulting Gaussian integrals, we can obtain the characteristic function \[\hat{P}_{Q}(z)=\frac{1}{\sqrt{1+c_{1}z+c_{2}z^{2}+c_{3}z^{3}}} \tag{76}\] whose coefficients are given by \[c_{1} = A(1-e^{-2k_{f}(t-t_{f})})-\frac{ik_{m}(k_{m}-k_{i})}{k_{i}} \sigma_{m}^{2}(t_{f}),\] \[c_{2} = B(1-e^{-2k_{f}(t-t_{f})})+\frac{k_{m}^{2}}{k_{i}}\sigma_{m}^{2} (t_{f}),\] \[c_{3} = C(1-e^{-2k_{f}(t-t_{f})}),\] where \(A,B\) and \(C\) are constants given by \[A = \frac{i(k_{i}-k_{f})}{k_{i}}+\frac{ik_{f}(k_{m}-k_{i})}{k_{i}} \sigma_{m}^{2}(t_{f}),\] \[B = \frac{k_{f}+(k_{f}k_{i}-2k_{m}k_{f}-k_{i}k_{m}+k_{m}^{2})\sigma_{ m}^{2}(t_{f})}{k_{i}},\] \[C = \frac{i(k_{m}-k_{f})k_{m}\sigma_{m}^{2}(t_{f})}{k_{i}}.\] These coefficients are completely general, meaning that \(k_{m}\) does not necessarily satisfy the consistency equation (26). If \(k_{m}\) does satisfy the consistency equation, the coefficients \(B\) and \(C\) are nonzero, while the coefficient \(A\) vanishes. As a result, the average heat for \(t>t_{f}\) becomes time-independent. However, the variance and higher-order moments of the heat depend on time and have a characteristic time given by \(1/(2k_{f})\). If \(k_{m}\) is a solution of the consistency equation (26), the average heat and variance are given by: \[\left<Q\right> = -\frac{k_{m}}{2}\left(\frac{1}{k_{i}}-\frac{1}{k_{f}}\right), \tag{77}\] \[\sigma_{Q}^{2} = 2\left<Q\right>^{2}-\frac{2k_{m}\left<Q\right>}{k_{m}-k_{i}}+D(1 -e^{-2k_{f}(t-t_{f})}), \tag{78}\] where \(D\) is \[D=\frac{(k_{f}-k_{m})(k_{i}^{2}+k_{f}k_{m}-k_{i}k_{m})}{k_{i}k_{f}(k_{i}-k_{m} )}. \tag{79}\] Fig. 10 shows the evolution of the average heat and its standard deviation. When \(t>t_{f}\), the average heat has stabilized but its standard deviation continues to change in time with a relaxation time of order \(1/k_{f}\).
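The slow relaxation of the heat fluctuations after \(t_{f}\) can be checked by sampling the three-point distribution (73)-(75) directly. A minimal sketch, assuming the same parameters as in the figures (\(k_{i}=1/2\), \(t_{f}=1/30\), \(k_{m}\approx 11.12\)), is:

```python
import numpy as np

rng = np.random.default_rng(0)
k_i, k_f, t_f = 0.5, 1.0, 1.0 / 30.0
k_m = 11.12                                         # approximate root of Eq. (26)
sig_m2 = (1.0 - np.exp(-2.0 * k_m * t_f)) / k_m

n = 500_000
x_i = rng.normal(0.0, np.sqrt(1.0 / k_i), n)
x_f = x_i * np.exp(-k_m * t_f) + rng.normal(0.0, np.sqrt(sig_m2), n)

Q_mean = -0.5 * k_m * (1.0 / k_i - 1.0 / k_f)                                    # Eq. (77)
D = (k_f - k_m) * (k_i**2 + k_f * k_m - k_i * k_m) / (k_i * k_f * (k_i - k_m))   # Eq. (79)

for dt in (0.0, 0.5, 1.0, 3.0):                     # elapsed time after t_f, in units of 1/k_f
    sig_f2 = (1.0 - np.exp(-2.0 * k_f * dt)) / k_f
    x_t = x_f * np.exp(-k_f * dt) + rng.normal(0.0, np.sqrt(sig_f2), n)
    Q = 0.5 * k_m * (x_f**2 - x_i**2) + 0.5 * k_f * (x_t**2 - x_f**2)            # Eq. (67)
    var_th = (2.0 * Q_mean**2 - 2.0 * k_m * Q_mean / (k_m - k_i)
              + D * (1.0 - np.exp(-2.0 * k_f * dt)))                             # Eq. (78)
    print(f"t - t_f = {dt:3.1f}:  <Q> = {Q.mean():+.3f} ({Q_mean:+.3f}),"
          f"  var = {Q.var():6.2f} ({var_th:6.2f})")
```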
This finding holds considerable importance as it indicates that the Brownian particle reaches a state of equilibrium in terms of its position and work probability distribution by the final time \(t_{f}\). However, it is worth noting that the heat distribution requires additional time to reach its ultimate state of equilibrium. The primary aim of the shortcut protocols is to achieve equilibrium in the position distribution, while the attainment of equilibrium in other relevant distributions is not necessarily simultaneous with that of the position distribution.

Figure 10: Standard deviation and average of the heat for the TSP with parameters \(t_{f}=1/30\) and \(k_{i}=1/2\). The dotted line is the asymptotic value of \(\sigma_{Q}\) when \(t\rightarrow\infty\).

## IV Entropy production In this section, we specifically focus on the entropy production associated with the two-step protocol. This quantity holds significant importance in understanding the inherent irreversibility within nonequilibrium systems and provides valuable insights into the fundamental thermodynamic principles that govern the TSP. We recall the definition of total entropy in the realm of stochastic thermodynamics [17; 19]: \[\Delta S=-\frac{Q}{T}+k_{B}\ln\frac{P_{i}}{P_{f}}, \tag{80}\] where \(\Delta S\) is the entropy change between two states with the probability density function (PDF) of the position given by \(P_{i}\) and \(P_{f}\), respectively. The first term corresponds to the increase in entropy of the environment, which is assumed to be in equilibrium at temperature \(T\), and the last term represents the increase in entropy of the system. In our notation, we will use \(\Delta S\) and \(\Sigma\) interchangeably to represent entropy production. By employing a methodology analogous to that used for calculating the probability density functions of heat and work, we can derive the PDF of entropy production. Due to the similarities with the previous derivations, we will omit several intermediate steps. Similar to before, the PDF of entropy production can be expressed as: \[P_{\Sigma}(\Sigma)=\frac{1}{2\pi}\int_{-\infty}^{\infty}dz\hat{P}_{\Sigma}(z) \exp\{iz(\Sigma-\Sigma_{0})\}, \tag{81}\] where \(\hat{P}_{\Sigma}(z)e^{-iz\Sigma_{0}}\) is the characteristic function associated with the entropy production random variable and \[\Sigma_{0}=\frac{1}{2}\ln\frac{\sigma_{X}(t)^{2}}{\sigma_{i}^{2}}. \tag{82}\] Once again, it is necessary to consider two distinct cases: #### iv.2.1 For \(0\leq t\leq t_{f}\) To calculate the second term in the entropy expression, we need the probability density function of the position at the initial time as well as the position PDF at \(t\leq t_{f}\), as given by Eq. (25). By utilizing the expression for the heat, as shown in Eq. (67), and the two-point probability distribution given in Eq. (33), we can determine the total entropy (in the chosen units) for \(t\leq t_{f}\) as follows: \[\Sigma=\frac{1}{2}k_{m}(x_{i}^{2}-x_{t}^{2})+\Sigma_{0}-\frac{x_{i}^{2}}{2 \sigma_{i}^{2}}+\frac{x_{t}^{2}}{2\sigma_{X}(t)^{2}}. \tag{83}\] The function \(\hat{P}_{\Sigma}(z)\) is \[\hat{P}_{\Sigma}(z)=\frac{1}{\sqrt{1+d_{1}z+d_{2}z^{2}}}, \tag{84}\] where \[d_{1} = \frac{i\left(-k_{m}\sigma_{m}^{4}k_{i}+k_{m}^{2}\sigma_{m}^{4}-k _{m}\sigma_{m}^{2}+1\right)}{\sigma_{m}^{2}k_{i}}, \tag{85}\] \[d_{2} = \frac{\left(k_{m}-k_{i}\right)\left(k_{m}\sigma_{m}^{2}-1\right) }{k_{i}}.
\tag{86}\] Computing the integral (81), we arrive to the expression \[P_{\Sigma}(\Sigma,t)=\frac{1}{\pi\sqrt{\sigma_{\Sigma}(t)^{2}-2\left\langle \Sigma-\Sigma_{0}\right\rangle_{t}^{2}}}\exp\Biggl{\{}\frac{\left\langle\Sigma -\Sigma_{0}\right\rangle_{t}}{\sigma_{\Sigma}(t)^{2}-2\left\langle\Sigma- \Sigma_{0}\right\rangle_{t}^{2}}(\Sigma-\Sigma_{0})\Biggr{\}}K_{0}\left(\frac{ \sqrt{\sigma_{\Sigma}(t)^{2}-\left\langle\Sigma-\Sigma_{0}\right\rangle_{t}^{2 }}}{\sigma_{\Sigma}(t)^{2}-2\left\langle\Sigma-\Sigma_{0}\right\rangle_{t}^{2 }}|\Sigma-\Sigma_{0}|\right). \tag{87}\] where \[\left\langle\Sigma\right\rangle_{t} = \Sigma_{0}+\frac{k_{m}-k_{i}}{2k_{i}}\left(1-e^{-2k_{m}t}\right), \tag{88}\] \[\sigma_{\Sigma}^{2}(t) = 2\left\langle\Sigma-\Sigma_{0}\right\rangle^{2}+\frac{\left(k_{i }-k_{m}\right)^{2}\left(1-e^{-2k_{m}t}\right)}{k_{i}\left(k_{m}-k_{i}\left(1-e ^{2k_{m}t}\right)\right)}. \tag{89}\] #### iv.2.2 For \(t\geq t_{f}\) In this case, the entropy is given by \[\Sigma=\frac{1}{2}k_{m}(x_{i}^{2}-x_{f}^{2})+\frac{1}{2}k_{f}(x_{f}^{2}-x_{t} ^{2})+\Sigma_{0}-\frac{x_{i}^{2}}{2\sigma_{i}^{2}}+\frac{x_{t}^{2}}{2\sigma_{ X}(t)^{2}}. \tag{90}\] It is worth noting that if we ensure a final equilibrium situation at \(t_{f}\) by imposing the consistency relation (26), there is a cancellation between \(-k_{f}x_{t}^{2}/2\) and \(x_{t}^{2}/(2\sigma_{X}(t)^{2})\) making \(\Sigma\) stationary. In this time interval and assuming that \(k_{m}\) is general, the function \(\hat{P}_{\Sigma}(z)\) takes the form \[\hat{P}_{\Sigma}(z)=\frac{1}{\sqrt{1+f_{1}z+f_{2}z^{2}+f_{3}z^{3}}}, \tag{91}\] where the coefficients \(f_{1},f_{2}\), and \(f_{3}\) are complex expressions of the system parameters (refer to Appendix B for details). Due to the cubic polynomial inside the square root, the integral in Eq. (81) cannot be evaluated in closed form. Therefore, in this section, we will impose a constraint on the variable \(k_{m}\) to satisfy the consistency equation (26). Under this constraint, the coefficient \(f_{3}\) vanishes, allowing us to obtain a closed-form expression. This is similar to the result in the previous section, Eq. (87), although the average and variance will exhibit modifications. Specifically, they are time-independent, given by: \[\left\langle\Sigma\right\rangle = \Sigma_{0}+\frac{k_{m}}{2}\left(\frac{1}{k_{i}}-\frac{1}{k_{f}} \right), \tag{92}\] \[\sigma_{\Sigma}^{2} = 2\left\langle\Sigma-\Sigma_{0}\right\rangle^{2}+\frac{2\left\langle \Sigma-\Sigma_{0}\right\rangle(k_{m}-k_{f})}{k_{m}}. \tag{93}\] with \(\Sigma_{0}=\frac{1}{2}\ln(k_{i}/k_{f})\). These computed values coincide with the final values obtained in the previous time interval, Eqs. (88) and (89) for \(t=t_{f}\). Furthermore, for \(t>t_{f}\), these values remain constant as expected for an equilibrium situation. Fig. 11 shows the evolution of the average entropy production and its standard deviation. The entropy production distribution satisfies a relation similar to the ones for work and heat shown in Eqs. (60) and (71) if we define \(\tilde{\Sigma}=\Sigma-\Sigma_{0}\). This is given by \[\frac{P_{\tilde{\Sigma}}(\tilde{\Sigma})}{P_{\tilde{\Sigma}}(- \tilde{\Sigma})} = \exp\Biggl{\{}\frac{2\left\langle\Sigma-\Sigma_{0}\right\rangle_{ t}}{\sigma_{\Sigma}^{2}-2\left\langle\Sigma-\Sigma_{0}\right\rangle_{t}^{2}} \tilde{\Sigma}\Biggr{\}} \tag{94}\] \[= \exp\Biggl{\{}\frac{k_{m}}{k_{m}-k_{f}}\tilde{\Sigma}\Biggr{\}}.\] Fig. 
12 represents this relation for \(t=t_{f}\ll 1\) and the simulation results for the other protocols considered in this paper. This figure shows the universality of the relation (94) when \(t_{f}\ll 1\). However, this universality breaks down for longer protocols \(t_{f}\sim 1\), as can be appreciated in Fig. 13. #### iv.2.3 Fluctuation theorem for the entropy production The entropy distribution must satisfy a fluctuation theorem [20; 21], in the same way that the work distribution must satisfy the Jarzynski equality and the Crooks relation. Hence, we will prove the integral fluctuation relation for the entropy production. Using the definition of the expectation value and inserting the integral representation given in Eq. (81), we can obtain the following identity: \[\langle e^{-\Sigma}\rangle=\hat{P}_{\Sigma}(-i)e^{-\Sigma_{0}}. \tag{95}\] For arbitrary \(k_{m}\) (not necessarily a solution of the consistency equation (26)), we can compute \(\hat{P}_{\Sigma}(-i)\) as: \[\hat{P}_{\Sigma}(-i)=\frac{\sigma_{X}(t)}{\sigma_{i}}. \tag{96}\] As a result, the expectation value is given by: \[\langle e^{-\Sigma}\rangle=1. \tag{97}\] This is the well-known integral fluctuation theorem for the entropy production [20; 21; 22].

Figure 11: Standard deviation and average of the entropy production for the TSP protocol with parameters \(k_{i}=1/2\) and \(t_{f}=1/30\).

Figure 12: Relation (94) for several protocols. \(10^{6}\) simulations have been performed with \(k_{i}=1/2\) and \(t_{f}=1/30\).

Figure 13: The prediction given by (94) fails when \(t_{f}\sim 1\); however, the behavior looks linear. This plot is for \(10^{6}\) simulations with parameters \(k_{i}=1/2\) and \(t_{f}=0.7\).

## V Conclusion We have reviewed how to construct ESE protocols; in particular, we have shown that for a Brownian particle in a harmonic potential with time-dependent stiffness and at equilibrium initially with a Gaussian distribution, the position distribution remains Gaussian for later times and the variance of the position distribution depends on the protocol.
In summary, our research significantly advances the comprehension of stochastic thermodynamics in colloidal Brownian particles. We have gained valuable insights into the dynamics of overdamped Brownian particles under time-dependent potentials and external control, facilitating the acceleration of equilibration times. These findings hold great potential for the design and application of efficient protocols across diverse scientific and engineering disciplines. Despite the valuable insights gained, this study faced certain limitations, particularly in the context of the overdamped regime. Future research may involve finding analytical solutions in the underdamped regime and extending the analysis to systems of interacting particles. By addressing these aspects, future investigations can provide a more comprehensive understanding of the fast thermalization protocols for Brownian particles in complex environments, further enriching the field of stochastic thermodynamics. ## VI Acknowledgments This work was supported by Fondo de Investigaciones, Facultad de Ciencias, Universidad de los Andes INV-2021-128-2267. We thank Emmanuel Trizac for fruitful discussions. ## VII Appendixes ### Appendix A In order to prove the Crooks identity (65), it is necessary to demonstrate the symmetry under the permutation \(k_{i}\leftrightarrow k_{f}\) of the Bessel function in (58). To achieve this goal, we use the followings definitions \[S = \frac{(k_{m}-k_{f})(k_{m}-k_{i})\sigma_{m}^{2}(t)}{2}, \tag{101}\] \[A = \frac{k_{f}-k_{i}}{2}, \tag{102}\] where \(S\) and \(A\) stand for symmetric and anti-symmetric under the permutation, respectively. The equations (62) takes the form \[\left\langle W\right\rangle = \frac{A+S}{k_{i}}, \tag{103}\] \[\sigma_{W}^{2} = \frac{2(A+S)^{2}+2k_{i}S}{k_{i}^{2}}. \tag{104}\] Now, the combination in the argument of the Bessel function is \[\frac{\sqrt{\sigma_{W}^{2}-\left\langle W\right\rangle^{2}}}{\sigma_{W}^{2}-2 \left\langle W\right\rangle^{2}}=\frac{\sqrt{A^{2}+S^{2}+2S(A+k_{i})}}{2S}. \tag{105}\] The only term that is not evident symmetric is the combination \(A+k_{i}\), however \[A+k_{i}=\frac{k_{i}+k_{f}}{2} \tag{106}\] that is symmetric under the permutation. ## Appendix B In this appendix, where give the intricate expression of the coefficients given intervening in Eq. (91): \[\sigma_{t}^{2}f_{3} = i\sigma_{f}^{2}\sigma_{m}^{2}\left(k_{f}-k_{m}\right)\left(k_{f} \sigma_{t}^{2}-1\right)\left(\sigma_{i}^{2}k_{m}-1\right),\] \[\sigma_{t}^{2}f_{2} = \sigma_{m}^{2}\left(\sigma_{i}^{2}k_{m}-1\right)\left(k_{m} \sigma_{t}^{2}-1\right)-\sigma_{f}^{2}\left(k_{f}\sigma_{t}^{2}-1\right)\] \[\times\left(\sigma_{m}^{2}\left(2k_{f}-k_{m}\right)\left(\sigma_ {i}^{2}k_{m}-1\right)-k_{f}\sigma_{i}^{2}+1\right),\] \[\sigma_{t}^{2}f_{1} = -i\sigma_{f}^{2}\left(k_{f}\sigma_{t}^{2}-1\right)\left(k_{f} \left(\sigma_{m}^{2}\left(\sigma_{i}^{2}k_{m}-1\right)-\sigma_{i}^{2}\right)+1 \right)+\] \[+ i\sigma_{t}^{2}\left(k_{m}\sigma_{m}^{2}\left(k_{m}\sigma_{t}^{2 }-1\right)+1\right)-i\sigma_{t}^{2}\left(k_{m}\sigma_{m}^{2}+1\right)+\] \[+ i\sigma_{m}^{2},\] where \(\sigma_{t}\) is the solution of (11) in the interval \(t\geq t_{f}\) given by \[\sigma_{t}^{2}=\left(-\frac{1}{k_{f}}+\frac{1-k_{m}\sigma_{m}^{2}}{k_{i}}+ \sigma_{m}^{2}\right)e^{-2k_{f}(t-t_{f})}+\frac{1}{k_{f}}. 
\tag{107}\] Using the consistency equation, these coefficients collapse to \[f_{3} = 0, \tag{14}\] \[f_{2} = \frac{\left(k_{i}-k_{f}\right)\left(k_{f}-k_{m}\right)}{k_{f}k_{i}},\] (15) \[f_{1} = -ik_{m}\left(\frac{1}{k_{f}}-\frac{1}{k_{i}}\right). \tag{16}\] Because \(f_{3}\) is zero, the polynomial (91) is of second order and the inverse Fourier transform can be computed in terms of Bessel functions (Eq. (87)).
2305.05209
Decay of superheavy nuclei based on the random forest algorithm
How nuclides decay in the superheavy region is key information for investigating new elements beyond oganesson and the island of stability. The Random Forest algorithm is applied to study the competition between different decay modes in the superheavy region, including $\alpha$ decay, $\beta^-$ decay, $\beta^+$ decay, electron capture and spontaneous fission. The observed half-lives and dominant decay mode are well reproduced. The dominant decay mode of 96.9 % nuclei beyond $^{212}$Po is correctly described. $\alpha$ decay is predicted to be the dominant decay mode for isotopes in new elements $Z = 119 - 122$, except for spontaneous fission in some even-even ones because of the odd-even staggering effect. The predicted half-lives show the existence of a long-lived spontaneous fission island at the southwest of $^{298}$Fl caused by the competition of nuclear deformation and Coulomb repulsion. More understanding of spontaneous fission, especially beyond $^{286}$Fl, is crucial to search for new elements and the island of stability.
Boshuai Cai, Cenxi Yuan
2023-05-09T07:00:40Z
http://arxiv.org/abs/2305.05209v1
# Decay of superheavy nuclei based on the random forest algorithm ###### Abstract How nuclides decay in the superheavy region is key information for investigating new elements beyond ganesson and the island of stability. The Random Forest algorithm is applied to study the competition between different decay modes in the superheavy region, including \(\alpha\) decay, \(\beta^{-}\) decay, \(\beta^{+}\) decay, electron capture and spontaneous fission. The observed half-lives and dominant decay mode are well reproduced. The dominant decay mode of 96.9 % nuclei beyond \({}^{212}\)Po is correctly described. \(\alpha\) decay is predicted to be the dominant decay mode for isotopes in new elements \(Z=119-122\), except for spontaneous fission in some even-even ones because of the odd-even staggering effect. The predicted half-lives show the existence of a long-lived spontaneous fission island at the southwest of \({}^{298}\)Fl caused by the competition of nuclear deformation and Coulomb repulsion. More understanding of spontaneous fission, especially beyond \({}^{286}\)Fl, is crucial to search for new elements and the island of stability. ## I Introduction The limit of nuclear landscape [1; 2] is always an intriguing subject. Exotic properties of nuclei are found around the boundary of nuclear limits, e.g., the shell evolution [3; 4; 5; 6], the \(4n\) resonant state [7; 8], the \(4p\) unbound state [9], etc. The discovery of new elements (nuclides) generally faces three problems: production, separation and identification [10]. Because the nucleus is unstable, generally with a rather short half-life, one has to utilize some probes. One of the most direct is the decay mode [10; 11], using the decay products as the signal of existence. Thus, it is important to investigate and predict the dominant decay mode of those unknown nuclides. The nuclear binding energy and the half-life are key data for understanding the decay mode of an atomic nucleus. The former measures the stability of nuclides through energy criteria, and the latter describes the possibility of decay. Both microscopic and macroscopic methods have been used to study the nuclear binding energy and the partial half-life of each decay channel. The microscopic theory starts from the nucleon-nucleon interaction, either realistic or phenomenological. The macroscopic theory uses selected variables with physical considerations to construct semi-empirical formulas and fit the experimental data, with a risk of overfitting and inappropriate parameters. Besides, the exotic nuclei may deviate far from the general fitting as outliers. Decreasing the deviation between theoretical predictions and observables is always a critical issue. With computing and storage power advancement, machine learning algorithms have become available and helpful in many fields with various successes [12]. As summarized in a recent colloquium, estimating the residuals of nuclear properties through the machine learning algorithms is a powerful strategy [13]. The neural network was used to compensate for the residuals of nuclear masses [14; 15; 16] and nuclear charge radii [17; 18; 19] with structure optimization and careful choice of the inputted parameters with definite physical meanings. The applicability of the Decision Tree (DT) was verified by training and testing with residuals of binding energy in 2020 [20]. 
However, the Random Forest (RF) [21], developing from the DT algorithm, has been tested for neither nuclear mass nor partial half-life of a specific decay channel, of which the semi-empirical formulas have suggested several major components but with residuals. The machine learning algorithms can include the possible features to make a training for the residuals, while the RF, with bootstrap sampling, can not only avoid the overfitting but also take into account the correlation between data combinations and several features, which increases the robustness and is conducive to the extrapolation. Until now, none of work has investigated the competition between different decay modes by the partial half-lives estimated by the machine learning algorithm. The present work applies the RF machine learning algorithm to study the major decay mode of heavy and superheavy nuclei. The competition of \(\alpha\) decay, \(\beta\) decay and spontaneous fission (SF) of new elements and the possible long-lived island are discussed in the superheavy region. ## II Method The present work concentrates on the region of \(Z\geqslant 84\) and \(N\geqslant 128\). The partial half-lives of \(\alpha\) decay, \(\beta^{-}\) decay, \(\beta^{+}\) decay, electron capture (EC), and SF are calculated by the semi-empirical formulas and then the residuals of each formula are trained by the RF algorithm respectively. The minimum partial half-life of one mode corresponds to the dominant decay mode. ### Decay Half-life Formulas The universal decay law (UDL) [22; 23], \[\begin{split}\log_{10}T_{1/2,\alpha}=& aZ_{\alpha}(Z-Z_{\alpha})\sqrt{\mu/Q_{\alpha}}\\ &+b\sqrt{\mu Z_{\alpha}(Z-Z_{\alpha})(A_{\alpha}^{1/3}+(A-A_{ \alpha})^{1/3})}\\ &+c,\end{split} \tag{1}\] is used to fit the \(\alpha\) decay half-life. \(Z_{\alpha}\), \(A_{\alpha}\), \(Q_{\alpha}\) and \(\mu=A_{\alpha}(A-A_{\alpha})/A\) denote the proton number, the mass number of \(\alpha\) particle, the \(\alpha\) decay energy and the reduced mass, respectively. The channel is supposed to be from the ground state to the ground state. As to SF, a three-parameter formula (noted as SF3), \[\log_{10}T_{\rm SF}=a\frac{(Z-\nu)^{2}}{(1-\kappa I^{2})A}+\frac{b}{A}+c, \tag{2}\] is proposed based on several existing formulas [24; 25; 26; 27; 28; 29], where \(\nu\) presents the blocking effect from unpaired nucleons, takes 0 for even-even nuclei and 2 for other nuclei [24], \(\kappa\) takes 2.6 [28; 30], \(I=\frac{N-Z}{A}\), and \(a\), \(b\), and \(c\) are fitting coefficients. Eq. (2) is particularly fitted to nuclei with \(Z<104\) and the rest because of a systematic difference as shown in TABLE 1. \(T_{1/2,\rm SF}\) of nuclei with \(Z<104\) increases largely with the decrease of \(Z\) since the Coulomb repulsion decreases. The rather long \(T_{\rm SF}\) (\(>10^{8}\) s) of some nuclei in this region cannot be universally described at present and are not taken to fit Eq.(2) because the competition of such SF is very weak comparing to other decay modes. This formula avoids the divergence of other SF formulas in Refs. 
[24; 25; 26; 27] during extrapolation, which write as: \[\begin{split}\log_{10}T_{1/2,\rm SF,ren}=& a\frac{Z-90-\nu}{A}+b\frac{(Z-90-\nu)^{2}}{A}\\ &+c\frac{(Z-90-\nu)^{3}}{A}\\ &+d\frac{(Z-90-\nu)(N-Z-52)^{2}}{A}+e,\end{split} \tag{3}\] \[\begin{split}\log_{10}T_{1/2,\rm SF,xu}=& aA+bZ^{2}+cZ^{4}+d(N-Z)^{2}\\ &+eZ^{2}A^{-1/3}+f,\end{split} \tag{4}\] \[\begin{split}\log_{10}T_{1/2,\rm SF,santhosh}=& a\frac{Z^{2}}{A}+b(\frac{Z^{2}}{A})^{2}\\ &+c\frac{N-Z}{A}+d(\frac{N-Z}{A})^{2}+e,\end{split} \tag{5}\] \[\begin{split}\log_{10}T_{1/2,\rm SF,soylu}&=aA+bA^{2 /3}+cZ(Z-1)A^{-1/3}\\ &+d(N-Z)^{2}/A+eZ^{4}+f.\end{split} \tag{6}\] When the higher order terms in these four formulas enhance their interpolation, the divergence is introduced into the extrapolation, which will be discussed in the last part of Sec. III. The \(\beta\) decay half-life is estimated by the formula in Refs. [31; 32]. Assuming that the ground state \(\beta\) decay is one effective Gamow-Teller (GT) transition, the partial half-life is expressed as \[\begin{split}\log_{10}T_{1/2,\beta}=\log_{10}\kappa_{1}-\log_{1 0}f_{0}-\log_{10}B_{\rm GT},\end{split} \tag{7}\] where \(\kappa_{1}=\frac{2\pi^{3}\hbar^{7}\ln 2}{m_{e}^{2}c^{4}G_{\rm F}^{2}}=6147\) s, \(f_{0}\) is the phase-space factor and \(B_{\rm GT}\) is the GT reduced transition probability [31]. For EC, the phase-space factor is deduced as \[\begin{split} f_{0}^{\rm EC}\approx 2\pi(\frac{Z}{137})^{3}(1- \frac{1}{2}(\frac{Z}{137})^{2}+E_{0})^{2},\end{split} \tag{8}\] while for \(\beta^{\pm}\) decay, it is \[\begin{split} f_{0}^{\beta^{\pm}}\approx\frac{\mp(E_{0}^{5}-10E_ {0}^{2}+15E_{0}-6)2\pi(Z\mp 1)/137}{30(1-\exp(\pm 2\pi(Z\mp 1)/137))},\end{split} \tag{9}\] where \(E_{0}\) is the renormalized \(\beta\) decay energy. Because \(Q_{\beta}\) provided by AME2020 [33] is the difference of atomic mass, the electron mass should be reconsidered: \[\begin{split} E_{0,\beta^{+}}&=\frac{Q_{\beta^{+}} +2m_{e}c^{2}}{m_{e}c^{2}}\\ E_{0,\beta^{-}}&=\frac{Q_{\beta^{-}}+m_{e}c^{2}}{m _{e}c^{2}}\\ E_{0,\rm EC}&=\frac{Q_{\rm EC}-m_{e}c^{2}}{m_{e}c^{2 }}.\end{split} \tag{10}\] Finally, the \(\log_{10}B_{\rm GT}\) is estimated as the average of \(\log_{10}(f_{0}T_{1/2,\beta}/\kappa_{1})\). The fitting results are listed in TABLE. 1 ### Random Forest Method RF is an integration of the DT and bootstrap algorithm. DT is a non-parametric supervised learning algorithm. For a dataset consisting of \(S\) samples of \(I\) features (variables) \(\{(\theta_{1},...,\theta_{I})s,s\in[1,S]\}\) and object (observable) \(\{y_{s},s\in[1,S]\}\), it establishes a binary tree structure which divides the dataset into \(L\) subsets based on the values of features, each subset is called as a leaf. Such partition aims at the minimum root-mean-square error (RMSE) \[\begin{split}\text{RMSE}=\sqrt{\frac{1}{S}\sum_{s=1}^{S}(y_{s}-f (\theta_{1},...,\theta_{I}))^{2}}\end{split} \tag{11}\] of the whole dataset, assigning each leaf a value. Bootstrap is a statistical method with a basic idea of randomly resampling with replacement, through which the possible combination and weighting of data are automatically taken into account [34; 35]. Each time a new dataset is obtained, a new DT is trained and used to predict the object of each sample in the whole dataset. Repeating this \(M\) times, one obtains a forest of \(M\) trees. The final predicted value of the object for a sample is the average of results calculated by all trees in the forest. 
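As a rough illustration of this residual-training strategy, the sketch below uses scikit-learn's RandomForestRegressor on synthetic stand-in data. The feature set mirrors the one described in the next section (proton and neutron numbers, mass number, odd-even character of each, and decay energy), but the arrays here are placeholders rather than the AME2020/NUBASE2020 values used in this work, and the number of trees is reduced so the sketch runs quickly.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in data: in the actual study the features are (Z, N, A, parities of Z
# and N, decay energy) taken from AME2020/NUBASE2020, and the target is the residual
# log10(T_exp) - log10(T_formula) of one decay formula (e.g. the UDL for alpha decay).
n = 300
Z = rng.integers(84, 120, n)
N = rng.integers(128, 180, n)
A = Z + N
Q = rng.uniform(4.0, 12.0, n)                          # placeholder decay energies (MeV)
X = np.column_stack([Z, N, A, Z % 2, N % 2, Q])
residual = 0.3 * np.sin(0.2 * (N - Z)) + rng.normal(0.0, 0.2, n)   # arbitrary fake residuals

# Forest of regression trees; the study limits each tree to 11 leaves and uses 1e5 trees
# (reduced here for speed).
rf = RandomForestRegressor(n_estimators=2000, max_leaf_nodes=11,
                           bootstrap=True, random_state=0)
rf.fit(X, residual)

# A corrected half-life would then be log10(T_formula) + rf.predict(features).
rmse = np.sqrt(np.mean((rf.predict(X) - residual) ** 2))
print(f"training RMSE of the residual fit: {rmse:.3f}")
```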
Since each tree is trained by a part of samples in the dataset, the value for each sample predicted by the forest is an average of interpolation and extrapolation, which decreases the divergence when the calculation is implemented to the unmeasured nuclei. The open source scikit-learn [36] is used for machine learning. The forest is assumed to be composed of \(10^{5}\) trees to decrease the dispersion of RMSE in the present work. ## III Results and discussion The residuals of the decay formulas of \(\alpha\) decay, \(\beta^{-}\) decay, \(\beta^{+}\) decay, and EC are trained by RF with features \(Z\), \(N\), \(A\), the odevity of \(Z\) and \(N\), and decay energy. Because no decay energy can be defined for SF, the fission barrier (FB) extracted from Ref. [37] is used to replace the decay energy in the features set to consider the deformation effect. The leaves number chooses 11, which is same as that determined for training the binding energy in this region in our previous work [38]. FIG. 1 compares the residuals of these decay formulas before and after RF training. Two conditions are assumed to determine outliers here. One is locating out of the dash line with corresponding color, which means the scatters deviate two times of RMSE from the experimental \(\log_{10}T_{1/2}\). Another is \(|\log_{10}(T_{1/2,\text{cal}}/T_{1/2,\text{exp}})|\) value larger than 3, which denotes that the calculation is three times of magnitude far from the experimental value. In this way, one avoids missing (adding) outliers due to the very large (small) RMSE. After training, the biases of outliers of these decay formulas are well reduced, and the RMSE of formulas decreases (TABLE 1) as expected. The condition of outlier is not too strict because our aim is not to decrease the RMSE as small as possible but go to an appropriate scale where the dominant decay mode can be described. The present work chooses the same features and the same leaves number of RF to train the residuals of different decay formulas, which avoids the overfitting in seeking an extreme small RMSE. from NUBASE2020 [39]. The dominant decay mode and the partial half-life of nuclides are drawn in FIG. 2(a-b). A long-lived \(\alpha\) decay valley from \({}^{226}_{88}\)Ra\({}_{138}\) to \({}^{251}_{98}\)Cf\({}_{153}\) lies between a narrow \(\beta^{+}\)/EC decay band and a neutron-rich \(\beta^{-}\) region. Away from this valley, the half-life of nucleus decreases. The southwest direction is dominated by \(\alpha\) decay while the southeast direction is occupied by \(\beta^{-}\) decay. At the northwest direction, \(\beta^{+}\) decay and EC compete with \(\alpha\) decay and lose after \(Z\) increases. At the northeast direction, \(\alpha\) decay and SF compete with each other and a region extended from the \(\alpha\) valley seems to be dominated by SF. Though the distribution of dominant decay mode has clear boundary, the minimum partial half-life is smooth. There are 341 (104) nuclides are with known (unknown) corresponding reaction energies among all 445 considered nuclides. Those nuclides with unmeasured mass take that calculated by WS4 [40] and UNEDF0 [41] for estimating the partial half-lives. The results are depicted in FIG. 2(c-f). The calculation accords well with the experiment, as the dominant decay mode is correctly described for 431 and 427 (96.9% and 96.0%) nuclei when the RMSE of log\({}_{10}T_{1/2}\) of the dominant decay mode is 0.62 and 0.67, respectively. 
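For completeness, the semi-empirical inputs of Eqs. (1), (2) and (7)-(10) can be evaluated with a few lines of code. The sketch below is only an illustration: the fitted coefficients of TABLE 1 and the averaged \(\log_{10}B_{\rm GT}\) are left as placeholder arguments, and the sign conventions of Eq. (9) are spelled out explicitly for each \(\beta\) channel.

```python
import numpy as np

def log10_T_alpha_udl(Z, A, Q_alpha, a, b, c):
    """Universal decay law of Eq. (1); a, b, c are the fitted coefficients."""
    Za, Aa = 2, 4                                  # alpha-particle proton and mass number
    mu = Aa * (A - Aa) / A                         # reduced mass
    return (a * Za * (Z - Za) * np.sqrt(mu / Q_alpha)
            + b * np.sqrt(mu * Za * (Z - Za) * (Aa**(1/3) + (A - Aa)**(1/3)))
            + c)

def log10_T_sf3(Z, N, a, b, c, even_even=True, kappa=2.6):
    """Three-parameter SF formula of Eq. (2)."""
    A = Z + N
    nu = 0 if even_even else 2                     # blocking effect of unpaired nucleons
    I = (N - Z) / A
    return a * (Z - nu)**2 / ((1 - kappa * I**2) * A) + b / A + c

def log10_T_beta(Z, Q, mode, log10_BGT):
    """Eqs. (7)-(10) for beta-, beta+ and EC; Q is the atomic-mass difference in MeV."""
    kappa1, mec2 = 6147.0, 0.511                   # s and MeV
    if mode == "EC":
        E0 = (Q - mec2) / mec2
        f0 = 2 * np.pi * (Z / 137)**3 * (1 - 0.5 * (Z / 137)**2 + E0)**2
    elif mode == "beta-":
        E0 = (Q + mec2) / mec2
        x = 2 * np.pi * (Z + 1) / 137
        f0 = (E0**5 - 10 * E0**2 + 15 * E0 - 6) * x / (30 * (1 - np.exp(-x)))
    else:                                          # "beta+"
        E0 = (Q + 2 * mec2) / mec2
        x = 2 * np.pi * (Z - 1) / 137
        f0 = -(E0**5 - 10 * E0**2 + 15 * E0 - 6) * x / (30 * (1 - np.exp(x)))
    return np.log10(kappa1) - np.log10(f0) - log10_BGT
```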
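Combining these pieces, the dominant decay mode of a nucleus is simply the channel with the minimum corrected partial half-life; the partial decay constants then give the branch ratios through the standard relation \(\mathrm{BR}_{i}=\lambda_{i}/\sum_{j}\lambda_{j}\). A toy example with hypothetical numbers is shown below.

```python
import numpy as np

# Hypothetical corrected partial half-lives (log10 of seconds) for one nucleus,
# i.e. formula value plus the RF-learned residual for each channel.
log10_T = {"alpha": 2.1, "beta-": 6.5, "beta+": 5.0, "EC": 4.8, "SF": 2.4}

# Dominant decay mode = channel with the minimum partial half-life (Sec. II).
dominant = min(log10_T, key=log10_T.get)

# Branch ratios from the partial decay constants lambda_i ~ 1/T_i.
lam = {m: 1.0 / 10**v for m, v in log10_T.items()}
total = sum(lam.values())
branch = {m: lam[m] / total for m in lam}

print("dominant mode:", dominant)                      # 'alpha' in this example
print("alpha / SF branch ratios:", branch["alpha"], branch["SF"])
```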
Figure 2: The dominant decay mode (left panels) and the minimum partial half-lives (right panels) of \(\alpha\) decay, \(\beta^{-}\) decay, \(\beta^{+}\) decay, EC and SF. (a-b) The experimental data in NUBASE2020. (c-f) The energies are calculated by WS4 and UNEDF0. Specifically, the FB is used to replace the decay energy to learn SF. Nuclides, of which the predicted partial half-life is longer than \(10^{4}\) s, are marked by star.

Those nuclides whose dominant decay mode is inconsistently described generally have two competitive decay modes. For example, the \(\alpha\) and SF branch ratios of \({}^{255}\)Rf, \({}^{262}\)Db, and \({}^{286}\)Fl are around 50%. Meanwhile, the liquid drop model trained by RF [38] is also applied to provide the energies; it gives consistent results and is not shown here. The accuracy of the energies is important to the half-life calculation. If the calculated energies of WS4 and UNEDF0 replace all the experimental ones, the fraction of nuclides with a decay mode consistent with experiment reduces to 72.6% and 64%, and the RMSE of \(\log_{10}T_{1/2}\) increases to 2.07 and 2.64, respectively. The difference between the results obtained with the two models stems from their accuracy, since the RMSE of the WS4 mass is about 0.3 MeV [40] while that of UNEDF0 is about 1.45 MeV [41]. This also leads to differences during extrapolation: the consistency rate of the dominant decay mode between calculations using the energies of these two models decreases from 82.2% to 66.2%. More accurate and precise measurements of decay energy will aid the theoretical prediction. Besides, WS4 and UNEDF0 may lose predictive power after being trained through machine learning. If the WS4 and UNEDF0 binding energies are trained with the features \(Z\), \(N\), \(\delta\) and \(P\), which describe the residuals well in Ref. [15], the description of the energies improves, but the consistency of the dominant decay mode decreases by several percent, which is considerable compared with the 23.4% share (104/445) of nuclides relying on theoretical energies. The SF is important for investigating the half-lives of superheavy nuclei. As shown by FIG. 2(c, e), the dominant decay mode of the unknown nuclides is determined through the competition between SF, \(\alpha\) decay, and \(\beta^{-}\) decay. The major competition is between SF and \(\beta^{-}\) decay for neutron-rich nuclides, while it is between SF and \(\alpha\) decay for neutron-deficient ones. The existing experimental data show a long-lived \(\alpha\) decay region from \({}^{226}_{88}\)Ra\({}_{138}\) to \({}^{251}_{98}\)Cf\({}_{153}\), lying between the \(\beta^{+}\) and \(\beta^{-}\) decay regions and terminated by SF. The present models correctly describe this phenomenon. Along the long-lived region, after \(N\) exceeds 154, a blue band is shown in FIG. 2(d, f), which indicates half-lives of about \(10^{2}\sim 10^{7}\) s. At the southwest corner of \(Z=114\) and \(N=184\), nuclides within a circular region have longer half-lives than the surrounding ones. This is because the fission barrier is high in this region, leading to longer \(T_{1/2,\rm SF}\). FIG. 3 compares the evolution of FB and measured \(T_{1/2,\rm SF}\) along the mass number. The FB decreases with \(A\) before \(A=230\), then behaves like a sinusoidal wave oscillating between 2 and 10 MeV. It seems that there exists an FB threshold below which a nuclide can fission spontaneously. Nuclides with rather long \(T_{1/2,\rm SF}\) generally have small SF branch ratios.
Besides, the FB of nuclides with SF branch ratio less than 1% is mostly higher than those with SF branch ratio greater than 1%, which implies that higher the FB, weaker the SF. However, if one concentrates only on nuclides with SF branch ratio less than 1% or greater than 1%, the correspondence between FB and \(T_{1/2,\rm SF}\) becomes much more complex. Nuclides with partial half-life predicted to be longer than \(10^{4}\) s are marked by star in FIG. 2(d, f), which suggests \({}^{250,252,254}\)Cm, \({}^{260,261}\)Es, \({}^{261\sim 264}\)Md, and \({}^{265}\)Lr for future measurement. It is a coincidence that no experimental value of the half-life of \({}^{250}\)Cm is suggested in NUBASE2020 and thus extrapolated in the present work. In NNDC, SF is shown to be its dominant decay mode, and the half-life is recommended to be 8300 years, which is rather long. Though the calculation of the present work underestimates the NNDC value, the long half-life property and the dominant decay mode are reproduced. Besides, the upper limit of the half-life of \({}^{252}\)Cm is 2 days, proposed in 1966 by Ref. [46] and not updated, while the present work estimates a value of 1.43 days. No experimental half-lives are published for \({}^{260,261}\)Es, \({}^{261\sim 264}\)Md, and \({}^{265}\)Lr. But their nearby isotopes are with long half-lives, e.g., \({}^{257}\)Es (7.7 days), \({}^{260}\)Md (31.8 days), \({}^{259}\)Md (1.6 hours), \({}^{258}\)Md (51.5 days), \({}^{257}\)Md (5.52 hours) and \({}^{266}\)Lr (11 hours). Moreover, these Es, Md, and Lr isotopes locate at the extension of the narrow long-lived region from \({}^{226}\)Ra to \({}^{251}\)Cf, which makes it convincing that these Es, Md, and Lr isotopes are candidates with long partial half-lives. More measurement is also suggested since, for example, the more than 50 years of un-updated datum of \({}^{252}\)Cm. Comparing all possible decay channels is limited by the accurate description of each channel and the observed data. We should note that the mechanism of SF is still not fully understood. The effect of the quadrupole deformation parameter (\(\varepsilon_{2}\)) [47] on the half-life estimation is Figure 3: The evolution of \(T_{1/2,\rm SF}\) and FB along the mass number. The datasets are divided according to whether the corresponding \(T_{1/2,\rm SF}\) is measured and whether the branch ratio (BR) of SF is less than 1%. also investigated. If one replaces FB by \(\varepsilon_{2}\) during training the RF, more nuclides in \(N>184\) are calculated to be dominated by \(\alpha\) and \(\beta^{-}\) decay. The \(\alpha\) decay leads to shorter half-life. Besides, the relatively long-lived circle at the southwest of \(Z=114,N=184\) is no longer locally. The blue region is much more extended and more long-lived candidates are marked by star. This is because the deformation decreases, which compensates the Coulomb repulsion, that is increased by \(Z\). Furthermore, the FB combines contribution of multipole deformations and thus presents stronger quantum effect in FIG. 2(d, f) than \(\varepsilon_{2}\). The extrapolation stops at the single neutron (proton) and two-neutron (two-proton) drip lines. Dataset of UNEDF0 stops at \(Z=120\). From the existent region to the neutron-deficient side, \(\alpha\) decay and SF are predicted to compete with each other. On the neutron-rich side, calculations predict \(\beta^{-}\) decay as the dominant mode, while SF competes in specific nuclides. 
Up to date, results of most theoretical calculations of partial half-lives [27, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52] support that the \(\alpha\) decay is the dominant decay mode for new elements at \(N\leqslant 184\). But those calculated \(T_{1/2,\mathrm{SF}}\) diverge when it is far from shells. In FIG. 4, we compare the partial half-lives of isotopes with \(Z=117-122\) predicted in the present work and the corresponding results of Refs. [42, 43, 44]. Even though the partial half-lives of \(\beta^{+}\) decay and EC in the present work are not globally longer than that in Ref. [44], they are still about five orders of magnitude greater than that of \(\alpha\) decay in this region, which does not change the dominant decay mode. \(T_{1/2,\alpha}\) predicted in the present work are longer than the results of Refs. [42, 43, 44], which does not change the dominant decay mode of odd-\(Z\) isotopes but enhances the competition of SF in even-\(Z\) isotopes. Furthermore, the prediction of the present work shows strong odd-even staggering of \(T_{1/2,\mathrm{SF}}\) of even-\(Z\) isotopes, i.e., \(T_{1/2,\mathrm{SF}}\) of even-even nuclei is several times of magnitude shorter than its two isotopic neighbors, which differs from the weak or not predicted odd-even staggering effect of other SF models in FIG. 4. In fact, all measured \(T_{1/2,\mathrm{SF}}\) of even-\(Z\) isotopes show such odd-even staggering. FIG. 5 draws \(T_{1/2,\mathrm{SF}}\) and \(T_{1/2,\alpha}\) of isotopes with \(Z\geqslant 92\). When \(Z\) is small, for example in U, Pu, Cm and Cf isotopes, SF is not competitive to \(\alpha\) decay because the Coulomb repulsion is not enough strong. But when \(Z\) is large, this odd-even staggering makes SF competitive with \(\alpha\) decay in these even-even nuclides. The \(\alpha\) decay is thus suggested to be a key signal detected for \(Z=119\) and \(121\) isotopes, while the SF should also be taken into account Figure 4: Comparison of partial half-lives of isotopes with \(Z=117-122\). IMELDM is extracted from Ref. [42], GLDM+RHF, KPS and Xu are extracted from [43], Sarriguren is extracted from Ref. [44], Sridhar is extracted from Ref. [45]. for even-\(N\) isotopes of \(Z=120\) and 122. Note that the odd-even staggering also exists in the odd-\(Z\) isotopes. It can only be verified by \({}^{260\sim 263}\)Db because the data are rare. This is why the odd-even staggering of odd-\(Z\) isotopes is not predicted in the present work. The DNS model predicted \(\sigma_{\rm ER}\) of hundreds of fb for the \(3n\) or \(2n\) channels producing \({}^{293}119_{174}\) on \({}^{243}\)Am target [53], which can be examined on the new facility CAFE2 and SHANS2 in Lanzhou [54]. Considering the odd-even effect of partial half-lives, candidates of nuclide for new superheavy elements still need analysis through the production cross section and the partial half-life. There are several formulas of the SF with more parameters [24; 25; 26; 27], but the extrapolation diverges, sometimes to tens of orders of magnitude, as drawn in Figure 6, because of the higher order terms. There is no hint of such divergence of \(T_{1/2,{\rm SF}}\) from experiments. If the SF is not considered in the extrapolation because of its possible large uncertainty, a long-lived region is predicted along the boundary of competition between the \(\alpha\) and \(\beta^{-}\) decays. Particularly, results from UNEDF0 show a new long-lived region just around \(Z=114\) and \(N=184\), which is also mainly contributed by the \(\alpha\) decay. 
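The odd-even staggering of \(T_{1/2,\mathrm{SF}}\) discussed above can be visualised with a simple three-point difference along an isotopic chain (an illustrative metric, not one used in the analysis): it is large and positive at even \(N\) for an even-\(Z\) chain whenever the even-even SF half-life is much shorter than those of its two odd-\(N\) neighbours.

```python
import numpy as np

def oe_staggering(N_values, log10_T_sf):
    """Three-point odd-even staggering filter along an isotopic chain.

    Delta(N) = 0.5 * [log10 T(N-1) + log10 T(N+1)] - log10 T(N).
    Large positive values at even N (for an even-Z chain) reflect the predicted
    shortening of T_SF of even-even nuclei relative to their odd-N neighbours.
    """
    N_values = np.asarray(N_values)
    T = np.asarray(log10_T_sf)
    delta = 0.5 * (T[:-2] + T[2:]) - T[1:-1]
    return N_values[1:-1], delta

# Hypothetical log10 T_SF values along an even-Z chain (illustration only).
N_chain = np.arange(170, 178)
logT = np.array([3.0, 6.5, 2.5, 6.0, 2.0, 5.5, 1.5, 5.0])
print(oe_staggering(N_chain, logT))
```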
Thus, SF is key to investigating the superheavy stable island but is still not well understood.

Figure 5: The odd-even staggering of \(T_{1/2,{\rm SF}}\) of U, Pu, Cm, Cf, Fm, Rf, Sg, Ds, Cn and Db isotopes, compared with \(T_{1/2,\alpha}\).

Figure 6: The comparison of \(T_{1/2,{\rm SF}}\) calculated by the formulas of Ren2005 [24], Xu2008 [25], Santhosh2010 [26], Soylu2019 [27] and the present work (\({\rm SF}_{\rm FB}\)). The extrapolation in the upper panel is constrained by \(104\leqslant Z\leqslant 121\) and \(56\leqslant N-Z\leqslant 62\), while the bottom panel is constrained by \(Z=112-116\).

## IV Conclusion

In summary, the decay modes of superheavy nuclei are investigated through the Random Forest algorithm. The partial half-lives of \(\alpha\) decay, \(\beta^{-}\) decay, \(\beta^{+}\) decay, EC, and SF are studied and compared with each other. The dominance of \(\alpha\) decay in the neutron-deficient region is further confirmed. \(\beta^{-}\) decay is predicted to be dominant in the neutron-rich region. SF contributes to a long-lived circle at the southwest corner of \(Z=114\) and \(N=184\). More accurate and precise measurements of nuclear masses and decay energies can improve the prediction of the decay mode. After correcting the divergence of existing SF formulas, an odd-even effect of SF is found in even-\(Z\) nuclides, which induces a possible competition between SF and \(\alpha\) decay in even-even nuclides. \(\alpha\) decay is thus suggested to be a key probe of isotopes with \(Z=119\) and 121, while the competition of SF should be taken into account in even-even isotopes with \(Z=120\) and 122. \({}^{250,252,254}\)Cm, \({}^{260,261}\)Es, \({}^{261\sim 264}\)Md, and \({}^{265}\)Lr, with half-lives predicted to be longer than \(10^{4}\) s, are suggested for future measurement. SF, influenced by the fission barrier and the Coulomb repulsion, leads to a long-lived region in the extrapolation. The present results indicate that the study of SF, especially beyond \({}^{286}\)Fl, currently the heaviest nuclide with a significant SF branch ratio, will be of key importance for further studies to be performed on new facilities such as CAFE2 and SHANS2 in Lanzhou.

###### Acknowledgements.

The authors acknowledge useful discussion with Professors Zaiguo Gan, Zhongzhou Ren, and Zhiyuan Zhang. This work has been supported by the Guangdong Major Project of Basic and Applied Basic Research under Grant No. 2021B0301030006, and the computational resources from SYSU and National Supercomputer Center in Guangzhou.
2308.12951
Directly imaging spin polarons in a kinetically frustrated Hubbard system
The emergence of quasiparticles in quantum many-body systems underlies the rich phenomenology in many strongly interacting materials. In the context of doped Mott insulators, magnetic polarons are quasiparticles that usually arise from an interplay between the kinetic energy of doped charge carriers and superexchange spin interactions. However, in kinetically frustrated lattices, itinerant spin polarons - bound states of a dopant and a spin-flip - have been theoretically predicted even in the absence of superexchange coupling. Despite their important role in the theory of kinetic magnetism, a microscopic observation of these polarons is lacking. Here we directly image itinerant spin polarons in a triangular lattice Hubbard system realised with ultracold atoms, revealing enhanced antiferromagnetic correlations in the local environment of a hole dopant. In contrast, around a charge dopant, we find ferromagnetic correlations, a manifestation of the elusive Nagaoka effect. We study the evolution of these correlations with interactions and doping, and use higher-order correlation functions to further elucidate the relative contributions of superexchange and kinetic mechanisms. The robustness of itinerant spin polarons at high temperature paves the way for exploring potential mechanisms for hole pairing and superconductivity in frustrated systems. Furthermore, our work provides microscopic insights into related phenomena in triangular lattice moir\'{e} materials.
Max L. Prichard, Benjamin M. Spar, Ivan Morera, Eugene Demler, Zoe Z. Yan, Waseem S. Bakr
2023-08-24T17:41:07Z
http://arxiv.org/abs/2308.12951v1
# Directly imaging spin polarons in a kinetically frustrated Hubbard system ###### Abstract The emergence of quasiparticles in quantum many-body systems underlies the rich phenomenology in many strongly interacting materials [1]. In the context of doped Mott insulators, magnetic polarons are quasiparticles that usually arise from an interplay between the kinetic energy of doped charge carriers and superexchange spin interactions [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12]. However, in kinetically frustrated lattices, itinerant spin polarons - bound states of a dopant and a spin-flip - have been theoretically predicted even in the absence of superexchange coupling [13; 14; 15; 16; 17; 18; 19; 20; 21]. Despite their important role in the theory of kinetic magnetism, a microscopic observation of these polarons is lacking. Here we directly image itinerant spin polarons in a triangular lattice Hubbard system realised with ultracold atoms, revealing enhanced antiferromagnetic correlations in the local environment of a hole dopant. In contrast, around a charge dopant, we find ferromagnetic correlations, a manifestation of the elusive Nagaoka effect [22; 23]. We study the evolution of these correlations with interactions and doping, and use higher-order correlation functions to further elucidate the relative contributions of superexchange and kinetic mechanisms. The robustness of itinerant spin polarons at high temperature paves the way for exploring potential mechanisms for hole pairing and superconductivity in frustrated systems [15; 16]. Furthermore, our work provides microscopic insights into related phenomena in triangular lattice moire materials [24; 25; 26; 27]. One of the key questions in quantum condensed matter physics is how doped Mott insulators give rise to exotic metallic and superconducting phases. Understanding this problem is crucial for explaining the emergence of the unusual physical properties of many families of strongly correlated electron systems, including the high-\(T_{c}\) cuprates [28], organic charge transfer salts [29] and moire materials [30; 31; 32]. An important aspect of this problem is the interplay between spin order and the quantum dynamics of mobile dopants. So far, most studies have focused on Mott insulators on a square lattice where the motion of charge carriers disturbs spin correlations, resulting in an adversarial relationship between doping and spin order [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12]. This explains why many theoretical studies of doped high-\(T_{c}\) cuprates are usually done from the perspective of Mott states in which spin order has been suppressed by fluctuations [33; 34; 35; 36]. Recently, twisted bilayer graphene [30] and transition metal dichalcogenides [32] have provided a strong motivation for studying doped Mott insulators in triangular lattices. We explore this problem microscopically using a cold-atom triangular Fermi-Hubbard system [37; 38]. One surprise of our experiments is that in contrast to square lattice systems, there is a symbiotic relation between mobile holes and antiferromagnetism. This manifests in the formation of antiferromagnetic (AFM) itinerant spin polarons in the hole-doped system, which we directly image by measuring spin correlations around mobile holes. In striking contrast, we find that particle doping favors the formation of ferromagnetic (FM) polarons similar to those discussed previously for the square lattice Fermi-Hubbard model [22; 23]. 
Some of the most important implications of our results are for systems in which the local interaction \(U\) is much larger than the single electron tunnelling \(t\), in which case the magnetic superexchange \(J\) is strongly supressed. Indeed, this is the relevant regime in most moire systems. Intuition based on earlier studies would suggest that at temperatures higher than the superexchange scale, the regime we explore here, one can not expect coherent propagation of quasiparticles [2]. Our results demonstrate that this does not have to be the case in triangular lattices. Formation of polarons around mobile dopants facilitates their propagation and makes their dynamics more coherent. This robustness of the quasiparticle can also be understood as the result of effective magnetic interactions with energy scale \(t\) induced by the motion of dopants in the frustrated system [14]. At the heart of the mechanism responsible for the formation of polarons in our experiment is the phenomenon of kinetic frustration, which has received much recent theoretical attention [15; 16; 17; 18; 19; 20; 21]. This describes the reduction of the mobility of dopants due to the destructive interference of different propagation paths in certain lattice geometries, including the triangular one. To release this frustration and lower the kinetic energy of the dopants, the system develops magnetic correlations. The resulting magnetism, known as kinetic magnetism, is closely re lated to that studied by Nagaoka is his seminal work [22], but is more robust in that it is predicted to survive for finite interactions and doping. Indeed, kinetic magnetism has been observed very recently in doped van der Waals heterostructures through measurements of the spin susceptibility [24; 26]. Magnetisation plateaus attributed to kinetic effects have also been measured in these materials [27]. Our experimental results provide a microscopic picture underlying these observations. More broadly, our results motivate studying the properties of doped Mott insulating states in triangular lattices, including superconductivity, from the perspective of self-organisation of itinerant spin polarons [35; 39; 40; 16; 41]. Our system consists of a two-dimensional degenerate gas of \({}^{6}\)Li that is an equal mixture of two spin species corresponding to the first (\(\ket{\uparrow}\)) and third (\(\ket{\downarrow}\)) lowest hyperfine states of the atom. The gas is loaded adiabatically into a an optical lattice realising a triangular lattice Hubbard model, \[\hat{H}=-\sum_{\langle i,j\rangle,\sigma}t\left(\hat{c}^{\dagger}_{i\sigma} \hat{c}_{j\sigma}+\hat{c}^{\dagger}_{j\sigma}\hat{c}_{i\sigma}\right)+U\sum_ {i}\hat{n}_{i\uparrow}\hat{n}_{i\downarrow}, \tag{1}\] where \(\hat{c}^{\dagger}_{i\sigma}\) (\(\hat{c}_{i\sigma}\)) creates (destroys) a fermion of spin at lattice site \(i\), the number operator \(\hat{n}_{i\sigma}=\hat{c}^{\dagger}_{i\sigma}\hat{c}_{i\sigma}\) measures site occupation and \(\langle i,j\rangle\) denotes nearest-neighbor sites. In the model, particles hop with \(t>0\). With this sign of the tunneling, a particle in an empty lattice can lower its energy by delocalizing on each lattice bond in a symmetric spatial orbital (Fig. 1a). The corresponding band structure is particle-hole asymmetric, and the particle attains its minimal energy of \(-6t\) at zero quasi-momentum. Kinetic frustration can be understood by considering the opposite scenario of a single hole moving in a spin-polarized background. 
In this case, the Hamiltonian is better expressed in terms of hole operators with \(\hat{h}^{\dagger}_{i}=\hat{c}_{i}\), \[\hat{H}=\sum_{\langle i,j\rangle}t\left(\hat{h}^{\dagger}_{i}\hat{h}_{j}+\hat {h}^{\dagger}_{j}\hat{h}_{i}\right) \tag{2}\] Crucially, the change in the sign of the tunneling resulting from the anticommutation of fermionic hole operators in the Hamiltonian favors antisymmetric spatial orbitals for the hole on each bond. This condition cannot be simultaneously satisfied on all bonds of a triangular plaquette, leading to kinetic frustration in a manner reminiscent of spin frustration. Indeed, this is manifested in the band structure of the hole, which is mirrored about zero energy relative to the particle. The hole kinetic energy is thus minimized at a value of \(-3t\), larger than in the unfrustrated system. A simplified picture explaining the emergence of the itinerant spin polaron in the doped interacting system can be obtained by considering a triangular plaquette with two fermions. In the limit of strong interactions, Figure 1: **Itinerant spin polaron.****a,** A single particle in a triangular lattice with \(t>0\) minimizes its energy by occupying symmetric orbitals on each bond. Its band structure \(E(k)\) exhibits a minimum energy of \(E=-6t\). In a spin polarized background, a single hole has a negative effective tunneling and tries to occupy asymmetric orbitals on each bond, but is unable to do so (kinetic frustration). Its minimum energy is \(E=-3t\). **b,** Once spins of the background Mott insulator are considered, the motion of a hole in a closed loop on a plaquette exchanges the spins. If the neighboring spins are in the singlet (\(S=0\)) sector, the final state \(\ket{\psi_{f}}\) picks up a spin Berry phase, i.e. \(\ket{\psi_{f}}=e^{i\pi}\ket{\psi_{i}}\). This phase is absent in the triplet (\(S=1\)) sector. **c,** The relative sign flip for hole (particle) dopants means that a spin singlet (triplet) configuration is favored, manifesting as an AFM (FM) polaron. **d,** An optical lattice with triangular connectivity is formed by superimposing non-interfering square and 1D lattice potentials. The nearest-neighbor tunneling matrix elements are indicated as \(t_{x}\), \(t_{y}\) and \(t_{d}\). **e,** A single fluorescence image gives the spatial distribution of both spin states. The reconstructed image contains hole (gray) and particle (purple) dopants in a Mott insulator surrounded by either ferromagnetically or antiferromagnetically correlated spins. Spatial distances in the highlighted region of the lattice have been transformed to reflect the connectivity of the lattice. double occupancies are energetically forbidden and the motion of the hole on a closed loop on the plaquette will exchange the two spins. In the spin singlet sector, this produces a spin Berry phase of \(\pi\), whereas the phase is zero in the spin-symmetric triplet sector (Fig. 1b) [42]. The phase acquired by the hole in the singlet sector returns the tunneling to a positive value, thereby releasing the kinetic frustration and allowing the hole to reach a lower ground state energy. The resulting object, a singlet bond bound to a hole with a binding energy of order \(t\), is predicted to persist in the many-body setting for light hole doping (Fig. 1c). This corresponds to a polaron with antiferromagnetic spin correlations in the vicinity of a hole. The situation is reversed for particle doping, favoring ferromagnetic correlations in the vicinity of a doublon. 
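The \(-6t\) versus \(-3t\) comparison quoted above follows directly from the triangular-lattice tight-binding dispersion; a short numerical check (an illustration using the sign convention of Eq. (1), not code from the experiment) is given below.

```python
import numpy as np

t = 1.0
a1 = np.array([1.0, 0.0])                  # primitive vectors of the triangular lattice
a2 = np.array([0.5, np.sqrt(3) / 2])
a3 = a1 - a2                               # third nearest-neighbour direction

def eps(kx, ky):
    """Dispersion of a single particle for H = -t * sum(c^dag c + h.c.), t > 0."""
    k = np.stack([kx, ky], axis=-1)
    return -2 * t * (np.cos(k @ a1) + np.cos(k @ a2) + np.cos(k @ a3))

kx, ky = np.meshgrid(np.linspace(-2 * np.pi, 2 * np.pi, 601),
                     np.linspace(-2 * np.pi, 2 * np.pi, 601))
band = eps(kx, ky)

print("particle band minimum:", band.min())   # -6t, at k = 0
# A hole in a spin-polarized background sees the tunneling sign reversed, so its
# band is -eps(k); kinetic frustration caps its minimum at about -3t.
print("hole band minimum:", (-band).min())    # ~ -3t, at the Brillouin-zone corners
```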
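The plaquette argument can also be checked quantitatively with a small exact diagonalisation; the sketch below (a minimal illustration written for this discussion, not the analysis code of the experiment) builds the three-site Hubbard triangle through a Jordan-Wigner construction and confirms that, for \(t>0\) and large \(U/t\), the one-hole sector (two electrons) has a spin-singlet ground state at energy close to \(-2t\), while the one-doublon sector (four electrons) has a spin-triplet ground state, in line with the AFM and FM polarons of Fig. 1c.

```python
import numpy as np
from functools import reduce

L, modes = 3, 6                                  # 3 sites x 2 spins (0 = up, 1 = down)
I2, PZ = np.eye(2), np.diag([1.0, -1.0])         # PZ is the Jordan-Wigner string factor
a = np.array([[0.0, 1.0], [0.0, 0.0]])           # single-mode annihilation operator

def c(p):                                        # annihilation operator for mode p
    return reduce(np.kron, [PZ] * p + [a] + [I2] * (modes - p - 1))

def mode(i, s):                                  # (site, spin) -> mode index
    return 2 * i + s

cs = [c(p) for p in range(modes)]
cds = [op.T for op in cs]
n = [cds[p] @ cs[p] for p in range(modes)]

def hubbard(t, U):                               # H of Eq. (1) on a single triangle
    H = np.zeros((2**modes, 2**modes))
    for i, j in [(0, 1), (1, 2), (0, 2)]:
        for s in (0, 1):
            H -= t * (cds[mode(i, s)] @ cs[mode(j, s)] + cds[mode(j, s)] @ cs[mode(i, s)])
    for i in range(L):
        H += U * n[mode(i, 0)] @ n[mode(i, 1)]
    return H

# Total spin operator S^2 = Sz^2 + (S+S- + S-S+)/2
Nup = sum(n[mode(i, 0)] for i in range(L))
Ndn = sum(n[mode(i, 1)] for i in range(L))
Sz = 0.5 * (Nup - Ndn)
Sp = sum(cds[mode(i, 0)] @ cs[mode(i, 1)] for i in range(L))
S2 = Sz @ Sz + 0.5 * (Sp @ Sp.T + Sp.T @ Sp)

def ground(t, U, nup, ndn):                      # lowest state in a fixed (Nup, Ndn) sector
    sel = np.where((np.diag(Nup).round() == nup) & (np.diag(Ndn).round() == ndn))[0]
    w, v = np.linalg.eigh(hubbard(t, U)[np.ix_(sel, sel)])
    psi = np.zeros(2**modes)
    psi[sel] = v[:, 0]
    return w[0], psi @ S2 @ psi

E_h, S2_h = ground(1.0, 50.0, 1, 1)              # one hole: 2 electrons on 3 sites
E_d, S2_d = ground(1.0, 50.0, 2, 2)              # one doublon: 4 electrons on 3 sites
print(E_h, S2_h)   # E ~ -2t, <S^2> ~ 0 : singlet bound to the hole (kinetic AFM)
print(E_d, S2_d)   # <S^2> ~ 2           : triplet around the doublon (kinetic FM)
```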
We directly detect the itinerant spin polaron in our system using a connected three-point correlation function, which probes the spin correlations in the environment of a hole or doublon. Such correlators have been previously used to identify magnetic polarons in the square lattice, although in that case, the mechanism that leads to the formation of the polaron is different and the binding energy is on the superexchange scale [10; 43; 12]. To realise a lattice with triangular connectivity [37; 38; 44; 45; 46; 47; 48; 49], we superimpose two non-interfering lattices, a strong one-dimensional optical lattice with spacing \(a=532\) nm and depth \(V_{532}=6.7(2)\,E_{R,532}\) and a weak square optical lattice with larger spacing \(a=752\) nm and depth \(V_{752}=2.9(1)\,E_{R,752}\), where \(E_{R,a}\equiv h^{2}/8ma^{2}\) with \(m\) the mass of the atom (Fig. 1d) [48]. The frequency detuning between the two lattices is used to tune their relative alignment to obtain a triangular geometry. Their relative depths are chosen to produce an isotropic triangular lattice by equalising the tunneling strength along the original square lattices axes and one diagonal. The gas is prepared at a magnetic field near a Feshbach resonance at 690 G allowing us to freely tune the scattering length. In this way we tune the coupling strength \(U/t\) to explore the evolution of the correlations from the metallic to the Mott insulating regime. We use a quantum gas microscope to measure site-resolved correlations associated with the polaron in the many-body system [50]. We further implement a bilayer imaging technique [51; 52; 53; 54], wherein a magnetic field gradient is first used to separate the two spin states into different layers prior to imaging them simultaneously (Fig 1e). From the reconstructed images, we can calculate arbitrary \(n\)-point correlation functions involving both spin and density operators averaged over experimental cycles. In the strongly interacting regime, the atoms order in a Mott insulator and exhibit short-range \(120^{\circ}\) spiral AFM correlations that have been observed in previous experiments [37; 38]. We use the two-point spin correlations for thermometry by comparison to Determinant Quantum Monte Carlo (DQMC) calculations [55] (see Methods). The typical peak density of the clouds in the lattice is \(n=n_{\uparrow}+n_{\downarrow}=1.2\), allowing us to study a range of dopings \(\delta=n-1\) on either side of half-filling of the Hubbard system in each experimental snapshot due to the harmonic confinement of the lattice beams. To detect the polaron, we evaluate connected three Figure 2: **Imaging the internal structure of the polaron.****a,** Three point correlations \(C^{(3)}((1,0),(1/2,\sqrt{3}/2))\) (blue and green) and \(C^{(3)}((1,0),(3/2,\sqrt{3}/2))\) (red and orange) versus doping \(\delta\). Theory curves (gray bands) are from DQMC with \(U/t=11.8(4),T/t=0.94(4)\). Right: (top) Further correlations \(C^{(3)}_{h}\) at a \(\delta=-0.10(2)\) and (bottom) \(C^{(3)}_{4}\) at \(\delta=0.15(2)\). In the vicinity of a hole for \(\delta<0\), we see stronger antiferromagnetic correlations at \(((1,0),(1/2,\sqrt{3}/2))\), and weak ferromagnetic correlations at further distances. We see the opposite behavior for doubles at \(\delta>0\). **b,** When a hole dopant is introduced into a Mott insulator, spins near the hole have antiferromagnetic correlations. 
If the hole tunnels once, the plaquette around the hole has antiferromagnetic correlations while the next furthest correlator now has ferromagnetic correlations. **c,** Three-point correlators normalized by the dopant density. DQMC theory is shown for the interacting systems (gray bands) and in the non-interacting limit (dotted red and green lines). Error bars represent 1 standard error of the mean (s.e.m.). point charge-spin-spin correlation functions. For a hole dopant, the relevant correlation function, \(C_{h}^{(3)}(\mathbf{d_{1}},\mathbf{d_{2}})\) is computed as: \[C_{h}^{(3)}(\mathbf{d_{1}},\mathbf{d_{2}})\\ \equiv\langle\hat{n}_{\mathbf{r_{0}}}^{h}\hat{\mathcal{S}}_{\mathbf{ r_{0}}+\mathbf{d_{1}}}^{z}\hat{\mathcal{S}}_{\mathbf{r_{0}}+\mathbf{d_{2}}}^{z} \rangle-\langle\hat{n}_{\mathbf{r_{0}}}^{h}\rangle\langle\hat{\mathcal{S}}_{ \mathbf{r_{0}}+\mathbf{d_{1}}}^{z}\hat{\mathcal{S}}_{\mathbf{r_{0}}+\mathbf{d_ {2}}}^{z}\rangle, \tag{3}\] where we have assumed a spin-balanced system \(\langle\hat{S}_{\mathbf{r}}^{z}\rangle=0\). Here \(\mathbf{d_{1}}\) and \(\mathbf{d_{2}}\) are displacement vectors relative to a site at position \(\mathbf{r_{0}}\), \(\hat{S}_{\mathbf{r}}^{z}=\hat{n}_{\mathbf{r}\uparrow}-\hat{n}_{\mathbf{r}\downarrow}\) is the projection of the spin on site \(\mathbf{r}\) along the quantization axis and \(\hat{n}_{\mathbf{r}}^{h}\) is the hole number operator \((1-\hat{n}_{\mathbf{r}\uparrow})(1-\hat{n}_{\mathbf{r}\downarrow})\). We average the correlation function over sites \(\mathbf{r_{0}}\) with similar doping. The doublon correlation function \(C_{d}^{(3)}\) is constructed in an analogous way by replacing the hole number operator \(\hat{n}_{\mathbf{r}}^{h}\) with the doublon number operator \(\hat{n}_{\mathbf{r}}^{d}=\hat{n}_{\mathbf{r}\uparrow}\hat{n}_{\mathbf{r}\downarrow}\). For the range of dopings we consider, the two-point spin correlator is always negative due to the dominant superexchange antiferromagnetism. The second term of the connected three-point correlators defined in equation (3) removes any uncorrelated charge-spin-spin signal associated with this background AFM signal, allowing us to isolate kinetic effects. The measured correlation functions \(C_{h}^{(3)}\) and \(C_{d}^{(3)}\) are shown in Fig. 2a for a lightly doped Mott insulator (\(U/t=11.8(4)\)). For \(\delta=-0.10(2)\), \(C_{h}^{(3)}\) exhibits the expected negative correlations with a value of \(-5.9(7)\times 10^{-3}\) for \(\mathbf{d_{1}}=(1,0)\), \(\mathbf{d_{2}}=(1/2,\sqrt{3}/2)\), indicating an enhancement of AFM order in the immediate vicinity of a hole, as expected from the itinerant spin polaron picture. We observe that the correlations for displacements corresponding to bonds further away from the central site flip sign, e.g. for \(C_{h}^{(3)}(\mathbf{d_{1}}=(1,0)\), \(\mathbf{d_{2}}=(3/2,\sqrt{3}/2))=1.3(4)\times 10^{-3}\). These correlations may be understood using the picture of a mobile hole perturbing the surrounding AFM ordering (Fig. 2b). For \(\delta=0.15(2)\), the doublon correlation function \(C_{d}^{(3)}\) exhibits the opposite sign correlations, corresponding to a ferromagnetic polaron. This ferromagnetic polaron may be responsible for the observation of ferromagnetic two-point correlators at larger particle dopings in a previous optical lattice experiment, interpreted in that work as a potential hint of kinetic magnetism [38]. Additionally, we explore the dependence of the three-point correlators on doping for the same large interaction (Fig. 2a). 
For small \(\delta\), we find that the correlators scale linearly with the relevant dopant density \(\delta\) for \(|\delta|\lesssim 0.1\). This indicates that in this regime, the description of the system in terms of weakly interacting polarons is valid. Polaron interactions become important for larger dopant densities. For example, \(C_{h}^{(3)}((1,0),(1/2,\sqrt{3}/2))\) reaches its most negative value at \(\delta\sim-0.3\). These observations motivate introducing a normalized version of the correlators by dividing out the relevant dopant density \(\delta\), where Figure 3: **Evolution of three-point correlations with doping and interactions.**\(C_{h}^{(3)}\) and \(C_{d}^{(3)}\) connected correlations vs. doping in the metallic and Mott insulating regimes for \(\mathbf{d_{1}}=(1,0)\), \(\mathbf{d_{2}}=(1/2,\sqrt{3}/2)\). Data shown for **a,**\(U/t=4.4(1),T/t=0.68(2)\), **b,**\(U/t=8.0(2),T/t=0.84(3)\), and **c,**\(U/t=11.8(4),T/t=0.94(4)\). For all interactions, we observe the largest negative \(C_{h}^{(3)}\) correlator at a doping of around -0.3. For increasing \(U/t\), the \(C_{h}^{(3)}\) and \(C_{d}^{(3)}\) correlators become more linear near half filling, indicating a region where there are weakly interacting polarons. The blue dashed line in **c**, which is fit to the DQMC in the doping range \(-0.09<\delta<0.06\), illustrates this region. DQMC theory is shown for the interacting systems (gray bands) and in the non-interacting limit (dotted red and green lines). Error bars represent 1 s.e.m. \(C_{\rm norm}^{(3)}\equiv C^{(3)}/\delta\) (Fig. 2c). We note that for this strong interaction, the holes and doublons are predominantly itinerant dopants, rather than virtual doublon-hole fluctuations of the underlying Mott insulator. The normalized spin correlation emphasize the fact that the spin correlations per dopant are strongest close to half-filling. While the itinerant spin polaron picture we have presented so far is in the regime of strong interactions, it is interesting to explore how the three-point correlations evolve with \(U/t\). Fig. 3 shows these correlations in the metallic (\(U/t=4.4\)), Mott insulating (\(U/t=11.8\)) and intermediate regimes (\(U/t=8.0\)) in the temperature range \(T/t\sim 0.7-0.9\). Surprisingly, many of the qualitative features of the correlations are similar, including the minimum in the antiferromagnetic correlations around a hole at \(\delta\sim-0.3\). This can again be understood from a single plaquette in the alternative limit of vanishing interactions, which predicts correlations of the same sign as the itinerant spin polaron [56] (see Methods). For all interactions, the measured correlations show reasonable agreement with DQMC calculations with a small systematic deviation in \(C_{d}^{(3)}\) and \(C_{h}^{(3)}\) for larger fillings, possibly due to an increase in reconstruction errors (see Methods). The correlations differ significantly from those expected for the non-interacting gas, especially for the two stronger interactions. As \(U/t\) increases, the onset of the correlation moves closer to half-filling as contributions from virtual doublon-hole fluctuations are increasingly suppressed. The characteristic linear growth of the correlations, expected in the polaronic regime and observed for \(U/t=11.8\), is absent for the weakest interactions. 
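For concreteness, the connected correlator of Eq. (3) can be estimated from the reconstructed occupation snapshots with a few lines of numpy. The sketch below uses a hypothetical array layout (per-snapshot, per-site occupations of each spin state) and integer index offsets in place of the triangular-lattice displacement vectors, and it omits the binning of sites \(\mathbf{r_0}\) by local doping described above.

```python
import numpy as np

def connected_c3(n_up, n_dn, d1, d2):
    """Estimate C_h^(3)(d1, d2) of Eq. (3) from occupation snapshots.

    n_up, n_dn : arrays of shape (n_snapshots, Ly, Lx) with entries 0 or 1.
    d1, d2     : non-negative integer (dy, dx) index offsets standing in for
                 the displacement vectors d_1 and d_2.
    Replacing n_hole by n_up * n_dn gives the doublon correlator C_d^(3).
    """
    n_hole = (1 - n_up) * (1 - n_dn)             # hole number operator per site
    sz = n_up - n_dn                             # spin projection, as defined in the text
    dy, dx = max(d1[0], d2[0], 0), max(d1[1], d2[1], 0)
    S, Ly, Lx = n_hole.shape
    # Restrict r0 so that r0 + d1 and r0 + d2 stay inside the images.
    h0 = n_hole[:, :Ly - dy, :Lx - dx]
    s1 = sz[:, d1[0]:d1[0] + Ly - dy, d1[1]:d1[1] + Lx - dx]
    s2 = sz[:, d2[0]:d2[0] + Ly - dy, d2[1]:d2[1] + Lx - dx]
    # Connected correlator: subtract the uncorrelated hole x (spin-spin) background.
    return (h0 * s1 * s2).mean() - h0.mean() * (s1 * s2).mean()

# Hypothetical snapshots for illustration only (random occupations).
rng = np.random.default_rng(0)
n_up = rng.integers(0, 2, size=(2000, 10, 10))
n_dn = rng.integers(0, 2, size=(2000, 10, 10)) * (1 - n_up)   # no double occupancy here
print(connected_c3(n_up, n_dn, d1=(0, 1), d2=(1, 0)))          # ~0 for uncorrelated data
```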
Additional evidence for an experimental observation of the itinerant spin polaron at the largest interaction strength comes by combining the observed three-point correlators with a measurement of the singles fraction \(n_{s}=n-n^{d}\) at half-filling, which ensures that the system is in the strongly interacting regime. For \(U/t=4.4,8.0\) and \(11.8\), this is \(0.71(1),0.85(1)\) and \(0.93(1)\) respectively. Since superexchange-induced AFM correlations are present in the system for any finite interaction strength, it is illuminating to quantify their strength in comparison to kinetic magnetism around dopants. This can be done using four-point correlation functions. For any two nearest-neighbour sites \(j\) and \(k\), there are two additional sites \(i\) and \(l\) (which we call conditioning sites) coupled to both of them. (Fig. 4a). As a dopant on either conditioning site may affect the correlation strength between \(j\) and \(k\), a four-point correlation function is required to determine the influence of the background on the \(j-k\) bond. We define a conditional four-point correlator as the spin correlator on the shared bond, conditioned on the occupancy observables \(\hat{n}^{a},\hat{n}^{b}\) on sites \(i\) and \(l\), \[C_{ab}^{(4)}\equiv\frac{\langle\hat{n}_{i}^{a}\hat{S}_{j}^{z}\hat{S}_{k}^{z} \hat{n}_{l}^{b}\rangle}{\langle\hat{n}_{i}^{a}\hat{n}_{l}^{b}\rangle}, \tag{4}\] where the labels \(a,b\in\{h,s\}\). Fig. 4b shows the four-point correlator at \(U/t=11.8\) for three different occupancies of the conditioning sites (Fig. 4b). Below half-filling, we find that \(C_{hh}^{(4)}<C_{hs}^{(4)}<C_{ss}^{(4)}\). This indicates that antiferromagnetic correlations associated with the kinetic mechanism are stronger than those due to superexchange, which is the only mechanism at play on the level of the four-site plaquette when the conditioning sites are singly-occupied. Above half filling, where holes are due to virtual fluctuations, there is no enhancement of anti-ferromagnetic correlations from kinetic magnetism and all three correlators are similar in value. The stronger kinetic AFM correlations below half-filling can be understood from the exact diagonalization spectrum of the four-site plaquette (Fig. 4a). The excitation gap from the ground state is of order \(J=4t^{2}/U\) when the conditioning sites are singly-occupied, which is substantially smaller than the \(t\) scale gap when these sites have one or two holes. This makes kinetic magnetism more robust compared to superexchange magnetism in our finite temperature system (\(T/J\sim 2.8\)). In this work, we have directly imaged itinerant spin polarons in a triangular Hubbard system by measuring three- and four-point correlation functions. We have characterised their evolution with doping and interactions, and compared the strength of correlations induced Figure 4: **Comparing antiferromagnetic correlations due to kinetic and superexchange mechanisms.****a,** Four point correlators on a diamond plaquette in the Mott insulating regime (\(U/t=11.8\)) elucidate the difference between kinetic and superexchange magnetism. The energy spectra are shown for a diamond plaquette with one up spin and two down spins (green) and two up spins and two down spins (orange). They exhibit gaps from the ground state of order \(t\) and \(J\) respectively. 
**b,** Conditional four-point correlators \(C^{(4)}\) show the spin-spin correlator on a single bond in the presence of two holes (red), one hole and one singly occupied site (green), and two singly occupied sites (blue). Below half filling, bonds have increased antiferromagnetic spin correlations with increasing neighbors that are holes. DQMC theory shown as colored bands. Error bars represent \(1\) s.e.m. by superexchange and kinetic effects. In future work, it will be interesting to search for complex multi-particle bound states that are expected to arise in frustrated systems [16], as well as the many-body states that can emerge from their self-organisation. For example, kinetic frustration may lead to hole-pairing mechanisms and superconductivity at high temperatures [57, 58, 59, 15, 16, 60]. Another direction for future studies is the measurement of the binding energy of polarons, either using spectroscopic techniques or by identifying polarization plateaus in the response to an effective Zeeman field in a spin-imbalanced system [17, 18]. **Acknowledgements:** We acknowledge Markus Greiner, David Huse, Rhine Samajdar, Lawrence Cheuk, Eun-Ah Kim, Daniel Khomskii, Annabelle Bohrdt, Fabian Grusdt, Henning Schlomer and Gil Refael for helpful discussions. We also thank Siddarth Dandavate for early assistance in performing the DQMC simulations. The experimental work was supported by the NSF (grant no. 2110475), the David and Lucile Packard Foundation (grant no. 2016-65128) and the ONR (grant no. N00014-21-1-2646). M.L.P. acknowledges support from the NSF Graduate Research Fellowship Program. E. D. acknowledges support from the ARO (grant no. W911NF-20-1-0163) and the SNSF (project 200021_212899). I.M. acknowledges support from Grant No. PID2020-114626GB-I00 from the MICIN/AEI/10.13039/501100011033 and Secretaria d'Universitats i Recerca del Departament d'Empresa i Coneixement de la Generalitat de Catalunya, cofunded by the European Union Regional Development Fund within the ERDF Operational Program of Catalunya (Project No. QuantumCat, Ref. 001-P-001644). **Author contributions:** E.D., I.M. and W.S.B. conceived the study and supervised the experiment. M.L.P., B.M.S. and Z.Z.Y. performed the experiments and analyzed the data. All authors contributed to writing the manuscript. **Competing interests:** The authors declare no competing interests. \({}^{*}\) These authors contributed equally to this work. \({}^{\dagger}\) Email: [email protected]
2308.03684
Active Noise Control based on the Momentum Multichannel Normalized Filtered-x Least Mean Square Algorithm
Multichannel active noise control (MCANC) is widely utilized to achieve significant noise cancellation area in the complicated acoustic field. Meanwhile, the filter-x least mean square (FxLMS) algorithm gradually becomes the benchmark solution for the implementation of MCANC due to its low computational complexity. However, its slow convergence speed more or less undermines the performance of dealing with quickly varying disturbances, such as piling noise. Furthermore, the noise power variation also deteriorates the robustness of the algorithm when it adopts the fixed step size. To solve these issues, we integrated the normalized multichannel FxLMS with the momentum method, which hence, effectively avoids the interference of the primary noise power and accelerates the convergence of the algorithm. To validate its effectiveness, we deployed this algorithm in a multichannel noise control window to control the real machine noise.
Dongyuan Shi, Woon-Seng Gan, Bhan Lam, Shulin Wen, Xiaoyi Shen
2023-08-07T15:59:38Z
http://arxiv.org/abs/2308.03684v1
Active Noise Control based on the Momentum Multichannel Normalized Filtered-x Least Mean Square Algorithm ###### Abstract **Multichannel active noise control (MCANC) is widely utilized to achieve significant noise cancellation area in the complicated acoustic field. Meanwhile, the filter-x least mean square (FxLMS) algorithm gradually becomes the benchmark solution for the implementation of MCANC due to its low computational complexity. However, its slow convergence speed more or less undermines the performance of dealing with quickly varying disturbances, such as piling noise. Furthermore, the noise power variation also deteriorates the robustness of the algorithm when it adopts the fixed step size. To solve these issues, we integrated the normalized multichannel FxLMS with the momentum method, which hence, effectively avoids the interference of the primary noise power and accelerates the convergence of the algorithm. To validate its effectiveness, we deployed this algorithm in a multichannel noise control window to control the real machine noise.** ## 1 Introduction Active noise control (ANC) is a technique that utilizes the loudspeaker generating "anti-noise" wave with the negative amplitude of the unwanted noise to cancel this acoustic disturbance [1, 2, 3, 4]. Compared to passive noise cancellation strategy, such as the noise barrier, ANC exhibits more effectiveness in mitigating the low-frequency noise without occupying large space, affecting air ventilation, and destroying the natural environment. Hence, the ANC technique is widely applied in many different fields, including the headphones [5], windows [6, 7, 8, 9, 10, 11, 12], and the open space [13]. However, in many real scenario, ANC can only achieve the local noise control around the error sensor [14]. To enlarge the size of the noise reduction area, the multichannel ANC (MCANC) is usually applied at the expense of the system complexity [15]. With the significant development of powerful processors and other digital devices [16], such as digital signal processors, analog-to-digital converters (ADC), and digital-to-analog converters (DAC), it becomes feasible to realize the active control with adaptive algorithms [17, 18]. Among these algorithms, the filtered-x least mean square (FxLMS) algorithm is prevalent in various applications since its low computational complexity [19]. Besides, it utilizes a filtered reference signal to update the control filter and compensates for the delay involved by the secondary path, which enhances the system robustness. Since its satisfactory performance in the single-channel ANC, FxLMS is also extended to the multichannel FxLMS (McFxLMS) algorithm while maintaining similar advantages in the MCANC application [20]. Meanwhile, some other FxLMS-based algorithms have been proposed to further reduce computations [21, 22], improve convergence [23, 24], or cope with the output saturation issue [25, 26, 27, 28, 29, 30]. Nevertheless, these FxLMS-based algorithms are always haunted by a practical issue [31]: the step size bound is sensitive to the reference signal's power. An inappropriate step-size selection usually results in the divergence or slow convergence problem. For the same reason, the stability of the McFxLMS algorithm will be deteriorated in canceling the varying noise when its step size is chosen as a constant. To solve this issue, the variable step-size methods seem to be an effective strategy. 
However, most of these variable step-size mechanisms will severely aggravate the computational load, especially for the multichannel system. Under this situation, the multichannel normalized FxLMS (MNFxLMS) algorithm [32, 33] becomes a better choice because it can avoid the influence of input power by pre-whitening the referenced signal while slightly increasing computations. In that, MNFxLMS is undoubtedly suitable to deal with the primary noise with a massive power variation over time. To further improve its convergence, we integrate the momentum technique to MNFxLMS in this paper. In the new algorithm, the momentum term [34, 35, 36] accumulates the previous gradient information to accelerate the convergence of MNFxLMS, which hence, leads to a satisfactory noise reduction performance when dealing with the quick-varying noise. The momentum MNFxLMS also smooths the varied gradient and reduces the high-frequency disturbance on the control filter's weight. Furthermore, this paper carries out the simulations on the proposed algorithm, which is used to deal with real quick-varying noise in measured paths. This paper is organized as the following descriptions: Section 2 revisits the multichannel normalized FxLMS algorithm; Section 3 proposes the momentum MNFxLMS algorithm and addresses a brief analysis. Section 4 exhibits the simulation results of the McFxLMS, MNFxLMS, and momentum MNFxLMS algorithms, and Section 5 summaries the whole paper. ## 2 The multichannel normalized filtered-X LMS algorithm In this paper, we consider a \(J\times K\times M\) multichannel active noise control (MCANC) system, which uses \(J\) reference microphones and \(K\) secondary sources to cancel the disturbances at \(M\) error microphones, as shown in Figure 1. In this figure, **P**, **W**, and **S** denote the transfer functions of the primary paths, control filters, and secondary paths, respectively. \(\hat{\textbf{S}}\) stands for the estimate of **S** and is obtained through the offline system identification. Figure 1: Block diagram of a multichannel ANC with \(J\) microphones, \(K\) secondary sources, and \(M\) error microphones [37]. The control signal of the \(k\)th secondary source can be expressed as \[y_{\mathrm{k}}(n)=\sum_{\mathrm{j=1}}^{\mathrm{J}}\mathbf{w}_{\mathrm{kj}}^{ \mathrm{T}}(n)\mathbf{x}_{\mathrm{j}}(n) \tag{1}\] where \(\mathbf{w}_{\mathrm{kj}}(n)\) denotes the \(kj\)th control filter that models the \(j\)th reference to drive the \(k\)th secondary source and is expressed as \[\mathbf{w}_{\mathrm{kj}}(n)=\begin{bmatrix}w_{\mathrm{kj},1}(n)&w_{\mathrm{kj },2}(n)&\cdots&w_{\mathrm{kj},\mathrm{N}}(n)\end{bmatrix}^{\mathrm{T}}\in \mathbb{R}^{\mathrm{N}\times 1}\] which has \(N\) taps, and \(\mathbf{x}_{\mathrm{j}}(n)\) is the \(j\)th reference vector give by \[\mathbf{x}_{\mathrm{j}}(n)=\begin{bmatrix}x_{\mathrm{j}}(n)&x_{\mathrm{j}}(n- 1)&\cdots&x_{\mathrm{j}}(n-N-1)\end{bmatrix}^{\mathrm{T}}\in\mathbb{R}^{ \mathrm{N}\times 1}.\] T and \(\mathbb{R}\) represent the transpose operation and the real number, respectively. The error signal at the \(m\)th microphone can be written as \[e_{\mathrm{m}}(n)=d_{\mathrm{m}}(n)+\sum_{\mathrm{k=1}}^{\mathrm{K}}y_{ \mathrm{k}}(n)*s_{\mathrm{mk}}(n) \tag{2}\] where \(*\) denotes the linear convolution, and \(s_{\mathrm{mk}}(n)\) stands for the secondary path from the \(k\)th secondary source to the \(m\)th error microphone. 
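To fix the notation used in the derivation that follows, a minimal numpy sketch of Eqs. (1) and (2) is given below; the reference signals, disturbances, control filters and secondary-path impulse responses are all placeholders rather than measured quantities.

```python
import numpy as np

rng = np.random.default_rng(0)
J, K, M, N, Ls, T = 1, 4, 4, 512, 256, 4000   # channels, filter taps, samples

x = rng.standard_normal((J, T))               # reference signals
d = rng.standard_normal((M, T))               # disturbances at the error mics (placeholder)
W = rng.standard_normal((K, J, N)) * 1e-3     # control filters w_kj (fixed here, adapted later)
S = rng.standard_normal((M, K, Ls)) * 0.1     # secondary-path impulse responses s_mk (placeholder)

# Eq. (1): y_k(n) = sum_j w_kj^T x_j(n), i.e. an FIR filtering of each reference.
y = np.zeros((K, T))
for k in range(K):
    for j in range(J):
        y[k] += np.convolve(x[j], W[k, j])[:T]

# Eq. (2): e_m(n) = d_m(n) + sum_k (y_k * s_mk)(n)
e = d.copy()
for m in range(M):
    for k in range(K):
        e[m] += np.convolve(y[k], S[m, k])[:T]
```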
Based on the principle of the minimal disturbance [33], we define an increment of the \(kj\)th control filter at the \(n+1\) iteration as \[\delta\mathbf{w}_{\mathrm{kj}}(n+1)=\mathbf{w}_{\mathrm{kj}}(n+1)-\mathbf{w}_ {\mathrm{kj}}(n) \tag{3}\] and an equality constrain as \[d_{\mathrm{m}}(n)+\sum_{\mathrm{j=1}}^{\mathrm{J}}\sum_{\mathrm{k=1}}^{ \mathrm{K}}\mathbf{w}_{\mathrm{kj}}^{\mathrm{T}}(n+1)\mathbf{x}^{\prime}{}_{ \mathrm{jkm}}(n)=0 \tag{4}\] where \(\mathbf{x}^{\prime}{}_{\mathrm{jkm}}(n)\) represents the \(jkm\)th filtered reference signal obtained from \[\mathbf{x}^{\prime}{}_{\mathrm{jkm}}(n)=\mathbf{x}_{\mathrm{j}}(n)*s_{\mathrm{ mk}}(n)\in\mathbb{R}^{\mathrm{N}\times 1}. \tag{5}\] To solve Equation 3 and Equation 4, we construct a Lagrange function as \[J(n)=\sum_{\mathrm{j=1}}^{\mathrm{J}}\sum_{\mathrm{k=1}}^{\mathrm{K}}\left\| \delta\mathbf{w}_{\mathrm{kj}}(n+1)\right\|^{2}+\sum_{\mathrm{t=1}}^{\mathrm{ M}}\lambda_{\mathrm{t}}\left[d_{\mathrm{t}}(n)+\sum_{\mathrm{j=1}}^{\mathrm{J}} \sum_{\mathrm{k=1}}^{\mathrm{K}}\mathbf{w}_{\mathrm{kj}}^{\mathrm{T}}(n+1) \mathbf{x}^{\prime}{}_{\mathrm{jkt}}(n)\right] \tag{6}\] where \(\|\cdot\|\) and \(\lambda_{\mathrm{t}}\) denote the 2-norm and the \(n\)th Lagrange multiplier, respectively. According to the Lagrange multiplier method [33], we derive the solution to minimize Equation 6 as the following procedures: (1). The gradient of Equation 6 with respect to \(\mathbf{w}_{\mathrm{kj}}(n+1)\) is derived as \[\frac{\partial J(n)}{\partial\mathbf{w}_{\mathrm{kj}}(n+1)}=2\left[\mathbf{w} _{\mathrm{kj}}(n+1)-\mathbf{w}_{\mathrm{kj}}(n)\right]+\sum_{\mathrm{t=1}}^{ \mathrm{M}}\lambda_{\mathrm{t}}\mathbf{x}^{\prime}{}_{\mathrm{jkt}}(n). \tag{7}\] Setting Equation 7 to \(\mathbf{0}\) yields \[\mathbf{w}_{\mathrm{kj}}(n+1)=\mathbf{w}_{\mathrm{kj}}(n)-\frac{1}{2}\sum_{ \mathrm{t=1}}^{\mathrm{M}}\lambda_{\mathrm{t}}\mathbf{x}^{\prime}{}_{\mathrm{ jkt}}(n) \tag{8}\] (2). By substituting Equation 8 into Equation 4: \[d_{\rm m}(n)=-\sum_{\rm j=1}^{\rm J}\sum_{\rm k=1}^{\rm K}\left[{\bf w}_{\rm kj}(n )-\frac{1}{2}\sum_{\rm t=1}^{\rm M}\lambda_{\rm t}{\bf x^{\prime}}_{\rm jkt}(n) \right]^{\rm T}{\bf x^{\prime}}_{\rm jkm}(n) \tag{9}\] we can obtain \[e_{\rm m}(n)=\frac{1}{2}\sum_{\rm j=1}^{\rm J}\sum_{\rm k=1}^{\rm K}\sum_{\rm t= 1}^{\rm M}\lambda_{\rm t}{\bf x^{\prime}}_{\rm jkt}^{\rm T}(n){\bf x^{\prime}}_ {\rm jkm}(n)\approx\frac{1}{2}\sum_{\rm j=1}^{\rm J}\sum_{\rm k=1}^{\rm K} \lambda_{\rm m}\left\|{\bf x^{\prime}}_{\rm jkm}(n)\right\|^{2} \tag{10}\] where it is assumed that \({\bf x^{\prime}}_{\rm jkt}(n)\) and \({\bf x^{\prime}}_{\rm jkm}(n)\) are orthogonal (\(t\neq m\)) [32]. Hence, from Equation 10, the Lagrange multiplier is derived as \[\lambda_{\rm m}=\frac{2e_{\rm m}(n)}{\sum_{\rm j=1}^{\rm J}\sum_{\rm k=1}^{\rm K }\left\|{\bf x^{\prime}}_{\rm jkm}(n)\right\|^{2}}. \tag{11}\] (3). Substituting Equation 11 into Equation 8 yields \[{\bf w}_{\rm kj}(n+1)={\bf w}_{\rm kj}(n)-\sum_{\rm m=1}^{\rm M}\frac{e_{\rm m }(n)}{\sum_{\rm j=1}^{\rm J}\sum_{\rm k=1}^{\rm K}\left\|{\bf x^{\prime}}_{\rm jkm }(n)\right\|^{2}}{\bf x^{\prime}}_{\rm jkm}(n). 
\tag{12}\] To control the magnitude of the increment of the control filter, we introduce a positive multiplier \(\widetilde{\mu}\) (\(0<\widetilde{\mu}<1\)) in Equation 12: \[{\bf w}_{\rm kj}(n+1)={\bf w}_{\rm kj}(n)-\widetilde{\mu}\sum_{\rm m=1}^{\rm M }\frac{e_{\rm m}(n){\bf x^{\prime}}_{\rm jkm}(n)}{\sum_{\rm j=1}^{\rm J}\sum_{ \rm k=1}^{\rm K}\left\|{\bf x^{\prime}}_{\rm jkm}(n)\right\|^{2}+\varepsilon} \tag{13}\] where \(\varepsilon\) is a small positive scalar to guarantee the division result within the finite value. Equation 13 is so-call multichannel normalized filtered-x least mean square (MNFxLMS) algorithm [32]. Equation 13 can be rewritten as \[{\bf w}_{\rm kj}(n+1)={\bf w}_{\rm kj}(n)-\sum_{\rm m=1}^{\rm M}\mu_{\rm m}( n)e_{\rm m}(n){\bf x^{\prime}}_{\rm jkm}(n) \tag{14}\] where the equivalent step size of the MNFxLMS algorithm is given by \[\mu_{\rm m}(n)=\frac{\widetilde{\mu}}{\sum_{\rm j=1}^{\rm J}\sum_{\rm k=1}^{ \rm K}\left\|{\bf x^{\prime}}_{\rm jkm}(n)\right\|^{2}+\varepsilon} \tag{15}\] which is inversely proportional to the power of the reference signal. Therefore, the MNFxLMS algorithm effectively avoids the influence of input power variation and enforces the adaptive algorithm's robustness. ## 3 The Momentum MNFxLMS Algorithm To further fasten the convergence of the MNFxLMS algorithm, we integrate the momentum mechanism [34, 35, 36] into the updating equation of Equation 13 as \[{\bf w}_{\rm kj}(n+1)={\bf w}_{\rm kj}(n)-\eta_{\rm kj}(n) \tag{16}\] where \(\eta_{\rm kj}(n)\) denotes the momentum of the algorithm given by \[\eta_{\rm jk}(n)=\gamma\cdot\eta_{\rm kj}(n-1)+\widetilde{\mu}\sum_{\rm m=1}^{ \rm M}\frac{e_{\rm m}(n)\mathbf{x^{\prime}}_{\rm jkm}(n)}{\sum_{\rm j=1}^{\rm J} \sum_{\rm k=1}^{\rm K}\left\|\mathbf{x^{\prime}}_{\rm jkm}(n)\right\|^{2}+ \varepsilon}. \tag{17}\] In Equation 17, \(\gamma\in(0,1)\) stands for the forgetting factor, which decides the degree of the influence of previous gradients on the weight increment. Since the momentum term of Equation 17 accumulates the previous gradients, it is evident that the convergence of the proposed algorithm will be significantly accelerated if these gradients have the same direction. It is worth noting that the z-transform expression of Equation 17 can be written as \[H(z)=\frac{1}{1-\gamma z^{-1}}\Delta(z) \tag{18}\] where \(H(z)\) and \(\Delta(z)\) represent the z-transform of \(\eta(n)\) and the last term in the left side of Equation 17, respectively. The magnitude response of Equation 18 is shown in Figure 2. It can figure out that, for the quick-varying gradient, the momentum term works like a low-pass filter, which attenuates the high-frequency disturbance on the control filter's weights. However, for the low-frequency varied gradient, its amplitude will be amplified so that to improve the convergence of the algorithm. ## 4 Simulation Result To carry out the simulations on the McFxLMS, MNFxLMS, and momentum MNFxLMS algorithms, we measured the primary and secondary paths from a 4-channel ANC system installed in a noise chamber, as shown in Figure 3. This wooden chamber has a dimension of 1.2 m\(\times\)1.2 m\(\times\)1.2 m and a aperture with the size of \(60\ \rm cm\times 50\) cm at its facade. A noise source is placed inside the chamber and put 1m way from the aperture, which has four secondary mounted around its frame. There are four error microphones fixed in a grid kept 50 cm away from the secondary sources. 
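For reference, one adaptation step of Eqs. (15)-(17) can be written compactly as below. This is an illustrative per-sample sketch, not the implementation used for the simulations, and it assumes that the filtered reference vectors \(\mathbf{x^{\prime}}_{\rm jkm}(n)\) have already been formed from the secondary-path estimates.

```python
import numpy as np

def momentum_mnfxlms_step(W, eta, Xf, e, mu=1e-3, gamma=0.9, eps=1e-8):
    """One update of Eqs. (16)-(17); mu = 1e-3 and gamma = 0.9 match the simulations.

    W   : control filters, shape (K, J, N)
    eta : momentum terms eta_kj, shape (K, J, N), carried between samples
    Xf  : filtered reference vectors x'_jkm(n), shape (J, K, M, N)
    e   : error-microphone samples e_m(n), shape (M,)
    """
    # Normalization of Eq. (15): total filtered-reference power per error microphone.
    power = np.einsum('jkmn,jkmn->m', Xf, Xf) + eps
    # Stochastic gradient term of Eq. (13)/(17), summed over error microphones.
    grad = np.einsum('m,jkmn->kjn', e / power, Xf)
    # Momentum accumulation (Eq. (17)) and weight update (Eq. (16)).
    eta = gamma * eta + mu * grad
    W = W - eta
    return W, eta

# Placeholder shapes for a 1 x 4 x 4 system with 512-tap control filters.
J, K, M, N = 1, 4, 4, 512
W, eta = np.zeros((K, J, N)), np.zeros((K, J, N))
Xf = np.random.default_rng(0).standard_normal((J, K, M, N))
e = np.random.default_rng(1).standard_normal(M)
W, eta = momentum_mnfxlms_step(W, eta, Xf, e)
```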
## 4 Simulation Result

To carry out the simulations on the McFxLMS, MNFxLMS, and momentum MNFxLMS algorithms, we measured the primary and secondary paths of a 4-channel ANC system installed in a noise chamber, as shown in Figure 3. This wooden chamber has dimensions of 1.2 m\(\times\)1.2 m\(\times\)1.2 m and an aperture of 60 cm\(\times\)50 cm at its facade. A noise source is placed inside the chamber, 1 m away from the aperture, which has four secondary sources mounted around its frame. Four error microphones are fixed in a grid kept 50 cm away from the secondary sources. The reference microphone of this MCANC system is placed close to the noise source. The impulse responses of the measured primary and secondary paths are illustrated in Figure 4. Furthermore, the control filters and secondary path estimates in all algorithms have 512 and 256 taps, respectively. The forgetting factor of the momentum MNFxLMS algorithm is set to 0.9.

Figure 3: The front view of a 4-channel active noise control system, which has 1 reference microphone, 4 secondary sources, and 4 error sensors.

Figure 4: Impulse responses of primary paths and secondary paths: \(\mathbf{p}_{j}\) denotes the path from the primary source to the \(j\)th error microphone; \(\mathbf{s}_{\text{mk}}\) represents the secondary path from the \(k\)th secondary source to the \(m\)th error microphone, and \(j,k,m=1,2,3,4\).

### The cancellation of varying broadband noise

In the first simulation, the 4-channel ANC system is utilized to cancel broadband noise. In the first 20 seconds, the broadband noise covers 200 to 800 Hz with an amplitude of 10 dB; in the next 20 seconds, the amplitude of this noise changes to 15 dB, and its frequency range becomes 100 to 1600 Hz. During the simulation, the McFxLMS algorithm has a step size of 0.000001, while the step sizes of the MNFxLMS and momentum MNFxLMS are set to 0.001. Figure 5 exhibits the error-signal waveforms of the three algorithms at the four error microphones. From this result, it can be found that the McFxLMS algorithm diverges when it meets the large input power. Meanwhile, the MNFxLMS and momentum MNFxLMS algorithms maintain satisfactory convergence even though the primary noise varies significantly. This is because the step size in MNFxLMS automatically adjusts with the input power, as shown in Equation 15. Furthermore, with the assistance of gradient accumulation, the momentum MNFxLMS algorithm shows faster convergence behavior than the conventional MNFxLMS.

Figure 5: Time histories of error signals at the four error microphones. The step size of the McFxLMS algorithm is set to 0.000001, and the step sizes of the MNFxLMS and momentum MNFxLMS algorithms are set to 0.001. The forgetting factor \(\gamma\) is 0.9.

### The cancellation of a real piling noise

In the second simulation, the primary noise becomes a real piling noise, as shown in Figure 6. To ensure the convergence of the McFxLMS algorithm, its step size is set to 0.0001 at the expense of its convergence speed. Meanwhile, the MNFxLMS and momentum MNFxLMS can use a larger step size of 0.001. Under this situation, the momentum MNFxLMS algorithm achieves the fastest convergence among these algorithms, as illustrated in Figure 6. However, all three algorithms obtain similar noise reduction levels of around 20 dB at the steady state.

Figure 6: Time history of a real piling noise at the first error microphone. The step size of the McFxLMS algorithm is set to 0.0001, and the step sizes of the MNFxLMS and momentum MNFxLMS algorithms are set to 0.001. The forgetting factor \(\gamma\) is 0.9.

### The cancellation of an fMRI noise

Figure 7: Time history of an fMRI noise at the first error microphone. The step size of the McFxLMS algorithm is set to 0.0001, and the step sizes of the MNFxLMS and momentum MNFxLMS algorithms are set to 0.01. The forgetting factor \(\gamma\) is 0.9.
In the final simulation, the primary noise is a real functional magnetic resonance imaging (fMRI) machine noise [38], as shown in Figure 7. To ensure the convergence of the McFxLMS algorithm, its step size is set to 0.0001. The MNFxLMS and momentum MNFxLMS algorithms have a step size of 0.001. Under this situation, the momentum MNFxLMS algorithm achieves the fastest convergence among these algorithms, as illustrated in Figure 7. All three algorithms obtain similar noise reduction levels of around 21.2 dB at the steady state.

## 5 Conclusions

To assist the multichannel active noise control (MCANC) system in canceling quickly varying noise, this paper integrates the momentum method with the multichannel normalized filtered-x least mean square (MNFxLMS) algorithm. The MNFxLMS algorithm avoids the input power's influence on the step size bound, and the momentum method accelerates convergence by accumulating previous gradient information. The momentum MNFxLMS algorithm combines these two advantages and, hence, exhibits satisfactory performance in canceling quickly varying noise. Simulations on the measured paths of a 4-channel ANC system verify the effectiveness of the proposed algorithm in dealing with real piling and fMRI noise.

## 6 Acknowledgements

This research is supported by the Singapore Ministry of National Development and the National Research Foundation, Prime Minister's Office under the Cities of Tomorrow (CoT) Research Programme (CoT Award No. COT-V4-2019-1). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the Singapore Ministry of National Development and National Research Foundation, Prime Minister's Office, Singapore.
2310.07870
Hierarchical planning-scheduling-control -- Optimality surrogates and derivative-free optimization
Planning, scheduling, and control typically constitute separate decision-making units within chemical companies. Traditionally, their integration is modelled sequentially, but recent efforts prioritize lower-level feasibility and optimality, leading to large-scale, potentially multi-level, hierarchical formulations. Data-driven techniques, like optimality surrogates or derivative-free optimization, become essential in addressing ensuing tractability challenges. We demonstrate a step-by-step workflow to find a tractable solution to a tri-level formulation of a multi-site, multi-product planning-scheduling-control case study. We discuss solution tractability-accuracy trade-offs and scaling properties for both methods. Despite individual improvements over conventional heuristics, both approaches present drawbacks. Consequently, we synthesize our findings into a methodology combining their strengths. Our approach remains agnostic to the level-specific formulations when the linking variables are identified and retains the heuristic sequential solution as fallback option. We advance the field by leveraging parallelization, hyperparameter tuning, and a combination of off- and on-line computation, to find tractable solutions to more accurate multi-level formulations.
Damien van de Berg, Nilay Shah, Ehecatl Antonio del Rio-Chanona
2023-10-11T20:21:34Z
http://arxiv.org/abs/2310.07870v1
# Hierarchical planning-scheduling-control - Optimality surrogates and derivative-free optimization ###### Abstract Planning, scheduling, and control typically constitute separate decision-making units within chemical companies. Traditionally, their integration is modelled sequentially, but recent efforts prioritize lower-level feasibility and optimality, leading to large-scale, potentially multi-level, hierarchical formulations. Data-driven techniques, like optimality surrogates or derivative-free optimization, become essential in addressing ensuing tractability challenges. We demonstrate a step-by-step workflow to find a tractable solution to a tri-level formulation of a multi-site, multi-product planning-scheduling-control case study. We discuss solution tractability-accuracy trade-offs and scaling properties for both methods. Despite individual improvements over conventional heuristics, both approaches present drawbacks. Consequently, we synthesize our findings into a methodology combining their strengths. Our approach remains agnostic to the level-specific formulations when the linking variables are identified and retains the heuristic sequential solution as fallback option. We advance the field by leveraging parallelization, hyperparameter tuning, and a combination of off- and on-line computation, to find tractable solutions to more accurate _multi-level_ formulations. keywords: Black-box optimization; Optimization with embedded surrogates ; Integrating operations and control + Footnote †: journal: XXX ## 1 Introduction ### Background Companies within the process industries rely on mathematical optimization for their operations to remain competitive in an environment of increasingly stringent safety, environmental, and economic requirements [1]. This gives rise to the field of enterprise-wide optimization (EWO) with the ultimate goal to coordinate all decision-making within a company [2, 3]. Significant value can be captured by integrating units across all hierarchical levels of decision-making (from design, planning, scheduling, to control). We adopt the terminology of Chu and You [4] and distinguish between the sequential, monolithic, and hierarchical integration of optimization models as shown in Figure 1. Conventionally, decision-making happens _sequentially_: upper-level decisions are taken while disregarding lower-level considerations, and then fed as setpoints to the lower levels. There is however no guarantee that these setpoints are feasible in lower-level problems. Sequential decisions historically arise out of necessity due to tractability and organizational constraints. In the _monolithic_ approach, lower-level feasibility considerations are included into upper-level optimization problems. This comes at the expense of a significant drop in computational tractability due to heterogeneity in model formulations and vastly different time horizons between levels. In the _hierarchical_ approach, upper-level decision-makers consider not only lower-level feasibility, but also optimality, accounting for how lower-level decisions affect upper-level objectives. This results in multi-level formulations, which are numerically intractable and mathematically difficult [5]. There are two main tools that can be leveraged to alleviate the computational burden of integrated optimization problems. The first are relaxation or aggregation techniques [6] with the aim to relax the constraints, granularity, or detail of the integrated optimization models. 
This includes the replacement of some model parts - for example the lower level - by surrogate models [7, 8, 9] that are easier to handle by numerical solvers. Introducing surrogates into the original formulation inevitably incurs the risk of losing solution quality. As such, decision-makers inherently trade off surrogate accuracy with optimization tractability when they choose the type of surrogate. The second approach exploits the mathematical structure of integrated decision-making problems via decomposition algorithms [10, 11]. Hierarchical problems could be viewed as coordination problems between one or multiple leader(s) and follower(s): each player has decision freedom over their respective problem but needs to coordinate on a sparse set of complicating variables. Traditional sequential decision architectures often mean that the coordinated variables are few relative to the upper- or lower-level specific variables. As such, Lagrangean and Benders decomposition [12, 13], for example, lend themselves well to tractably solving hierarchically integrated problems.

The interest and successes in Machine Learning (ML) over the last decade have successfully diffused through the chemical engineering literature in recent years [14; 15]. Advances in ML have also inspired new developments in hierarchical planning, scheduling, and control. While surrogates - also known as meta-, scale-bridging, or reduced-order models [16; 17; 7] - have been established tools in the process systems engineering literature, ML has shifted attention to other approximation models [18]: artificial neural networks, decision trees, Gaussian Processes, and support vector machines. We use 'feasibility surrogates' and 'optimality surrogates' as umbrella terms for any approximation model that maps lower-level feasibility, or lower-level optimal response variables (objective or decision variables), as a function of upper-level complicating variables or 'setpoints'. This approach has analogues and similarities across many disciplines, from multi-parametric model predictive control and explicit optimal policies [19], to amortized learning [20] in ML and value functions [21] in Reinforcement Learning (RL).

The developments in ML have also heavily influenced the use of surrogates in derivative-free optimization (DFO) [22]. DFO deals with the optimization of systems without having explicit or cheap access to gradient information. DFO algorithms are typically classified into either 'direct' methods that directly handle function evaluations, or 'model-based' methods that rely on the intermediate construction and optimization of surrogates. Although there are subtle differences, here we use the term DFO synonymously with black-box, simulation-based, zeroth-order, or gradient-free optimization [23]. As DFO gains traction within the chemical engineering community [24], DFO has been exploited to solve integration problems that traditionally are targeted via decomposition algorithms. This includes coordination [25] and multi-level problems in process systems engineering [26] and even in the hierarchical integration of process operations [27]. When the computational budget is available to attempt integration, we argue that data-driven techniques can be used to close the gap between monolithic and hierarchical approaches.

In Section 1.2, we explore related works. We highlight how data-driven techniques fall into either the optimality surrogate or DFO approach and highlight novel aspects of our work. In Section 1.3, we emphasize the aim of our work and its novel aspects.

### Related works

There is ample review literature about solution strategies and applications in the general domain of integrated planning-scheduling-control (iPSC). Some works narrow in on the integration of scheduling and control (iSC) [28], others focus on uncertainty [29], their application to sequential batch processes [30], or the difference between top-down and bottom-up approaches [31; 32]. Andres-Martinez and Ricardez-Sandoval [6] present the most recent review at the time of writing and classify works according to their application to integrated planning and scheduling (iPS), iSC, or iPSC. While most reviews are structured following the scope of integration, we are more interested in coming up with a typology of solution approaches. We discuss different solution methods and highlight their relationship to the general paradigms of surrogate modelling, decomposition, and derivative-free optimization (DFO). While we initially zone in on monolithic or simultaneous formulations, we argue that the same typology of solution approaches can be applied to hierarchical, multi-level formulations.

#### 1.2.1 Surrogates

We use the term 'surrogate models' to encompass all approximation models that reduce model complexity for the sake of tractability. Surrogates could be loosely regarded as relaxation or aggregation techniques and as such inevitably incur the risk of losing solution quality by forfeiting model accuracy. These approaches can be categorized into model reduction or system identification techniques [7]. While the former aims to derive a low-order model from detailed dynamic models (i.e. orthogonal decomposition etc.), system identification techniques learn a dynamic model from data. Tsay and Baldea [8] argue that data presents a natural bridge between different hierarchical layers and consequently advocate for the use of data-driven 'time-scale bridging models' to integrate operations and control.

Figure 1: Hierarchical decision-making architectures: In the sequential approach, higher-level solutions feed into lower levels as setpoints. Each level solves their own optimization problem to meet these setpoints without explicit considerations of the lower levels. The monolithic approach resembles the sequential approach, but where feasibility in lower levels is satisfied via explicit incorporation of their constraints. The hierarchical approach could be viewed as a multi-level leader-follower game. A single 'tri-level' optimization problem is solved, where the planning is subject to the optimal solution of the scheduling level, and the scheduling is constrained on the optimal solution of the control level.

Scale-bridging models are compatible with the 'aggregate method' [9], where surrogates are used to map the linking functions between subproblems arising from the decomposition of different hierarchical layers. Surrogates differ by the amount of detail they capture from lower levels. As such, Tsay and Baldea [8] distinguish between static and dynamic models. Static surrogates, for instance, do not explicitly account for transient data/dynamics. They could for instance learn only scheduling-relevant quantities from the control problem in iSC [17]. They can also be used to map the feasible space of lower levels [33]. This is related to feasibility analysis [34]. Convex region surrogates are especially popular as they map feasible sets using disjoint polytopes on historical operating data.
In Section 1.3, we emphasize the aim of our work and novel aspects. ### Related works There is ample review literature about solution strategies and applications in the general domain of integrated planning-scheduling-control (iPSC). Some works narrow in on the integration of scheduling and control (iSC) [28], others focus on uncertainty [29], their application to sequential batch processes [30], or the difference between top-down and bottom-up approaches [31; 32]. Andres-Martinez and Ricardez-Sandoval [6] present the most recent review at the time of writing and classify works according to their application to integrated planning and scheduling (iPS), iSC, or iPSC. While most reviews are structured following the scope of integration, we are more interested in coming up with a typology of solution approaches. We discuss different solution methods and highlight their relationship to the general paradigms of surrogate modelling, decomposition, and derivative-free optimization (DFO). While we initially zone in on monolithic or simultaneous formulations, we argue that the same typology of solution approaches can be applied to hierarchical, multi-level formulations. #### 1.2.1 Surrogates We use the term'surrogate models' to encompass all approximation models that reduce model complexity for the sake of tractability. Surrogates could be loosely regarded as relaxation or aggregation techniques and as such inevitably incur the risk of losing solution quality by forfeiting model accuracy. These approaches can be categorized into model reduction or system identification techniques [7]. While the former aims to derive a low-order model from detailed dynamic models (i.e. orthogonal decomposition etc.), system identification techniques learn a dynamic model from data. Tsay and Baldea [8] argue that data presents a natural bridge between different hierarchical layers and consequently advocate for the use Figure 1: Hierarchical decision-making architectures: In the sequential approach, higher-level solutions feed into lower levels as setpoints. Each level solves their own optimization problem to meet these setpoints without explicit considerations of the lower levels. The monolithic approach resembles the sequential approach, but where feasibility in lower levels is satisfied via explicit incorporation of their constraints. The hierarchical approach could be viewed as a multi-level leader-follower game. A single ‘tri-level’ optimization problem is solved, where the planning is subject to the optimal solution of the scheduling level, and the scheduling is constrained on the optimal solution of the control level. of data-driven 'time-scale bridging models' to integrate operations and control. Scale-bridging models are compatible with the 'aggregate method' [9], where surrogates are used to map the linking functions between subproblems arising from the decomposition of different hierachical layers. Surrogates differ by the amount of detail they capture from lower levels. As such, Tsay and Baldea [8] distinguish between static and dynamic models. Static surrogates for instance do not explicitly account for transient data/dynamics. They could for instance learn only scheduling-relevant quantities from the control problem in iSC [17]. They can also be used to map the feasible space of lower levels [33]. This is related to feasibility analysis [34]. Convex region surrogates are especially popular as they map feasible sets using disjoint polytopes on historical operating data. 
Hence, the feasible region can be embedded into mixed-integer linear programming (MILP) formulations, avoiding the use of nonlinear terms and additional complexity. Nonlinear regression techniques could also be used as static models in mapping continuous outputs rather than binary feasibility: Gaussian Processes, support vector machines, piece-wise affine regression, or artificial neural networks would be well-suited to map the (optimal) cost of a lower level. Dynamic surrogates explicitly account for time as an additional input. They are necessary when the dynamics of the lower level become relevant. Data-driven dynamic surrogates are overwhelmingly influenced by the system identification literature. Relevant techniques include Hammerstein-Wiener, linear state-space models [35; 36], and Kalman filtering for online updates [37]. The difference between learning dynamic surrogates or upper-level relevant variables divides the community: The system identification literature champions the learning of dynamic models from data which can subsequently be deployed within optimization-based feedback control such as model predictive control (MPC) [38]; The RL literature advocates for the approximation of optimal policies or feedback laws directly from data. Rawlings and Maravelias [39] argue that Q-Learning is not competitive with standard system identification on a simple, linear two air-zone heating, ventilation, and air conditioning system in the presence of small amounts of Gaussian noise. However, they find promise in the use of neural networks to learn the response of an MPC controller on a linear dynamic model with quadratic stage cost, and linear constraints. #### 1.2.2 Decomposition Planning, scheduling, and control problems are often tackled by different business units within the same company, since they cover widely different timespans, types of decisions, constraints, and objectives. As such, the interactions between hierarchical decision-making layers, also called linking or complicating variables, naturally end up being sparse. Examples include planning targets and batch processing times in iPS or iSC problems respectively. When moving away from sequential to monolithic or even hierarchical integrated problem formulations, this gives rise to mathematical optimization structures that can be exploited by decomposition techniques [10]. While these techniques describe a wide range of approaches, they typically consist of solving problems by breaking them up into smaller subproblems, which are solved separately in parallel or sequentially. This often involves many iterations over the subproblems coordinated by the solution of a'master problem'. Lagrangean decomposition can be used to break up the iPSC problem into separate, more manageable iPS and control problems [11]. The coupling variables can even be learned via systematic network approaches. Mitrai and Daoutidis [40] use community detection to find the mathematical structure in an iSC problem and the linking variables that can be exploited using Generalized Benders decomposition. As expected, they recover the scheduling problem in the first level and dynamic optimization subproblems in the second level. In later work, Mitrai and Daoutidis [41] use Multi-cut generalized Benders decomposition for iPSC by learning the problem structure and find that the planning-scheduling and control constraints are coupled only via the transition times. Decomposition can also be combined with surrogates. Kelley et al. 
[12] use Hammerstein-Wiener models to integrate control dynamics into scheduling and solve the resulting formulation using Lagrangean decomposition. Ji and Gu [13] use a combination of Generalized Benders Decomposition and genetic algorithms in the simultaneous solution of iSC. #### 1.2.3 Derivative-free optimization Derivative-free optimization (DFO) is used to optimize systems without explicit gradient expressions [22]. As such, DFO does not need access to the system's explicit model formulations. DFO leverages input-output evaluations, or 'data', directly to find the optimum of the problem at hand through 'direct' or model-based methods. Direct methods comprise of a wide range of techniques from random search, to grid search, and more sophisticated algorithms like the simplex method (Nelder-Mead [42]), adaptive mesh search (NOMAD [43]) or DIviding REctangles (DIRECT [44]). Evolutionary methods [45] could also be considered a sophisticated version of random and hence direct search. Developments in model-based DFO are closely related to surrogate techniques, as model-based DFO relies on the intermediate construction and optimization of approximation models to find the optimum. Popular methods range from more exploitative linear or quadratic trust region methods (COBYLA [46] and BOBYQA [47]), to more explorative Bayesian Optimization using Gaussian Processes [48] or gradient-boosted trees [49]. DFO can exploit the same mathematical problem structure as decomposition approaches in iPSC. DFO approaches are especially prevalent in iPS, when the number of complicating variables (i.e. the planning targets) is few. Early attempts mostly use genetic algorithms [50; 51] or surrogates/metamodels [16] under the name of simulation-based optimization [52]. #### 1.2.4 DFO and single-level reformulation of multi-level problems Accounting for game-theoretical considerations in the optimization of integrated supply chains and process systems often involves multi-level formulations [53]. While sometimes simultaneous formulations for iPSC can be solved without previous solution approaches [54], the nested decision-making architecture of bi-level formulations require specialized approaches. Colson et al. [55] discuss general bi-level optimization solution approaches. Solving multi-level problems requires their reformulation into a form suitable for conventional single-level optimization. These techniques can be categorized into two main approaches: DFO and single-level reformulation. DFO extends to hierarchical and as such bi-level rather than simultaneous formulations by using data-driven optimization to find optimal planning or scheduling setpoints given the optimal response of the scheduling [27] or control level [56] respectively. On the other hand, single-level reformulation involves transforming the lower-level optimization problem into its Karush-Kuhn-Tucker (KKT) conditions. These can then be embedded into the upper level as big-M constraints, or using branching techniques on the complementarity conditions [57]. KKT reformulation has been used to embed linear MPC into scheduling [58]. This can also be applied to the integation of supply chain design and operation: all possible lower-level problems are enumerated, reformulated into their KKT conditions, embedded into the upper level, and finally solved using decomposition [59]. KKT reformulation is still computationally expensive even for linear cases and generally loses any theoretical guarantees in nonconvex iPSC. 
#### 1.2.5 Surrogates for multi-level problems Rather than embed lower levels into the upper level using KKT reformulations, we can 'learn' the optimal lower-level or follower response as a function of the leader's variables [60]. This could be considered an extension of the surrogate approach: rather than learn lower-level _feasibility_ from historical operating data, we can learn lower-level _optimality_ from the optimal solution of scheduling or control problems. We call 'feasibility surrogates' and 'optimality surrogates' the use of time-scale bridging models applied to simultaneous and hierarchical approaches respectively. The line between monolithic and hierarchical formulations can be blurry in feasibility analysis [34] and often boils down to practical details - if feasible operating data is available or needs to be obtained via solving feasibility problems. The application of 'optimality surrogates' in the integration of design, operations, and control appears in different domains under various names, from decomposition algorithms using'response functions' [61], to multi-parametric programming [62; 63; 64]. More recently, successes in Reinforcement Learning and Machine Learning have introduced new terminology and different perspectives. Sachio et al. [21] have used Reinforcement Learning to learn an optimal policy on MPC simulations before embedding into the design problem. Software advances such as OMLT [65; 66; 67] are streamlining the integration of optimization and ML pipelines by automating reformulations of neural networks and decision trees and their subsequent embedding into Pyomo as constraint blocks that can be handled by MILP solvers. This could catalyze new advances in the safe deployment of learnt optimality surrogates to mitigate any model approximation risks, such as adversarial approaches or safe Reinforcement Learning [68; 69]. ### Aims and novelty While there is ample literature related to using optimality surrogates or DFO in solving integrated planning-scheduling-control problems, van de Berg et al. [70] are the first to attempt a tri-level solution of the iPSC problem by using DFO to optimize the planning level and an approximate scheduling-control surrogate. In this work, we strive to answer the questions that are left unanswered. We provide various methodologies that outperform the sequential approach on a small but realistic multi-site, multi-product integrated planning-scheduling-control case study: * We provide a tutorial-like exposition of the DFO and surrogate approaches and how we can evaluate solution qualities based on increasingly complex single- to tri-level evaluations. * We investigate if we can integrate the scheduling-control layers as a surrogate into the planning and if this surrogate should be trained on scheduling-only, approximate, or exact scheduling-control data. * We investigate if we can use DFO to find the tri-level solution without relying on model approximations. * We explore practical techniques for balancing solution accuracy and computational time through combining the two methods. In what follows, we illustrate our multi-site, multi-product, hierarchical planning-scheduling-control case study in Section 2, and detail how the various levels are connected. In Section 3, we present, in tutorial fashion, three combinations of the DFO and optimality surrogate approaches that can be leveraged towards a tractable tri-level planning-scheduling-control solution of our problem. 
We also highlight smart practical considerations that are crucial to making these approaches scalable. In Section 4, we present our results on the tractability-accuracy trade-off between the different methods on multiple solution quality and solution time metrics. We emphasize the role that understanding the interplay of ML and optimization pipelines has on navigating the inherent accuracy-tractability trade-off before concluding in Section 5. ## 2 Case study In this section, we present the planning, scheduling, and control layers from our multi-site, multi-product problem as shown in Figure 2. We then discuss how the decisions between the levels are interlinked as shown in Figure 3. We refer to Section A for detailed optimization formulations of the high-level planning, scheduling, and control formulations presented in (1), (2), and (3). The implementations of the planning, scheduling, and control formulations can also be found in the Github repository under [https://github.com/OptiMaL-PSE-Lab/DD-Hierarchical](https://github.com/OptiMaL-PSE-Lab/DD-Hierarchical). ### Planning The highest level of decision-making in our case study is planning. While planning model formulations can vary, production planning usually involves the solution of a (mixed-integer) linear program (MI)LP to determine the optimal production targets that optimize an economic objective over a specific planning Figure 2: Outline of the hierarchical planning A), scheduling B), and control C) levels in the multi-site, multi-product case study. A) shows the planning state network denoting the materials (\(\circ\)) as well as processing (\(\rightarrow\)), transport (\(\rightarrow\)), and sales (\(\rightarrow\)) across the three sites (\(\square\)). Scheduling and control are only relevant for the second site. B) shows the state network of the scheduling problem that includes an additional intermediate material. The production of I3 and P1-P4 needs to be scheduled in batches on 2 machines in 7 event time points. C) Each batch production is then subject to an optimal control problem. horizon. Our planning problem (1) involves the solution of an LP over a planning horizon of 12 months. The state network of the planning is illustrated in Figure 2A. We consider three sites, where in the first site, raw material RM (assumed unlimited) undergoes two intermediate processing stages (I1 and I2). I2 is then transported to the two other sites, denoted as I21 and I22. In the second and third site, intermediate material I22 and I21 undergo further processing into one of four or one of two products (P1-P4 and P5-P6) respectively, which are then sold to meet external customer demand. The goal of the planning is to determine the production of each material given processing yields (black arrows), the material transport given lead times (blue arrows), and the sales given customer demand (green arrows) that maximize sales and minimize transportation, storage, and production cost. The LP is subject to inventory material balances on each material, site-wide resource utilization limits, and upper and lower bounds on inventory for each material. The full formulation can be found in (A.1). 
\[\begin{array}{ll}\min.&\text{Transport + Storage + Production - Sales}\\ \text{s.t.}&\text{Initial conditions}\\ &\text{Inventory mass balances with production and transportation}\\ &\text{Resource limits}\\ &\text{Sales limits}\\ &\text{Safe storage constraints}\end{array} \tag{1}\] ### Scheduling The optimal planning targets for each of the 12 planning timesteps are then fed to the scheduling layer. At each planning timestep, a scheduling problem determines the resource allocation required to reach the planning targets in minimal time, i.e. for each of the 7 scheduling timesteps (events), for each of the two machines, which production (job) happens and for how long. Our scheduling formulation is based on Maravelias and Grossmann [71] and relies on the state-task network approach with a common continuous-time representation for all units. The formulation also accounts for variable batch sizes, variable processing times and sequence-dependent changeover times. In our integrated framework, the scheduling layer involves the solution of a MILP (2) at each planning step involving discrete variables in the assignment constraints (if a batch production happens in a specific machine at a given time) and changeover constraints (if we switch from one production to another at a given event time point and machine) on top of the planning-level mass balances. The batch constraints, production recipes, production duration, and changeover duration are machine- and sequence-specific. There is also a more accurate estimate of the inventory and resource limits available compared to the planning layer. The scheduling layer is only considered for site 2 given the simplicity of the state network in the other sites. Explicitly accounting for scheduling in the other sites is compatible with our proposed methodologies at a potential increase in solution time, especially since the scheduling problems are only loosely connected via few planning variables. The state-task network corresponding to site 2 is depicted in Figure 2B, where on top of the state network from Figure 2A, we also consider a perishable, bottleneck intermediate I3 between I22 and products P1 and P2. I3 is produced on-demand for each planning timestep with any unused production after the planning period going to waste. The full formulations can be found in (A.2) to (A.6). \[\begin{array}{ll}\min.&\text{Makespan}\\ \text{s.t.}&\text{Inventory mass balances with production}\\ &\text{Batch constraints}\\ &\text{Duration constraints}\\ &\text{Assignment constraints}\\ &\text{Changeover constraints}\\ &\text{Resource limits}\\ &\text{Meet monthly planning production targets}\end{array} \tag{2}\] ### Control The control formulations are based on Mishra et al. (2017). Each batch control takes the batch targets for each job-machine allocation as determined in the scheduling layer, and determines the optimal cooling flowrate that achieves said batch target while minimizing a combination of processing time and energy cost. This involves the solution of an optimal control problem (3) subject to final time quality conditions and nonlinear differential expressions of the batch kinetics and energy transfer. The full formulation can be found in (A.7). 
\[\begin{array}{rl}\min\,.&\text{Processing time \& energy cost}\\ \text{s.t.}&\text{Mass balances given by differential equations}\\ &\text{Kinetics dependent on state and control variables}\\ &\text{Energy cost dependent on control variables}\\ &\text{Final time quality conditions}\end{array} \tag{3}\] ### Integration iPSC problems are either solved sequentially by ignoring all lower-level considerations, monolithically by only accounting for lower-level feasibility, or hierarchically by explicitly accounting for lower-level optimality as depicted in Figure 1. While the sequential and monolithic approaches are often employed for tractability or practical reasons, the hierarchical approach most accurately reflects organizational decision-making. Figure 3 shows the information flow between the layers. These connecting or complicating variables link the three decision-making units that can otherwise be solved autonomously by each level. At the highest level, the 12 optimal monthly planning variables feed as setpoints into a separate scheduling problem. For each planning timestep, all batch assignment targets as determined in the scheduling layer then feed as setpoints into the control level. In the scheduling layer, we need to decide on and as such explicitly consider all 5 possible productions for each of the 14 (7 events by 2 machines) optimal control problems. If we want to evaluate the scheduling layer for a given batch assignment however, we only need to solve for the 14 optimal control problems with the assigned production. Each optimal control solution then feeds back to the optimal processing times and corrects the makespan corresponding to a given sequence. The optimal changeover and cooling costs from the scheduling and control layer then feed back into the planning level where they correct the economic objective. While it is in principle possible to solve a monolithic formulation of the iPSC, this is intractable in our case without employing surrogates or decomposition. First, the nonlinear dynamics corresponding to up to 70 job-machine assignments would have to be embedded into the scheduling, exploding the dimensionality of the scheduling and turning the integrated problem mixed-integer _nonlinear_. Then, the changeover, assignment, and duration constraints of these 12 integrated problems would have to be embedded into the planning. A hierarchical solution is not just intractable but mathematically difficult given the nested layers of decision-making. Each tri-level optimization requires the solution of up to 840 control problems: the top layer (planning) is subject to the optimal solution of 12 scheduling optimization problems that themselves are constrained on the optimal solution of up to 70 optimal control problems each. Data-driven techniques have become essential to the solution of iPSC formulations. In the next section, we want to show how we can exploit data-driven techniques to solve hierarchical (multi-level) formulations. While it is unrealistic to expect to find the optimal solution to tri-level optimization formulations (even the simplest linear-linear bilevel optimization problems are NP-hard [5]), our aim is to find a tractable solution to the tri-level hierarchical formulation that outperforms the conventional sequential solution. In this work, we prioritize finding an approximate solution to an accurate model rather than finding the optimal solution to an approximate model. 
We acknowledge however that there are many reasons to use the sequential or monolithic approach, i.e. when the lower-level objectives are aligned with upper-level objectives, or when any potential gain in the tri-level formulation does not justify the increase in compute. Figure 3: Linking variables between the planning, scheduling, and control levels. Each planning problem feeds the planning targets for its 12 timesteps to separate scheduling problems to be solved in parallel. Each of the 12 scheduling problems, in deciding which of the 5 productions to schedule on the 2 machine at the 7 events, obtains the optimal batch processing time and cost as a function of the batch size targets for each of the 70 possible batch assignments. The optimal scheduling and control finally feed back into the planning. ## 3 Methodology To streamline the exposition of how we can use surrogates and derivative-free optimization to solve our iPSC problem, we first show how we can use both techniques in solving general bi-level problems. ### Using data to solve a canonical bi-level problem Problem (4) presents a general bi-level formulation where the leader determines the set of variables that appear only in their level \(\mathbf{x}_{up}\), and the connecting variables \(\mathbf{x}_{u\to l}\) that appear as setpoints in the lower level. Given the linking variables \(\mathbf{x}_{u\to l}\), the follower optimizes their objective \(f_{low}(\cdot)\) by manipulating the variables that are specific to the lower level \(\mathbf{x}_{low}\) and the connecting variables that also appear in the upper level \(\mathbf{x}_{l\to u}\). As such, the leader optimizes their objective \(f_{up}(\cdot)\) while explicitly accounting for the optimal response of the follower \(\mathbf{x}_{l\to u}^{*}\) to \(\mathbf{x}_{u\to l}\). We consider the complicating variables \(\mathbf{x}_{u\to l}\) and \(\mathbf{x}_{l\to u}\) to be continuous, but not necessarily \(\mathbf{x}_{up}\) and \(\mathbf{x}_{low}\). No such restrictions need to be placed on any of the variables but continuous variables are generally easier to handle. \[\underset{\begin{subarray}{c}\mathbf{x}_{up},\mathbf{x}_{u \to l},\\ \mathbf{x}_{low}^{*},\mathbf{x}_{l\to u}^{*}\end{subarray}}{\text{min}} f_{up}(\mathbf{x}_{up},\mathbf{x}_{u\to l},\mathbf{x}_{l \to u}^{*}) \tag{4a}\] \[\text{s.t.} \mathbf{h}_{up}(\mathbf{x}_{up},\mathbf{x}_{u\to l},\mathbf{x}_{l \to u}^{*})=\mathbf{0}\] (4b) \[\mathbf{g}_{up}(\mathbf{x}_{up},\mathbf{x}_{u\to l},\mathbf{x}_{l \to u}^{*})\leq\mathbf{0}\] (4c) \[\mathbf{x}_{low}^{*},\mathbf{x}_{l\to u}^{*}\in\quad\underset{ \begin{subarray}{c}\mathbf{x}_{low},\mathbf{x}_{l\to u}\\ \mathbf{x}_{low},\mathbf{x}_{l\to u}\end{subarray}}{\text{arg min}}. f_{low}(\mathbf{x}_{low},\mathbf{x}_{l\to u},\mathbf{x}_{u \to l})\] (4d) \[\text{s.t.} \mathbf{h}_{low}(\mathbf{x}_{low},\mathbf{x}_{l\to u},\mathbf{x}_{u \to l})=\mathbf{0}\] \[\mathbf{g}_{low}(\mathbf{x}_{low},\mathbf{x}_{l\to u},\mathbf{x}_{u \to l})\leq\mathbf{0}\] We now present two different ways that data-driven techniques can be exploited towards a solution of bi-level problems. #### 3.1.1 Optimality surrogates We first present optimality surrogates as a solution approach to multi-level problems. Crucial to solving bi-level optimization problems is their reformulation into a single level, such that the problem can be exploited using standard optimization solvers. 
In the optimality surrogate approach, we first sample various combinations of the complicating variables \(\mathbf{x}_{u\to l}\) and solve the corresponding lower-level optimization problem to extract the optimal response \(\mathbf{x}_{l\to u}^{*}\). Then, we can use any supervised learning technique to construct a model - neural networks \(\mathcal{NN}(\cdot)\) in our case - to map \(\mathbf{x}_{l\to u}^{*}\) to \(\mathbf{x}_{u\to l}\). The explicit expression \(\mathbf{x}_{l\to u}^{*}=\mathcal{NN}(\mathbf{x}_{u\to l})\) can then be embedded as a constraint into the upper level, replacing the lower-level optimization problem, and collapsing the bi-level into a single-level formulation (5) that can be readily implemented in standard optimization software. \[\begin{array}{ll}\min_{\mathbf{x}_{up},\mathbf{x}_{u\to l}, }&f_{up}(\mathbf{x}_{up},\mathbf{x}_{u\to l},\mathbf{x}_{l\to u}^{*})\\ \text{s.t.}&\mathbf{h}_{up}(\mathbf{x}_{up},\mathbf{x}_{u\to l}, \mathbf{x}_{l\to u}^{*})=\mathbf{0}\\ &\mathbf{g}_{up}(\mathbf{x}_{up},\mathbf{x}_{u\to l}, \mathbf{x}_{l\to u}^{*})\leq\mathbf{0}\\ &\mathbf{x}_{l\to u}^{*}=\mathcal{NN}(\mathbf{x}_{u\to l}) \text{(neural network)}\end{array} \tag{5}\] In principal, it is possible to construct surrogates that present accurate 'explicit control laws' for certain kinds of parameterized optimization problems [73]. However, parametric programming explodes in complexity with the size of the lower levels [74]. Consequently, we train approximate optimality surrogates, forfeiting theoretical guarantees that the lower-level is solved to optimality. As such, care should be taken in choosing the type and architecture of the surrogate when trading off solution accuracy and tractability. We suggest using neural networks with piecewise linear activation functions (ReLU) as optimality surrogates since their expressions can be reformulated into mixed-integer _linear_ constraints [75], avoiding the introduction of nonlinear terms into upper level objectives. Other techniques like decision trees could be used to the same end. There is ongoing debate as to whether it is favourable to use mixed-integer linear formulations with discrete solvers or to use global solvers on the full-space formulations of the surrogates [68]. We note that the concept of optimality (or feasibility) surrogates can also be used in monolithic formulations. For example, if (binary) data on the feasible operating region is available, we can map the constraints which can be integrated into the upper level as \(\mathcal{NN}(\mathbf{x}_{u\to l})\leq 0\). #### 3.1.2 Derivative-free optimization In our second approach, we leverage recent advances in derivative-free optimization (DFO). The idea of using derivative-free optimization (DFO) for bi-level formulations is to fix as many upper-level variables as necessary such that the upper level is fully determined. First, we observe that when \(\mathbf{x}_{u\to l}\) is fixed, we can obtain \(\mathbf{x}_{l\to u}^{*}\) by solving the lower-level optimization problem. On top of this, we can split up the upper level variables \(\mathbf{x}_{up}\) into a set of decision variables \(\mathbf{x}_{DFO}\) and \(\mathbf{x}_{sim}\), such that \(\mathbf{x}_{sim}\) can be obtained by solving the system of equality constraints \(\mathbf{h}_{up}(\cdot)=\mathbf{0}\) at \(\mathbf{x}_{DFO}\) and \(\mathbf{x}_{u\to l}\) fixed. 
As such, we can use DFO to find \(\mathbf{x}_{u\to l}\) and \(\mathbf{x}_{DFO}\) that minimize the black-box objective consisting of the upper-level objective \(f_{up}(\cdot)\) augmented by a penalization of any violation of \(\mathbf{g}_{up}(\cdot)\leq\mathbf{0}\). The extent of penalization can be tuned via the penalty parameter \(\rho\), which as a rule of thumb can be chosen to be an order of magnitude higher than the objective terms \(f_{up}(\cdot)\). We essentially solve the general bi-level formulation (4) as a single-level DFO problem (6) where each evaluation consists in the expensive solution of the lower-level optimization problem \(\mathcal{LOW}(\cdot)\) equivalent to (4d). \[\begin{split}\underset{\mathbf{x}_{DFO},\mathbf{x}_{u\to l}}{ \text{min.}}&\mathcal{BB}(\mathbf{x}_{DFO},\mathbf{x}_{u\to l}) \text{(Black-box)}\\ \text{where}&\mathbf{x}_{l\to u}^{*}\leftarrow\mathcal{LOW}( \mathbf{x}_{u\to l})\text{ \qquad(lower problem)}\\ &\mathbf{x}_{sim}\leftarrow\mathbf{h}_{up}(\mathbf{x}_{sim}, \mathbf{x}_{DFO},\mathbf{x}_{u\to l},\mathbf{x}_{l\to u}^{*})= \mathbf{0}\\ &\mathbf{x}_{up}=[\mathbf{x}_{sim},\mathbf{x}_{DFO}]\\ &\text{penalty}=\rho||\max(\mathbf{0},\mathbf{g}_{up}(\mathbf{x}_ {up},\mathbf{x}_{u\to l},\mathbf{x}_{l\to u}^{*}))||^{2}\\ &\mathcal{BB}(\mathbf{x}_{DFO},\mathbf{x}_{u\to l})=f_{up}( \mathbf{x}_{up},\mathbf{x}_{u\to l},\mathbf{x}_{l\to u}^{*})+\text{penalty} \end{split} \tag{6}\] The left arrows \(\leftarrow\) denote that we obtain \(\mathbf{x}_{sim}\) and \(\mathbf{x}_{l\to u}^{*}\) from the solution of the set of equality constraints \(\mathbf{h}_{up}(\cdot)=\mathbf{0}\) and of the lower-level optimization instance \(\mathcal{LOW}(\cdot)\) within the black-box simulation respectively. In principle, we can use any kind of DFO solver for the single-level DFO problem (6). However, since each evaluation involves the solution of an optimization problem, this severely restricts the available evaluation budget especially since these problems tend to be higher-dimensional than the applications at which DFO excels. This prevents the practical applicability of many over-explorative methods like evolutionary search, and Bayesian Optimization. We suggest using exploitative trust-region methods and leveraging as much problem knowledge as possible in finding a good initial guess. #### 3.1.3 Differences between the two methods Model-based DFO and the surrogate approach could be easily confused as they both rely on surrogates. The surrogate approach constructs relevant input-output models of the lower level before embedding these surrogates into the upper level, where a single optimization formulation is solved using algebraic modelling languages such as Pyomo or GAMS. In model-based DFO, surrogates are only used in the 'outer level' to trade-off exploitation and exploration and determine the next sample input to the simulation. Although we have opted for model-based DFO methods in this work, direct DFO methods could be used (e.g., simplex, metaheuristics) instead requiring no surrogates whatsoever. In the 'inner level', the simulation then calls the original lower-level optimization problem (e.g. in Pyomo [76]) as a black-box to return relevant quantities to the upper level. More importantly, both approaches present vastly different model accuracy versus solution tractability trade-offs. 
While the surrogate approach inevitably runs the risk of losing solution quality through model inaccuracies, DFO scales poorly with the expense of lower-level problems and the number of DFO dimensions. This motivates a careful investigation into the intricacies of our case study and how these can be used to integrate the planning-scheduling and the scheduling-control layer. ### Hierarchical planning-scheduling-control as tri-level formulation Let us formally present the mathematical problem we are addressing. Problem (7) formalizes the hierarchical iPSC formulation described in Section 2.4. The upper level consists of LP formulations to determine the planning-specific and complicating planning target variables (\(\mathbf{x}_{p}\) and \(\mathbf{x}_{prod}\)) that minimize the planning-level economic objective \(f_{p}(\cdot)\) augmented by the optimal scheduling and control costs \(c_{s}^{*}\) and \(c_{c}^{*}\). \(c_{s}^{*}\) is obtained by constraining the upper level on solving 12 MILP problems. Their aim is to find the scheduling-specific and batch target variables (\(\mathbf{x}_{s}\) and \(\mathbf{x}_{batch}\)) that minimize the makespan required to fulfil \(\mathbf{x}_{prod}\). The optimal batch processing times \(t_{f}^{*}\) however rely on optimal control operation. Each of the 12 scheduling problems are subject to \((14-70)\) nonlinear optimal control problems that determine the control-specific variables \(\mathbf{x}_{c}\) and processing times \(t_{f}\) to meet the batch targets (\(\mathbf{x}_{batch}\)) that minimize a mixture of energy cost and processing time \(f_{c}(\cdot)\). \[\begin{array}{ll}\min_{\begin{subarray}{c}\mathbf{x}_{p},\mathbf{x}_{prod},\\ \mathbf{x}_{x}^{*},c_{s}^{*},\mathbf{x}_{b}^{*},\\ c_{c}^{*},t^{*},\mathbf{x}_{c}^{*}\end{subarray}}&f_{p}(\mathbf{x}_{p}, \mathbf{x}_{prod})+c_{s}^{*}+c_{c}^{*}\\ \text{s.t.}&\mathbf{h}_{p}(\mathbf{x}_{p},\mathbf{x}_{prod})=\mathbf{0}\\ &\mathbf{g}_{p}(\mathbf{x}_{p},\mathbf{x}_{prod})\leq\mathbf{0}\\ &\begin{array}{ll}\mathbf{x}_{x}^{*},\mathbf{x}_{b}^{*},c_{s}^{*},\\ \mathbf{x}_{c}^{*},t^{*},c_{c}^{*}\end{array}\in&\underset{\begin{subarray}{ c}\mathbf{x}_{x},\mathbf{x}_{batch},c_{s},\\ \mathbf{x}_{ctrl},t_{f}^{*},c_{ctrl}\end{subarray}}{\arg\min}.&f_{s}( \mathbf{x}_{s},\mathbf{x}_{batch},t_{f}^{*},\mathbf{x}_{prod})\\ &\text{s.t.}&\mathbf{h}_{s}(\mathbf{x}_{s},\mathbf{x}_{batch},c_{s},t_{f}^{*},\mathbf{x}_{prod})=\mathbf{0}\\ &\mathbf{g}_{s}(\mathbf{x}_{s},\mathbf{x}_{batch},t_{f}^{*},\mathbf{x}_{prod} )\leq\mathbf{0}\\ &x_{ctrl}^{*},c_{ctrl}^{*},t_{f}^{*}\in&\underset{\mathbf{x}_{c},c_{ctrl},t_{ f}}{\arg\min}.&f_{c}(\mathbf{x}_{c},t_{f},\mathbf{x}_{batch})\\ &\text{s.t.}&\mathbf{h}_{c}(\mathbf{x}_{c},t_{f},c_{ctrl},\mathbf{x}_{batch})= \mathbf{0}\\ &\mathbf{g}_{c}(\mathbf{x}_{c},t_{f},\mathbf{x}_{batch})\leq\mathbf{0}\\ \end{array} \tag{7}\] Before we describe how we can apply the techniques described as applied to the general bi-level formulation (3.1), we first determine four metrics by which we can assess the quality of a planning-level solution in increasing levels of complexity and accuracy. ### Metrics To compare the solution quality of the proposed solution approaches, we define four different solution metrics. Figure 4 illustrates the difference between the four solution quality evaluation metrics accounting for different levels of integration of lower-level information into the upper-level planning objective (8). 
In each case, we start by extracting the \(\mathbf{x}_{DFO}\) and \(\mathbf{x}_{prod}\) variables, used as input to the DFO (Section 3.5) or obtained from the solution of the conventional optimization instance in the upper level (Section 3.6). We explain in the next section how the planning-level variables are partitioned into \(\mathbf{x}_{DFO}\). This information is sufficient in simulating the planning level and by extension hierarchical solutions by feeding down optimal solutions sequentially. These evaluations constitute a crucial part in applying the DFO method (6) on the planning-scheduling integration and is presented in greater detail in the next section. The upper or _planning-only_ evaluation (8a) only simulates the planning-level objective and any upper-level inequality constraint penalizations \(f_{p}(\cdot)+g_{viol}\). The other metrics differ in how much detail is captured in the optimization at each evaluation: The _bi_ evaluation (8b) calls all scheduling optimization problems in parallel to augment the upper objective by \(c_{s}^{*}\); _approx tri_ (8c) calls integrated scheduling-control optimization formulations instead to obtain the optimal scheduling and approximate optimal control costs \(c_{s}^{*}+\hat{c}_{c}\); _tri_ (8d) solves all scheduling problems first whose optimal solutions feed into the optimal control to obtain the'real' tri-level objective using \(c_{s}^{*}+c_{c}^{*}\). These evaluation metrics present increasingly accurate hierarchical decision-making modelling with the upper (8a) and tri (8d) evaluations emulating the planning decisions according to the sequential and hierarchical frameworks respectively as depicted in Figure 1. Planning-only: \[f_{up}(\cdot)+g_{viol}\] (8a) Bi: \[f_{up}(\cdot)+g_{viol}+c_{s}^{*}\] (8b) Approx tri: \[f_{up}(\cdot)+g_{viol}+c_{s}^{*}+\hat{c}_{c}\] (8c) Tri: \[f_{up}(\cdot)+g_{viol}+c_{s}^{*}+c_{c}^{*}\] (8d) Figure 4: Increasing levels of solution accuracy evaluation. The upper evaluation only evaluates the planning objective and constraints in a simulation. The bi evaluation solves 12 scheduling problems in parallel in each evaluation to account for the scheduling cost in the objective. The approximate tri evaluation embeds control surrogates into the scheduling to account for approximate control cost in the objective. The tri evaluation solves scheduling and control levels in sequence to emulate hierarchical decision-making and adds the ‘real’ hierarchical scheduling and control costs to the planning objective. These metrics become crucial in all three methodologies as described in Figure 5: the DFO, surrogate, and combined approach. In principle, any solution quality metrics (8b)-(8d) can be used as the DFO objective; Similarly, we can map surrogates to map the outputs of scheduling-only (10), approximate scheduling-control (11), or accurate scheduling-control (12) optimization problems called in the bi, approx tri, and tri evaluation metrics (8b)-(8d). A clear distinction between the increasing levels of solution evaluation becomes important in presenting the results. In the next section, we illustrate how we use DFO to find the planning-level variables that optimize any one of the solution metrics from (8). ### DFO in the planning level In order to use DFO for the integration of planning and scheduling, we reformulate the tri-level problem (7) into the canonical DFO formulation (6) for bi-level problems. 
We split the planning-specific variables \(\mathbf{x}_{p}\) of the tri-level optimization problem (7) into its constituent storage \(\mathbf{x}_{store}\), transport \(\mathbf{x}_{transp}\), and sales \(\mathbf{x}_{sales}\) variables. We observe that given \(\mathbf{x}_{prod}\), \(\mathbf{x}_{transp}\), and \(\mathbf{x}_{sales}\), we can exploit the inventory mass balances \(\mathbf{h}_{p}(\cdot)\) to obtain \(\mathbf{x}_{store}\).

Figure 5: The three proposed methodologies: The first methodology uses DFO to optimize any of the solution metrics in (8b-8d) directly by considering the planning simulation and scheduling-control optimization as a black-box. The second methodology uses hyperparameter optimization to find a surrogate architecture that, trained on one of the solution metrics in (8b-8d), leads to the best solution accuracy-time trade-off after embedding into the planning. The third methodology combines both workflows in using DFO to optimize the solutions obtained from the surrogate approach.

We now have all the variables required to compute the quantities of interest that appear in the upper level: \(f_{p}(\cdot)\), the penalty term \(g_{viol}\) via \(\mathbf{g}_{p}(\cdot)\), and any (approximate) optimal costs \(c_{s}^{*}\) and \(c_{c}^{*}\) from simulating lower-level optimization problems via \(\mathcal{SC}(\cdot)\). Problem (9) shows the formulations involved in using DFO to find \(\mathbf{x}_{sales},\mathbf{x}_{transp},\mathbf{x}_{prod}\) that optimize the integrated planning-scheduling-control problem as a black-box objective \(\mathcal{BB}(\cdot)\).

\[\begin{array}{ll}\underset{\mathbf{x}_{sales},\mathbf{x}_{transp},\mathbf{x}_{prod}}{\text{min.}}&\mathcal{BB}(\mathbf{x}_{sales},\mathbf{x}_{transp},\mathbf{x}_{prod})\\ \text{where}&\mathbf{x}_{store}\leftarrow\mathbf{h}_{p}(\mathbf{x}_{store},\mathbf{x}_{sales},\mathbf{x}_{transp},\mathbf{x}_{prod})=\mathbf{0}\\ &\mathcal{BB}(\cdot)\gets f_{p}(\mathbf{x}_{store},\mathbf{x}_{sales},\mathbf{x}_{transp},\mathbf{x}_{prod})+g_{viol}+c_{s}^{*}+c_{c}^{*}\\ &g_{viol}\leftarrow\mathbf{g}_{p}(\mathbf{x}_{store},\mathbf{x}_{sales},\mathbf{x}_{transp},\mathbf{x}_{prod})\leq\mathbf{0}\\ &c_{s}^{*},c_{c}^{*}\leftarrow\mathcal{SC}(\mathbf{x}_{prod})\end{array} \tag{9}\]

Comparing (9) with the canonical DFO formulation (6), we see that \(\mathbf{x}_{DFO}=[\mathbf{x}_{transp},\mathbf{x}_{sales}]\), \(\mathbf{x}_{sim}=\mathbf{x}_{store}\), \(\mathbf{x}_{u\to l}=\mathbf{x}_{prod}\), and \(\mathcal{LOW}(\cdot)=\mathcal{SC}(\cdot)\). In practice, this means that the DFO solver determines only the complicating planning targets, as well as a subset of the planning-specific variables (the transport and sales variables). This information is sufficient to determine all other relevant quantities in computing the black-box planning-scheduling-control objective. Different formulations of the (integrated) scheduling-control optimization problem \(\mathcal{SC}(\cdot)\) lead to different solution accuracy metrics (8), such that \(\mathbf{x}_{l\to u}=\emptyset\), \(c_{s}^{*}\), \(c_{s}^{*}+\hat{c}_{c}\), or \(c_{s}^{*}+c_{c}^{*}\). _Planning-only_ does not use \(\mathcal{SC}(\cdot)\). In the \(bi\) metric evaluations, \(\mathcal{SC}(\cdot)\) solves the 12 scheduling optimization problems taking the form of (10) to extract \(c_{s}^{*}\) without any restrictions on \(t_{f}\):

\[\begin{array}{ll}\mathcal{SC}(\mathbf{x}_{prod})=&\underset{\mathbf{x}_{s},\mathbf{x}_{batch},c_{s}}{\text{arg min.}}\quad f_{s}(\mathbf{x}_{s},\mathbf{x}_{batch},t_{f},\mathbf{x}_{prod})\\ &\text{s.t.}\quad\mathbf{h}_{s}(\mathbf{x}_{s},\mathbf{x}_{batch},c_{s},t_{f},\mathbf{x}_{prod})=\mathbf{0}\\ &\qquad\;\;\mathbf{g}_{s}(\mathbf{x}_{s},\mathbf{x}_{batch},t_{f},\mathbf{x}_{prod})\leq\mathbf{0}\end{array} \tag{10}\]
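As a rough illustration of how such a \(bi\) evaluation can be assembled, the sketch below dispatches the monthly scheduling instances in parallel and sums their optimal costs into the black-box objective of (9). The small LP stand-in for (10), the helper names, and the planning cost and penalty terms passed in as arguments are assumptions for illustration only; in the actual workflow each worker would solve the Pyomo scheduling MILP from the accompanying repository.

```python
from multiprocessing import Pool

import numpy as np
from scipy.optimize import linprog


def solve_scheduling(x_prod_month):
    """Stand-in for one monthly scheduling problem (10): a small LP whose
    optimal cost grows with the monthly production targets."""
    n_prod = len(x_prod_month)
    c = np.ones(n_prod)                                  # unit 'makespan/changeover' cost
    A_ub = -np.eye(n_prod)                               # production >= monthly target
    b_ub = -np.asarray(x_prod_month, dtype=float)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n_prod)
    return res.fun


def bi_evaluation(x_prod, f_plan=0.0, g_viol=0.0, n_workers=4):
    """Black-box objective of (9) under the bi metric (8b): planning cost plus
    constraint penalty plus the summed optimal costs of the 12 monthly
    scheduling instances, which are solved in parallel."""
    with Pool(processes=n_workers) as pool:
        c_s = pool.map(solve_scheduling, list(x_prod))
    return f_plan + g_viol + sum(c_s)


if __name__ == "__main__":
    targets = np.random.default_rng(0).uniform(0.0, 5.0, size=(12, 4))  # 12 months x 4 products
    print(bi_evaluation(targets))
```

The parallel dispatch is what keeps each black-box evaluation affordable, since the monthly scheduling problems only interact through the planning-level variables.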
\[\begin{array}{ll}\mathcal{SC}(\mathbf{x}_{prod})=&\underset{\mathbf{x}_{s}, \mathbf{x}_{batch},c_{s}}{\text{arg min.}}&f_{s}(\mathbf{x}_{s},\mathbf{x}_{ batch},t_{f},\mathbf{x}_{prod})\\ &\text{s.t.}&\mathbf{h}_{s}(\mathbf{x}_{s},\mathbf{x}_{batch},c_{s},t_{f}, \mathbf{x}_{prod})=\mathbf{0}\\ &\mathbf{g}_{s}(\mathbf{x}_{s},\mathbf{x}_{batch},t_{f},\mathbf{x}_{prod}) \leq\mathbf{0}\end{array} \tag{10}\] In the _approx tri_ evaluations, \(\mathcal{SC}(\cdot)\) takes the form of (11) and approximates the optimal control response in (10) for each scheduling problem by embedding the predicted optimal control responses as constraints into the scheduling via \(\hat{c}_{ctrl},\hat{t}_{f}^{*}=\mathcal{CTRL}(\mathbf{x}_{batch})\). In each scheduling problem, we enumerate all 7 events by 2 machines by 5 product combinations of optimal control surrogates. The surrogates are trained as follows: we query the optimal costs and processing times corresponding to 10 uniformly distributed samples between 0 and the upper batch target limit for each equipment-production combination. We then train separate artificial neural networks with 2 hidden layers of 5 nodes each on the equipment-product datasets. No hyperparameter tuning is necessary at this point since this simple default architecture gives a good fit on the well-behaved optimal control response and remains tractable after embedding into the scheduling as discussed in Section 4.1. \[\begin{split}\mathcal{SC}(\mathbf{x}_{prod})=\underset{ \begin{subarray}{c}\mathbf{x}_{s},\mathbf{x}_{batch},c_{s},\\ \hat{c}_{ctrl},\hat{t}_{f}^{*}\end{subarray}}{\arg\min}& f_{s}(\mathbf{x}_{s},\mathbf{x}_{batch},\hat{t}_{f}^{*}, \mathbf{x}_{prod})\\ \text{s.t.}&\mathbf{h}_{s}(\mathbf{x}_{s},\mathbf{x}_{ batch},c_{s},\hat{t}_{f}^{*},\mathbf{x}_{prod})=\mathbf{0}\\ &\mathbf{g}_{s}(\mathbf{x}_{s},\mathbf{x}_{batch},\hat{t}_{f}^{*},\mathbf{x}_{prod})\leq\mathbf{0}\\ &\hat{c}_{ctrl},\hat{t}_{f}^{*}=\mathcal{CTRLA}(\mathbf{x}_{ batch})\end{split} \tag{11}\] In the _tri_ evaluations, two sequential optimization layers are solved within \(\mathcal{SC}(\cdot)\): first, all 12 scheduling problems (10) are solved in parallel. Then, the optimal batch responses \(\mathbf{x}_{batch}^{*}\) for each scheduling problem as determined in (10) feed into the 7 events by 2 machines optimal control problems as shown in (12). The output of the control problem then returns the actual optimal control cost and processing times for feedback into the scheduling and planning level. In our case, the tri evaluations happen to be cheaper than the approx tri evaluations as discussed in Section 4.3. However, this was not known _a priori_ and is not expected to be the case when the size of the control problem increases. As a result, we use the tri evaluation only to check the solution quality in the first two methodologies and for minimal fine-tuning in the third. \[\underset{\begin{subarray}{c}\mathbf{x}_{c},c_{ctrl},t_{f}\end{subarray}}{ \arg\min} \quad f_{c}(\mathbf{x}_{c},t_{f},\mathbf{x}_{batch}^{*})\] (12) s.t. \[\mathbf{h}_{c}(\mathbf{x}_{c},t_{f},c_{ctrl},\mathbf{x}_{batch}^{ *})=\mathbf{0}\] \[\mathbf{g}_{c}(\mathbf{x}_{c},t_{f},\mathbf{x}_{batch}^{*})\leq \mathbf{0}\] Key to the tractable integration using DFO is parallelization in the scheduling layer. In our case, we see that the scheduling-level capacity for P1, P2, P3, and P4 production (Figure 3A) at any given time step is coupled to the planning level and other scheduling problems only via the inventory levels of I22. 
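As a concrete illustration of the control-surrogate training described above, the sketch below fits one per-(machine, product) network in PyTorch. It is a minimal sketch under stated assumptions: `solve_control_problem` is a hypothetical stand-in for the Ipopt solution of (12), returning the optimal control cost and processing time for a given batch target, `b_max` is the upper batch target limit, and the ReLU activation and Adam settings are illustrative choices rather than reported ones.

```python
import numpy as np
import torch
from torch import nn

def train_control_surrogate(solve_control_problem, b_max, n_samples=10, epochs=1000):
    """Fit a small MLP mapping a batch target to (optimal control cost, processing time)."""
    # 10 uniformly spaced batch targets between 0 and the upper batch target limit
    targets = np.linspace(0.0, b_max, n_samples)
    labels = np.array([solve_control_problem(b) for b in targets])   # rows: (c_ctrl, t_f)

    X = torch.tensor(targets, dtype=torch.float32).unsqueeze(1)
    Y = torch.tensor(labels, dtype=torch.float32)

    # Two hidden layers of 5 nodes each, one network per equipment-product combination
    model = nn.Sequential(nn.Linear(1, 5), nn.ReLU(),
                          nn.Linear(5, 5), nn.ReLU(),
                          nn.Linear(5, 2))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(X), Y)
        loss.backward()
        optimizer.step()
    return model
```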
In the planning level, we are only interested in the changeover costs of the scheduling layer. As such, rather than solving the scheduling problems sequentially in time, we fix I22 inventory in the scheduling problems and solve all 12 scheduling problems in parallel leading to significant computational savings. In doing so, we rely on the planning-level safe storage constraints to ensure that we have enough I22 inventory for the production of any given planning target. Parallelization at the control level is in principle also possible, but in our case limited by the availability of compute nodes. ### Methodology 1: Derivative-free optimization The first methodology involves using DFO to find \(\mathbf{x}_{prod},\mathbf{x}_{transp},\text{ and }\mathbf{x}_{sales}\) that optimize the solution metrics in (8). However, all solution metrics apart from planning-only (8a) are considered expensive as they call at least one additional level of scheduling optimization problems in each evaluation. As such, the DFO budget is severely limited to a couple of hundred evaluations given the high dimensionality of 178 DFO variables. This prevents even the most exploitative solvers from making meaningful progress. To mitigate this issue, we hot-start the DFO search by using the Pyomo [76] solution of the planning-only optimization problem (13) as an initial guess. \[\begin{split}\underset{\mathbf{x}_{p},\mathbf{x}_{prod}}{\text{ min.}}& f_{p}(\mathbf{x}_{p},\mathbf{x}_{prod})\\ \text{s.t.}&\mathbf{h}_{p}(\mathbf{x}_{p},\mathbf{x} _{prod})=\mathbf{0}\\ &\mathbf{g}_{p}(\mathbf{x}_{p},\mathbf{x}_{prod})\leq\mathbf{0} \end{split} \tag{13}\] Then, we sequentially optimize the bi (8b) and approx tri metrics (8c) using 500 evaluations each, where the solution of one feeds into the other one as an initial guess. We assume _a priori_ that the tri metric (8d) is intractable, and as such only use it to check solution quality. In principle, instead of this cascading approach, we could optimize the solution metric of interest from the start in the same solution time, which we discuss in Sections 4.2 and 4.5. Any DFO solver (e.g., Bayesian optimization, decision trees, direct methods) can be used for this task. We choose the state-of-the-art exploitative trust region optimizer Py-BOBYQA [77], since the limited budget only allows for a fine-tuning of the solution around the initial guess rather than an extensive exploration of the solution space. ### Methodology 2: Optimality surrogates In Section 3.4 we describe how we can use optimality surrogates to map the optimal control response as a function of the batch targets \(\hat{c}_{ctrl},\hat{t}_{f}^{*}=\mathcal{CTRL}(\mathbf{x}_{batch})\). Similarly, for each planning step, we can construct optimality surrogates that return the optimal scheduling and control costs at a planning step as a function of the 4 production planning targets \(\hat{c}_{s}^{*},\hat{c}_{c}^{*}=[\mathcal{SCH}-\mathcal{CTRL}](\mathbf{x}_{prod})\). The neural network \([\mathcal{SCH}-\mathcal{CTRL}](\cdot)\) is trained to map the output of the (integrated) scheduling-control optimization problem\(\mathcal{SC}(\cdot)\), which can take the form of the scheduling-control optimization problems (10)-(12), associated with the bi, approx tri, and tri evaluation metrics respectively. As such, we could approximate the solution to the tri-level problem (7) by solving the single-level problem (14). 
\[\begin{split}\underset{\begin{subarray}{c}\mathbf{x}_{p},\mathbf{x}_{prod},\\ \hat{c}_{s}^{*},\hat{c}_{c}^{*}\end{subarray}}{\text{min.}}& f_{p}(\mathbf{x}_{p},\mathbf{x}_{prod})+\hat{c}_{s}^{*}+\hat{c}_{c}^{*}\\ \text{s.t.}&\mathbf{h}_{p}(\mathbf{x}_{p},\mathbf{x}_{prod})=\mathbf{0}\\ &\mathbf{g}_{p}(\mathbf{x}_{p},\mathbf{x}_{prod})\leq\mathbf{0}\\ &\hat{c}_{s,t}^{*},\hat{c}_{c,t}^{*}=[\mathcal{SCH}-\mathcal{CTRL}]_{t}(\mathbf{x}_{prod}),\quad t=1,\ldots,12\end{split} \tag{14}\] Before training, we sample 1,000 combinations of the planning targets \(\mathbf{x}_{prod}\), where the upper bound is obtained from the planning-level resource limit for each product. Each sample involves the solution of the integrated scheduling-control optimization problem \(\mathcal{SC}(\cdot)\) as formulated in the scheduling-only (10) or approximate scheduling-control (11) formulations associated with the bi and approx tri metrics respectively. We use log-sampling within these bounds since uniform sampling oversamples infeasible operations, as in practice the targets of the 4 products are rarely close to the upper bound. In the results section, we also repeat this workflow with surrogates trained on the accurate scheduling-control optimization problems (12) to investigate the effect of inaccurate control information. We add fixed penalty terms to the planning objective if the scheduling (or control) problems become infeasible. Alternatively, adding soft penalties between the suggested and closest feasible set of planning targets to the DFO objective might lead to smoother surrogates. Deciding on the architecture of the integrated scheduling-control surrogate is more difficult than for the optimal control surrogate. The bigger the neural network, the more capable it is (in principle) of accurately capturing the optimal response function, but the more computationally expensive the planning formulation becomes after embedding of the surrogate. To navigate this accuracy-tractability trade-off, we use multi-objective Bayesian optimization for 12 evaluations to find the numbers of nodes in the first and second hidden layers, \(n_{1},n_{2}\), that best trade off accuracy and tractability. Each Bayesian optimization evaluation includes: the training of a network with hidden layers \(n_{1},n_{2}\); its embedding into the planning layer; optimization of the planning after embedding; and a solution quality check on all metrics of (8). The optimization time is returned together with the approx tri (8c) or tri (8d) solution metrics and fed back to the Bayesian optimization to suggest the next set of hyperparameters. ### Methodology 3: Combining both approaches The third methodology combines the first two approaches to bring out the best of both worlds: we train a collection of the most promising network architectures identified in the off-line hyperparameter tuning from Methodology 2. Then, we optimize the planning problems with the embedded surrogates in parallel and evaluate their solution quality using the tri evaluation (8d) as the most accurate metric we can afford. Finally, we use DFO (in parallel) on the best solution(s) from the surrogates following Methodology 1 to fine-tune the solution towards the tri-level optimum using the tri evaluation (8d) in the remaining online evaluation budget.
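Common to Methodologies 2 and 3 is the embedding of a trained optimality surrogate into the planning formulation (14). The sketch below indicates how one such embedding might look with OMLT; it is only a schematic outline under stated assumptions: the PyTorch network is assumed to have been exported to ONNX, `m` is a hypothetical Pyomo planning model with indexed production targets `m.x_prod[t, p]`, the input bounds come from the sampling ranges, and all component names are illustrative.

```python
import onnx
import pyomo.environ as pyo
from omlt import OmltBlock
from omlt.io import load_onnx_neural_network
from omlt.neuralnet import ReluBigMFormulation

def add_optimality_surrogate(m, onnx_path, input_bounds, t):
    """Attach one [SCH-CTRL]_t surrogate and return its predicted lower-level cost expression."""
    # Load the trained network (planning targets -> optimal scheduling and control costs)
    net = load_onnx_neural_network(onnx.load(onnx_path), input_bounds=input_bounds)

    block = OmltBlock()
    m.add_component(f"surrogate_{t}", block)
    block.build_formulation(ReluBigMFormulation(net))   # mixed-integer encoding of the ReLU net

    # Link the surrogate inputs to the 4 production targets of planning step t
    links = pyo.ConstraintList()
    m.add_component(f"surrogate_links_{t}", links)
    for p in range(4):
        links.add(block.inputs[p] == m.x_prod[t, p])

    # outputs 0 and 1: predicted optimal scheduling and control costs (c_s_hat, c_c_hat)
    return block.outputs[0] + block.outputs[1]
```

The returned expression would then be added to the planning objective for each of the 12 planning steps before handing the resulting single-level problem to the MILP solver.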
### Software and practical considerations We use Py-BOBYQA [77] as our DFO solver of choice for the DFO approach (Methodology 1) and the OpenBOX implementation [78] for the multi-objective Bayesian optimization using Gaussian processes in the hyperparameter tuning as part of the surrogate approach (Methodology 2). The rest of the 'conventional' optimization formulations are implemented in Pyomo [76]. While the control instances of Equation (12) are solved using Ipopt [79], the optimality surrogate formulation (14) and all formulations for the integrated scheduling-control optimization \(\mathcal{SC}(\cdot)\) from (10) to (11) are solved using Gurobi [80]. We use PyTorch [81] to train the neural networks and embed them as constraints into Pyomo blocks using OMLT [82]. We run the main DFO and surrogate scripts on the Imperial College Research Computing HPC service. Most scripts run on a maximum walltime of 8 hours, and we increase the number of compute nodes and memory as needed, usually within the range of 64-128 and 10-50gb. Notable exceptions include the DFO approach on the approx evaluation using 18 hours of walltime, 256 nodes, and 25 gb memory, or the surrogate hyperparameter tuning on the tri evaluation using 8 hours of walltime, 128 nodes, and 100 gb. In the next section, we present the accuracy-tractability trade-offs involved in incorporating the control surrogates into the scheduling, which constitutes the basis of moving from bi to approx tri evaluations. We then present the solution time and accuracy results of Methodology 1 and Methodology 2. In both approaches we are assuming that we have access to tri evaluations, but that they are expensive, meaning that we can only use them for checking the solution quality. In Methodology 3, we then present the solution time and accuracy results and discuss the advantages of combining both approaches together with relevant practical considerations. Throughout this, we place special emphasis in our discussion on distinguishing between the different solution accuracy metrics, between on- and off-line solution times, and the implications of model inaccuracies. ## 4 Results and Discussion Methodologies 1 and 2 heavily rely on the approx tri metric evaluation (embedding control surrogates into scheduling) as a proxy for the tri evaluation. In the next section, we show why optimal control surrogates are well-suited for mapping the control problem, and we discuss the trade-offs involved in moving from bi to approx tri evaluations. ### Integrating scheduling and control Figure 6 gives an example of the predicted and actual optimal processing time and energy cost samples of the optimal control training set for one of the products. The mapping for both predictions is well-behaved, almost linear with a slight concave curvature. This mapping could be approximated reasonably well with linear or piecewise-affine linear regression techniques. This goes to show that even though the dynamics of the optimal control problem might be nonlinear, the relationship between the connecting variables in the integration of scheduling and control can be captured reasonably well with simple surrogates. In our case, a small neural network with 5 neurons for the two hidden layers each is sufficient. Table 1 summarizes the on- and off-line optimization times involved in the solution of the scheduling-only problem (10) and the scheduling problem with embedded optimal control surrogates (11). 
Solving these problems constitutes a key part of obtaining the bi and approx tri evaluations (8b) and (8c) respectively. The online optimization time increases from 10 seconds to over 4 minutes. Even though the surrogates are small, the approx evaluation requires the enumeration of each one of the 5 products by 7 events by 2 machines optimal batch production surrogates. Each of the 70 surrogates introduces discrete variables that lead to an explosion in solution time. While the online optimization time increases significantly after integration of the surrogates, the off-line control problem sampling and training of the neural network remain negligible. Training the neural network for 1,000 epochs takes at most a minute. Given the well-behaved nature of the mapping, we only require 10 control samples for each machine-product assignment, adding another one-off 5 minutes that are negligible in the overall workflow. Figure 6: Predicted and actual optimal batch processing time and energy requirements corresponding to 10 uniformly sampled training and test batch targets. \begin{table} \begin{tabular}{|l|c|c|c|} \cline{2-4} \multicolumn{1}{c|}{} & \multicolumn{3}{c|}{Time} \\ \hline & Optimization & Control sampling (N=10x5) & Training \\ \hline Scheduling only & 10 sec & / & / \\ \hline Scheduling-control surrogate & 250 sec & 5 min & 1 min \\ \hline \end{tabular} \end{table} Table 1: Online optimization, control surrogate sampling and surrogate training times involved in the solution of the scheduling-only and scheduling with embedded control surrogates problems ### Methodology 1: Integration into planning via DFO Optimizing the scheduling-only problem or the scheduling problem with embedded surrogates as presented in the previous section constitutes the bottleneck in the bi and approx tri evaluations. Figure 7 illustrates the solution accuracy obtained and the time required to perform DFO on the bi and then on the approx tri metrics for 500 evaluations each. #### 4.2.1 Solution accuracy Figure 7a shows the different solution metrics obtained in the DFO. We start with the bi DFO (DFO on the bi metric) using the solution obtained from the optimization of the planning-only problem in Pyomo. This planning-only solution is infeasible for all evaluation metrics accounting for lower-level information. The planning-only solution then feeds into the bi DFO, where the bi metric is improved from around 6 to -3 in 500 evaluations. Then, the bi DFO solution feeds into another round of 500 approx tri DFO evaluations, improving the approx metric of the bi DFO solution from about -2 to -3. This goes to show that DFO reliably improves the solution according to the evaluation metric used in the objective. Yet, there is no guarantee that an improvement in cheaper bi/approx evaluations comes with an improvement in more accurate evaluations that are not used as the objective function of the DFO. This explains why the tri evaluation shows no improvement - all 3 tri evaluations remain infeasible in some scheduling and/or control problems. Realistically, given the expense of the scheduling-level optimization problems and the high dimensionality of the planning-level variables, we only have enough budget to fine-tune the solution obtained in the initial guess. This is confirmed in Appendix B, which shows the planning profile associated with the DFO solutions. However, this fine-tuning can render the planning-only solution feasible with respect to more accurate metrics, resulting in large increases in solution quality.
While we use fixed costs to penalize in-feasibilities in the scheduling and control level, DFO would make more consistent progress if soft penalty violations would be used between the suggested and nearest feasible set of planning and batch targets. #### 4.2.2 Solution time Figure (b)b shows the solution times involved in finding the planning-only initial guess and in subsequently performing 500 DFO evaluations on the bi and approx metric each on three configurations of the case study: in _low_, we only solve a single average scheduling problem for all planning steps wherein we account for the optimal control of only a single product; in _distr_, we solve in parallel a separate scheduling problem for each planning step accounting for the optimal control of only a single product; _all_ refers to the full case study as described in Section 2, where at each planning step, we solve in parallel all scheduling problem where the control of all 5 products is performed optimally. Comparing the 'low' and 'distr' configurations allows to investigate the effect of parallelization while comparing the 'distr' and 'all' configurations allows to investigate the effect of increasing the number of control problems per scheduling instance. We see that the planning-only Pyomo instance returns a solution in milliseconds. When going from the 'low' to the 'distr' configuration, the bi and approx DFO times only increase by a factor of 3-4 from around 20 to 80 and from around 60 to 200 seconds. This highlights the benefits of parallelization: rather than seeing an increase in computational time proportional to the number of planning steps (12), we only see it increase by a factor of around 4. In the best case scenario, this parallelization would keep the solution time constant. In practice however, each evaluation call is limited by the time it takes for the slowest scheduling instance to be solved. Increasing the number of optimal control problems when moving from 'distr' to 'all' only increases the approx DFO time. Increasing the number of control problems by 5 increases the solution time by a factor of 10, from 100 to almost 1,000 seconds. This is to be expected given the discrete nature of the variables introduced in the surrogates. In Table 1, we claim that the optimization instances in a single bi or approx evaluation take 10 and 250 seconds respectively. However, Figure (b)b shows that 500 evaluations take around 100 and 1,000 seconds respectively. While this may seem contradictory, this means that most of the DFO samples are quickly determined to be infeasible. #### 4.2.3 Cascading approach The question arises if rather than use a cascading approach of feeding the DFO solutions of less accurate metrics as initial guesses into the DFO of more accurate metrics, we can obtain better solutions by using the available budget to have more samples to optimize the expensive metric of interest directly. The answer to this is case-dependent and is influenced by the extent to which the metrics corresponding to the different levels of integration are correlated. In our case, the cascading approach is justified since the DFO on the bi metric is cheaper by an order of magnitude and still makes significant progress on the approx evaluation. However, if the ultimate goal is to optimize the tri evaluation, then neither the optimization of the bi nor the approx metric make significant progress. We continue this discussion in Section 4.5. 
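For concreteness, the cascading hot-start evaluated above can be sketched as follows. This is only a schematic outline under stated assumptions: `x0` is the planning-only solution of (13) flattened into the 178 DFO variables, `lower` and `upper` are their bounds, `evaluate_bi`, `evaluate_approx_tri`, and `evaluate_tri` are hypothetical wrappers returning metrics (8b), (8c), and (8d), and the Py-BOBYQA settings are illustrative rather than those used in the experiments.

```python
import numpy as np
import pybobyqa

# Stage 1: 500 evaluations of the cheaper bi metric (8b), hot-started from the
# planning-only solution of (13).
res_bi = pybobyqa.solve(evaluate_bi, np.asarray(x0, dtype=float),
                        bounds=(lower, upper), maxfun=500)

# Stage 2: 500 evaluations of the approx tri metric (8c), hot-started from the bi solution.
res_approx = pybobyqa.solve(evaluate_approx_tri, res_bi.x,
                            bounds=(lower, upper), maxfun=500)

# Solution quality is then checked (but not optimized) on the tri evaluation (8d).
print(res_approx.f, evaluate_tri(res_approx.x))
```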
### Methodology 2: Integration into planning via surrogates In Methodology 2, rather than perform DFO on the bi or approx evaluations, we train optimality surrogates as a function of the planning targets to map the scheduling and control costs from the scheduling-only (10) and approximate scheduling-control problems (11). Since these instances constitute a major part of the bi and approx tri evaluations, we refer to the surrogates trained on them as bi or approx surrogates respectively. In Figure 8, for the bi and approx surrogates, we present three architectures each that return a Pareto solution between solution time and best approx evaluation after optimization of the planning level with the embedded surrogate. Figure 7: Solution accuracy and time for the planning-only instance, the DFO approach on the bi metric, and the DFO approach on the approx tri metric #### 4.3.1 Solution accuracy Figure (a)a shows that the solutions obtained of the approx surrogates are more consistent and achieve better solution quality than those obtained by the bi surrogates. This suggests that using more informative evaluations to build the surrogate translates into better solution accuracy. As opposed to the DFO where using the bi evaluation metric as the objective significantly improves the bi evaluation, the bi surrogates fail to consistently find a good bi evaluation. This might be because of the way the surrogates are trained. In the hyperparameter tuning, both the bi and approx surrogate architectures are optimized based on the approx metric evaluation, as it was expected _a priori_ to be the most accurate computationally affordable metric available. The bi surrogates, only trained on scheduling-only problems used in the bi evaluations, might miss valuable information that relates surrogate architecture, training accuracy, and optimization solution quality, leading to poorer, less consistent performance. However, it is surprising to see that the solution check on the tri metric evaluations corresponding to the approx surrogates are significantly better than the ones obtained in the DFO. This suggests a different explanation for why the surrogate approach performs surprisingly well. Rather than the surrogate approach performing well _despite_ the inevitable small model inaccuracies, the surrogate approach might perform well _because_ of them. In the DFO solutions, we are starting off from the planning-only solution that is optimal in the planning-level formulation only. This planning-only solution is probably active in some constraints that quickly become infeasible when more accurate information from lower-level scheduling or control becomes available. Small inaccuracies in the surrogates might give a suboptimal solution that is further away from the constraints and as such more 'robust' to uncertainties in the lower level. This hypothesis is supported by the planning solution profiles of the surrogate approaches being significantly different to the profiles observed for the centralized and DFO solutions as seen in B. It is in principle possible that model inaccuracies might be biased towards giving solutions in the infeasible space, but these models should be filtered out by the hyperparameter tuning process being biased towards models giving'safer' solutions. #### 4.3.2 Consistency in solution accuracy The major risk of the surrogate approach lies in selecting an architecture or training a specific model that leads to poor performance. 
To determine the effect of the model architecture on performance, Figure 9 displays the solution accuracy associated with the 12 model architectures queried in the hyperparameter tuning. Additionally, we investigate how much of the variability in bi and approx tri surrogate model performance is due to them not being trained on accurate control information. To this end, we follow the same workflow to train tri surrogates on accurate scheduling-control instances as used in the tri metric (8d). Overall, the best of the 12 samples display similar performance among the three surrogate types on all solution metrics. In general, we see that the bi and tri surrogates display a similar median and variation in solution accuracy, with the bi surrogates being slightly more consistent in the tri metric evaluation. While the approx surrogates display a better median in the bi and tri metric evaluations, they also display more variability in solution performance. This is surprising to see given Figure 7(a), where the best approx tri surrogate Pareto solutions seem to be more consistent than their bi surrogate counterparts. This can be explained, however, by the approx surrogates not only being subject to model inaccuracy in the scheduling-control surrogates themselves, but also being trained on imperfect optimal control surrogates. The question remains why the tri surrogates fail to clearly outperform the other surrogates. One possible explanation could be that accounting for the optimal control makes the solution space too ill-behaved to be accurately captured by the networks. Overall, the surrogates display performance that is too similar to argue that one type outperforms the others. Ideally, in the hyperparameter tuning, we would also account for the sensitivity of the surrogate performance with respect to different training configurations or slightly updated/modified data, which calls for additional work on model validation or robust/adversarial selection approaches. Figure 8: Three solution accuracy and time Pareto solutions each of surrogates trained on the bi and approx tri metrics, with the two numbers in the label referring to the number of nodes in the two hidden layers #### 4.3.3 Solution time Figure 8b shows the online optimization time required to solve the embedded problem as shown in Formulation (14). The online optimization times of the bi and approx tri surrogates range widely with the size of the networks, from 100 milliseconds to 2 hours. It also seems that the type of surrogate matters less than the number of hidden nodes. For example, the bi surrogate with hidden layers of sizes 10 and 1 leads to a solution time comparable to that of the approx surrogate with sizes 5 and 2. The biggest surrogate (78 and 40 nodes) times out after 2 hours of solution time and displays the worst of the Pareto solutions in terms of accuracy. For the bi surrogates there seems to be a trade-off between using bigger and more accurate surrogates leading to better solutions and the associated high computational costs, as the best solution is attributed to the medium-sized surrogate of 42 and 20 nodes. However, for the approx surrogates, there does not seem to be much difference in the solution quality found, since the best solution is found using the smallest network. This makes intuitive sense: by constructing an approximation of an approximation, approx surrogates might present similar, smoothed response surfaces across architectures.
While Figure (b)b suggests that the online optimization time depends primarily on the surrogate size rather than the type of evaluation it is trained on, Table 2 shows that the difference in solution time between the different types of surrogates is shifted off-line to the sampling, training, and hyperparameter tuning steps. The training times are essentially negligible with each surrogate being trained in a maximum of 90 seconds. The hyperparameter tuning time can be adjusted based on the available budget. In our case, for each surrogate type, we limit the online optimization time per embedded problem to 2 hours and the hyperparameter tuning time involving 12 surrogates to 8 hours. Essentially, the bottleneck in moving from Figure 9: Box-and-whisker plot of the upper, bi, approx tri, and tri solution metric evaluations associated with all 12 samples of the hyperparameter tuning for the bi, approx tri, and tri surrogates. The box represents the quartiles while the whiskers demonstrate variability outside the box plots. Outliers are shown as isolated data points outside of the whiskers. bi to approx surrogates is in the sampling step. We sample 1,000 bi, approx, and tri evaluation samples, which takes between 8 and 24 hours. As discussed in the DFO section, the 8 and 24 hours for the bi and approx evaluation samples are considerably less than what would be expected by extrapolating the evaluation times shown in Table 1 by the number of samples. This is expected since only around half of the samples turn out to be feasible. It is surprising to see that the tri sampling time is shorter than the approx sampling time. It turns out that solving the planning problems, the scheduling problems, and then the 168 (2 machines by 7 events by 12 planning steps) optimal control problems in sequence is 5 times faster on average than solving the scheduling problems with 70 optimal control surrogates (7 events by 2 machines by 5 productions) embedded. In other words, it is not trivial to see that embedding all possible optimal control combinations into the scheduling displays worse scaling than evaluating the tri metric by solving the levels hierarchically. We expect that as the optimal control problems become more expensive, the tri evaluation becomes more expensive than the approx evaluation. In hindsight, for the case study at hand, we conclude that since the tri evaluation is faster than the approx tri evaluation, the performance of using approx tri evaluations in the DFO or their corresponding approximate scheduling-control problem (11) for surrogate training is not worth the compute. ### Comparing the DFO and surrogate approaches In the above two sections, we highlight the accuracy-tractability trade-offs associated with both approaches. DFO does not introduce additional inaccuracies if the lower levels are solved to global optimality. As such, DFO guarantees a solution at least as good as the initial guess on the evaluation metric used as its objective. The main drawback of DFO is its poor scaling in the number of planning-level variables and in the computational expense of the lower-level problems. 
Realistically, given these budget restrictions, we can only expect to fine-tune the solutions around the initial guess. \begin{table} \begin{tabular}{|l|c|c|c|c|c|} \cline{3-6} \multicolumn{1}{c}{} & \multicolumn{1}{c|}{} & \multicolumn{4}{c|}{Time} \\ \hline & Related evaluation & Online optimization & Hyperparameter tuning & Training & Sampling (N=1,000) \\ \hline Scheduling-only & Bi & 0.1min-2hrs & 8 hrs & up to 12x90 sec & 8 hours \\ \hline Approximate scheduling-control & Approx tri & 0.1min-2hrs & 8 hrs & up to 12x90 sec & 24 hours \\ \hline Scheduling-control & Tri & 0.1min-2hrs & 8 hrs & up to 12x90 sec & 8 hours \\ \hline \end{tabular} \end{table} Table 2: On-line optimization, and off-line hyperparameter tuning, training, and sampling times associated with the construction of surrogates on the scheduling-only, approximate scheduling-control, and accurate scheduling-control problems Generally, the surrogate approach scales better at the potential expense of solution quality. A main advantage of the surrogate approach is that tractability and accuracy can be easily adapted to the problem at hand by using Bayesian optimization (which ironically counts as DFO too). Empirically, we see that even the optima obtained with smaller, more tractable, and potentially less accurate models can display tri-level metric solutions that outperform any of the solutions obtained via DFO. However, by using surrogates, we forfeit any guarantee in the solution quality obtained. While surrogate techniques promise on-line computational savings, we have to invest our efforts off-line into the model selection and validation process. In the next section, we synthesize these findings into a proposed method that leverages the advantages of both methods. ### Methodology 3: Bringing out the best in both approaches Figure 10 shows the improvement in solution quality and increase in computational expense incurred by running DFO using the tri evaluation metric on the planning-only, DFO and surrogate solutions. Figure 10: Increase in solution time and quality in using DFO to optimize the planning-only, DFO, and surrogate solutions on the tri evaluation metric #### 4.5.1 Solution accuracy Figure 10a shows that running 500-1,000 DFO evaluations using the tri metric on the previous DFO and surrogate solutions significantly improves their tri evaluation. The best tri evaluations are found by performing DFO on the bi and tri surrogate solutions for a standard 500 evaluations, improving the tri metric from about -1 and -2 to around -2.5 respectively. We use the same standard budget on the approx DFO solution, improving the solution from about 9.5 to 8.5. However, by running DFO on the planning-only and bi DFO solutions using 1,000 and 800 evaluations, we find a solution that is just as good in less time than is required to perform the approx DFO in the first place. Performing tri DFO on the bi DFO solution even improves the solution to 7.5. This links back to the previous discussion in Section 4.2 of whether it is favourable to perform DFO on increasing levels of metric complexity in a cascading manner or if we should use the available budget to only optimize the targeted metric from the start. In our case, we find that we should use the cascading approach but avoid the more expensive approx evaluation in favour of the tri evaluation.
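The combined workflow evaluated here can be outlined as below. This is a schematic sketch only, reusing the hypothetical helpers introduced earlier: `evaluate_tri` stands in for metric (8d), `solve_planning_with_surrogate` for the optimization of the planning model with one embedded surrogate architecture returning a flattened planning-variable vector, and the fine-tuning budget is illustrative.

```python
from concurrent.futures import ProcessPoolExecutor
import pybobyqa

def methodology_3(architectures, lower, upper, finetune_budget=500):
    """Parallel surrogate-embedded planning solves, tri-metric check, then DFO fine-tuning."""
    # Solve one planning instance per trained architecture in parallel (Methodology 2)
    with ProcessPoolExecutor() as pool:
        candidates = list(pool.map(solve_planning_with_surrogate, architectures))

    # Keep the candidate with the best tri evaluation (8d), the most accurate affordable metric
    best_x = min(candidates, key=evaluate_tri)

    # Fine-tune towards the tri-level optimum with the remaining online budget (Methodology 1)
    result = pybobyqa.solve(evaluate_tri, best_x, bounds=(lower, upper),
                            maxfun=finetune_budget)
    return result.x, result.f
```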
#### 4.5.2 Solution time Figure (b)b compares the computational time required to fine-tune the solutions using tri DFO with the time it takes to obtain the initial guesses in the first place. 500 tri DFO evaluations on the approx DFO and surrogate solutions take about 240 minutes, while the initial 500 approx DFO evaluations take about 1,000 minutes in the first place. 1,000 tri DFO evaluations take about 430 minutes starting from the planning-only solution to get to a final tri evaluation of 8.5. 800 tri DFO evaluations on the bi DFO solution take about 390 minutes to achieve a solution of 7.5. In both cases, the tri DFO constitutes the computational bottleneck compared to the 0.2 and 70 minutes needed to obtain the planning-only and bi DFO solutions in the first place. Either way, both approaches take significantly less time than the complete cascading approach including the 500 bi, approx and tri DFO evaluations taking over 20 hours cumulatively. We note again that the average DFO evaluation time is skewed by quick infeasible runs. While the bi DFO approach outperforms the other DFO approaches for a given time budget, we conclude that better solutions can be obtained in less time when hot-starting the tri DFO with the surrogate approach. In our case, the 500 tri DFO evaluations take about 2-3 times longer than the surrogate optimization. The relative solution time between the tri evaluations and surrogate optimization changes significantly with the number of infeasible samples in the DFO and the size of the surrogates. However, this ratio can be manipulated by the DFO budget and the maximum surrogate runtime. #### 4.5.3 Practical considerations Optimality surrogates elegantly trade-off tractability and accuracy. However, their variability in solution performance is concerning. In order to mitigate these drawbacks, we suggest finding safety in numbers and choosing multiple surrogates - either the same model architectures trained on slightly different datasets, or different architectures potentially even trained on different kinds of evaluations. We can then embed these surrogates separately into different planning instances and optimize the resulting formulation in parallel in the online optimization step. All solutions are then checked on the most accurate evaluation metric available. Finally, we perform DFO on the most accurate evaluation metric in the hopes of finding a better solution in the remaining time budget. At this stage, the solution quality is guaranteed to be as good as the best surrogate solution for a chosen evaluation metric. Alternatively, we can still fall back on the much cheaper planning-only solution, which represents the traditional'sequential' solution approach. This leverages the available on- and off-line optimization budget to its fullest. We can leverage parallelization to mitigate model inaccuracy risks on-line. We also still retain the advantage of the surrogate approach of having shifted a lot of compute off-line. The amount of effort put into the offline sampling, training, hyperparameter tuning, and model validation step can be adjusted to the tolerance for inaccuracies in the problem at hand. The remainder of the available online solution time can then be used to its fullest to fine-tune the surrogate solutions via DFO. ## 5 Conclusion In this work we present how derivative-free optimization (DFO) and optimality surrogates can be used to optimize general bi-level formulations. 
We show how these techniques can be used to increase the solution quality of a multi-level, multi-site hierarchical planning-scheduling-control problem with respect to the conventional sequential solution approach. In tackling a task as ambitious as trying to solve these large-scale tri-level formulations, we need to use tools that improve the solution quality in as little time as possible. At the same time, we need to mitigate the risk of the solution becoming unusable through model inaccuracies. We synthesize our findings into a methodology that leverages the advantages of both techniques: we use derivative-free optimization (Bayesian optimization) in the off-line model selection process to efficiently trade off solution accuracy and tractability. We then mitigate the risk of model inaccuracies in the surrogates by employing the surrogate approach in parallel on a combination of multiple model instances. After verifying the solution quality on the most accurate affordable solution metric, we can finally use DFO to fine-tune the best solution(s) in the remaining online optimization budget. While we can make general observations about the trade-offs involved in both methods, it should be noted that these are limited to the case study at hand. While the DFO approach is agnostic to the integrated scheduling-control formulation, the efficacy of the approach depends on the relative size of the three layers. Similarly, the surrogate approach is agnostic to the lower-level problems as long as the bottom-up complicating variables can be mapped as a function of the top-down complicating variables. Future work would investigate and stress-test different planning network configurations, different types of scheduling formulations, and different surrogate formulations in the control. The surrogate technique in particular opens up many avenues of research. For instance, instead of using multi-objective Bayesian optimization, we could optimize only solution quality under time limit constraints. We should also investigate how to account for model consistency in the hyperparameter tuning, i.e. the variability of model architecture performance when trained on different data. Ultimately, our work demonstrates the level of maturity of data-driven techniques and their ease of integration into more conventional optimization frameworks. This prompts us to re-evaluate their potential in addressing more accurate hierarchical formulations, even in larger-scale, industrial case studies. Acknowledgements. D.v.d.B. gratefully acknowledges financial support from the EPSRC DTA studentship award and the Bansal bursary. We also acknowledge computational resources and support provided by the Imperial College Research Computing Service ([http://doi.org/10.14469/hpc/2232](http://doi.org/10.14469/hpc/2232)). Data Availability Statement. All data necessary for replication of results can be found with the code under [https://github.com/OptiMaL-PSE-Lab/DD-Hierarchical](https://github.com/OptiMaL-PSE-Lab/DD-Hierarchical). List of Figures * Hierarchical decision-making architectures: In the sequential approach, higher-level solutions feed into lower levels as setpoints. Each level solves its own optimization problem to meet these setpoints without explicit consideration of the lower levels.
The monolithic approach resembles the sequential approach, but where feasibility in lower levels is satisfied via explicit incorporation of their constraints. The hierarchical approach could be viewed as a multi-level leader-follower game. A single 'tri-level' optimization problem is solved, where the planning is subject to the optimal solution of the scheduling level, and the scheduling is constrained on the optimal solution of the control level. Outline of the hierarchical planning A), scheduling B), and control C) levels in the multi-site, multi-product case study. A) shows the planning state network denoting the materials (\(\circ\)) as well as processing (\(\rightarrow\)), transport (\(\rightarrow\)), and sales (\(\rightarrow\)) across the three sites (\(\square\)). Scheduling and control are only relevant for the second site. B) shows the state network of the scheduling problem that includes an additional intermediate material. The production of I3 and P1-P4 needs to be scheduled in batches on 2 machines in 7 event time points. C) Each batch production is then subject to an optimal control problem. * 3 Linking variables between the planning, scheduling, and control levels. Each planning problem feeds the planning targets for its 12 timesteps to separate scheduling problems to be solved in parallel. Each of the 12 scheduling problems, in deciding which of the 5 productions to schedule on the 2 machine at the 7 events, obtains the optimal batch processing time and cost as a function of the batch size targets for each of the 70 possible batch assignments. The optimal scheduling and control finally feed back into the planning. * 4 Increasing levels of solution accuracy evaluation. The upper evaluation only evaluates the planning objective and constraints in a simulation. The bi evaluation solves 12 scheduling problems in parallel in each evaluation to account for the scheduling cost in the objective. The approximate tri evaluation embeds control surrogates into the scheduling to account for approximate control cost in the objective. The tri evaluation solves scheduling and control levels in sequence to emulate hierarchical decision-making and adds the'real' hierarchical scheduling and control costs to the planning objective. * 5 The three proposed methodologies: The first methodology uses DFO to optimize any of the solution metrics in (8b-8d) directly by considering the planning simulation and scheduling-control optimization as a black-box. The second methodology uses hyperparameter optimization to find a surrogate architecture that, trained on one of the solution metrics in (8b-8d), leads to the best solution accuracy-time trade-off after embedding into the planning. The third methodology combines both workflows in using DFO to optimize the solutions obtained from the surrogate approach. * 6 Predicted and actual optimal batch processing time and energy requirements corresponding to 10 uniformly sampled training and test batch targets. Solution accuracy and time for the planning-only instance, the DFO approach on the bi metric, and the DFO approach on the approx tri metric * 8 Three solution accuracy and time Pareto solutions each of surrogates trained on the bi and approx tri metrics with the two numbers in the label referring to the number of nodes in the two hidden layers * 9 Box-and-whisker plot of the upper, bi, approx tri, and tri solution metric evaluations associated with all 12 samples of the hyperparameter tuning for the bi, approx tri, and tri surrogates. 
The box represents the quartiles while the whiskers demonstrate variability outside the box plots. Outliers are shown as isolated data points outside of the whiskers. * 10 Increase in solution time and quality in using DFO to optimize the planning-only, DFO, and surrogate solutions on the tri evaluation metric * 11 Solution dashboard and planning profile of the planning-only problem
2301.11953
Fano 4-folds with $b_2>12$ are products of surfaces
Let X be a smooth, complex Fano 4-fold, and rho(X) its Picard number. We show that if rho(X)>12, then X is a product of del Pezzo surfaces. The proof relies on a careful study of divisorial elementary contractions f: X->Y such that the image S of the exceptional divisor is a surface, together with the author's previous work on Fano 4-folds. In particular, given f: X->Y as above, under suitable assumptions we show that S is a smooth del Pezzo surface with -K_S given by the restriction of -K_Y.
Cinzia Casagrande
2023-01-27T19:12:30Z
http://arxiv.org/abs/2301.11953v2
# Fano \(4\)-folds with \(b_{2}>12\) are products of surfaces ###### Abstract. Let \(X\) be a smooth, complex Fano \(4\)-fold, and \(\rho_{X}\) its Picard number. We show that if \(\rho_{X}>12\), then \(X\) is a product of del Pezzo surfaces. The proof relies on a careful study of divisorial elementary contractions \(f\colon X\to Y\) such that \(\dim f(\operatorname{Exc}(f))=2\), together with the author's previous work on Fano \(4\)-folds. In particular, given \(f\colon X\to Y\) as above, under suitable assumptions we show that \(S:=f(\operatorname{Exc}(f))\) is a smooth del Pezzo surface with \(-K_{S}=(-K_{Y})_{|S}\). 2020 _Mathematics Subject Classification._ 14J45,14J35,14E30. ## 1. Introduction Smooth, complex Fano varieties have been classically intensively studied, and have attracted a lot of attention also in the last decades, due to their role in the framework of the Minimal Model Program. The Fano condition is a natural positivity condition of the tangent bundle, and it ensures a rich geometry, from both the points of view of birational geometry and of families of rational curves. It has been known since the \(90\)'s that Fano varieties form a bounded family in each dimension. Del Pezzo surfaces are known classically, and the classification of Fano \(3\)-folds have been in achieved in the \(80\)'s, there are \(105\) families. Starting from dimension \(4\), there are probably too many families to get a complete classification; still we aim to better understand and describe the behavior and properties of these varieties. In this paper we focus on Fano \(4\)-folds \(X\) with "large" Picard number \(\rho_{X}\); let us recall that since \(X\) is Fano, \(\rho_{X}\) is equal to the second Betti number \(b_{2}(X)\). We show the following result. **Theorem 1.1**.: _Let \(X\) be a smooth Fano \(4\)-fold with \(\rho_{X}>12\). Then \(X\cong S_{1}\times S_{2}\), where \(S_{i}\) are del Pezzo surfaces._ To the author's knowledge, all known examples of Fano \(4\)-folds which are not products of surfaces have \(\rho\leq 9\), so that we do not know whether the condition \(\rho>12\) in Th. 1.1 is sharp. We refer the reader to [10, SS6] for an overview of known Fano \(4\)-folds with \(\rho\geq 6\); there are few examples and it is an interesting problem to construct new ones. As \(\rho_{S_{1}\times S_{2}}=\rho_{S_{1}}+\rho_{S_{2}}\), and del Pezzo surfaces have \(\rho\leq 9\), Th. 1.1 implies the following. **Corollary 1.2**.: _Let \(X\) be a smooth Fano \(4\)-fold. Then \(\rho_{X}\leq 18\)._ Let us note that Th. 1.1 and Cor. 1.2 generalize to dimension 4 the analogous result for Fano 3-folds, established by Mori and Mukai in the 80's: **Theorem 1.3** ([16], Th. 1.2).: _Let \(X\) be a smooth Fano 3-fold with \(\rho_{X}>5\). Then \(X\cong S\times\mathbb{P}^{1}\) where \(S\) is a del Pezzo surface. In particular \(\rho_{X}\leq 10\)._ The proof of Th. 1.1 relies on a careful study of _elementary contractions of \(X\) of type \((3,2)\)_, together with the author's previous work on Fano 4-folds. To explain this, let us introduce some notation. Let \(X\) be a Fano 4-fold. A _contraction_ is a surjective morphism \(f\colon X\to Y\), with connected fibers, where \(Y\) is normal and projective; \(f\) is _elementary_ if \(\rho_{X}-\rho_{Y}=1\). As usual, an elementary contraction can be of fiber type, divisorial, or small. We say that an elementary contraction \(f\colon X\to Y\) is _of type \((3,2)\)_ if it is divisorial with \(\dim S=2\), where \(E:=\operatorname{Exc}(f)\) and \(S:=f(E)\subset Y\). 
Such \(f\) can have at most finitely many 2-dimensional fibers; outside the images of these fibers, \(Y\) and \(S\) are smooth, and \(f\) is just the blow-up of the surface \(S\). If \(y_{0}\in S\) is the image of a two-dimensional fiber, then either \(Y\) or \(S\) are singular at \(y_{0}\); these singularities have been described by Andreatta and Wisniewski, see Th. 2.1. In any case, \(Y\) has at most isolated locally factorial and terminal singularities, while \(S\) can be not normal. We denote by \(\mathcal{N}_{1}(X)\) the real vector space of one-cycles with real coefficients, modulo numerical equivalence; we have \(\dim\mathcal{N}_{1}(X)=\rho_{X}\). For any closed subset \(Z\subset X\), we set \[\mathcal{N}_{1}(Z,X):=\iota_{*}(\mathcal{N}_{1}(Z))\subset\mathcal{N}_{1}(X)\] where \(\iota\colon Z\hookrightarrow X\) is the inclusion, so that \(\mathcal{N}_{1}(Z,X)\) is the subspace of \(\mathcal{N}_{1}(X)\) spanned by classes of curves in \(Z\), and \(\dim\mathcal{N}_{1}(Z,X)\leq\rho_{Z}\). We study an elementary contraction \(f\colon X\to Y\) of type \((3,2)\) under the hypothesis that: \[\dim\mathcal{N}_{1}(E,X)\geq 4.\] In particular this implies that \(Y\) is Fano too (Lemma 2.2). We would like to compare \((-K_{Y})_{|S}\) to \(-K_{S}\), but since \(S\) may be singular, we consider the minimal resolution of singularities \(\mu\colon S^{\prime}\to S\) and set \(L:=\mu^{*}((-K_{Y})_{|S})\), a nef and big divisor class on \(S^{\prime}\). We show that \(K_{S^{\prime}}+L\) is semiample (Lemma 3.1). Then our strategy is to look for curves in \(S^{\prime}\) on which \(K_{S^{\prime}}+L\) is trivial, using other elementary contractions of \(X\) of type \((3,2)\) whose exceptional divisor intersects \(E\) in a suitable way. Hence let us assume that \(X\) has another elementary contraction \(g_{1}\) of type \((3,2)\) whose exceptional divisor \(E_{1}\) intersects \(E\), and such that \(E\cdot\Gamma_{1}=0\) for a curve \(\Gamma_{1}\) contracted by \(g_{1}\). Set \(D:=f(E_{1})\subset Y\). We show that an irreducible component \(C_{1}\) of \(D\cap S\) is a \((-1)\)-curve contained in the smooth locus \(S_{\text{reg}}\), and such that \(-K_{Y}\cdot C_{1}=1\) (Lemma 3.2, see Fig. 3.1 on p. 7). If \(C^{\prime}_{1}\subset S^{\prime}\) is the transform of \(C_{1}\), we have \((K_{S^{\prime}}+L)\cdot C^{\prime}_{1}=0\). Finally let us assume that \(X\) has three elementary contractions \(g_{1},g_{2},g_{3}\), all of type \((3,2)\), satisfying the same assumptions as \(g_{1}\) above. We also assume that \(E_{1}\cdot\Gamma_{2}>0\) and \(E_{1}\cdot\Gamma_{3}>0\), where \(E_{1}=\operatorname{Exc}(g_{1})\) and \(\Gamma_{2},\Gamma_{3}\) are curves contracted by \(g_{2},g_{3}\) respectively. Then we show that \(S\) is a smooth del Pezzo surface with \(-K_{S}=(-K_{Y})_{|S}\) (Th. 3.6 and Prop. 3.10); let us give an overview of the proof. The previous construction yields three distinct \((-1)\)-curves \(C_{1}^{\prime},C_{2}^{\prime},C_{3}^{\prime}\subset S^{\prime}\) such that \((K_{S^{\prime}}+L)\cdot C_{i}^{\prime}=0\) and \(C_{1}^{\prime}\) intersects both \(C_{2}^{\prime}\) and \(C_{3}^{\prime}\). This shows that the contraction of \(S^{\prime}\) given by \(K_{S^{\prime}}+L\) cannot be birational, namely \(K_{S^{\prime}}+L\) is not big. We also rule out the possibility of a contraction onto a curve, and conclude that \(K_{S^{\prime}}+L\equiv 0\). 
Finally we show that \(\omega_{S}\cong\mathcal{O}_{Y}(K_{Y})_{|S}\), where \(\omega_{S}\) is the dualizing sheaf of \(S\), and conclude that \(S\) is smooth and del Pezzo. We believe that these results can be useful in the study of Fano 4-folds besides their use in the present work. It would be interesting to generalize this technique to higher dimensions. Let us now explain how we use these results to prove Th. 1.1. We define the _Lefschetz defect_ of \(X\) as: \[\delta_{X}:=\max\bigl{\{}\operatorname{codim}\mathcal{N}_{1}(D,X)\,|\,D \subset X\text{ a prime divisor}\bigr{\}}.\] This invariant, introduced in [10], measures the difference between the Picard number of \(X\) and that of its prime divisors; we refer the reader to [10] for a survey on \(\delta_{X}\). Fano 4-folds with \(\delta_{X}\geq 3\) are classified, as follows. **Theorem 1.4** ([10], Th. 3.3).: _Let \(X\) be a smooth Fano \(4\)-fold. If \(\delta_{X}\geq 4\), then \(X\cong S_{1}\times S_{2}\) where \(S_{i}\) are del Pezzo surfaces, and \(\delta_{X}=\max_{i}\rho_{S_{i}}-1\)._ **Theorem 1.5** ([11], Prop. 1.5).: _Smooth Fano \(4\)-folds with \(\delta_{X}=3\) are classified. They have \(5\leq\rho_{X}\leq 8\), and if \(\rho_{X}\in\{7,8\}\) then \(X\) is a product of surfaces._ Therefore in our study of Fano 4-folds we can assume that \(\delta_{X}\leq 2\), that is, \(\operatorname{codim}\mathcal{N}_{1}(D,X)\leq 2\) for every prime divisor \(D\subset X\). To prove that \(\rho_{X}\leq 12\), we look for a prime divisor \(D\subset X\) with \(\dim\mathcal{N}_{1}(D,X)\leq 10\). To produce such a divisor, we look at contractions of \(X\). If \(X\) has an elementary contraction of fiber type, or a divisorial elementary contraction \(f\colon X\to Y\) with \(\dim f(\operatorname{Exc}(f))\leq 1\), it is not difficult to find a prime divisor \(D\subset X\) such that \(\dim\mathcal{N}_{1}(D,X)\leq 3\), hence \(\rho_{X}\leq 5\) (Lemmas 2.5 and 2.6). The case where \(X\) has a small elementary contraction is much harder and is treated in [10], where the following result is proven. **Theorem 1.6** ([10], Th. 1.1).: _Let \(X\) be a smooth Fano 4-fold. If \(X\) has a small elementary contraction, then \(\rho_{X}\leq 12\)._ We are left with the case where every elementary contraction \(f\colon X\to Y\) is of type \((3,2)\). In this case we show (Th. 4.1) that, if \(\rho_{X}\geq 8\), we can apply our previous study of elementary contractions of type \((3,2)\), so that if \(E:=\operatorname{Exc}(f)\) and \(S:=f(E)\subset Y\), then \(S\) is a smooth del Pezzo surface. This implies that \(\dim\mathcal{N}_{1}(S,Y)\leq\rho_{S}\leq 9\), \(\dim\mathcal{N}_{1}(E,X)=\dim\mathcal{N}_{1}(S,Y)+1\leq 10\), and finally that \(\rho_{X}\leq 12\), proving Th. 1.1. The structure of the paper is as follows. In SS2 we gather some preliminary results. Then in SS3 we develop our study of elementary contractions of type \((3,2)\), while in SS4 we prove Th. 1.1. ### Notation We work over the field of complex numbers. Let \(X\) be a projective variety. We denote by \(\mathcal{N}_{1}(X)\) (respectively, \(\mathcal{N}^{1}(X)\)) the real vector space of one-cycles (respectively, Cartier divisors) with real coefficients, modulo numerical equivalence; \(\dim\mathcal{N}_{1}(X)=\dim\mathcal{N}^{1}(X)=\rho_{X}\) is the Picard number of \(X\). Let \(C\) be a one-cycle of \(X\), and \(D\) a Cartier divisor. We denote by \([C]\) (respectively, \([D]\)) the numerical equivalence class in \(\mathcal{N}_{1}(X)\) (respectively, \(\mathcal{N}^{1}(X)\)). 
We also denote by \(D^{\perp}\subset\mathcal{N}_{1}(X)\) the orthogonal hyperplane to the class \([D]\). The symbol \(\equiv\) stands for numerical equivalence (for both one-cycles and divisors), and \(\sim\) stands for linear equivalence of divisors. \(\operatorname{NE}(X)\subset\mathcal{N}_{1}(X)\) is the convex cone generated by classes of effective curves, and \(\operatorname{NE}(X)\) is its closure. An _extremal ray_\(R\) is a one-dimensional face of \(\operatorname{NE}(X)\). If \(D\) is a Cartier divisor in \(X\), we write \(D\cdot R>0\), \(D\cdot R=0\), and so on, if \(D\cdot\gamma>0\), \(D\cdot\gamma=0\), and so on, for a non-zero class \(\gamma\in R\). We say that \(R\) is \(K\)-negative if \(K_{X}\cdot R<0\). Suppose that \(X\) has terminal and locally factorial singularities, and is Fano. Then \(\operatorname{NE}(X)\) is a convex polyhedral cone. Given a contraction \(f\colon X\to Y\), we denote by \(\operatorname{NE}(f)\) the convex subcone of \(\operatorname{NE}(X)\) generated by classes of curves contracted by \(f\); we recall that there is a bijection between contractions of \(X\) and faces of \(\operatorname{NE}(X)\), given by \(f\mapsto\operatorname{NE}(f)\). Moreover \(\dim\operatorname{NE}(f)=\rho_{X}-\rho_{Y}\), in particular \(f\) is elementary if and only if \(\operatorname{NE}(f)\) is an extremal ray. When \(\dim X=4\), we say that an extremal ray \(R\) is of type \((3,2)\) if the associated elementary contraction \(f\) is of type \((3,2)\), namely if \(f\) is divisorial with \(\dim f(\operatorname{Exc}(f))=2\). We also set \(E_{R}:=\operatorname{Exc}(f)\) and denote by \(C_{R}\subset E_{R}\) a general fiber of \(f_{|E_{R}}\); note that \(E_{R}\cdot C_{R}=-1\). We will also consider the cones \(\operatorname{Eff}(X)\subset\mathcal{N}^{1}(X)\) of classes of effective divisors, and \(\operatorname{mov}(X)\subset\mathcal{N}_{1}(X)\) of classes of curves moving in a family covering \(X\). Since \(X\) is Fano, both cones are polyhedral; we have the duality relation \(\operatorname{Eff}(X)=\operatorname{mov}(X)^{\vee}\). ## 2. Preliminaries In this section we gather some preliminary results that will be used in the sequel. Andreatta and Wisniewski have classified the possible \(2\)-dimensional fibers of an elementary contraction of type \((3,2)\) of a smooth Fano \(4\)-fold. In doing this, they also describe precisely the singularities both of the target, and of the image of the exceptional divisor, as follows. **Theorem 2.1** ([1], Theorem on p. 256).: _Let \(X\) be a smooth Fano \(4\)-fold and \(f\colon X\to Y\) an elementary contraction of type \((3,2)\). Set \(S:=f(\operatorname{Exc}(f))\)._ _Then \(f\) can have at most finitely many \(2\)-dimensional fibers. Outside the images of these fibers, \(Y\) and \(S\) are smooth, and \(f\) is the blow-up of \(S\)._ _Let \(y_{0}\in S\subset Y\) be the image of a \(2\)-dimensional fiber; then one of the following holds:_ 1. \(S\) _is smooth at_ \(y_{0}\)_, while_ \(Y\) _has an ordinary double point at_ \(y_{0}\)_, locally factorial and terminal;_ 2. \(Y\) _is smooth at_ \(y_{0}\)_, while_ \(S\) _is singular at_ \(y_{0}\)_. More precisely either_ \(S\) _is not normal at_ \(y_{0}\)_, or it has a singularity of type_ \(\frac{1}{3}(1,1)\) _at_ \(y_{0}\) _(as the cone over a twisted cubic)._ _In particular the singularities of \(Y\) are at most isolated, locally factorial, and terminal._ Now we give some simple preliminary results on extremal rays of type \((3,2)\). 
**Lemma 2.2**.: _Let \(X\) be a smooth Fano \(4\)-fold and \(f\colon X\to Y\) an elementary contraction of type \((3,2)\); set \(E:=\operatorname{Exc}(f)\). If \(\dim\mathcal{N}_{1}(E,X)\geq 4\), then \(E\cdot R\geq 0\) for every extremal ray \(R\) of \(X\) different from \(\operatorname{NE}(f)\), and \(Y\) is Fano._ Proof.: It follows from [12, Lemma 2.16 and Rem. 2.17] that \(\operatorname{NE}(f)\) is the unique extremal ray of \(X\) having negative intersection with \(E\), \(-K_{X}+E=f^{*}(-K_{Y})\) is nef, and \((-K_{X}+E)^{\perp}\cap\operatorname{NE}(X)=\operatorname{NE}(f)\), so that \(-K_{Y}\) is ample. **Lemma 2.3**.: _Let \(X\) be a smooth Fano \(4\)-fold and \(R_{1},R_{2}\) extremal rays of \(X\) of type \((3,2)\) such that \(\dim\mathcal{N}_{1}(E_{R_{1}},X)\geq 4\) and \(E_{R_{1}}\cdot R_{2}=0\)._ _Then \(E_{R_{2}}\cdot R_{1}=0\) and \(R_{1}+R_{2}\) is a face of \(\operatorname{NE}(X)\) whose associated contraction is birational, with exceptional locus \(E_{R_{1}}\cup E_{R_{2}}\)._ Proof.: Let \(H\) be a nef divisor on \(X\) such that \(H^{\perp}\cap\operatorname{NE}(X)=R_{2}\), and set \(H^{\prime}:=H+(H\cdot C_{R_{1}})E_{R_{1}}\). Then \(H^{\prime}\cdot C_{R_{1}}=H^{\prime}\cdot C_{R_{2}}=0\), and if \(R_{3}\) is an extremal ray of \(\operatorname{NE}(X)\) different from \(R_{1}\) and \(R_{2}\), we have \(E_{R_{1}}\cdot R_{3}\geq 0\) by Lemma 2.2, hence \(H^{\prime}\cdot R_{3}>0\). Therefore \(H^{\prime}\) is nef and \((H^{\prime})^{\perp}\cap\operatorname{NE}(X)=R_{1}+R_{2}\) is a face of \(\operatorname{NE}(X)\). If \(\Gamma\subset X\) is an irreducible curve with \([\Gamma]\in R_{1}+R_{2}\), then \(H^{\prime}\cdot\Gamma=0\), so that either \(E_{R_{1}}\cdot\Gamma<0\) and \(\Gamma\subset E_{R_{1}}\), or \(H\cdot\Gamma=0\), \([\Gamma]\in R_{2}\) and \(\Gamma\subset E_{R_{2}}\). This shows that the contraction of \(R_{1}+R_{2}\) is birational with exceptional locus \(E_{R_{1}}\cup E_{R_{2}}\). Finally we have \(E_{R_{2}}\cdot R_{1}=0\) by [12, Lemma 2.2(b) and its proof]. **Lemma 2.4**.: _Let \(X\) be a smooth Fano \(4\)-fold and \(R_{1},R_{2}\) distinct extremal rays of \(X\) of type \((3,2)\) with \(\dim\mathcal{N}_{1}(E_{R_{i}},X)\geq 4\) for \(i=1,2\). If there exists a birational contraction \(g\colon X\to Z\) with \(R_{1},R_{2}\subset\operatorname{NE}(g)\), then \(E_{R_{1}}\cdot R_{2}=E_{R_{2}}\cdot R_{1}=0\)._ Proof.: We note first of all that \(E_{R_{i}}\cdot R_{j}\geq 0\) for \(i\neq j\) by Lemma 2.2. Suppose that \(E_{R_{1}}\cdot R_{2}>0\). Then \(E_{R_{1}}\cdot(C_{R_{1}}+C_{R_{2}})=E_{R_{1}}\cdot C_{R_{2}}-1\geq 0\). Moreover \(E_{R_{2}}\cdot R_{1}>0\) by Lemma 2.3, so that \(E_{R_{2}}\cdot(C_{R_{1}}+C_{R_{2}})\geq 0\). On the other hand for every prime divisor \(D\) different from \(E_{R_{1}},E_{R_{2}}\) we have \(D\cdot(C_{R_{1}}+C_{R_{2}})\geq 0\), therefore \([C_{R_{1}}+C_{R_{2}}]\in\operatorname{Eff}(X)^{\vee}=\operatorname{mov}(X)\). Since \([C_{R_{1}}+C_{R_{2}}]\in\operatorname{NE}(g)\), \(g\) should be of fiber type, a contradiction. **Lemma 2.5**.: _Let \(X\) be a smooth Fano \(4\)-fold with \(\delta_{X}\leq 2\), and \(g\colon X\to Z\) a contraction of fiber type. Then \(\rho_{Z}\leq 4\)._ Proof.: This follows from [10]; for the reader's convenience we report the proof. If \(\dim Z\leq 1\), then \(\rho_{Z}\leq 1\). If \(Z\) is a surface, take any prime divisor \(D\subset X\) such that \(g(D)\subsetneq Z\), so that \(\mathcal{N}_{1}(g(D),Z)=\{0\}\) if \(g(D)=\{pt\}\), and \(\mathcal{N}_{1}(g(D),Z)=\mathbb{R}[g(D)]\) if \(g(D)\) is a curve. 
Consider the pushforward of one-cycles \(g_{*}\colon\mathcal{N}_{1}(X)\to\mathcal{N}_{1}(Z)\), and note that \(\dim\ker g_{*}=\rho_{X}-\rho_{Z}\). We have \(g_{*}(\mathcal{N}_{1}(D,X))=\mathcal{N}_{1}(g(D),Z)\) and \(\dim\mathcal{N}_{1}(g(D),Z)\leq 1\), thus \(\operatorname{codim}\mathcal{N}_{1}(D,X)\geq\rho_{Z}-1\), and \(\delta_{X}\leq 2\) yields \(\rho_{Z}\leq 3\). If \(\dim Z=3\), then as in [10, proof of Cor. 1.6] one shows that there exists a prime divisor \(D\subset X\) such that \(\dim\mathcal{N}_{1}(g(D),Z)\leq 2\), and reasoning as before we get \(\rho_{Z}\leq 4\). **Lemma 2.6** ([10], Rem. 2.17(1)).: _Let \(X\) be a smooth Fano \(4\)-fold. If \(X\) has a divisorial elementary contraction not of type \((3,2)\), then \(\rho_{X}\leq 5\)._ ## 3. Showing that \(S\) is a del Pezzo surface In this section we study elementary contractions of type \((3,2)\) of a Fano \(4\)-fold. We focus on the surface \(S\) which is the image of the exceptional divisor; as explained in the Introduction, our goal is to show that under suitable assumptions, \(S\) is a smooth del Pezzo surface. Recall that \(S\) has isolated singularities by Th. 2.1. **Lemma 3.1**.: _Let \(X\) be a smooth Fano \(4\)-fold and \(f\colon X\to Y\) an elementary contraction of type \((3,2)\). Set \(E:=\operatorname{Exc}(f)\) and \(S:=f(E)\), and assume that \(\dim\mathcal{N}_{1}(E,X)\geq 4\)._ _Let \(\mu\colon S^{\prime}\to S\) be the minimal resolution of singularities, and set \(L:=\mu^{*}((-K_{Y})_{|S})\). Then \(K_{S^{\prime}}+L\) is semiample._ Proof.: Note that \(-K_{Y}\) is Cartier by Th. 2.1, and ample by Lemma 2.2, so that \(L\) is nef and big on \(S^{\prime}\), and for every irreducible curve \(\Gamma\subset S^{\prime}\), we have \(L\cdot\Gamma=0\) if and only if \(\Gamma\) is \(\mu\)-exceptional. Consider the pushforward of one-cycles \(f_{*}\colon\mathcal{N}_{1}(X)\to\mathcal{N}_{1}(Y)\). Then \(f_{*}(\mathcal{N}_{1}(E,X))=\mathcal{N}_{1}(S,Y)\), therefore \(\rho_{S^{\prime}}\geq\rho_{S}\geq\dim\mathcal{N}_{1}(S,Y)\geq 3\). Let \(R\) be a \(K_{S^{\prime}}\)-negative extremal ray of \(\overline{\operatorname{NE}}(S^{\prime})\). The contraction associated to \(R\) can be onto a point (if \(S^{\prime}\cong\mathbb{P}^{2}\)), onto a curve (so that \(\rho_{S^{\prime}}=2\)), or the blow-up of a smooth point (see for instance [12, Th. 1-4-8]). Since \(\rho_{S^{\prime}}>2\), \(R\) is generated by the class of a \((-1)\)-curve \(\Gamma\), that cannot be \(\mu\)-exceptional, because \(\mu\) is minimal. Then \(L\cdot\Gamma>0\) and \((K_{S^{\prime}}+L)\cdot\Gamma=L\cdot\Gamma-1\geq 0\). Moreover, if \(\gamma\in\overline{\operatorname{NE}}(S^{\prime})_{K_{S^{\prime}}\geq 0}\), then \((K_{S^{\prime}}+L)\cdot\gamma=K_{S^{\prime}}\cdot\gamma+L\cdot\gamma\geq 0\). By the Cone Theorem, we conclude that \(K_{S^{\prime}}+L\) is nef on \(S^{\prime}\), and also semiample by the Base-Point-Free Theorem. **Lemma 3.2**.: _Let \(X\) be a smooth Fano \(4\)-fold and \(f\colon X\to Y\) an elementary contraction of type \((3,2)\). Set \(E:=\operatorname{Exc}(f)\) and \(S:=f(E)\), and assume that \(\dim\mathcal{N}_{1}(E,X)\geq 4\)._ 
Let \(\mu\colon S^{\prime}\to S\) be the minimal resolution of singularities, and set \(L:=\mu^{*}((-K_{Y})_{|S})\)._ _Suppose that \(X\) has an extremal ray \(R_{1}\) of type \((3,2)\) such that:_ \[E\cdot R_{1}=0\quad\text{and}\quad E\cap E_{R_{1}}\neq\emptyset.\] _Set \(D:=f(E_{R_{1}})\subset Y\)._ _Then \(D_{|S}=C_{1}+\dots+C_{r}\) where \(C_{i}\) are pairwise disjoint \((-1)\)-curves contained in \(S_{\text{reg}}\), \(E_{R_{1}}=f^{*}(D)\), and \(f_{*}(C_{R_{1}})\equiv_{Y}C_{i}\). Moreover if \(C_{i}^{\prime}\subset S^{\prime}\) is the transform of \(C_{i}\), we have \((K_{S^{\prime}}+L)\cdot C_{i}^{\prime}=0\) for every \(i=1,\dots,r\)._ Proof.: By Lemma 2.3 we have \(E_{R_{1}}\cdot\operatorname{NE}(f)=0\) and \(\operatorname{NE}(f)+R_{1}\) is a face of \(\operatorname{NE}(X)\), whose associated contraction \(h\colon X\to Z\) is birational with \(\operatorname{Exc}(h)=E\cup E_{R_{1}}\). We have a diagram (see Fig. 3.1): \[X\xrightarrow{\;f\;}Y\xrightarrow{\;g\;}Z,\qquad h=g\circ f, \tag{3.4}\] where \(g\) is an elementary, \(K\)-negative, divisorial contraction, with \(\operatorname{Exc}(g)=D\) (recall that \(Y\) is locally factorial by Th. 2.1, and Fano by Lemma 2.2). Figure 3.1. The varieties in Lemma 3.2. Since \(E_{R_{1}}\cdot\operatorname{NE}(f)=E\cdot R_{1}=0\), both \(h(E)\) and \(h(E_{R_{1}})\) are surfaces in \(Z\), and the general fiber of \(h\) over these surfaces is one-dimensional. Moreover \(h(E)\cap h(E_{R_{1}})\) is finite, and the connected components of \(E\cap E_{R_{1}}\) are \(2\)-dimensional fibers of \(h\) over these points. Using the classification of the possible \(2\)-dimensional fibers of \(h\) in [1], as in [1, Lemma 4.15] we see that every connected component \(T_{i}\) of \(E\cap E_{R_{1}}\) (which is non-empty by assumption) is isomorphic to \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) with normal bundle \(\mathcal{O}(-1,0)\oplus\mathcal{O}(0,-1)\), for \(i=1,\ldots,r\). Set \(C_{i}:=f(T_{i})\), so that \(D\cap S=f(E\cap E_{R_{1}})=f(\cup_{i}T_{i})=\cup_{i}C_{i}\). Then \(C_{i}\cong\mathbb{P}^{1}\), \(C_{i}\cap C_{j}=\emptyset\) if \(i\neq j\), and \(f\) has fibers of dimension one over \(C_{i}\), therefore \(C_{i}\subset S_{\text{reg}}\) and \(C_{i}\subset Y_{\text{reg}}\) by Th. 2.1. Moreover \(g(D)=h(E_{R_{1}})\) is a surface, namely \(g\) is of type \((3,2)\), and \(C_{i}\) is a one-dimensional fiber of \(g\) contained in \(Y_{\text{reg}}\), hence \(K_{Y}\cdot C_{i}=D\cdot C_{i}=-1\). We also have \(E_{R_{1}}=f^{*}(D)\) and \(f_{*}(C_{R_{1}})\equiv_{Y}C_{i}\). Since \(C_{i}\subset S_{\text{reg}}\), it is a Cartier divisor in \(S\), and we can write \(D_{|S}=m_{1}C_{1}+\cdots+m_{r}C_{r}\) with \(m_{i}\in\mathbb{Z}_{>0}\) for every \(i=1,\ldots,r\). In \(S\) we have \(C_{i}\cdot C_{j}=0\) for \(i\neq j\), hence for \(i\in\{1,\ldots,r\}\) we get \[-1=D\cdot C_{i}=(m_{1}C_{1}+\cdots+m_{r}C_{r})\cdot C_{i}=m_{i}C_{i}^{2}\] and we conclude that \(m_{i}=1\) and \(C_{i}^{2}=-1\), so that \(C_{i}\) is a \((-1)\)-curve in \(S\). Finally \(-K_{S}\cdot C_{i}=-K_{Y}\cdot C_{i}=1\), hence if \(C_{i}^{\prime}\subset S^{\prime}\) is the transform of \(C_{i}\), we have \((K_{S^{\prime}}+L)\cdot C_{i}^{\prime}=0\). **Corollary 3.5**.: _Let \(X\) be a smooth Fano \(4\)-fold and \(f\colon X\to Y\) an elementary contraction of type \((3,2)\). Set \(E:=\operatorname{Exc}(f)\), and assume that \(\dim\mathcal{N}_{1}(E,X)\geq 4\). 
Suppose that \(X\) has an extremal ray \(R_{1}\) of type \((3,2)\) such that \(E\cdot R_{1}=0\)._ _Then \(R_{1}^{\prime}:=f_{*}(R_{1})\) is an extremal ray of \(Y\) of type \((3,2)\), and \(E_{R_{1}}=f^{*}(E_{R_{1}^{\prime}})\)._ Proof.: If \(E\cap E_{R_{1}}\neq\emptyset\), we are in the setting of Lemma 3.2; consider the elementary contraction \(g\colon Y\to Z\) as in (3.4). Then \(\operatorname{NE}(g)=f_{*}(R_{1})=R_{1}^{\prime}\) is an extremal ray of \(Y\) of type \((3,2)\), and \(f^{*}(E_{R_{1}^{\prime}})=E_{R_{1}}\). If \(E\cap E_{R_{1}}=\emptyset\), then we still have a diagram as (3.4), where \(g\) is locally isomorphic to the contraction of \(R_{1}\) in \(X\), and the statement is clear. **Theorem 3.6**.: _Let \(X\) be a smooth Fano \(4\)-fold and \(f\colon X\to Y\) an elementary contraction of type \((3,2)\). Set \(E:=\operatorname{Exc}(f)\) and \(S:=f(E)\), and assume that \(\dim\mathcal{N}_{1}(E,X)\geq 4\)._ _Suppose that \(X\) has two extremal rays \(R_{1},R_{2}\) of type \((3,2)\) such that:_ \[E_{R_{1}}\cdot R_{2}>0\text{ and }E\cdot R_{i}=0,\ E\cap E_{R_{i}}\neq\emptyset \text{ for }i=1,2.\] _Then one of the following holds:_ 1. \(S\) _is a smooth del Pezzo surface and_ \(-K_{S}=(-K_{Y})_{|S}\)_;_ 2. \(E_{R_{1}}\cdot C_{R_{2}}=E_{R_{2}}\cdot C_{R_{1}}=1\)_._ Proof.: We apply Lemma 3.2 to \(f,R_{1}\) and to \(f,R_{2}\). Write \(f(E_{R_{1}})_{|S}=C_{1}+\cdots+C_{r}\), and let \(\Gamma_{2}\) be an irreducible component of \(f(E_{R_{2}})_{|S}\), so that \(C_{1},\ldots,C_{r},\Gamma_{2}\) are \((-1)\)-curves contained in \(S_{\text{reg}}\), and \(\Gamma_{2}\equiv f_{*}(C_{R_{2}})\). Then \[0<E_{R_{1}}\cdot C_{R_{2}}=f^{*}(f(E_{R_{1}}))\cdot C_{R_{2}}=f(E_{R_{1}})\cdot \Gamma_{2}=(C_{1}+\cdots+C_{r})\cdot\Gamma_{2}, \tag{3.7}\] hence \(C_{i}\cdot\Gamma_{2}>0\) for some \(i\), say \(i=1\). Let \(\mu\colon S^{\prime}\to S\) be the minimal resolution of singularities, and set \(L:=\mu^{*}((-K_{Y})_{|S})\). Moreover let \(\Gamma_{2}^{\prime}\) and \(C_{1}^{\prime}\) in \(S^{\prime}\) be the transforms of \(\Gamma_{2}\) and \(C_{1}\) respectively; then \(\Gamma^{\prime}_{2}\) and \(C^{\prime}_{1}\) are disjoint from the \(\mu\)-exceptional locus, are \((-1)\)-curves in \(S^{\prime}\), \((K_{S^{\prime}}+L)\cdot C^{\prime}_{1}=(K_{S^{\prime}}+L)\cdot\Gamma^{\prime}_{2}=0\), and \(C^{\prime}_{1}\cdot\Gamma^{\prime}_{2}>0\). Recall that \(K_{S^{\prime}}+L\) is semiample by Lemma 3.1. In particular, the face \((K_{S^{\prime}}+L)^{\perp}\cap\overline{\operatorname{NE}}(S^{\prime})\) contains the classes of two distinct \((-1)\)-curves which meet. This means that the associated contraction cannot be birational, and we have two possibilities: either \(K_{S^{\prime}}+L\equiv 0\), or the contraction associated to \(K_{S^{\prime}}+L\) is onto a curve. We show that these two cases yield respectively \((i)\) and \((ii)\). Suppose first that \(K_{S^{\prime}}+L\equiv 0\); in particular \(-K_{S^{\prime}}\) is nef and big, namely \(S^{\prime}\) is a weak del Pezzo surface. Set for simplicity \(\mathcal{F}:=\mathcal{O}_{Y}(K_{Y})_{|S}\), invertible sheaf on \(S\), and let \(\omega_{S}\) be the dualizing sheaf of \(S\). We have \(K_{S^{\prime}}\equiv\mu^{*}(\mathcal{F})\), and since \(S^{\prime}\) is rational, we also have \(\mathcal{O}_{S^{\prime}}(K_{S^{\prime}})\cong\mu^{*}(\mathcal{F})\). By restricting to the open subset \(\mu^{-1}(S_{\text{reg}})\), we conclude that \((\omega_{S})_{|S_{\text{reg}}}\cong\mathcal{F}_{|S_{\text{reg}}}\). Now we use the following. 
**Lemma 3.8**.: _Let \(S\) be a reduced and irreducible projective surface with isolated singularities, and \(\omega_{S}\) its dualizing sheaf. If there exists an invertible sheaf \(\mathcal{F}\) on \(S\) such that \((\omega_{S})_{|S_{\text{reg}}}\cong\mathcal{F}_{|S_{\text{reg}}}\), then \(S\) is normal and \(\omega_{S}\cong\mathcal{F}\)._ This should be well-known to experts, we include a proof for lack of references. We postpone the proof of Lemma 3.8 and carry on with the proof of Th. 3.6. By Lemma 3.8 we have that \(S\) is normal and \(\omega_{S}\cong\mathcal{F}\), in particular \(\omega_{S}\) is locally free. If \(y_{0}\) is a singular point of \(S\), then by Th. 2.1\(y_{0}\) is a singularity of type \(\frac{1}{3}(1,1)\), but this contradicts the fact that \(\omega_{S}\) is locally free. We conclude that \(S\) is smooth, and finally that \(-K_{S}=(-K_{Y})_{|S}\) is ample, so that \(S\) is a del Pezzo surface, and we have \((i)\). Assume now that \(K_{S^{\prime}}+L\) yields a contraction \(g\colon S^{\prime}\to B\) onto a smooth curve. Let \(F\subset S^{\prime}\) be a general fiber \(F\) of \(g\), so that \(-K_{S^{\prime}}\cdot F=L\cdot F\). Since \(F\) is not \(\mu\)-exceptional, we have \(L\cdot F>0\) and hence \(-K_{S^{\prime}}\cdot F>0\). Thus there is a non-empty open subset \(B_{0}\subseteq B\) such that \((-K_{S^{\prime}})_{|g^{-1}(B_{0})}\) is \(g\)-ample, therefore \(g_{|g^{-1}(B_{0})}\colon g^{-1}(B_{0})\to B_{0}\) is a conic bundle, \(F\cong\mathbb{P}^{1}\), and \(-K_{S^{\prime}}\cdot F=2\). The curves \(C^{\prime}_{1}\) and \(\Gamma^{\prime}_{2}\) are components of the same fiber \(F_{0}\) of \(g\), and \(-K_{S^{\prime}}\cdot F_{0}=2=-K_{S^{\prime}}\cdot(C^{\prime}_{1}+\Gamma^{ \prime}_{2})\). For any irreducible curve \(C_{0}\) contained in \(F_{0}\) we have \(-K_{S^{\prime}}\cdot C_{0}=L\cdot C_{0}\geq 0\), so that if \(C_{0}\) is different from \(C^{\prime}_{1}\) and \(\Gamma^{\prime}_{2}\), we must have \(-K_{S^{\prime}}\cdot C_{0}=L\cdot C_{0}=0\) and \(C_{0}\) is \(\mu\)-exceptional. Thus \(C_{0}\cap(C^{\prime}_{1}\cup\Gamma^{\prime}_{2})=\emptyset\), and since \(F_{0}\) is connected, we conclude that \(F_{0}=C^{\prime}_{1}+\Gamma^{\prime}_{2}\) and \(F_{0}\subset g^{-1}(B_{0})\), hence \(F_{0}\) is isomorphic to a reducible conic. This also shows that \(C^{\prime}_{i}\) for \(i>1\) are contained in different fibers of \(g\), so that \[C_{1}\cdot\Gamma_{2}=\Gamma_{2}\cdot C_{1}=1\quad\text{and}\quad C_{i}\cdot \Gamma_{2}=0\quad\text{for every $i=2,\ldots,r$},\] and finally using (3.7) \[E_{R_{1}}\cdot C_{R_{2}}=(C_{1}+\cdots+C_{r})\cdot\Gamma_{2}=1.\] Similarly we conclude that \(E_{R_{2}}\cdot C_{R_{1}}=1\) **Remark 3.9**.: In the setting of Th. 3.6\((i)\), we cannot conclude that \(Y\) is smooth. A priori \(Y\) could have isolated singularities at some \(y_{0}\in S\); by [1] in this case \(f^{-1}(y_{0})\cong\mathbb{P}^{2}\). Proof of Lemma 3.8.: Recall that \(S\) has isolated singularities. The surface \(S\) is reduced, thus it satisfies condition \((S_{1})\), namely \[\operatorname{depth}\mathcal{O}_{S,y}\geq 1\quad\text{for every $y\in S$}.\] Then by [1, Lemma 1.3] the dualizing sheaf \(\omega_{S}\) satisfies condition \((S_{2})\): \[\operatorname{depth}\omega_{S,y}\geq 2\quad\text{for every $y\in S$},\] where \(\operatorname{depth}\omega_{S,y}\) is the depth of the stalk \(\omega_{S,y}\) as an \(\mathcal{O}_{S,y}\)-module. 
Then, for every open subset \(U\subset S\) such that \(S\smallsetminus U\) is finite, we have \(\omega_{S}=j_{*}((\omega_{S})_{|U})\), where \(j\colon U\hookrightarrow S\) is the inclusion, see [1, Rem. 1.8]. This is analogous to the properties of reflexive sheaves on normal varieties, see [1, Propositions 1.3 and 1.6], and can be proved using local cohomology [10, 11]. Hence we have \(\omega_{S}=j_{*}((\omega_{S})_{|S_{\text{reg}}})\), where \(j\colon S_{\text{reg}}\hookrightarrow S\) is the inclusion. Since \(\mathcal{F}\) is locally free, we get \[\omega_{S}=j_{*}((\omega_{S})_{|S_{\text{reg}}})\cong j_{*}(\mathcal{F}_{|S_{ \text{reg}}})=\mathcal{F},\] in particular \(\omega_{S}\) is an invertible sheaf and for every \(y\in Y\) we have \(\omega_{S,y}\cong\mathcal{O}_{S,y}\) as an \(\mathcal{O}_{S,y}\)-module, thus \(\operatorname{depth}\mathcal{O}_{S,y}=2\). Therefore \(S\) has property \((S_{2})\), and it is normal by Serre's criterion. **Proposition 3.10**.: _Let \(X\) be a smooth Fano \(4\)-fold and \(f\colon X\to Y\) an elementary contraction of type \((3,2)\). Set \(E:=\operatorname{Exc}(f)\) and \(S:=f(E)\), and assume that \(\dim\mathcal{N}_{1}(E,X)\geq 4\)._ _Suppose that \(X\) has three distinct extremal rays \(R_{1},R_{2},R_{3}\) of type \((3,2)\) such that:_ \[E\cdot R_{i}=0,\ E\cap E_{R_{i}}\neq\emptyset\text{ for }i=1,2,3,\text{ and }E_{R_{1}}\cdot R_{j}>0\text{ for }j=2,3.\] _Then \(S\) is a smooth del Pezzo surface and \(-K_{S}=(-K_{Y})_{|S}\)._ Proof.: We apply Th. 3.6 to \(f,R_{1},R_{2}\) and to \(f,R_{1},R_{3}\). Let us keep the same notation as in the proof of Th. 3.6; moreover we denote by \(\Gamma_{3}\) an irreducible component of \(f(E_{R_{3}})_{|S}\) and \(\Gamma_{3}^{\prime}\subset S^{\prime}\) its transform. We show that \(K_{S^{\prime}}+L\equiv 0\), which yields the statement by the proof of Th. 3.6. Otherwise, \(K_{S^{\prime}}+L\) yields a contraction \(g\colon S^{\prime}\to B\) onto a curve, and \(F_{0}=C_{1}^{\prime}+\Gamma_{2}^{\prime}\) is a fiber of \(g\). On the other hand also \(\Gamma_{3}^{\prime}\) is contained in a fiber of \(g\), it is different from \(C_{1}^{\prime}\) and \(\Gamma_{2}^{\prime}\), and \(C_{1}^{\prime}\cdot\Gamma_{3}^{\prime}>0\), which is impossible. **Corollary 3.11**.: _Let \(X\) be a smooth Fano \(4\)-fold with \(\delta_{X}\leq 2\). Suppose that \(X\) has four distinct extremal rays \(R_{0},R_{1},R_{2},R_{3}\) of type \((3,2)\) such that:_ \[E_{R_{0}}\cdot R_{i}=0\text{ for }i=1,2,3,\text{ and }E_{R_{1}}\cdot R_{j}>0 \text{ for }j=2,3.\] _Then one of the following holds:_ 1. \(\dim\mathcal{N}_{1}(E_{R_{i}},X)\leq 3\) _for some_ \(i\in\{0,1,2,3\}\)_, in particular_ \(\rho_{X}\leq 5\) _._ * \(\dim\mathcal{N}_{1}(E_{R_{0}},X)\leq 10\)_, in particular_ \(\rho_{X}\leq 12\)_._ _Moreover if_ \(f\colon X\to Y\) _is the contraction of_ \(R_{0}\) _and_ \(S:=f(E_{R_{0}})\)_, then_ \(S\) _is a smooth del Pezzo surface and_ \(-K_{S}=(-K_{Y})_{|S}\)_._ Proof.: We assume that \(\dim\mathcal{N}_{1}(E_{R_{i}},X)\geq 4\) for every \(i=0,1,2,3\), and prove \((ii)\). We show that \(E_{R_{0}}\cap E_{R_{i}}\neq\emptyset\) for every \(i=1,2,3\). If \(E_{R_{0}}\cap E_{R_{i}}=\emptyset\) for some \(i\in\{1,2,3\}\), then for every curve \(C\subset E_{R_{0}}\) we have \(E_{R_{i}}\cdot C=0\), so that \([C]\in(E_{R_{i}})^{\perp}\), and \(\mathcal{N}_{1}(E_{R_{0}},X)\subset(E_{R_{i}})^{\perp}\). Since the classes \([E_{R_{1}}],[E_{R_{2}}],[E_{R_{3}}]\in\mathcal{N}^{1}(X)\) generate distinct one dimensional faces of \(\operatorname{Eff}(X)\) (see [10, Rem. 
2.19]), they are linearly independent, hence in \(\mathcal{N}_{1}(X)\) we have \[\operatorname{codim}\bigl((E_{R_{1}})^{\perp}\cap(E_{R_{2}})^{\perp}\cap(E_{R_{3}})^{\perp}\bigr)=3.\] On the other hand \(\operatorname{codim}\mathcal{N}_{1}(E_{R_{0}},X)\leq\delta_{X}\leq 2\), thus \(\mathcal{N}_{1}(E_{R_{0}},X)\) cannot be contained in the above intersection. Then \(\mathcal{N}_{1}(E_{R_{0}},X)\not\subset(E_{R_{h}})^{\perp}\) for some \(h\in\{1,2,3\}\), hence \(E_{R_{0}}\cap E_{R_{h}}\neq\emptyset\). In particular, since \(E_{R_{0}}\cdot R_{h}=0\), there exists an irreducible curve \(C\subset E_{R_{0}}\) with \([C]\in R_{h}\). For \(j=2,3\) we have \(E_{R_{1}}\cdot R_{j}>0\), and by Lemma 2.3 also \(E_{R_{j}}\cdot R_{1}>0\). This implies that \(E_{R_{0}}\cap E_{R_{i}}\neq\emptyset\) for every \(i=1,2,3\). For instance say \(h=3\): then \(E_{R_{1}}\cdot R_{3}>0\) yields \(E_{R_{1}}\cap C\neq\emptyset\), hence \(E_{R_{0}}\cap E_{R_{1}}\neq\emptyset\). Then there exists an irreducible curve \(C^{\prime}\subset E_{R_{0}}\) with \([C^{\prime}]\in R_{1}\), and \(E_{R_{2}}\cdot R_{1}>0\) yields \(E_{R_{0}}\cap E_{R_{2}}\neq\emptyset\). Finally we apply Prop. 3.10 to get that \(S\) is a smooth del Pezzo surface and \(-K_{S}=(-K_{Y})_{|S}\). Therefore \(\dim\mathcal{N}_{1}(S,Y)\leq\rho_{S}\leq 9\) and \(\dim\mathcal{N}_{1}(E_{R_{0}},X)=\dim\mathcal{N}_{1}(S,Y)+1\leq 10\), so we get \((ii)\). ## 4. Proof of Th. 1.1 In this section we show how to apply the results of §3 to bound \(\rho_{X}\); the following is our main result. **Theorem 4.1**.: _Let \(X\) be a smooth Fano \(4\)-fold with \(\delta_{X}\leq 2\) and \(\rho_{X}\geq 8\), and with no small elementary contraction._ _Then \(\rho_{X}\leq\delta_{X}+10\leq 12\). Moreover every elementary contraction \(f\colon X\to Y\) is of type \((3,2)\), and \(S:=f(\operatorname{Exc}(f))\subset Y\) is a smooth del Pezzo surface with \(-K_{S}=(-K_{Y})_{|S}\)._ In the proof we will use the following terminology: if \(R_{1}\), \(R_{2}\) are distinct one-dimensional faces of a convex polyhedral cone \(\mathcal{C}\), we say that \(R_{1}\) and \(R_{2}\) are _adjacent_ if \(R_{1}+R_{2}\) is a face of \(\mathcal{C}\). A _facet_ of \(\mathcal{C}\) is a face of codimension one, and \(\mathbb{R}\mathcal{C}\) is the linear span of \(\mathcal{C}\). We will also need the following elementary fact. **Lemma 4.2** ([11], Lemma II.2.6).: _Let \(\mathcal{C}\) be a convex polyhedral cone not containing non-zero linear subspaces, and \(R_{0}\) a one-dimensional face of \(\mathcal{C}\). Let \(R_{1},\ldots,R_{m}\) be the one-dimensional faces of \(\mathcal{C}\) that are adjacent to \(R_{0}\). Then the linear span of \(R_{0},R_{1},\ldots,R_{m}\) is \(\mathbb{R}\mathcal{C}\)._ Proof of Th. 4.1.: Let \(f\colon X\to Y\) be an elementary contraction; note that \(\rho_{Y}=\rho_{X}-1\geq 7\). Then \(f\) is not of fiber type by Lemma 2.5, and not small by assumption, so that \(f\) is divisorial. Moreover \(f\) is of type \((3,2)\) by Lemma 2.6. Set \(E:=\operatorname{Exc}(f)\) and \(S:=f(E)\subset Y\); we have \(\dim\mathcal{N}_{1}(E,X)\geq\rho_{X}-\delta_{X}\geq 6\), and if \(R^{\prime}\neq\operatorname{NE}(f)\) is another extremal ray of \(X\), we have \(E\cdot R^{\prime}\geq 0\) by Lemma 2.2. Moreover, if \(R^{\prime}\) is adjacent to \(\operatorname{NE}(f)\), then \(E\cdot R^{\prime}=0\). Indeed the contraction \(g\colon X\to Z\) of the face \(R^{\prime}+\operatorname{NE}(f)\) cannot be of fiber type by Lemma 2.5, thus it is birational and we apply Lemma 2.4. 
We are going to show that there exists three extremal rays \(R^{\prime}_{1},R^{\prime}_{2},R^{\prime}_{3}\) adjacent to \(\operatorname{NE}(f)\) such that \(E_{R^{\prime}_{1}}\cdot R^{\prime}_{j}>0\) for \(j=2,3\), and then apply Cor. 3.11. Let us consider the cone \(\operatorname{NE}(Y)\). It is a convex polyhedral cone whose extremal rays \(R\) are in bijection with the extremal rays \(R^{\prime}\) of \(X\) adjacent to \(\operatorname{NE}(f)\), via \(R=f_{*}(R^{\prime})\), see [10, SS2.5]. By Cor. 3.5, \(R\) is still of type \((3,2)\), and \(f^{*}(E_{R})=E_{R^{\prime}}\). Thus for every pair \(R_{1},R_{2}\) of distinct extremal rays of \(Y\), with \(R_{i}=f_{*}(R^{\prime}_{i})\) for \(i=1,2\), we have \(E_{R_{1}}\cdot R_{2}=E_{R^{\prime}_{1}}\cdot R^{\prime}_{2}\geq 0\). If \(R_{1}\) and \(R_{2}\) are adjacent, we show that \(E_{R_{1}}\cdot R_{2}=E_{R_{2}}\cdot R_{1}=0\). Indeed consider the contraction \(Y\to Z\) of the face \(R_{1}+R_{2}\) and the composition \(g\colon X\to Z\), which contracts \(R^{\prime}_{1}\) and \(R^{\prime}_{2}\). Again \(g\) cannot be of fiber type by Lemma 2.5, thus it is birational and we apply Lemma 2.4 to get \(E_{R^{\prime}_{1}}\cdot R^{\prime}_{2}=E_{R^{\prime}_{2}}\cdot R^{\prime}_{1}=0\), thus \(E_{R_{1}}\cdot R_{2}=E_{R_{2}}\cdot R_{1}=0\). Fix an extremal ray \(R_{1}\) of \(Y\). We show that there exist two distinct extremal rays \(R_{2},R_{3}\) of \(Y\) with \(E_{R_{1}}\cdot R_{j}>0\) for \(j=2,3\). Indeed since \(E_{R_{1}}\) is an effective divisor, there exists some curve \(C\subset Y\) with \(E_{R_{1}}\cdot C>0\), hence there exists some extremal ray \(R_{2}\) with \(E_{R_{1}}\cdot R_{2}>0\). By contradiction, let us assume that \(E_{R_{1}}\cdot R=0\) for every extremal ray \(R\) of \(Y\) different from \(R_{1},R_{2}\). This means that the cone \(\operatorname{NE}(Y)\) has the extremal ray \(R_{1}\) in the halfspace \(\mathcal{N}_{1}(Y)_{E_{R_{1}}<0}\), the extremal ray \(R_{2}\) in the halfspace \(\mathcal{N}_{1}(Y)_{E_{R_{1}}>0}\), and all other extremal rays in the hyperplane \((E_{R_{1}})^{\perp}\). Fix \(R\neq R_{1},R_{2}\), and let \(\tau\) be a facet of \(\operatorname{NE}(Y)\) containing \(R\) and not \(R_{1}\). Note that \(\mathbb{R}\tau\neq(E_{R_{1}})^{\perp}\), as \(E_{R_{1}}\) and \(-E_{R_{1}}\) are not nef. By Lemma 4.2 the rays adjacent to \(R\) in \(\tau\) cannot be all contained in \((E_{R_{1}})^{\perp}\). We conclude that \(R_{2}\) is adjacent to \(R\), therefore \(E_{R_{2}}\cdot R=0\), namely \(R\subset(E_{R_{2}})^{\perp}\). Summing up, we have shown that every extremal ray \(R\neq R_{1},R_{2}\) of \(Y\) is contained in both \((E_{R_{1}})^{\perp}\) and \((E_{R_{2}})^{\perp}\). On the other hand these rays include all the rays adjacent to \(R_{1}\), so by Lemma 4.2 their linear span must be at least a hyperplane. Therefore \((E_{R_{1}})^{\perp}=(E_{R_{2}})^{\perp}\) and the classes \([E_{R_{1}}],[E_{R_{2}}]\in\mathcal{N}^{1}(Y)\) are proportional, which is impossible, because they generate distinct one dimensional faces of the cone \(\operatorname{Eff}(Y)\) (see [10, Rem. 2.19]). We conclude that there exist two distinct extremal rays \(R_{2},R_{3}\) of \(Y\) with \(E_{R_{1}}\cdot R_{j}>0\) for \(j=2,3\). For \(i=1,2,3\) we have \(R_{i}=f_{*}(R_{i}^{\prime})\) where \(R_{i}^{\prime}\) is an extremal ray of \(X\) adjacent to \(\operatorname{NE}(f)\), so that \(E\cdot R_{i}^{\prime}=0\). Moreover for \(j=2,3\) we have \(E_{R_{1}^{\prime}}\cdot R_{j}^{\prime}=E_{R_{1}}\cdot R_{j}>0\). We apply Cor. 
3.11 to \(\operatorname{NE}(f),R_{1}^{\prime},R_{2}^{\prime},R_{3}^{\prime}\). We have already excluded \((i)\), and \((ii)\) yields the statement. We can finally prove the following more detailed version of Th. 1.1. **Theorem 4.3**.: _Let \(X\) be a smooth Fano \(4\)-fold which is not a product of surfaces._ _Then \(\rho_{X}\leq 12\), and if \(\rho_{X}=12\), then there exist \(X\stackrel{{\varphi}}{{-\!\!\!\!\to}}X^{\prime}\stackrel{{ g}}{{\to}}Z\) where \(\varphi\) is a finite sequence of flips, \(X^{\prime}\) is smooth, \(g\) is a contraction, and \(\dim Z=3\)._ Proof.: Since \(X\) is not a product of surfaces, we have \(\delta_{X}\leq 3\) by Th. 1.4. Moreover \(\delta_{X}=3\) yields \(\rho_{X}\leq 6\) by Th. 1.5, while \(\delta_{X}\leq 2\) yields \(\rho_{X}\leq 12\) by Theorems 1.6 and 4.1. If \(\rho_{X}=12\), the statement follows from [11, Theorems 2.7 and 9.1].
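For a rough numerical comparison with the product case of Th. 1.4 (the surfaces chosen here are only an illustration): if \(X=S_{1}\times S_{2}\) with both \(S_{i}\) del Pezzo surfaces of Picard number \(9\), then \[\rho_{X}=\rho_{S_{1}}+\rho_{S_{2}}=18\qquad\text{and}\qquad\delta_{X}=\max_{i}\rho_{S_{i}}-1=8,\] whereas Th. 4.3 bounds any smooth Fano \(4\)-fold that is not a product of surfaces by \(\rho_{X}\leq 12\).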
2305.12775
Semantic Segmentation of Radar Detections using Convolutions on Point Clouds
For autonomous driving, radar sensors provide superior reliability regardless of weather conditions as well as a significantly high detection range. State-of-the-art algorithms for environment perception based on radar scans build up on deep neural network architectures that can be costly in terms of memory and computation. By processing radar scans as point clouds, however, an increase in efficiency can be achieved in this respect. While Convolutional Neural Networks show superior performance on pattern recognition of regular data formats like images, the concept of convolutions is not yet fully established in the domain of radar detections represented as point clouds. The main challenge in convolving point clouds lies in their irregular and unordered data format and the associated permutation variance. Therefore, we apply a deep-learning based method introduced by PointCNN that weights and permutes grouped radar detections allowing the resulting permutation invariant cluster to be convolved. In addition, we further adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds. Finally, we show that our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
Marco Braun, Alessandro Cennamo, Markus Schoeler, Kevin Kollek, Anton Kummert
2023-05-22T07:09:35Z
http://arxiv.org/abs/2305.12775v1
# Semantic Segmentation of Radar Detections using Convolutions on Point Clouds ###### Abstract For autonomous driving, radar sensors provide superior reliability regardless of weather conditions as well as a significantly high detection range. State-of-the-art algorithms for environment perception based on radar scans build up on deep neural network architectures that can be costly in terms of memory and computation. By processing radar scans as point clouds, however, an increase in efficiency can be achieved in this respect. While Convolutional Neural Networks show superior performance on pattern recognition of regular data formats like images, the concept of convolutions is not yet fully established in the domain of radar detections represented as point clouds. The main challenge in convolving point clouds lies in their irregular and unordered data format and the associated permutation variance. Therefore, we apply a deep-learning based method introduced by PointCNN that weights and permutes grouped radar detections allowing the resulting permutation invariant cluster to be convolved. In addition, we further adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds. Finally, we show that our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds. AIACT 2021 ## 1 Introduction Modern driver assistance systems, up to fully autonomous driving, require a reliable perception of the vehicle's environment. To ensure this, vehicles have been equipped with sensors such as camera, lidar and radar. The data from those sensors are then processed to derive scene information. Machine Learning (ML)-based algorithms, in particular, are successfully used to process camera images by applying Convolutional Neural Networks (CNNs) [1][2]. Unlike the regular data format of camera images, however, lidar and radar sensors produce point clouds (PC). A PC is defined as an unordered collection of \(\mathrm{N}\in\mathrm{N}\) individual points with coordinates \(\mathrm{p}_{i}\in\mathbb{R}^{D}\), \(\mathrm{i}=1\),..., \(\mathrm{N}\), in \(D\in\mathrm{N}\) dimensions, each of which is assigned a feature vector \(\mathrm{f}_{i}\in\mathbb{R}^{F}\), \(\mathrm{i}=1\),..., \(\mathrm{N}\), \(\mathrm{F}\in\mathrm{N}\). CNNs, however, cannot be efficiently applied to PCs due to their irregular and unordered data format. Various approaches [3][4][5] aim to solve this problem. Radar sensors play a major role in autonomous driving vehicles due to their high reliability in challenging weather conditions as well as a wide detection range. PCs returned by radar sensors, however, are sparse and have a lower spatial accuracy compared to lidar data, potentially impeding the task of extracting features from shape information. On the other hand, radar measurements provide intrinsic properties like the relative, radial speed of objects, also known as Doppler velocity v\({}_{r}\), and the radar cross section (\(\sigma\)) as a measure for the reflectivity. In this work, we focus on perceiving the environment of a vehicle based on radar reflections. By integrating radar reflections over time, approaches like occupancy grid maps extract static information around to the ego car [6]. These approaches, however, fail to account for dynamic objects. On the other hand, state-of-the-art tracking algorithms used to perceive motion of objects are computationally expensive. 
By processing radar detections individually, this issue can be avoided. Therefore, we present a novel approach to perform semantic segmentation on radar PCs, i.e. classifying each prediction individually. We use a ML-based algorithm to derive information from a combination of intrinsic properties of radar detections like doppler velocities along with spatially local correlations. Recent ML state-of-the-art approaches process PCs using PointNet [3] and PointNet++ [4]. We instead decide to use PointCNN [5] as our key feature extractor. PointCNN essentially differs from PointNet and PointNet++ in the way it maintains permutation invariance while processing PCs: Approaches that build up on PointNet use a maxpooling operation on individually extracted features to maintain permutation invariance during processing clusters of points. In contrast, by building up on PointCNN, we use a neural network to weight and permute features of clustered points to obtain permutation invariance. This operation then makes the application of classical CNNs on the feature maps of grouped radar detections possible. Furthermore, we use a pre-processing network (PP) presented in [7] to derive high level patterns by combining the coordinates (\(x\), \(y\)) with \(v_{r}\) and \(\sigma\) of each detection individually and thus optimize the initial representation of radar-related features. We observed, that radar-detections in our dataset are distributed very heterogeneously on the x-y plane. This property potentially has a negative impact on the efficient extraction of information from spatially local correlations of detections. Therefore, we propose to apply multi-scale-grouping (MSG) [4] to cluster detections instead of the k-nearest neighbour algorithms applied in PointCNN. Finally, we show that our approach outperforms the state-of-the-art algorithm PointNet++ as a basic building block for semantic segmentation on radar PCs. ## 2 Related work A broad variety of algorithms like SegNet [8], U-Net [9], DeepLabv3+ [10] and HRNet [11] show superior performance in semantic segmentation of images by building up on CNNs [1]. In order to apply those approaches on PCs, points need to be transferred into a homogenous data format similar to pixels in images. Once the PC is represented in a grid map format, 2D/ 3D CNNs can be applied for feature extraction. Building up on that, shape recognition and semantic segmentation is carried out in [12][13] by processing PCs into grid cells at different angles and then merging the extracted features in 3D. Although these approaches represent a method of adapting CNNs to point clouds, a grid resolution must be defined that exponentially scales the computational operations that are needed for data processing. Therefore, by transferring PCs to a grid map, a compromise must be made between computational effort and loss of structural information by applying a coarse grid. Alternatively, algorithms [3][4][5][14][15][16] were devised to consume PCs without the need of processing them to a 2D/ 3D grid. A central contribution on directly processing PCs is PointNet [3]. By recursively applying shared multi-layer perceptrons (MLP) [17] on each point in a PC individually, permutation invariance is conserved while the algorithm extracts high-level descriptive patterns. This approach is extended in PointNet++ [4] by clustering detections into overlaying sub-regions depending on their relative distance and applying PointNet [3] on each cluster individually. 
Thus, the network is capable of extracting spatially local correlations to recognize fine-grained patterns. PointNet++ [4] follows an encoder - decoder structure expanded by skip-links between hierarchical related layers: Encoding of cluster-related characteristics into representative points is recurrently performed in set-abstraction (SA) layers. In this part of the network, the amount of points in the processed PCs decreases while each remaining point contains a more and more expressive signature of the original PC. The highly descriptive PC resulting from SA layers is then decoded in feature-propagation (FP) layers. These layers propagate high level features back to the point locations of the initial PC. Skip connections are used to fuse extracted features of FP and SA layers and thus enrich the expressiveness of the resulting PC. Schumann et al. [18] applied PointNet++ to perform semantic segmentation on radar PCs. They convey detections into a two-dimensional environment _p = (x, y)_ with individual features _f = (x, y, v, \(\sigma\))_. As a result, individual probabilities are predicted for each detection of the processed radar PC for dynamic classes _car_, _truck_, _pedestrian_, _pedestrian group_, _bike_ or _static_. Moreover, Feng et al. applied a slightly modified version of PointNet++ on Radar PCs to discretize between vehicles and guardrails [19]. During SA, they calculate statistics such as density of a cluster and append these characteristics to the corresponding local detections. Further approaches from Wohler et al. [20] used a sampling algorithm like DBSCAN [21] to group together points of the same object and then applied a long short term memory (LSTM) [22] to perform semantic segmentation of the resulting grid. The authors prove the superior performance of the Deep Learning (DL) based LSTM algorithm compared to random forest algorithm [23]. ## 3 Method State-of-the-art approaches that perceive the environment of a vehicle by classifying radar PCs [7][18] build up on PointNet++ [4] as a central feature extractor. PointNet++, however, aims to shift the idea of CNNs to the domain of PCs by individually extracting patterns from radar detections within a PC and then forwarding the most expressive features of neighbouring points to the next layer. In this way, permutation invariance is conserved at the expense of inefficient extraction of features. Due to these compromises, previous PointNet++ based approaches show structural weaknesses in pattern recognition from spatial-local correlations compared to conventional CNNs. By utilizing PointCNN [5], we circumvent these issues by pre-processing points within a cluster of radar detections, aiming to obtain a consistent ordering. A classical convolution can then be carried out on the invariant order of points achieved in this way. The authors of PointCNN demonstrate the superior performance of their approach compared to PointNet and PointNet++ for processing sparse PCs. Radar PCs, which are examined in this work, are characterized by this sparsity. By using PointCNN as the basic feature extractor, we therefore present a novel and potentially superior approach for the semantic segmentation of radar PCs. The success of modern ML-based approaches for classification and semantic segmentation lies in the extraction of spatially local correlations in the data [2]. Therefore, we transform our obtained radar detections to a two-dimensional coordinate system (x, y) with radar-related features \(v_{r}\) and \(\sigma\). 
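Concretely, one such scan can be held as a flat per-detection array (the array layout used here is only an illustrative convention, not the exact implementation):

```python
import numpy as np

# one radar scan: N detections x [x, y, v_r, sigma]
scan = np.array([[12.4,  3.1, -4.2, 7.5],
                 [12.6,  3.0, -4.0, 6.9],
                 [ 5.2, -1.8,  0.0, 1.3]])
coords, feats = scan[:, :2], scan[:, 2:]   # p_i = (x, y), f_i = (v_r, sigma)
```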
Then, we define a proportion of radar detections as representative points (RP). These RP then represent regions of interest we want to extract features from. RP are obtained by applying a sampling algorithm like farthest points sampling (FPS) on the PC. Afterwards, we associate radar detections to each RP based on their Euclidian distance by using a grouping mechanism like k-nearest neighbors. RPs, along with their assigned detections, then represent clusters from which we want to derive patterns. As described in previous sections of this work, the direct convolution on those clusters poses challenges due to the irregular and unordered data format of PCs. The authors of PointCNN address this issue by introducing a so-called X-transformation on the detections of each cluster we want to extract descriptive patterns from and call the resulting layer X-Convolution. According to [5], Figure 1 shows the calculations within a X-Convolution that we apply to pre-process each cluster of radar detections to obtain permutation invariance. First, we move the coordinates \(p\) of radar detections within a cluster to a coordinate system around their corresponding RP so that the RP is located at \(\mathrm{p_{RP}}=(0,\,0)\), by performing \[P^{\prime}=P-p \tag{1}\] with P representing the coordinates of RP. We then lift those local coordinates \(P\)' into high level features \(F_{\delta}\) by feeding them individually into a shared MLP\({}_{\delta}\). \[F_{\delta}\ =MLP_{\delta}(P^{\prime}) \tag{2}\] The resulting, coordinate-dependent features \(F_{\delta}\) of each radar detection are then concatenated with the features F of each point in the cluster to get F\({}_{*}\). \[F_{*}=[F_{\delta},F] \tag{3}\] Finally, we want to encode relations between radar detections within the cluster. At this point, we use the property of PointCNN to achieve permutation invariance for unordered points within a cluster. We now use a MLP to extract a K x K matrix by processing P', where K is the number of points within a cluster. \[X\ =MLP\ (P^{\prime}) \tag{4}\] The K x K matrix is called X-Transformation matrix. This resulting X-Transformation matrix depends on the order of detections. According to PointCNN this is desired, since we then apply the X-Transformation matrix to simultaneously weight and permute features F\({}_{*}\). The order of detections in F\({}_{*}\) is now related to the ordering of local coordinates we use to calculate the X-Transformation matrix. Therefore, weighting and permuting F\({}_{*}\) by X by performing a matrix product potentially leads to a permutation invariant feature mapping F\({}_{X}\). \[F_{X}=X\ \cdot\ F_{*} \tag{5}\] F\({}_{X}\) can then be used as an input to a classical convolution. Descriptive patterns that result from X-Convolutions are then aggregated on the RP of each cluster, respectively. With each layer of X-Convolutions, the size of the resulting point cloud is therefore reduced to the number of RPs. Also, each of those resulting points contains encoded patterns from grouped radar detections of the previous layer. Our network follows the encoder-decoder structure. In the encoder part, we recursively apply X-Convolutions and therefore receive decreasingly dense PCs, while each remaining point aggregates extensive patterns from a subsequently enlarging receptive field. 
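Read together, Eqs. (1)-(5) describe a single cluster-level operator. A minimal PyTorch-style sketch of one such layer is given below; the hidden widths, the ReLU non-linearities and the final Conv1d aggregation step are illustrative assumptions rather than the exact configuration used here:

```python
import torch
import torch.nn as nn

class XConvSketch(nn.Module):
    """One X-Convolution over a cluster of K detections around a representative point."""
    def __init__(self, in_feats, out_feats, k, lifted=16):
        super().__init__()
        self.k = k
        # MLP_delta: lifts local coordinates into point-wise features (Eq. 2)
        self.mlp_delta = nn.Sequential(nn.Linear(2, lifted), nn.ReLU(),
                                       nn.Linear(lifted, lifted), nn.ReLU())
        # MLP producing the K x K X-Transformation matrix from local coordinates (Eq. 4)
        self.mlp_x = nn.Sequential(nn.Linear(2 * k, k * k), nn.ReLU(),
                                   nn.Linear(k * k, k * k))
        # classical convolution applied to the weighted and permuted feature map (after Eq. 5)
        self.conv = nn.Conv1d(lifted + in_feats, out_feats, kernel_size=k)

    def forward(self, rep_xy, neigh_xy, neigh_f):
        # rep_xy: (B, 2), neigh_xy: (B, K, 2), neigh_f: (B, K, in_feats)
        p_local = neigh_xy - rep_xy.unsqueeze(1)                   # Eq. 1: centre the cluster on its RP
        f_delta = self.mlp_delta(p_local)                          # Eq. 2
        f_star = torch.cat([f_delta, neigh_f], dim=-1)             # Eq. 3
        x_mat = self.mlp_x(p_local.reshape(p_local.size(0), -1))   # Eq. 4
        x_mat = x_mat.view(-1, self.k, self.k)
        f_x = torch.bmm(x_mat, f_star)                             # Eq. 5: weight and permute
        return self.conv(f_x.transpose(1, 2)).squeeze(-1)          # one feature vector per RP
```

Each encoder layer applies such an operator once per representative point, so that the surviving points carry the aggregated cluster features forward to the next layer.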
Figure 1: X-Convolution: For each cluster of radar detections, we learn an X-Transformation matrix to weight and permute features of the gathered points to obtain permutation invariance. A classical convolution can then be performed on those pre-processed features. Accordingly, as a result of the final encoding layer, we receive a PC that contains a minority of points, each of which aggregates highly descriptive contextual information. These high-level features then need to be propagated back to the high resolution of the initial PC. We achieve this by re-applying X-Convolutions recursively. This time, however, the resulting point cloud contains more points after each layer, which reflects the size of the respective PCs in the encoding part of the network. In addition, during decoding, we use skip links from encoded layers of the same depth to concatenate features. Finally, we use those resulting feature vectors from our decoded, high-resolution PC to predict classification scores for each radar detection. Therefore, we apply a shared MLP to process the high-level feature vectors of each detection individually. PointCNN was originally developed to process dense laser PCs. Radar data, however, is characterized by its low density as well as its high spatial inaccuracy, while each radar reflection contains valuable intrinsic properties. We therefore propose some modifications to better adapt PointCNN to the processing of radar data: First, we deploy a shared MLP on the input feature vector containing \(x\), \(y\), \(v_{r}\) and \(\sigma\) as proposed in [7]. In this way, the network can convert radar-specific properties into an extended representation that facilitates the semantic segmentation task. In addition, radar PCs are distributed very heterogeneously throughout the scene. By applying the k-nearest neighbour algorithm to group together neighbouring detections, this circumstance leads to clusters of widely spread detections. Features that we obtain by performing X-Convolutions on those clusters do not necessarily incorporate spatially local correlations anymore. As a second modification, we therefore deploy multi-scale grouping (MSG) introduced by PointNet++ [4] to improve the clustering of our approach. In MSG, we specify a number of points that should be clustered as well as the maximum distance of those detections to their corresponding RP. If there are not enough detections within the given radius, we randomly double detections until we reach the desired cluster size. Thus, we can be sure that grouped detections are always in close proximity to one another (both modifications are sketched further below). ## 4 Experiments ### Data In our work we perform semantic segmentation on three classes: _moving vehicle_, _moving pedestrian_ and _static_ objects. Detections were assigned to a class with the help of annotated bounding boxes. Due to the spatial inaccuracy of radar detections, we increased the scope of the bounding boxes and assigned the class of a bounding box to each detection within its boundaries. Objects were defined as moving if their absolute velocity was above the value of 2.5 m/s for vehicles and 0.5 m/s for pedestrians. Statistics on the distribution of classes in our data set can be obtained from Table 1. To test how well our model generalizes on unseen data, we split our data set into 80% for training and 20% for testing. The radar PCs that we process for training and testing our approach initially consist of coordinates \(p_{0}\) = [x, y] and features \(f_{0}\) = [\(v_{r}\), \(\sigma\)]. 
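Picking up the two radar-specific modifications described in the Method section above (the input pre-processing MLP on [x, y, v_r, sigma] and the radius-based multi-scale grouping), a minimal sketch might look as follows; the hidden width, radius and cluster size are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Pre-processing network: a shared MLP applied to each detection's raw
# [x, y, v_r, sigma] vector before any grouping takes place.
pp_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU())

def multi_scale_group(points, rp, radius=2.0, cluster_size=16):
    """Gather detections within `radius` of the representative point `rp`;
    if fewer than `cluster_size` are found, random in-radius ones are duplicated."""
    d = torch.linalg.norm(points[:, :2] - rp[:2], dim=1)
    idx = torch.nonzero(d <= radius).flatten()
    if idx.numel() == 0:
        return None                                    # no detection close to this RP
    if idx.numel() < cluster_size:
        pad = idx[torch.randint(idx.numel(), (cluster_size - idx.numel(),))]
        idx = torch.cat([idx, pad])
    return points[idx[:cluster_size]]
```

Grouping on the (x, y) coordinates only keeps the clusters spatially compact even when the scan is very unevenly populated.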
One sample on average consists of 184 detections with a variance of 79 detections. Due to the sparsity of our data, we concatenated and ego-motion compensated four consecutive PCs. We aligned the position of all four PCs to the coordinate system of the last frame. \begin{table} \begin{tabular}{l c c c} \hline Class & Vehicle & Pedestrian & Static \\ \hline **Train** & 6.12\% & 2.26\% & 91.62\% \\ & 1 955 937 & 721 865 & 29 282 877 \\ **Test** & 5.63\% & 1.40\% & 92.98\% \\ & 574 321 & 142 647 & 9 493 667 \\ \hline \hline \end{tabular} \end{table} Table 1: Data Split: The relative (below: absolute) occurrence of the individual classes in the test and training data set. We set the size of our assembled PCs to 1200 detections. In case the number of detections was below this threshold, we doubled random detections, in case the amount was above, we randomly subsampled detections. ### Training We train our network described in section 3 for semantic segmentation by backpropagation [24]. In order to indicate the confidence score for classes _moving vehicle_ and _moving pedestrian_ we apply a classification head with two output channels. To ensure a range between 0 and 1 for each confidence score, we apply a sigmoid function to the logits of the network outputs. We then distinguish between the two classes and the _static background_ as follows: If, for a given prediction for a specific detection no confidence score is above a threshold of 0.5, this detection is predicted as _background_. If at least one of the confidence values is greater than 0.5, the detection is classified as the class with the highest confidence value. Due to the huge imbalance between classes in our data as depicted in Table 1, we used Focal Loss [25] with \(\alpha_{vehicle}=0.8\) and \(\alpha_{pedestrian}=0.95\) together with \(\gamma\)= 2. The training was carried out by Adam Optimizer [26] with a learning rate of \(10^{-4}\). We trained each network for 20 epochs and used those with the highest F\({}_{1}\)-Score [27] for comparison. No data augmentation was applied. ### Results In this section we present and discuss the ability of our approach to individually classify radar detections. A visual analysis of our network performance for the class _moving vehicles_ can be obtained from Figure 2. It shows a single frame of radar reflections from our test set in an urban environment. While most detections are correctly classified as _moving vehicles_ or _static background_, some false positive moving vehicle predictions can occasionally be observed. This type of misclassification can be traced back to sensor-related noises caused by time synchronization or mirroring effects. The scene, however, illustrates the capability of our approach to distinguish between detections from the same object versus background noise. Furthermore, the visualization of a radar PC in Figure 2 shows the heterogenous distribution of detections within the scene which, as mentioned in Section 3, motivates us to replace k-nearest neighbour algorithm with MSG to form clusters within the PC. Before we quantitatively compared our approach with previous algorithms, we carried out experiments to find an optimal spatial representation of the input PCs. As stated above, our initial PC consists of coordinates \(x\) and \(y\) together with features \(v_{r}\) and \(\sigma\). Experiments, however, showed that applying \(v_{r}\) as an additional coordinate, e.g. 
\(p_{0}\) = [x, y, \(v_{r}\)] improved grouping of detections which resulted in a better overall performance. Accordingly, we applied the doppler signature of any radar detection as an additional coordinate rather than a feature. Figure 2: Analysis of our approach in an urban environment consisting of blue (true positive - TP), white (true negative - TN) and yellow (false negative - FN) moving vehicle predictions. Green bounding boxes indicate a dynamic and white a stationary object. To quantify the performance of our approach, we apply the F\({}_{1}\)-Score as the harmonic mean of precision and recall. As depicted in Table 2, we calculate those statistics for each class individually and then determine the average scores for precision, recall and F\({}_{1}\) to provide a general performance measure for each approach. First, we use a vanilla version of PointCNN (second row) for the semantic segmentation task and compare it to PointNet ++ (first row) as defined in [18]. This comparison clearly demonstrates the superior performance of PointCNN on detecting moving objects. In addition, Table 2, shows a discrepancy in memory and computing power requirements between these two algorithms. While PointCNN uses parameter-expensive X-Convolutions, PointNet++ applies shared MLPs as a feature extractor which leads to an extensive number of FLOPs needed for a forward pass of the network. Furthermore, Table 2 reveals the value of the pre-processing network as well as MSG to adapt PointCNN to radar data. First, applying pre-processing to encode features of our initial radar PC increases the overall performance of PointCNN while adding a slight cost in terms of memory and computation. Replacing k-nearest neighbour with MSG, however, improves the capability of the network to cluster detections of large size objects like vehicles. When applied without PP, it slightly decreases the segmentation performance on pedestrians. Finally, when we combine pre-processing with MSG in PointCNN, the resulting algorithm outperforms any of those modification implemented separately as well as the vanilla version of PointCNN. Further, when compared to PointNet++ as proposed in [18], our approach of combining PP with MSG achieves an average improvement of ~4% in F\({}_{1}\)-Score. PointNet++ is widely used as a basic component to extract descriptive patterns from unordered data formats like PCs. Even PCNN [7], the so far best performing network for the semantic segmentation of radar PCs, uses PointNet++ as a central component for feature extraction. When compared in Table 2 our novel version of PointCNN outperforms RadarPCNN by a margin of ~1% in F\({}_{1}\)-Score for the moving vehicle class and a slight margin in average F\({}_{1}\)-Score. Therefore, our approach not only offers an alternative to PointNet++ but also reveals a superior network architecture for the semantic segmentation of radar PCs. \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{Moving Vehicle} & \multicolumn{3}{c}{Moving Pedestrian} & \multicolumn{3}{c}{Average} \\ Method & Prec. & Recall & F\({}_{1}\) & Prec. & Recall & F\({}_{1}\) & Prec. & Recall & F\({}_{1}\) & Param. 
& FLOPs \\ \hline **PointNet++**[18] & 72.77 & 82.89 & 77.50 & 46.10 & 75.77 & 57.32 & 59.44 & 79.33 & 67.41 & 418.6K & 1.55G \\ **PointCNN** & 76.48 & 87.59 & 81.66 & 45.57 & 75.71 & 56.89 & 61.3 & 81.65 & 69.28 & 653.2K & 0.84G \\ **PointCNN (PP)** & 79.30 & 85.07 & 82.08 & 46.58 & 76.55 & 57.92 & 62.94 & 80.81 & 70.00 & 663.8K & 0.86G \\ **PointCNN (MSG)** & 80.45 & 84.85 & 82.59 & 43.71 & 75.11 & 55.26 & 62.08 & 79.98 & 68.93 & 653.2K & **0.83G** \\ **RadarPCNN**[7] & 75.78 & **91.44** & 82.88 & **48.64** & **75.82** & **59.27** & 62.21 & **83.63** & 71.07 & **175.3K** & 1.02G \\ **PointCNN (PP+MSG)** & **80.81** & 87.84 & **84.18** & 47.88 & 75.35 & 58.55 & **64.35** & 81.60 & **71.37** & 663.8K & 0.85G \\ \hline \hline \end{tabular} \end{table} Table 2: Semantic segmentation results (%). Figure 3: Confusion matrices of predicted classes using our optimized version of PointCNN. On the left side we show the absolute amount of predictions, on the right side the point counts in relation to all labeled detections of a class are presented. Confusion matrices from Figure 3 provide an extensive insight into the performance of our approach. In the matrix on the left, a large imbalance can be observed between detections for the pedestrian / vehicle and the static background class, with a large excess of occurrence for the latter. Even though this effect should be compensated for by Focal Loss mechanisms that we built in, a tendency for the network to predict detection as background cannot be completely ruled out. This can further be discovered from the matrix on the right side of Figure 3. Here, we obtain a major confusion between detections from vehicles and pedestrians that are falsely predicted as background. We belief that besides the bias towards the background class that we already mentioned, this behaviour can be traced back to the discrepancy in our data between radial doppler speed and the absolute speed of detected objects which makes it hard for the network to rely on doppler signatures. On the other hand, our approach shows robust characteristics in distinguishing between pedestrians and vehicles with a minor false positive rate of \(\sim\)1%, respectively. Figure 4 emphasizes the value of radar-specific, intrinsic properties \(v_{r}\) and \(\sigma\) of each detection in the PCs. The figure shows that removing \(\sigma\) as an input property of the network only marginally decreases the network performance represented by the F1-Score (blue curves) for both vehicle and pedestrian reflections. On the other hand, removing \(v_{r}\) from the input feature vector causes a huge drop in performance. This shows the significant importance of Doppler velocities in classifying detections. ## 5 Conclusion and future work In this work we show how PointCNN can effectively be applied for pattern recognition within radar PCs, outperforming previous approaches that are built upon PointNet++ on the task of semantic segmentation. We exploit the strengths of PointCNN to learn how to pre-process a cluster of radar detections with the help of neural networks so that detections within the cluster can be convolved to extract spatially local correlations. We then implement some tweaks on PointCNN to adjust the algorithm to the characteristics of radar data like low density, heterogenous distributed detections and intrinsic properties of each radar reflection. 
Therefore, we first apply the idea of a pre-processing network to cast the inputs x, y, Doppler velocity and radar cross section of our initial radar PC into a representation that makes it easier for the network to extract patterns. In addition, we improve the clustering of detections within PointCNN by implementing multi-scale grouping to ensure clusters of neighbouring radar detections. When applied to the semantic segmentation task, our network reaches superior performance compared to previous approaches. Figure 4: Ablation studies on the radar PC properties that the network consumes for semantic segmentation. F1-Score as well as Precision and Recall are shown for feature vectors with v\({}_{r}\) and \(\sigma\) subsequently removed, until semantic segmentation is carried out exclusively based on the coordinates x and y. The ideas presented in this work can be used to replace PointNet++ as a feature extractor in RadarPCNN. In this way, the attention mechanism of RadarPCNN could be combined with the superior performance of PointCNN on convolving PCs.
2304.05120
Real time enhancement of operator's ergonomics in physical human-robot collaboration scenarios using a multi-stereo camera system
In collaborative tasks where humans work alongside machines, the robot's movements and behaviour can have a significant impact on the operator's safety, health, and comfort. To address this issue, we present a multi-stereo camera system that continuously monitors the operator's posture while they work with the robot. This system uses a novel distributed fusion approach to assess the operator's posture in real-time and to help avoid uncomfortable or unsafe positions. The system adjusts the robot's movements and informs the operator of any incorrect or potentially harmful postures, reducing the risk of accidents, strain, and musculoskeletal disorders. The analysis is personalized, taking into account the unique anthropometric characteristics of each operator, to ensure optimal ergonomics. The results of our experiments show that the proposed approach leads to improved human body postures and offers a promising solution for enhancing the ergonomics of operators in collaborative tasks.
Gerasimos Arvanitis, Nikos Piperigkos, Christos Anagnostopoulos, Aris S. Lalos, Konstantinos Moustakas
2023-04-11T10:21:45Z
http://arxiv.org/abs/2304.05120v1
Real time enhancement of operator's ergonomics in physical human - robot collaboration scenarios using a multi-stereo camera system ###### Abstract In collaborative tasks where humans work alongside machines, the robot's movements and behaviour can have a significant impact on the operator's safety, health, and comfort. To address this issue, we present a multi-stereo camera system that continuously monitors the operator's posture while they work with the robot. This system uses a novel distributed fusion approach to assess the operator's posture in real-time and to help avoid uncomfortable or unsafe positions. The system adjusts the robot's movements and informs the operator of any incorrect or potentially harmful postures, reducing the risk of accidents, strain, and musculoskeletal disorders. The analysis is personalized, taking into account the unique anthropometric characteristics of each operator, to ensure optimal ergonomics. The results of our experiments show that the proposed approach leads to improved human body postures and offers a promising solution for enhancing the ergonomics of operators in collaborative tasks. operator's ergonomics, anthropometrics, human-robot collaboration, multi-stereo camera system. ## I Introduction In manufacturing, when operators are physically interacting with a robot, trying to accomplish a collaborative task, their posture is inevitably influenced by the robot's movement and trajectory [1, 2]. Work-related Musculoskeletal Disorders (WMSD) is a major health problem in developed countries [3] and there is a worldwide interest to reduce the conditions and risk factors [4] that may cause this problem [5], decreasing in this way the substantial costs for therapies and negative impacts on the life quality of operators [6]. To overcome this challenging situation, adapted systems are required, able to be adjusted according to the anthropometrics and special characteristics of each operator, improving their physical situation and ergonomics [7]. In literature, there are a lot of works [8] that have used depth sensors (Kinect) or low-cost RGB devices to compute the Rapid Upper Limb Assessment (RULA) [9, 10] score or to detect awkward postures [11]. To achieve this, open-source landmark estimation libraries [12, 13] are used for calculating the body joint angles. Additionally, a lot of research has been done to improve physical ergonomics [14] in human-robot collaboration [15, 16] by generating task assignments [17, 18], finding the optimal trade-off motions [19], providing wearable feedback [20], estimating the physical load required in a daily job [21], or providing an automatic assessment of ergonomics [22]. In this study, we operate under the premise that collaborative tasks between robots and humans in industrial environments are typically comprised of pre-defined and known actions for the sake of safety. One factor that can impact the robot's movement is the operator's body characteristics, such as height. This factor does not alter the primary action of the robot (e.g., delivering a tool), but can be utilized to select the optimal adjustment of the robot's position and configuration for optimized ergonomics assessment of the operator (e.g., placing the tool in a comfortable location without causing strain). This adjustment takes place only at the start of the robot's operation, as dynamic adjustments are deemed inappropriate for safety reasons. 
The adaptation of the robot's trajectory facilitates the improvement of the operator's ergonomics and it is performed based on their unique anthropometric characteristics. The key contributions of this work can be summarized as follows: * A graph Laplacian-based scheme that provides a 3D landmark optimization, allowing for the best use of available captured information at any time, regardless of the camera source. * The development of a manufacturing simulator from scratch, emulating robotic movements and operator's actions in human-robot collaboration scenarios. * A framework that increases the operators' situational awareness regarding uncomfortable work postures that may be harmful in long-term exposure. This is achieved through a personalized anthropometric estimation component that provides a comprehensive ergonomic analysis of the operators' actions during their collaboration with the robot. * An offline tool, suitable for the ergonomics experts to analyse and evaluate the operator's action and help them to set personalized rehabilitation strategies or recommendations to avoid muscle injuries. The rest of this paper is organized as follows: In Section 2, we discuss in detail each step of the proposed method. Section 3 presents the experimental results and in Section 4 we draw the conclusions. ## II General Architecture & Methodology The architecture of the proposed multi-stereo camera system is illustrated in Fig. 1. In our implementation, the system consists of three stereo cameras placed in strategic locations to provide continuous monitoring of the operator's pose from multiple angles, increasing the possibility of continuously having a good capture of the operator's pose (i.e., a view in front of the operator), at least from one of the cameras, while he/she is freely moving in different directions. We use each RGB camera of each stereo set to capture the operator's actions. To extract 2D posture landmarks, we utilize the MediaPipe framework [23], which runs locally on the Jetson device where the stereo camera is connected. The extracted landmarks are then sent to a PC server for 3D landmark estimation using a Direct Linear Transform (DTL) triangulation approach [24] and optimization. Next, we estimate the operator's height and calculate their current physical ergonomic state based on the RULA score1. For seamless integration of the system components and algorithms, we utilize the Robotic Operating System (ROS) as our middleware. The system runs on isolated nodes that communicate using a publish-subscribe model, and we have developed a ROS node for each camera. Each node captures a frame of the camera at a frequency of 10Hz, extracts and processes the landmarks, and publishes them to a relevant ROS topic. The decentralized approach of ROS allows us to connect the cameras to different physical machines, such as the Jetson TX2 embedded device, and all components must be connected to the same local network. Footnote 1: [https://ergo-plus.com/rula-assessment-tool-guide](https://ergo-plus.com/rula-assessment-tool-guide) Algorithm 1 summarizes the main steps of the proposed framework. More details will be presented in the next sections. 
```
Data: Frame sequences from the \(M\) cameras
Result: Improved operators' physical ergonomics
/* A) Calibration steps (performed only once) */
for \(i = 1:M\) do
    Single camera \(C_{i}\) calibration;
end for
for \(j = 1:L\) do
    Stereo camera \(S_{j}\) calibration;
end for
/* B) 3D landmark estimation */
for \(i = 1:M\) do    /* runs locally on the Jetson devices */
    2D pose landmark estimation from \(C_{i}\);
end for
Send the 2D pose landmarks to the central server via ROS;
for \(j = 1:L\) do
    3D pose landmark estimation from \(S_{j}\) via DLT;
end for
/* C) 3D landmark optimization */
Follow steps 1-5 of Section II-B;
/* D) Anthropometric and ergonomic analysis */
Operator height estimation;
Real-time RULA score analysis;
```
**Algorithm 1** Algorithmic steps of the framework ### _3D Landmarks Calculation_ 1) _Single Camera Calibration_. To calibrate each camera individually, we begin by using the same checkerboard pattern positioned in a location that is visible to all cameras at the same time. This step is crucial for performing stereo calibration later. We also ensure that all the used cameras have the same resolution and capture images at the same frames-per-second (fps) rate, so that all captured frames are synchronized [25]. 2) _Stereo Calibration_. Let us assume that each camera of the integrated system can be defined as \(C_{i}\)\((\forall\ i=1,\cdots,6)\) with 3D coordinates \(\textbf{c}_{i}\in\mathbb{R}^{3\times 1}\). The single-camera calibration of camera \(C_{i}\) returns the rotation matrix \(\textbf{R}_{i}\in\mathbb{R}^{3\times 3}\), the translation matrix \(\textbf{T}_{i}\in\mathbb{R}^{3\times 1}\) and the distortion coefficients. However, the **R** and **T** matrices alone are not enough to triangulate the 3D points. For the stereo-calibration procedure, we need two sets of cameras. For the sake of simplicity, let us define these two cameras as \(C_{1}\) and \(C_{2}\). Notice here that the same approach can be used for the stereo calibration of all other cameras. The rotation matrix **R** and translation vector **T** show how to go from the \(C_{2}\) coordinate system to the \(C_{1}\) one. We also obtain the world-coordinates-to-\(C_{2}\) rotation and translation by calculating \(\textbf{R}_{2}=\textbf{RR}_{1}\), \(\textbf{T}_{2}=\textbf{RT}_{1}+\textbf{T}\). Next, we choose the \(C_{1}\) position as the world space origin (\(x=0,y=0,z=0\)). Thus, the world-origin-to-\(C_{1}\) rotation is equal to the identity matrix and the translation is a zero vector. Then, the **R** matrix and the **T** vector that have been estimated in the previous stereo-calibration step become the rotation and translation from the world origin to \(C_{2}\). Basically, this means that the 3D triangulated points will be in a space defined with respect to the coordinate system of \(C_{1}\)'s lens. In other words, we simply overlap the world coordinates with the coordinates of the \(C_{1}\) camera. This means \(\textbf{R}_{1}=\text{eye}(3)\), \(\textbf{T}_{1}=\text{zeros}(3)\) and \(\textbf{R}_{2}=\textbf{R}\), \(\textbf{T}_{2}=\textbf{T}\). Therefore, all triangulated 3D points are measured from the \(C_{1}\) camera position in the world. Fig. 1: Proposed concept architecture. 3) _Direct Linear Transform_. We assume the existence of a 3D point **p** in real space with coordinates given as \(\textbf{p}=[x,y,z]\). This point can be observed from both cameras, which have pixel coordinates \(\textbf{G}_{i}=[u_{i},v_{i},1]\) for \(C_{i},\forall\ i=1,2\).
Using the camera projection matrix \(\textbf{K}_{1}=[\textbf{R}_{1}\ \textbf{T}_{1}]\in\mathbb{R}^{3\times 4}\), \(\textbf{G}_{1}\) can be written as \(\textbf{G}_{1}=a\textbf{K}_{1}\textbf{p}\), where \(a\) is an unknown scale factor. In a triangulation problem, we do not know the coordinates of **p**. But we can determine the pixel coordinates and the projection matrix through camera calibration. Since \(\textbf{G}_{1}\) and \(\textbf{K}_{1}\textbf{p}\) are parallel vectors, the cross product of these should be zero. The \(j\)-th row vector of \(\textbf{K}_{1}\) can be written as \(\textbf{k}_{j},\ \forall\ j=1,...,3\). This gives us: \[\begin{bmatrix}u_{1}\\ v_{1}\\ 1\end{bmatrix}\times\begin{bmatrix}\textbf{k}_{1}\textbf{p}\\ \textbf{k}_{2}\textbf{p}\\ \textbf{k}_{3}\textbf{p}\end{bmatrix}=\begin{bmatrix}v_{1}\textbf{k}_{3}\textbf{p}-\textbf{k}_{2}\textbf{p}\\ \textbf{k}_{1}\textbf{p}-u_{1}\textbf{k}_{3}\textbf{p}\\ u_{1}\textbf{k}_{2}\textbf{p}-v_{1}\textbf{k}_{1}\textbf{p}\end{bmatrix}=\begin{bmatrix}v_{1}\textbf{k}_{3}-\textbf{k}_{2}\\ \textbf{k}_{1}-u_{1}\textbf{k}_{3}\\ u_{1}\textbf{k}_{2}-v_{1}\textbf{k}_{1}\end{bmatrix}\textbf{p}=\begin{bmatrix}0\\ 0\\ 0\end{bmatrix} \tag{1}\] This gives us an equation of the form \(\textbf{A}\textbf{p}=0\). But the third row is a linear combination of the first two rows, giving only two independent equations, which is not enough to solve for the 3 unknowns in **p**. Since we have two cameras, we can extend the matrix to have more rows. In fact, we simply add more rows for any number of views. This gives us the equation: \[\begin{bmatrix}v_{1}\textbf{k}_{3}-\textbf{k}_{2}\\ \textbf{k}_{1}-u_{1}\textbf{k}_{3}\\ u_{1}\textbf{k}_{2}-v_{1}\textbf{k}_{1}\\ v_{2}\textbf{k}_{3}-\textbf{k}_{2}\\ \textbf{k}_{1}-u_{2}\textbf{k}_{3}\\ u_{2}\textbf{k}_{2}-v_{2}\textbf{k}_{1}\end{bmatrix}\textbf{p}=0 \tag{2}\] where the last three rows are built from the pixel coordinates \((u_{2},v_{2})\) of the second view and the row vectors \(\textbf{k}_{j}\) of its projection matrix \(\textbf{K}_{2}\). DLT is a method for solving a matrix equation of the form \(\textbf{Ap}=0\). In the real world, there can be some noise, so we write the equation as \(\textbf{Ap}=\textbf{n}\), and we solve for **p** such that **n** is minimized. The first step is to determine the SVD decomposition of **A**: \[\textbf{Ap}=\textbf{USV}^{T}\textbf{p} \tag{3}\] Our goal is to minimize **n** for some **p**. This can be done by taking the dot product: \[\textbf{n}^{T}\textbf{n}=(\textbf{p}^{T}\textbf{VSU}^{T})\cdot(\textbf{USV}^{T}\textbf{p})=\textbf{p}^{T}\textbf{VS}^{2}\textbf{V}^{T}\textbf{p} \tag{4}\] **U** and **V** are orthonormal matrices and **S** is a diagonal matrix. The entries on the diagonal of **S** are decreasing, so that the last entry on the diagonal is the minimum value. Since our goal was to minimize \(\textbf{n}^{T}\textbf{n}\), this tells us that it is equivalent to choosing the smallest value of \(\textbf{S}^{2}\) by selecting the corresponding column of **V** (i.e. the last row of \(\textbf{V}^{T}\)) as **p**.
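To make the triangulation step concrete, the following minimal sketch builds the matrix of Eq. (2) from two views and recovers **p** from the SVD. It is an illustration under simplified assumptions, not our actual implementation; the projection matrices and the example point are hypothetical.

```python
import numpy as np

def triangulate_dlt(K1, K2, uv1, uv2):
    """DLT triangulation of a single 3D point from two calibrated views.
    K1, K2: 3x4 projection matrices; uv1, uv2: measured pixel coordinates."""
    (u1, v1), (u2, v2) = uv1, uv2
    # Two independent rows of Eq. (2) per view.
    A = np.array([
        v1 * K1[2] - K1[1],
        K1[0] - u1 * K1[2],
        v2 * K2[2] - K2[1],
        K2[0] - u2 * K2[2],
    ])
    # Solve A p = 0 in the least-squares sense: p is the right singular
    # vector belonging to the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    p_h = Vt[-1]                      # homogeneous [x, y, z, w]
    return p_h[:3] / p_h[3]

def project(K, p):
    g = K @ np.append(p, 1.0)
    return g[:2] / g[2]

# Hypothetical setup: C1 at the world origin, C2 shifted along x (values are
# placeholders chosen only to exercise the function).
K1 = np.hstack([np.eye(3), np.zeros((3, 1))])
K2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
p_true = np.array([0.3, -0.2, 2.0])
p_est = triangulate_dlt(K1, K2, project(K1, p_true), project(K2, p_true))
print(p_est)   # recovers p_true up to numerical precision
```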
### _3D Landmark Optimization Algorithm_ As a result of varying camera viewpoints and operator movements, we expect that landmarks detected from different cameras will not be the same and may differ in accuracy. Therefore, it is necessary to design a fusion approach that integrates the multi-camera system to provide the best possible landmark configuration from all the involved cameras. For this task, we create an undirected graph of \(N+M\) points \(\hat{\textbf{p}}=[\textbf{p}\ \textbf{c}]^{T}\in\mathbb{R}^{(N+M)\times 3}\), consisting of the \(N\) detected landmarks **p** and the \(M\) cameras **c** as nodes. Fig. 3 presents a simplified example of a graph with \(M=4\) and \(N=4\). In order to take advantage of this graph topology formulation, we apply the Graph Laplacian Processing (GLP) technique [26] for re-estimating the landmarks in an optimal manner, following the next steps: 1) Encode the spatial relationship of cameras \(C\) and landmarks **p** via GLP. 2) Construct the binary Laplacian matrix \(\textbf{L}\in\mathbb{R}^{(N+M)\times(N+M)}\) of the connectivity graph, indicating which landmark is related to which camera. Note that there are no connections between two cameras or between two landmarks. 3) Use as anchor points \(\textbf{a}=[\bar{\textbf{p}}\ \textbf{c}]^{T}\in\mathbb{R}^{(N+M)\times 3}\) the known locations of the cameras **c** and the average positions of the landmarks \(\bar{\textbf{p}}=\frac{1}{M}\sum_{k=1}^{M}\textbf{p}^{k}\in\mathbb{R}^{N\times 3}\) as estimated by each one of the cameras. 4) Relative measurements and anchors are used to formulate differential coordinates \(\boldsymbol{\delta}\in\mathbb{R}^{(N+M)\times 3}\)[27, 28], where the \(i^{th}\) row \(\boldsymbol{\delta}_{i}\) is equal to: \[\boldsymbol{\delta}_{i}=[\boldsymbol{\delta}_{xi}\ \boldsymbol{\delta}_{yi}\ \boldsymbol{\delta}_{zi}]=\hat{\mathbf{p}}_{i}-\frac{1}{|\Psi_{i}|}\sum_{j\in\Psi_{i}}\hat{\mathbf{p}}_{j}=\mathbf{l}_{i}\hat{\mathbf{p}}\,,\quad\forall\ i=1,...,N+M \tag{5}\] where \(\mathbf{l}_{i}\) is the \(i^{th}\) row of the Laplacian matrix, and \(|\Psi_{i}|\) is the number of immediate neighbors of \(\hat{\mathbf{p}}_{i}\). Fig. 3: Indicative example of graph topology for cameras and landmarks. Fig. 2: Examples of the simulator in different configurations (height of the operator), corresponding to 3 different anthropometric classes. 5) The optimized positions of the points \(\hat{\mathbf{p}}\) are given by the minimization of the following cost function: \[\operatorname*{argmin}_{\hat{\mathbf{p}}}\left\|\tilde{\mathbf{L}}\hat{\mathbf{p}}-\mathbf{b}\right\|_{2}^{2} \tag{6}\] where \(\tilde{\mathbf{L}}=[\mathbf{L}\ \mathbb{I}_{N+M}]^{T}\in\mathbb{R}^{2(N+M)\times(N+M)}\), \(\mathbb{I}_{N+M}\) is the identity matrix and \(\mathbf{b}=[\boldsymbol{\delta}\ \mathbf{a}]^{T}\in\mathbb{R}^{2(N+M)\times 3}\). Finally, the re-estimated positions of the landmarks are derived from the linear least-squares minimization: \[\hat{\mathbf{p}}=\left(\tilde{\mathbf{L}}^{T}\tilde{\mathbf{L}}\right)^{-1}\tilde{\mathbf{L}}^{T}\mathbf{b} \tag{7}\] Note that during the simulation horizon the topology of landmarks and cameras remains unchanged. As such, the term \(\left(\tilde{\mathbf{L}}^{T}\tilde{\mathbf{L}}\right)^{-1}\tilde{\mathbf{L}}^{T}\) is constant and can be computed only once at the start of the simulation, in order to be used for the successive frames. In that way, we reduce the computational time required for solving (7), since the costly operation of matrix inversion is performed only at the beginning of the process. ## III Experimental Analysis and Results ### _Experimental Setup and Implementations_ The operating system of the server is Ubuntu 20.04, which is compatible with ROS 1. All the algorithms are written in Python 3.10. There are no special requirements regarding the computational power of the server, since the implementation can easily run on low-cost devices (e.g., Jetson). 1) _Operator's Height Estimation_. For this use case two steps are followed: i) estimation of the operator's height; ii) adaptation of the robot's position. Once the operator's height has been estimated, the robot's control system receives the information and configures the robot's movement parameters according to the selected scenario.
This adaptation ensures that the robot's movements are ergonomically comfortable for the interacting user. 3 different classes \(c\) are used, based on the operators' height (\(c_{1}<1.68,1.68\leq c_{2}\leq 1.82,c_{3}>1.82\)) (Fig. 2). Each class leads to an adaptable robot response, ensuring that the user can work in an optimal ergonomic state. 2) _Real-Time RULA Estimation_. We perform a real-time joint angle estimation to calculate the RULA score (ranging from 1 to 7, used by ergonomists to evaluate work activities involving upper-body motion [29]). Based on this value, the appropriate warning messages are sent to the operators, informing them if their working pose is ergonomically correct or not (Fig. 4), so as to correct their posture in real-time. 3) _Simulator_. To evaluate the effectiveness of our framework, we have developed an in-house simulation framework from scratch (Fig. 5). The simulator emulates a robotic movement and the corresponding operator's actions in a digital workspace created in Unity3D [30]. Fig. 7 presents an example of a user's action before and after the robot's adaptation. 3 stereo cameras are involved, located in 3 different areas of the working environment, monitoring the operator's actions while performs a collaborative tasks with a KUKA robot. The primary objective of the manufacturing simulator is to create a digital twin, which provides a safe environment for evaluating the developed framework and algorithms. Additionally, the simulator can be also used for further ergonomic analysis in order to optimize the workstation of a real industrial environment, so as to obtain a trajectory of the robot as much as possible adapted to the human. Furthermore, the simulator serves as a validation framework since the ground truth information of the 3D landmarks' locations is not known in the real case. 4) _Ergonomics Evaluation Tool_. The 3D landmarks are stored and are accessible offline (i.e., 3D view, pause, zoom in/out) by ergonomic experts that can evaluate the poses during a specific action, and suggest how it could be improved. Fig. 6 illustrates an example showing how the joint angles are strained during an action. A heatmap visualizes the level of strength (deep blue is the lowest, deep red is the highest). ### _Experimental Results_ Table I summarizes the main outcomes (RMSE values of the 12 landmarks **p**, per each stereo camera \(S_{i}\ \forall\ i\in(1-3)\)) Fig. 4: Safe and wrong pose identification. Fig. 5: Screenshot from the environment of the simulator. from a testing scenario running in the simulator. As we can see, in most of the cases, the proposed GF-based fusion approach achieved results very close to the optimal, while in two cases (**p\({}_{6}\)** and **p\({}_{11}\)**), landmarks were detected slightly better than the optimal camera. To mention here that the evaluation of the fusion algorithm's performance can be achieved only in a simulated environment since in a real-case scenario the ground truth locations of the landmarks, as well as the accuracy of the cameras, can not be known. Fig 8 shows a plot with the mean RULA score of different users with different heights (1.5-2 m.) that perform a very specific task before and after the robot's adjustment. As we can see, only operators with 1.75 m. height has a low RULA score before the adaptation, while it is improved for all of the users after the robot's adaptation. Fig 9 shows the mean score per different body areas for all operators. 
The improvement is more apparent for the cases of "Neck" and "Lower arms". Fig 10 shows how the distribution of the angles per each joint decreases after the robot's adaptation. ## IV Conclusions & Future Work In this work, we introduced a multi-stereo camera pipeline aimed at enhancing the operator's physical ergonomics by optimizing the estimation of 3D landmarks. This is achieved in two ways: 1) by adjusting the movement parameters of the collaborative machine to the unique anthropometrics of the operator and 2) by modifying the operator's motion and actions through real-time visual warnings or expert recommendations, based on RULA score estimation. To facilitate the evaluation of our solution, we also developed a simulator. Future plans include implementation in a real industrial setting and a survey to gauge the effectiveness of the method through real user feedback and additionally the evaluation in more complex collaborative tasks.
2307.12118
Light Sterile Neutrinos in the Early Universe: Effects of Altered Dispersion Relations and a coupling to Axion-Like Dark Matter
We investigate the cosmological consequences of light sterile neutrinos with altered dispersion relations (ADRs) and couplings to an ultra-light, axion-like scalar field. In particular we study the impact on the number of additional, light, fermionic degrees of freedom and primordial nucleosynthesis. While the ADR leads to a new potential term in the Hamiltonian, the coupling to the scalar field results in a time dependent, effective mass contribution. We solve the quantum kinetic equations (QKEs) for the neutrino density matrix and find that in certain parameter regions both new physics effects can individually yield a suppressed population of sterile neutrino species and the correct observed amount of helium in nucleosynthesis. Combining both effects opens up new patches of parameter space excluded by experimental bounds applying to models featuring only one of the effects.
Dominik Hellmann, Heinrich Päs
2023-07-22T16:22:56Z
http://arxiv.org/abs/2307.12118v2
Light Sterile Neutrinos in the Early Universe: Effects of Altered Dispersion Relations and a coupling to Axion-Like Dark Matter ###### Abstract We investigate the cosmological consequences of light sterile neutrinos with altered dispersion relations (ADRs) and couplings to an ultra-light, axion-like scalar field. In particular we study the impact on the number of additional, light, fermionic degrees of freedom and primordial nucleosynthesis. While the ADR leads to a new potential term in the Hamiltonian, the coupling to the scalar field results in a time dependent, effective mass contribution. We solve the quantum kinetic equations (QKEs) for the neutrino density matrix and find that in certain parameter regions both new physics effects can individually yield a suppressed population of sterile neutrino species and the correct observed amount of helium in nucleosynthesis. Combining both effects opens up new patches of parameter space excluded by experimental bounds applying to models featuring only one of the effects. ## 1 Introduction There exist several anomalies in short baseline (SBL) neutrino experiments that point towards one or more sterile neutrinos in the \(\mathcal{O}(1\,\mathrm{eV})\) range [1, 2, 3, 4]. Even after the recently published MicroBooNE data [5, 6, 7, 8] had disfavored the excess observed in the MiniBooNE experiment, sterile neutrinos in this mass range remain an interesting option for Standard Model (SM) extensions [9, 16]. As has been shown many years ago, the simple solution of just adding a few sterile neutrino generations in the correct mass range to the SM is not viable, since it solves the problem only for SBL experiments while long baseline (LBL) and atmospheric neutrino oscillation experiments do not observe any anomalies [10, 11, 12, 13, 14, 15] at the same baseline-energy ratio \(L/E\). Therefore, if the SBL anomalies are indeed explained by light sterile neutrinos, the oscillation probability cannot depend on \(L/E\) in the same way as it would be the case in the standard description. There exist several proposals to resolve this tension (for a recent review see [16]). One such possibility is to include an additional potential term for sterile neutrinos in the Hamiltonian that parametrizes possible new physics effects. As a consequence, an energy dependent mixing pattern arises between active and sterile neutrinos. In this work, we mainly concentrate on a model proposed and investigated in Refs. [17, 18, 19, 20, 21, 22, 23] where the active-sterile mixing becomes maximal at some resonance energy \(E_{\mathrm{res}}\) and nearly vanishes at higher energies. Such altered dispersion relations (ADRs) for example arise if sterile neutrinos can take shortcuts through asymmetrically warped extra dimensions that are unaccessible to particles charged under the SM gauge group [24]. In order to solve the SBL anomalies, one has to require that \(E_{\rm res}\) is located within the energy ranges probed by these experiments such that oscillations incorporating the new heavier mass eigenstate associated with the sterile neutrino can reproduce the observed pattern. At higher energies, the sterile neutrinos decouple from the active ones due to mixing suppression and the standard oscillation pattern is restored in the energy ranges typically probed by LBL experiments. While this modification may help to resolve the tensions encountered between SBL and LBL experiments, there remains a severe tension with cosmological observations. 
For example, the effective number of neutrino generations \(N_{\rm eff}:=3+\Delta N_{\rm eff}\) has been shown to be very close to 3 at the time of recombination according to recent Planck data [25], while it could be altered dramatically if new, light, sterile degrees of freedom are equilibrated in the early universe. Furthermore, the helium fraction produced in big bang nucleosynthesis (BBN), \(Y_{4\,{\rm He}}\), is sensitive to two effects. The first and most significant one is the faster expansion of the universe caused by the presence of additional light degrees of freedom leading to an earlier freeze out of neutron-proton reactions. The second effect is the possible conversion of electron neutrinos to sterile neutrinos also resulting in a potentially earlier freeze out since electron neutrinos are a substantial part of almost all reactions keeping neutrons and protons in equilibrium with the plasma. Sterile neutrinos with ADRs may also ameliorate this cosmological tension by suppressing the population of the sterile flavor at high energies in the early universe, as has been suggested in [20]. Another idea that has been proposed to explain the SBL oscillation anomalies employs an axion-like particle (ALP) coupling to the sterile neutrinos via a Yukawa interaction term [26, 27]. This additional, ultra-light scalar field \(\phi\) is assumed to be an approximately homogeneous condensate behaving like a classical field. Its time evolution is governed by the Klein-Gordon equation in an expanding universe and influences the effective mass of the sterile neutrino and hence the mixing of the sterile and active neutrino. (Note that a similar idea [28] has been employed recently to suppress the population of sterile degrees of freedom in the early universe by means of an additional sterile mass generated by matter effects. Here we focus on models where the additional mass contribution originates exclusively from the evolution of the ALP condensate and independent from other background matter densities.) In this paper we discuss both, the individual cosmological effects of and the interplay between altered dispersion relations and interactions with a light scalar field. In order to estimate the influence of the full model on the chosen set of cosmological quantities, i.e. the effective number of additional neutrino generations \(\Delta N_{\rm eff}\) and the helium fraction \(Y_{4\,{\rm He}}\), we solve the Boltzmann equation for the neutrino density matrix \(\varrho\). This defines the main task of this work, since from \(\varrho\) we can derive the neutrino phase space distributions and energy densities. In order to simplify our framework for this study, we only consider a two flavor system of one active and one sterile flavor. Moreover, we take the active neutrino to be the electron neutrino to be able to draw realistic conclusions about nucleosynthesis. This paper is organized as follows: In chapter 2, we describe the model in detail and define the parameter space and crucial parameters. Chapter 3 is concerned with setting up and numerically solving the Boltzmann equations. Furthermore, we discuss the proper definition of the density matrix. In chapter 4, we present results for the final helium abundances and \(\Delta N_{\rm eff}\) for various benchmark points in the model parameter space. Finally in chapter 5, we draw our conclusions. 
## 2 Sterile Neutrinos with Altered Dispersion Relations coupling to Axion-Like Dark Matter ### Sterile Neutrinos with Altered Dispersion Relations Assuming that the relativistic dispersion relations of neutrinos are altered by some unspecified new physics effect such as the presence of asymmetrically warped extra dimensions1 leads to new terms in the propagation Hamiltonian. Employing the usual ultra-relativistic expansion of the neutrino dispersion relation yields Footnote 1: The asymmetric warping leads to the effect that sterile neutrinos propagating through the extra dimension between two points \(x_{1}\) and \(x_{2}\) on the SM brane need less time to complete their journey than SM neutrinos traveling between those points on geodesics bound to the SM brane [17, 24]. An observer located on the brane will hence come to the conclusion that the usual energy momentum relation does not hold for sterile neutrinos and needs to incorporate an effective potential into \(E^{2}=p^{2}+m^{2}\) in order to describe their behavior using brane bound quantities. \[\mathcal{H}(p) =\frac{1}{2p}M^{\dagger}M+V_{s}(p)\,, \tag{1}\] \[V_{s}(p) =-\frac{bp}{2}\mathbb{P}_{s}\,, \tag{2}\] where \(p\) is the average neutrino momentum, \(b\) is the so called ADR parameter controlling the strength of the ADR effect, \(\mathbb{P}_{s}\) is the sterile neutrino projector and \(M\) is the neutrino mass matrix. In the flavor basis, the last two quantities read \[M =\begin{pmatrix}m_{ee}&m_{es}\\ m_{es}^{*}&m_{ss}\end{pmatrix}\,, \tag{3}\] \[\mathbb{P}_{s} =\begin{pmatrix}0&0\\ 0&1\end{pmatrix}\,, \tag{4}\] for the \(2\times 2\) neutrino system under consideration. Furthermore, we choose all mass parameters to be real valued meaning that there is no CP violation in the active-sterile mixing. To be specific, we use \[m_{ee}\approx 0\,\mathrm{eV}\,,\quad m_{es}\approx 0.12\, \mathrm{eV}\,,\quad m_{ss}\approx 1.1\,\mathrm{eV}\,, \tag{5}\] in accordance with Ref. [27] and fits to SBL data. Since we consider the system in the early universe, we also have to include a potential for the electron neutrino induced by elastic scattering processes modifying the effective masses of the neutrino matter eigenstates. Therefore, the full Hamiltonian reads [29] \[\mathcal{H}(p) =\frac{1}{2p}M^{\dagger}M+V_{e}(p)+V_{s}(p) \tag{6}\] \[V_{e}(p) =-\frac{8\sqrt{2}G_{F}p}{3}\left(\frac{\rho_{e^{\pm}}}{m_{W}^{2} }+\frac{\rho_{\nu_{e}}}{m_{Z}^{2}}\right)\mathbb{P}_{e}\,, \tag{7}\] with the Fermi coupling \(G_{F}\), the electron and neutrino energy densities \(\rho_{\alpha}\) and the \(W,Z\)-Boson masses \(m_{W,Z}\). Here, we neglected the usual MSW potential proportional to the particle-antiparticle asymmetries because we assume them to be small (of the order of the baryon asymmetry). Thus, \(V_{e}\) only contains the more significant higher order elastic scattering contributions from the inverse mass expansion of the \(W,Z\) boson propagators. In order to analyze the resonance structure of this two flavor system, we diagonalize the Hamiltonian with the general ansatz \[U(\theta(p)):=\begin{pmatrix}\cos(\theta(p))&\sin(\theta(p))\\ -\sin(\theta(p))&\cos(\theta(p))\end{pmatrix} \tag{8}\] and find the following relation for the mixing angle \[\tan(2\theta(p))=\frac{2m_{es}(m_{ee}+m_{ss})}{(m_{ss}^{2}-m_{ee}^{2})+2p(V_{s }(p)-V_{e}(p))}\,. \tag{9}\] The mixing becomes maximal as soon as \(\theta=\pi/4\) because then electron and sterile neutrinos equally constitute both mass eigenstates and the mass gap between these two eigenstates is minimal. 
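As a rough numerical illustration of Eq. (9) (not part of the actual analysis), the sketch below evaluates the momentum dependence of the mixing angle for the mass parameters of Eq. (5) and one of the benchmark ADR parameters, neglecting the thermal potential \(V_{e}\) for simplicity; all quantities are taken in eV.

```python
import numpy as np

# Mass parameters of Eq. (5), in eV.
m_ee, m_es, m_ss = 0.0, 0.12, 1.1

def mixing_angle(p, b, V_e=0.0):
    """Eq. (9): active-sterile mixing angle theta(p); p in eV, with the ADR
    potential V_s = -b*p/2 of Eq. (2).  V_e is set to zero for simplicity."""
    V_s = -b * p / 2.0
    num = 2.0 * m_es * (m_ee + m_ss)
    den = (m_ss**2 - m_ee**2) + 2.0 * p * (V_s - V_e)
    return 0.5 * np.arctan2(num, den)

b = 1.0e-15                      # one of the benchmark ADR parameters
for p_mev in (1.0, 10.0, 35.0, 100.0):
    th = mixing_angle(p_mev * 1e6, b)
    print(f"p = {p_mev:6.1f} MeV   theta = {th:.3f} rad")

# Resonance condition of Eq. (10): the denominator vanishes, which for the
# pure ADR case (V_e = 0) happens at p_res = sqrt((m_ss^2 - m_ee^2)/b).
p_res = np.sqrt((m_ss**2 - m_ee**2) / b)
print(f"p_res ≈ {p_res/1e6:.1f} MeV")
```

For momenta well below the resonance the mixing stays close to its vacuum value, while for momenta far above it the angle approaches \(\pi/2\), i.e. the flavor content of the mass eigenstates barely mixes, in line with the suppression discussed below.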
This in turn leads to higher transition rates between active and sterile neutrinos in the energy regime close to the resonance. From Eq. (9) we can infer the resonance condition \[\theta\to\frac{\pi}{4} \Leftrightarrow\tan(2\theta(p))\to\pm\infty\] \[\Rightarrow(m_{ss}^{2}-m_{ee}^{2})+2p(V_{s}(p)-V_{e}(p))\to 0^{\pm}\,. \tag{10}\] The momentum \(p_{\rm res}\) fulfilling the condition (10) is called the resonance momentum. As follows from Eq. (10), the interplay of the two appearing potentials and the neutrino masses determines the resonance structure of the system. This will become important again as soon as we study the effects of resonant conversion between sterile and active neutrinos in combination with collisions in the early universe plasma. ### Sterile Neutrinos coupling to Axion-Like Dark Matter In addition to the ADR effects, we also introduce an ultra-light, real, scalar field. The corresponding scalar particles are assumed to be produced non-thermally and to form a coherent condensate. This condensate behaves like a homogeneous, classical field in an expanding universe and its time evolution is given by [27] \[\phi(t) =\phi_{0}\eta(t) \tag{11}\] \[\eta(t) =1.08\frac{J_{\frac{1}{4}}(m_{\phi}t)}{\sqrt[4]{m_{\phi}t}}\quad \text{with}\quad\lim_{t\to 0}\eta(t)=1\,. \tag{12}\] Here, \(J_{\frac{1}{4}}\) is the regular Bessel \(J\) function of fractional order \(1/4\), \(m_{\phi}\) is the mass of the scalar and \(t\) is the cosmic time. Moreover, we allow for a feeble2 coupling to sterile neutrinos via a Yukawa interaction term in its Lagrangian, Footnote 2: The coupling has to be feeble in order for the scalar field not to thermalize if the sterile neutrinos are thermalized via oscillations. \[\mathcal{L}(\phi,\partial_{\mu}\phi)=\frac{1}{2}(\partial_{\mu}\phi)(\partial^ {\mu}\phi)-\frac{m_{\phi}^{2}}{2}\phi^{2}-\frac{\lambda}{2}\phi\bar{\nu}_{s} \nu_{s}\,. \tag{13}\] Therefore, the sterile neutrinos gain an additional time dependent mass term \[m_{\rm eff}(t)=m_{ss}+\lambda\phi_{0}\eta(t)\,, \tag{14}\] which modifies the mixing between electron and sterile neutrinos. At early times, the mass contribution remains approximately constant leading to a constant sterile neutrino mass \(m_{\rm eff}\approx m_{ss}+\lambda\phi_{0}\). If \(\phi_{0}\) is sufficiently large, the effective mixing of electron and sterile neutrinos is negligible since \(\tan(2\theta)\) is suppressed by the large mass gap \(m_{\rm eff}^{2}-m_{ee}^{2}\) in the denominator of Eq. (9) with \(m_{s}s\) replaced by \(m_{\rm eff}\). Hence, sterile neutrinos are not populated via oscillations at these early times. From the point in time where the Bessel function starts its damped oscillating behavior (\(t\sim 1/m_{\phi}\)) the effective sterile neutrino mass approaches \(m_{ss}\), allowing for significant oscillations again. Therefore, the neutrino oscillation behavior depends on the mass parameter \(m_{\phi}\), i.e. the smaller \(m_{\phi}\) the longer \(\phi(t)\) will remain approximately constant and active-sterile mixing will be suppressed by \(m_{s0}\). ### Central Quantities and Parameter Space In the following analysis, we focus on two major quantities: 1. 
The effective number of additional neutrino generations \[\Delta N_{\rm eff}(t): =\frac{8}{7}\left(\frac{11}{4}\right)^{\frac{4}{3}}\sum_{k\in{ \cal V}}\frac{\rho_{k}(t)}{\rho_{\gamma}(t)}-3\,,\quad{\rm with}\quad{\cal V} =\{e,\mu,\tau,s\}\,.\] (15) Here, \(\rho_{k}(t)\) are the neutrino energy densities at time \(t\) and \(\rho_{\gamma}(t)\) is the corresponding photon density3. Moreover, we need to incorporate factors of \(8/7\) and \((11/4)^{4/3}\) to directly compare fermionic and bosonic energy densities with a temperature deviation of \((11/4)^{1/3}\) to each other. Footnote 3: By using the energy density in the definition of \(\Delta N_{\rm eff}\) we slightly differ from the methods employed in Ref. [27] where the number density is used instead. 2. The helium mass fraction \[Y_{{}^{4}{\rm He}}:=\frac{4n_{\rm He}}{n_{\rm B}}\,,\] (16) with the helium and baryon number densities \(n_{\rm He}\), \(n_{\rm B}\). In order to compare these quantities with observations, these observables have to be computed at different times in the cosmic evolution. The value of the number of additional neutrino generations \(\Delta N_{\rm eff}(t)\) is inferred from the Hubble rate measurement from the CMB [25] and thus needs to be known at the time of the last photon scattering, while the value of the helium fraction \(Y_{{}^{4}{\rm He}}\) has to be computed shortly after BBN. It is, however, sufficient to evaluate \(\Delta N_{\rm eff}\) directly after \(e^{\pm}\) annihilation since from there on it remains constant. This is because after \(e^{\pm}\) annihilation the total neutrino energy density only changes due to the expansion of the universe4 and by normalizing it to the photon energy density we cancel the dependence on the scale factor. The helium fraction remains constant right after BBN, hence it is appropriate to evaluate \(Y_{{}^{4}{\rm He}}\) as soon as deuterium dissociation has ceased to be efficient. Hence, our analysis solely focuses on the era of radiation domination. Finally, we want to discuss the three dimensional parameter space of the model under consideration. It is parameterized by 1. The scalar field mass \(m_{\phi}\) determining when the scalar field starts to oscillate. 2. The amplitude \(m_{s0}:=\lambda\phi_{0}\) of the additional mass contribution for the sterile neutrino. 3. The ADR parameter \(b\). In the following, we assume a range for \(m_{s0}\in[10\,\mathrm{eV},250\,\mathrm{eV}]\) in accordance with Ref. [27]. Furthermore, for \(m_{\phi}\) we choose the interval of possible values to be \([10^{-22}\,\mathrm{eV},10^{-14}\,\mathrm{eV}]\) since for \(m_{\phi}\leq 10^{-22}\,\mathrm{eV}\) the scalar field starts oscillating so late that the effective sterile neutrino mass remains constant during the considered temperatures. Thus, for \(m_{\phi}\lesssim 10^{-22}\,\mathrm{eV}\) our scenario becomes independent of \(m_{\phi}\) and yields the same constraints on \(m_{s0}\) as for \(m_{\phi}\sim 10^{-22}\,\mathrm{eV}\). At values larger than \(m_{\phi}=10^{-14}\,\mathrm{eV}\) the addition of the scalar field to the model becomes meaningless since the sterile neutrino mass already gets close to its _bare_ value at the relevant temperatures and the sterile species equilibrates. For the ADR parameter, we choose benchmark values in \(I_{b}=[0,\infty)\), i.e. we consider anything between zero ADR potential and an arbitrarily large ADR effect. A special point in this parameter range is \(b\sim 10^{-17}\), since this is the order of magnitude needed in order to explain SBL anomalies [21]. 
The reason why we allow for arbitrarily high (low) ADR parameters in the early universe is that there might be some mechanism in the extradimensional realization of ADRs causing the curvature of the extra dimension to change from early times until today and correspondingly implying the ADR parameter to decrease (increase), accordingly. ## 3 Neutrino Quantum Kinetic Equations and Numerical Strategy The density matrix describing the oscillations of neutrinos in the early universe is defined as the thermal average of creation and annihilation operators, \(a_{j}^{(\dagger)}(\vec{p})\) of neutrino mass eigenstates, i.e. [29, 30] \[(2\pi)^{3}\delta^{3}(\vec{p}-\vec{q})\varrho_{jk}(\vec{p}):= \langle a_{k}(\vec{p})^{\dagger}a_{j}(\vec{p})\rangle \tag{17}\] where the thermal average of an operator \(\hat{O}\) is defined using the density operator5\(\hat{\Pi}\) of the thermal system as usual \(\langle\hat{O}\rangle:=\mathrm{Tr}(\hat{\Pi}\hat{O})\). Thus, if we consider the diagonal elements (\(j=k\)) of the density matrix it simplifies to the thermal average of the occupation number operator of \(\nu_{k}\) which in turn results in its phase space distribution function. This already gives some intuition of its physical meaning: The diagonal of the density matrix contains the information how many \(\nu_{k}\) momentum states on average are occupied in the system. Moreover, by inspecting the off-diagonal elements of \(\varrho\), we obtain information about the average correlation between \(\nu_{j}(p)\) and \(\nu_{k}(p)\). Such correlations arise for example due to neutrino oscillations. This makes the density matrix the quantity of choice if we want to consider incoherent particle collisions and oscillations in a thermal environment [31] at the same time. Footnote 5: Despite their very similar names the density operator and density matrix are by no means equivalent quantities. In the following, we will mainly work in the flavor basis instead of the mass basis because the collision terms and Hamiltonian potentials are easier to calculate in the flavor basis. Therefore, we need to transform \(\varrho_{jk}\) into this basis using the neutrino mixing matrix \(U\) from Eq. (8) via \[\varrho_{jk}^{f}=\sum_{l,m=1}^{2}U_{jl}\varrho_{lm}(U^{\dagger})_{mk}\,. \tag{18}\] From now on, we will work with \(\varrho^{f}\) only and drop the superscript \(f\). ### Quantum Kinetic Equations for the Density Matrix The time evolution of \(\varrho\) in an expanding, homogeneous and isotropic universe is governed by a Boltzmann-like, quantum kinetic equation (QKE) [29, 30, 32, 33] \[(\partial_{t}-pH\partial_{p})\varrho(t,p)=-i[\mathcal{H}(t,p),\varrho(t,p)]+ \mathcal{C}[t,p,\varrho]\,, \tag{19}\] where \(p\) is the modulus of the neutrino momentum, \(t\) is the cosmic time, \(H\) is the Hubble rate, \(\mathcal{H}\) is the Hamiltonian from Eq. (6) and \(\mathcal{C}\) is the collison operator. The convectional derivative operator, \(\partial_{t}-pH\partial_{p}\), on the left hand side includes the effect of the expansion of the universe redshifting the neutrino momentum as \(p\propto a^{-1}\), where \(a\) is the scale factor of the Robertson Walker metric. Furthermore, the right hand side contains the commutator of the neutrino Hamiltonian with the density matrix and the collision operator \(\mathcal{C}\). 
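Before moving on, a minimal numerical illustration of this basis change may be helpful. The sketch below rotates the density matrix of a single momentum mode from the mass to the flavor basis via Eq. (18), using placeholder occupation numbers and a fixed mixing angle rather than the actual output of our simulations.

```python
import numpy as np

def mixing_matrix(theta):
    """U(theta) of Eq. (8) for the two-flavor (nu_e, nu_s) system."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

def to_flavor_basis(rho_mass, theta):
    """Eq. (18): rho^f = U rho U^dagger for a single momentum mode."""
    U = mixing_matrix(theta)
    return U @ rho_mass @ U.conj().T

# Toy example for one momentum mode: the lighter mass state is thermally
# occupied, the heavier one empty, no mass-basis coherences (placeholder
# numbers, not output of the actual QKE integration).
f1, f2 = 0.45, 0.0
rho_mass = np.diag([f1, f2]).astype(complex)

theta = 0.11                      # small, vacuum-like mixing angle (illustrative)
rho_flavor = to_flavor_basis(rho_mass, theta)
print("diagonal (occupations of nu_e, nu_s):", np.real(np.diag(rho_flavor)))
print("off-diagonal (e-s coherence):        ", rho_flavor[0, 1])
```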
While the commutator part governs the evolution of \(\rho\) due to neutrino oscillations, the collision part determines how many neutrinos are annihilated, created or scattered to other momentum modes in interactions with the background plasma. Since there are also neutrinos in the background plasma, this last term is non-linear. The collision operator is the sum of the individual collision operators corresponding to a scattering process involving neutrinos \[\mathcal{C}[t,p,\varrho]=\sum_{k\in\mathcal{P}}\mathcal{C}_{k}[t,p,\varrho]\,, \tag{20}\] where the set of all processes \(\mathcal{P}\) contains the interactions given in Tab. 1. Here we neglect neutrino-nucleon scattering processes due to strong Boltzmann suppression of the nucleon distribution functions at temperatures of \(\mathcal{O}(100\,\mathrm{MeV})\) and smaller. \begin{table} \begin{tabular}{l l} \hline k & process \\ \hline 1 & \(\nu e^{-}\leftrightarrow\nu e^{-}\) \\ 2 & \(\nu e^{+}\leftrightarrow\nu e^{+}\) \\ 3 & \(\nu\bar{\nu}\leftrightarrow e^{-}e^{+}\) \\ 4 & \(\nu\nu\leftrightarrow\nu\nu\) \\ 5 & \(\nu\bar{\nu}\leftrightarrow\nu\bar{\nu}\) \\ \hline \end{tabular} \end{table} Table 1: All relevant processes considered in the neutrino collision terms. For example, the collision term for the process \(\nu e^{-}\leftrightarrow\nu e^{-}\) is given by \[\mathcal{C}_{1}[t,p,\varrho]= \frac{8G_{F}^{2}}{p}\int\mathrm{d}^{3}\vec{\pi}_{1}\,\mathrm{d}^{3} \vec{\pi}_{2}\,\mathrm{d}^{3}\vec{\pi}_{3}\,(2\pi)^{4}\delta^{4}(p^{\mu}+p_{1}^ {\mu}-p_{2}^{\mu}-p_{3}^{\mu})\] \[\times\left\{4g_{l}^{2}(p_{\alpha}p_{3}^{\alpha})(p_{1\beta}p_{2}^ {\beta})+4g_{R}^{2}(p_{\alpha}p_{1}^{\alpha})(p_{2\beta}p_{3}^{\beta})-4g_{L}g _{R}(p_{\alpha}p_{2}^{\alpha})m_{e}^{2}\right\}\] \[\times\phi_{\nu e^{-}\nu e^{-}}(t,p^{\mu},p_{1}^{\mu},p_{2}^{\mu}, p_{3}^{\mu})\,, \tag{21}\] where \(\mathrm{d}^{3}\vec{\pi}_{j}:=(2E_{j}(2\pi)^{3})^{-1}\mathrm{d}^{3}\vec{p}_{j}\) denotes the Lorentz invariant phase space measure, \(g_{L}=1/2+\sin^{2}(\theta_{W})\), \(g_{R}=\sin^{2}(\theta_{W})\), \(\theta_{W}\) is the Weinberg angle and the statistical factor \(\phi_{\nu e^{-}\nu e^{-}}\) is given by \[\phi_{\nu e^{-}\nu e^{-}}(t,p^{\mu},p_{1}^{\mu},p_{2}^{\mu},p_{3}^{\mu}):=\phi _{\nu e^{-}\nu e^{-}}^{+}(t,p^{\mu},p_{1}^{\mu},p_{2}^{\mu},p_{3}^{\mu})-\phi _{\nu e^{-}\nu e^{-}}^{-}(t,p^{\mu},p_{1}^{\mu},p_{2}^{\mu},p_{3}^{\mu}) \tag{22}\] with \[\phi_{\nu e^{-}\nu e^{-}}^{+}(t,p^{\mu},p_{1}^{\mu},p_{2}^{\mu},p_ {3}^{\mu}) :=[1-f_{e^{-}}(t,p_{1})]f_{e^{-}}(t,p_{3})\{\mathbb{P}_{e}\varrho (t,p_{2})\mathbb{P}_{e},\mathbb{I}-\varrho(t,p)\}\,,\] \[\phi_{\nu e^{-}\nu e^{-}}^{-}(t,p^{\mu},p_{1}^{\mu},p_{2}^{\mu}, p_{3}^{\mu}) :=f_{e^{-}}(t,p_{1})[1-f_{e^{-}}(t,p_{3})]\{\mathbb{P}_{e}[\mathbb{ I}-\varrho(t,p_{2})]\mathbb{P}_{e},\varrho(t,p)\}\,.\] Furthermore, \(f_{e^{-}}\) denotes the electron phase space distribution. The collision terms are calculated applying the methods described in [33] to the current scenario. In addition to the density matrix for neutrino states, in principle there is also an analogous one for antineutrinos which has to be solved at the same time. In the following, we assume that the lepton-antilepton asymmetry is of the order of the baryon asymmetry and hence negligible compared to the total phase space densities. This implies that the antineutrino density matrix behaves the same as the neutrino density matrix and therefore we just have to consider the QKE for neutrinos. 
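To make the structure of these equations of motion more tangible, the following toy sketch evolves a single momentum mode under the commutator part of Eq. (19) alone, i.e. with the collision term switched off and a placeholder Hamiltonian in arbitrary units; the full calculation of course retains \(\mathcal{C}\) and the momentum-dependent Hamiltonian of Eq. (6).

```python
import numpy as np
from scipy.linalg import expm

# Placeholder flavor-basis Hamiltonian for one momentum mode (arbitrary units)
# and an initial density matrix in which only the electron flavor is occupied.
H = np.array([[0.0, 0.12],
              [0.12, 1.1]], dtype=complex)
rho0 = np.array([[0.45, 0.0],
                 [0.0, 0.0]], dtype=complex)

# With the collision term switched off, the right-hand side of Eq. (19)
# reduces to the commutator -i[H, rho], whose solution for constant H is the
# unitary evolution rho(t) = U rho0 U^dagger with U = exp(-i H t).
t = 5.0
U = expm(-1j * H * t)
rho_t = U @ rho0 @ U.conj().T

print("occupations (nu_e, nu_s):", np.real(np.diag(rho_t)))
print("e-s coherence:           ", rho_t[0, 1])
```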
In order to keep track of the temperature \(T_{\gamma}\) of the electromagnetic plasma, we need to solve the continuity equation of the universe \[\dot{\rho}=-3H(\rho+P)\,, \tag{23}\] where \(\rho\) and \(P\) are the total energy density and total pressure of all radiation species, respectively. By substituting in the equilibrium expressions for electrons and photons and assuming these particles to be in thermal equilibrium6, this equation can be reformulated into a differential equation for \(T_{\gamma}\). Footnote 6: This assumption is valid due to the rapid electromagnetic interactions between photons and electrons roughly until the time of last scattering. ### Numerical Solution of the Quantum Kinetic Equations In order to prepare the numerical solution of the previously introduced QKE (19), we define a new set of dimensionless variables \[x(t,p) :=m_{0}a(t)\,, \tag{24}\] \[y(t,p) :=a(t)p\,, \tag{25}\] where we choose \(m_{0}=1\,\mathrm{MeV}\). Therefore, \(x\) represents the dimensionless scale factor and \(y\) is a dimensionless momentum variable that is not redshifted over time, since \(p\propto a^{-1}\). Moreover, \(x\) takes the role of the reciprocal of the neutrino temperature, which is equal to \(T_{\gamma}\) at early times but deviates from it after neutrino decoupling and electron-positron annihilation. Transformed to these new variables, Eq. (19) assumes the form \[xH\partial_{x}\tilde{\varrho}(x,y)=-i[\mathcal{H}(x,y),\tilde{\varrho}(x,y)]+\mathcal{C}[x,y,\tilde{\varrho}(x,y)]\,, \tag{26}\] with \(\tilde{\varrho}\) being the density matrix expressed in the new set of variables. From now on, we will only refer to this quantity and hence drop the tilde, i.e. \(\tilde{\varrho}\to\varrho\). In order to integrate Eq. (26), in principle we would have to start at \(x_{0}=0\) and set \(\varrho_{ik}(x_{0},y)\equiv 0\). But since the QKEs described in the last section are only valid after the strong phase transition, we have to find a finite starting point \(x_{0}\) matching all criteria of validity of our equations of motion, which are: 1. Active neutrinos are in thermal equilibrium with the electromagnetic plasma. 2. Quarks and gluons are bound into hadrons. 3. Contributions from processes involving muons are negligible. Furthermore, we assume the sterile neutrino density and the correlations between active and sterile neutrinos to be negligible at \(x_{0}\), such that our initial condition for \(\varrho\) is given by \[\varrho(x_{0},y)\approx\begin{pmatrix}(\exp(y)+1)^{-1}&0\\ 0&0\end{pmatrix}\,. \tag{27}\] We found that \(x_{0}=0.01\), i.e. \(T_{\gamma,0}=100\,\mathrm{MeV}\), fulfills these criteria. For more discussion see App. B. We terminate the integration at \(x_{1}\), which we require to fulfill the following criteria: 1. Neutrino interactions are completely frozen out. 2. All free neutrons are bound into light nuclei. 3. The neutrino distribution functions have reached their asymptotic values. 4. The relativistic approximation for the oscillation Hamiltonian and collisions is valid. We found \(x_{1}=50\) to be a suitable final point fulfilling these criteria while still being located in radiation domination. To solve Eq. (26) in the interval \(X:=[x_{0},x_{1}]\), we discretize the momentum space \(\Omega_{y}=[0,\infty)\) and integrate the resulting set of ordinary differential equations. To do so, we choose \(N_{y}\) equidistant points between the minimal and maximal momentum values \(y_{\min},y_{\max}\), at which we cut off the distribution function.
Choosing a minimal value is necessary because the ultra-relativistic approximation employed within the oscillation Hamiltonian is not valid for all momentum values, especially not for \(y=0\). Hence, the minimal \(y\) value is chosen to be \(y_{\min}=10^{-4}\) in order to still yield a reasonably good approximation for the neutrino energy density. Furthermore, the maximal momentum value is chosen to be \(y=20\) since the neutrino distribution at this \(y\) value fulfills \[f_{\nu}(x,y=20)\leq f_{\mathrm{eq}}(x,y=20)=(\exp(y)+1)^{-1}|_{y=20}\approx 2 \cdot 10^{-9}\,, \tag{28}\] which is sufficiently close to zero. Therefore, the total, relative error induced within the neutrino energy density needed to calculate our central quantities is of the order \[\epsilon_{\rm rel}:=\frac{|\rho_{\nu}^{\rm approx}-\rho_{\nu}|}{ \rho_{\nu}}\sim 10^{-6}\,.\] The discretized version of \(\Omega_{y}\) then reads \[\tilde{\Omega}_{y}(N_{y}):=\left\{y_{k}=y_{\rm min}+k\cdot\Delta y \,\Big{|}\,k\in\{0,\ldots,N_{y}-1\},\Delta y=\frac{y_{\rm max}-y_{\rm min}}{N _{y}-1}\right\}\,.\] Hence, we arrive at \(N_{y}\) coupled, ordinary, differential equations for the density matrix values at the chosen momentum nodes plus one equation for the photon temperature. Furthermore, decomposing the Hermitian density matrix into its 4 independent, real components yields a total of \(4N_{y}+1\) coupled differential equations which need to be solved7. Footnote 7: For the numerical details see App. C ### Calculating the helium Abundance In this section, we present how the \({}^{4}\)He mass fraction \(Y_{{}^{4}{\rm He}}\) is estimated from the neutrino distribution functions. Our explanations and notation closely follow the book [36] by Bernstein on kinetic theory in an expanding universe. At first, we define the neutron fraction, i.e. \[X_{n}(t):=\frac{n_{n}(t)}{n_{n}(t)+n_{p}(t)}\,, \tag{29}\] where \(n_{n}\) and \(n_{p}\) are the neutron and proton number densities, respectively. This choice greatly simplifies the Boltzmann equation for \(n_{n}\) since the \(a^{-3}\) dependence of the number densities cancel. Furthermore, we find \[\frac{\rm d}{{\rm d}t}\{a^{3}(t)(n_{n}(t)+n_{p}(t))\}\equiv{\rm const.}\,, \tag{30}\] because the baryon number in a comoving volume, \(N_{B}\approx a^{3}(n_{n}+n_{p})\), is conserved within all relevant processes shortly before neutron freeze out. These processes are \[n+e^{+} \leftrightarrow p+\bar{\nu}_{e} \tag{31}\] \[n+\nu_{e} \leftrightarrow p+e^{-}\] (32) \[n \leftrightarrow p+\bar{\nu}_{e}+e^{-}\,. \tag{33}\] The differential equation governing the evolution of \(X_{n}\) reads [36] \[\frac{\rm dX_{n}(x)}{\rm d}x=\frac{\lambda_{pn}(x)}{xH}(1-X_{n}( x))-\frac{\lambda_{np}(x)}{xH}X_{n}(x)\,. \tag{34}\] in terms of \(x=m_{0}a(t)\). Here \(\lambda_{np}\) is the thermal interaction rate of all processes converting neutrons to protons, while \(\lambda_{pn}\) is that of all processes converting protons to neutrons. They are given and discussed in App. D. We solve Eq. (34) from \(x(T=5\,{\rm MeV})\) where neutrons and protons are still in thermal equilibrium up until \(x(T=0.07\,{\rm MeV})\) where deuterium dissociation ceases to be efficient. During the solution of this differential equation, we interpolate the neutrino distribution functions and temperatures obtained on the grid of \((x,y)\) values using the methods described in the previous subsections. Finally in order to estimate the produced helium abundance, we convert the neutron fraction into the helium mass fraction, i.e. 
\[Y_{{}^{4}{\rm He}}=\frac{4n_{{}^{4}{\rm He}}}{n_{\rm B}}=\frac{2n_{n}}{n_{\rm B }}=2X_{n}\,, \tag{35}\] where we used the assumption that approximately all free neutrons are bound into helium-4 nuclei at the end of BBN. Of course this method is subject to several approximations especially since we neglect the nuclear reaction rates, hence our estimate cannot be compared directly to observations from [38]. Nevertheless, we can inspect if the different models lead to a relative deviation from the expected value which is on the order of experimental uncertainty or if it exceeds this uncertainty significantly. ## 4 Predicted Effective Degrees of Freedom and helium Abundance Now, we present the results for different benchmark points within the 3 dimensional parameter space. In the following, we first discuss our results for the pure ADR scenario, the pure scalar field scenario and afterwards for the combination of both effects. For each chosen benchmark point, we calculate the resulting effective, additional number of degrees of freedom \(\Delta N_{\rm eff}\) and the estimated helium-4 abundance \(Y_{{}^{4}{\rm He}}\). These simulated values for \(\Delta N_{\rm eff}\) are compared to bounds obtained by the Planck collaboration, i.e. * TT + lowE (95% CL): \(N_{\rm eff}=3.00^{+0.57}_{-0.53}\)\(\Rightarrow\)\(\Delta\)**N\({}_{\rm eff}\leq\) 0.57**, * TT, TE, EE + lowE (95% CL): \(N_{\rm eff}=2.92^{+0.36}_{-0.37}\)\(\Rightarrow\)\(\Delta\)**N\({}_{\rm eff}\leq\) 0.28**, * TT + lowE + lensing + BAO (95% CL): \(N_{\rm eff}=3.11^{+0.44}_{-0.43}\)\(\Rightarrow\)\(\Delta\)**N\({}_{\rm eff}\leq\) 0.55**, * TT, TE, EE + lowE + lensing + BAO (95% CL): \(N_{\rm eff}=2.99^{+0.34}_{-0.33}\)\(\Rightarrow\)\(\Delta\)**N\({}_{\rm eff}\leq\) 0.33**. Here the abbreviations TT, TE, EE, lowE, lensing, BAO refer to different measurement techniques / features of the CMB data (i.e. TT \(\hat{=}\) intensity (temperature) only, TE \(\hat{=}\) temperature + curl free polarization data, EE \(\hat{=}\) curl free polarization data only, lowE \(\hat{=}\) curl free polarization data only at low multipole moments, lensing \(\hat{=}\) grav. lensing measurement, BAO \(\hat{=}\) baryon acoustic oscillations). Afterwards, we turn towards the helium abundance and compare its deviation for different benchmark points from the expected SM value to the experimental uncertainties on the helium mass fraction from Ref. [38], i.e. \(\sigma_{{}^{4}{\rm He}}=0.004\). ### \(\Delta N_{\rm eff}\) and \(Y_{{}^{4}{\rm He}}\) in the pure ADR Scenario The values of \(\Delta N_{\rm eff}\) obtained after the full integration of QKEs for different ADR parameters are shown in Tab. 2. Here, we see that the resulting \(\Delta N_{\rm eff}\) values for the smallest ADR parameters exceed all Planck bounds by far while bigger \(b\) values lead to excellent agreement with all four bounds \(\Delta N_{\rm eff}<\{0.28,0.33,0.55,0.57\}\). Moreover, we can infer that for small \(b\) values the no-ADR scenario is resembled and sterile neutrinos are equilibrated via oscillations. Turning \(b\) up to much larger values (\(b\gtrsim 10^{-6}\)) leads to a decrease in \(\Delta N_{\rm eff}\) and sterile neutrinos are not close to equilibrium anymore. In Fig. 1, we show the final sterile neutrino distributions compared to the equilibrium distribution for four \(b\) values differing by many orders of magnitude to emphasize this statement. The behavior described above can be explained by considering the resonance structure of each parameter configuration. In Fig. 
2, we show the resonance momentum \(y_{\rm res}\) for multiple ADR parameters in the temperature range \(T_{\nu}\in\mathcal{T}_{\nu}:=[3,100]\,{\rm MeV}\). For larger ADR parameters the resonance curve passes through the relevant \(y\) region from above, leading to resonantly enhanced \(\nu_{e}\)-\(\nu_{s}\) conversion. Momentum modes located underneath the respective resonance curve neither experience mixing enhancement nor mixing suppression and approximately behave as in the no-ADR scenario. On the other hand, momentum modes well above the resonance curve, i.e. \(y\gg y_{\rm res}(T_{\nu})\ \forall T_{\nu}\in\mathcal{T}_{\nu}\), are subject to effective mixing suppression since \(\nu_{1}\approx\nu_{e}\) and \(\nu_{2}\approx\nu_{s}\). Thus, \(\nu_{s}\) remains unpopulated in this regime. Note that the strength of this suppression/enhancement effect is momentum dependent since the active as well as the sterile potentials are proportional to \(y\). Hence, the mixing of neutrinos with small momenta is closer to the vacuum case, leading to a faster population of the corresponding sterile modes even if the resonance momentum is much smaller. \begin{table} \begin{tabular}{c|c c c c c c c} \(b\) & 0 & \(10^{-17}\) & \(10^{-15}\) & \(10^{-12}\) & \(10^{-6}\) & \(10^{-4}\) & \(10^{-2}\) \\ \hline \(\Delta N_{\rm eff}\) & 1.36 & 1.36 & 1.36 & 1.38 & 0.04 & 0.04 & 0.04 \\ \end{tabular} \end{table} Table 2: Estimated additional light degrees of freedom \(\Delta N_{\rm eff}\) at \(x=50\) for different ADR parameters \(b\). Figure 1: Final (\(x=50\)) \(\nu_{s}\) phase space densities (solid) as a function of the comoving momentum \(y\) for various ADR parameters \(b\in\{0,10^{-17},10^{-12},10^{-6}\}\) compared to the equilibrium density (dashed). But since \(\Delta N_{\rm eff}\) depends more strongly on the high momentum region, this does not affect its estimate significantly. Now one could ask what happens as soon as multiple (high) momentum modes pass through one or two resonances, as is the case for e.g. \(b=10^{-15}\). A sterile \(y\) mode can be strongly populated by passing through a resonance, but it is even more important how large the long-term mixing around this resonance is. If the mixing is sufficiently smaller than vacuum mixing, the respective sterile mode experiences some enhancement by the resonance, but it will not reach its equilibrium value. On the other hand, if the mixing is very large already before or after passing the resonance, the sterile momentum mode will be equilibrated regardless of the resonance. Thus, what matters is only whether the relevant momentum modes are subject to mixing suppression for sufficiently long or whether they are closer to the (large) vacuum mixing. This can be seen in Fig. 3, which shows the temperature evolution of the modulus \(|\varrho_{es}|:=\sqrt{{\rm Re}(\varrho_{es})^{2}+{\rm Im}(\varrho_{es})^{2}}\) for \(y=5\). The off-diagonal element \(\varrho_{es}\) is important since it contains information about the energy transfer from \(\nu_{e}\) to \(\nu_{s}\). The plot shows that shortly after \(T={\cal O}(100\,{\rm MeV})\), for models with vanishing or small ADR parameters, a dip occurs in \(\varrho_{es}\), leading to a significant enhancement of \(\rho_{ss}\) afterwards. Considering the curve for \(b=10^{-15}\), we see that the resonance around \({\cal O}(7\,{\rm MeV})\) leads to a significant impact on \(|\varrho_{es}|\) but does not lead to a significant increase of the sterile neutrino density, since it has already reached thermal equilibrium.
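The \(\Delta N_{\rm eff}\) values quoted in Tab. 2 can be related to the final momentum distributions of Fig. 1 by a single integral over the comoving momentum. The short Python sketch below illustrates this; it assumes, as a rough illustration only (the full analysis evolves the complete density matrix together with the photon temperature), that the extra radiation is estimated as the sterile comoving energy density \(\rho\propto\int{\rm d}y\,y^{3}f(y)\) divided by that of one fully thermalised neutrino species, evaluated on the same grid \(y\in[10^{-4},20]\) introduced above. The toy input distributions are hypothetical.

```python
import numpy as np

# Momentum grid matching the discretisation above: y in [1e-4, 20].
y = np.linspace(1e-4, 20.0, 200)
f_eq = 1.0 / (np.exp(y) + 1.0)            # equilibrium Fermi-Dirac density

def delta_neff(f_sterile):
    """Rough Delta N_eff from a final sterile distribution f_s(y).

    Illustrative assumption (not the full pipeline of the text): the extra
    radiation is the sterile comoving energy density, rho ~ int dy y^3 f(y),
    divided by that of one fully thermalised neutrino species.
    """
    return np.trapz(y**3 * f_sterile, y) / np.trapz(y**3 * f_eq, y)

print(delta_neff(f_eq))          # fully thermalised sterile sector -> 1.0
print(delta_neff(0.03 * f_eq))   # strongly suppressed sector -> 0.03
```

In this picture, a sterile distribution close to the equilibrium curve of Fig. 1 gives \(\Delta N_{\rm eff}\) near one, while the strongly suppressed large-\(b\) distributions give values close to zero, in line with the trend of Tab. 2.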
Moreover, we see that for \(b\geq 10^{-6}\) the off-diagonal matrix elements stay much closer to zero due to sizeable mixing suppression. Despite the fact that the excess of light degrees of freedom, \(\Delta N_{\rm eff}\), is a good estimator for the degree of population of sterile neutrinos, it is not sufficient to rely on this number alone. After neutrino decoupling around \(T_{\gamma}={\cal O}(3\,{\rm MeV})\), the active-sterile oscillations could lead to a depletion of the density of active neutrinos, which has a strong impact on the neutron-proton equilibrium, since fewer \(\nu_{e}\) lead to an early freeze out of \(n\)-\(p\) reactions and hence to a larger neutron abundance. This leads to an excess of helium that contradicts the very good agreement between the predictions of standard cosmology and cosmological observations. Figure 2: Resonance momentum \(y_{\rm res}\) plotted against the neutrino temperature \(T_{\nu}\approx T_{\gamma}\) for various ADR parameters \(b\in\{10^{-17},10^{-15},10^{-12},10^{-6},10^{-4},10^{-2}\}\). The relevant momentum region \(10^{-4}\leq y\leq 20\) is shown as the green shaded region. All modes above the corresponding resonance curve are subject to strong mixing suppression. Hence, we need to carefully estimate how much helium is produced for our chosen parameter configurations. To do so, we now estimate the impact of the \(\nu_{e}\) depletion on nucleosynthesis by proceeding as described in Sec. 3.3 and solve the Boltzmann equation for the neutron fraction \(X_{n}=n_{n}/(n_{n}+n_{p})\), which can then be translated into the helium mass fraction \(Y_{{}^{4}{\rm He}}\simeq 2X_{n}\) at \(T_{\gamma}\approx 0.07\,{\rm MeV}\). The final helium abundances for different ADR parameters are shown and compared to the expected standard value of \(Y^{\rm std}_{{}^{4}{\rm He}}\approx 0.227\) in Tab. 3. Here, we observe the same consistent picture as for our \(\Delta N_{\rm eff}\) observable. Small ADR parameters lead to a deviation \(\Delta Y_{{}^{4}{\rm He}}={\cal O}(0.01)\) from the SM expectation, much larger than the experimental uncertainty \(\sigma_{{}^{4}{\rm He}}\sim 0.004\) of the observable \(Y_{{}^{4}{\rm He}}\). On the other hand, very large ADR parameters, i.e. \(b\gtrsim 10^{-6}\), lead to discrepancies much smaller than \(\sigma_{{}^{4}{\rm He}}\) and hence would be in agreement with experiment. The argument here is exactly the same as before, since the depletion of \(\nu_{e}\) solely follows from the mixing behavior, which in turn is dominantly influenced by the resonance structure. In Fig. 4, we compare the temperature evolution of \(X_{n}\) for two different scenarios, i.e. for \(b=10^{-17}\) and for \(b=10^{-4}\). Around the temperature of \({\cal O}(1\,{\rm MeV})\), \(X_{n}\) departs from equilibrium, as expected, in each scenario. However, \(X_{n}(b=10^{-17})\) leaves equilibrium a little earlier and adopts a higher value compared to the SM curve after neutron freeze out. Figure 3: Time evolution of the off-diagonal density matrix element \(|\varrho_{es}(y=5)|\) for several ADR parameters. The \(y=5\) mode undergoes a resonance at \(T_{\nu}\approx 7\,{\rm MeV}\) for \(b=10^{-15}\) leading to a delayed increase of \(|\varrho_{es}|\) afterwards. The temperature at which this resonance occurs is marked by the orange dashed vertical line. At the temperature at which the remaining free neutrons
\begin{table} \begin{tabular}{c|c c c c c c} \(b\) & 0 & \(10^{-17}\) & \(10^{-15}\) & \(10^{-12}\) & \(10^{-6}\) & \(10^{-4}\) & \(10^{-2}\) \\ \hline \(Y_{\rm 4_{He}}\) & 0.235 & 0.235 & 0.235 & 0.227 & 0.227 & 0.227 \\ \end{tabular} \end{table} Table 3: Estimated helium abundances for different ADR parameters \(b\) compared to the standard value \(Y^{\rm SM}_{\rm 4_{He}}=0.227\pm 0.004\). are bound into helium nuclei, this leads to a higher helium abundance. In the large ADR parameter scenario, i.e. \(b=10^{-4}\), \(X_{n}\) essentially stays in agreement with the SM curve for the relevant temperatures. Therefore, the corresponding helium abundance would also be in agreement with the SM expectation within the experimental margin of error. This is due to the negligible presence of \(\nu_{s}\) and much higher interaction rates \(\lambda_{np},\lambda_{pn}\) compared to the previous case caused by the non-dilution of the \(\nu_{e}\) density. ### \(\Delta N_{\rm eff}\) and \(Y_{4{\rm He}}\) in the ALP only Scenario Next, we look at the behavior of ALP only models, where we turn off the ADR potential and turn on the coupling of the ALP field to the sterile neutrino. This results in a time dependent, additional mass for the sterile neutrino mass matrix element, \(m_{ss}\to m_{ss}+m_{s0}\eta(t)\), where \(\eta\) is given by Eq.(12). We integrate the QKEs for different parameter values shown in Tab. 4. The obtained results for \(\Delta N_{\rm eff}\) show the clear pattern that higher \(m_{s0}\) values and lower \(m_{\phi}\) values are favored by experimental observation. We can explain this by looking at the behavior of the time dependent part \(\propto\eta\) of the sterile \begin{table} \begin{tabular}{c|c c c c c c c c} \(m_{s0}\) / eV & 50 & & & 100 & & & 250 & \\ \(m_{\phi}\) / eV & \(10^{-20}\) & \(10^{-16}\) & \(10^{-12}\) & \(10^{-20}\) & \(10^{-16}\) & \(10^{-12}\) & \(10^{-20}\) & \(10^{-16}\) & \(10^{-12}\) \\ \hline \(\Delta N_{\rm eff}\) & 1.23 & 1.22 & 1.30 & 0.80 & 0.80 & 0.93 & 0.26 & 0.26 & 0.30 \\ \end{tabular} \end{table} Table 4: Estimated additional light degrees of freedom \(\Delta N_{\rm eff}\) at \(x=50\) for different scalar field parameters. Figure 4: Evolution of the neutron abundance \(X_{n}\) with respect to the photon temperature \(T_{\gamma}\) for two different ADR parameters, i.e. \(b\in\{10^{-17},10^{-4}\}\), after neutron freeze out. The dashed black curve represents the SM expectation while the two solid curves correspond to the benchmark scenarios, respectively. mass matrix element which is shown in Fig. 5 and the \(\tan(2\theta)\) of the effective mixing angle shown in Fig. 6. The first plot shows that if the scalar field is too heavy, it starts to oscillate earlier leading to a decrease of the additional mass contribution of the sterile neutrino. According to the latter figure, smaller sterile masses lead to larger effective mixing angles which in turn lead to faster population of the sterile species. Therefore, we need a large \(m_{s0}\) and a small \(m_{\phi}\) to reconcile the existence of the sterile species with experimental observations. Figure 5: Behavior of the normalized scalar vev \(\eta\) for different ALP mass parameters \(m_{\phi}\) at temperatures in the integration range of the QKEs. Figure 6: Effective mixing angle between \(\nu_{e}\) and \(\nu_{s}\) for different values of \(m_{s0}\) at temperatures in the integration range of the QKEs. Here we fixed \(m_{\phi}=10^{-22}\,\)eV so that \(\eta\) remains constant for all temperatures of interest. 
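The statement that a larger sterile mass entry makes the mass matrix dominantly diagonal, and therefore suppresses active-sterile mixing, can be made concrete with a minimal vacuum-only sketch: diagonalising a \(2\times 2\) mass matrix while increasing the \(m_{s0}\,\eta\) contribution. This deliberately omits the thermal and ADR potentials that enter the effective in-medium angle shown in Fig. 6, and all numerical values below are hypothetical.

```python
import numpy as np

def vacuum_mixing_angle(m_es, delta_m):
    """Mixing angle of a 2x2 symmetric mass matrix with off-diagonal element m_es
    and diagonal splitting delta_m = m_ss - m_ee: tan(2 theta) = 2 m_es / delta_m.

    Vacuum-only illustration; the thermal and ADR potentials entering the full
    in-medium angle are not included, and all numbers are hypothetical.
    """
    return 0.5 * np.arctan2(2.0 * m_es, delta_m)

# Fixed off-diagonal element; grow the scalar-induced contribution m_s0 * eta(t)
# to the sterile diagonal entry, which dominates delta_m.
for m_extra in (1.0, 10.0, 100.0, 250.0):          # eV, illustrative values
    theta = vacuum_mixing_angle(m_es=1.0, delta_m=m_extra)
    print(f"m_s0*eta = {m_extra:6.1f} eV -> sin^2(2 theta) = {np.sin(2 * theta)**2:.2e}")
```

As \(m_{s0}\,\eta\) grows, \(\sin^{2}(2\theta)\) drops roughly as \((2m_{es}/(m_{s0}\eta))^{2}\), which is the behaviour invoked above to keep \(\nu_{s}\) out of equilibrium.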
These conjectures are supported by Figs. 7a and 7b, showing the off-diagonal element \(|\varrho_{es}(y=5)|\). There we see that after the scalar field has started to oscillate, the correlations between active and sterile neutrinos increase, whereas for a smaller \(m_{s0}\) parameter the correlations are overall larger. Now, we consider the obtained helium abundances for the different benchmark points. In Tab. 5 the resulting values for \(Y_{4\,\mathrm{He}}\) are shown. Here the same pattern arises as for \(\Delta N_{\mathrm{eff}}\), with the difference that our BBN observable is more sensitive to the ALP mass. For masses \(m_{\phi}\gtrsim 10^{-18}\,\mathrm{eV}\) the condensate oscillates during or before nucleosynthesis, resulting in a depleted electron neutrino density. The earlier this oscillation occurs, the more \(f_{\nu_{e}}\) is depleted and the larger the deviation of the final helium abundance from the SM value becomes, as we can see from Tab. 5. Only the benchmark points with \(m_{\phi}\lesssim 10^{-16}\,\mathrm{eV}\) and \(m_{s0}\gtrsim 100\,\mathrm{eV}\) are within the uncertainty around the standard value. To underline this statement we also show the evolution of the neutron fraction for the benchmark points which are least and most compatible with observations in Fig. 8. Note that the red curve deviates more significantly from the SM expectation than the green one, which describes the most compatible parameter configuration under consideration. The departure of the model curve for the least compatible configuration, caused by the depletion of electron neutrinos, is much more prominent than the almost negligible departure in the most compatible case. As we have seen, a larger sterile mass matrix element due to the coupling of a scalar field suppresses the mixing of \(\nu_{e}\) and \(\nu_{s}\) such that the resulting helium fraction and effective degrees of freedom become compatible with experimental bounds. Next, we consider the combined ADR and scalar field scenario. \begin{table} \begin{tabular}{c|c c c c c c c c c} \(m_{s0}\) / eV & \multicolumn{3}{c}{50} & \multicolumn{3}{c}{100} & \multicolumn{3}{c}{250} \\ \(m_{\phi}\) / eV & \(10^{-20}\) & \(10^{-16}\) & \(10^{-12}\) & \(10^{-20}\) & \(10^{-16}\) & \(10^{-12}\) & \(10^{-20}\) & \(10^{-16}\) & \(10^{-12}\) \\ \hline \(Y_{4\,\mathrm{He}}\) & 0.234 & 0.234 & 0.235 & 0.232 & 0.232 & 0.233 & 0.229 & 0.229 & 0.229 \\ \end{tabular} \end{table} Table 5: Estimated helium abundance at \(x=50\) for different scalar field parameters compared to the standard value \(Y_{4\,\mathrm{He}}^{\mathrm{SM}}=0.227\pm 0.004\). Figure 7: As Fig. 3 for the ALP only scenario, for fixed \(m_{\phi}\) (left) and for fixed \(m_{s0}\) (right). Shown are three different parameter configurations per panel. ### \(\Delta N_{\rm eff}\) and \(Y_{4{\rm He}}\) in the combined ADR and ALP Scenario In the ADR only case, we have concluded that for sufficiently large ADR parameters \(b\) the equilibration of \(\nu_{s}\) is suppressed. Choosing smaller ADR parameters leads to a strong population of \(\nu_{s}\) and hence large corrections to \(N_{\rm eff}\) that exceed experimental bounds. We expect that in the combined scenario even small ADR parameters can be brought into agreement with experiment by invoking the mixing suppression by the scalar field \(\phi\) from Sec. 2.2. This expectation is further substantiated by inspecting Fig. 9, which shows an (on average) smaller \(|\varrho_{es}|\) than for small \(b\) values in the ADR only case.
We confirm this expectation by choosing \(m_{\phi}\sim 10^{-22}\,{\rm eV}\) and \(m_{s0}\sim 250\,{\rm eV}\). For this value of \(m_{\phi}\) the scalar field starts oscillating long after nucleosynthesis has ceased and leads to a constant addition to the sterile neutrino mass during the time of integration. In Tab. 6, we display the values of \(\Delta N_{\rm eff}\) again for \(b\in\{0,10^{-17},10^{-15},10^{-12}\}\) with the addition of the scalar field. We see that the values of \(\Delta N_{\rm eff}\) decrease significantly compared to the pure ADR scenario which is due to mixing suppression because of the additional sterile mass as discussed in the last subsection. Now the corrections to \(N_{\rm eff}\) are in agreement with all bounds from Planck, i.e. \(\Delta N_{\rm eff}<\{0.28,0.33,0.55,0.57\}\). Thus, even cases where the ADR scenario alone does not explain cosmological observation an additional mass contribution for the sterile neutrino can reconcile them with experiment. For comparison with the pure ADR case, we show the final neutrino distributions for \(b=10^{-17}\) and \(b=10^{-12}\) in Fig. 10. Of course this suppression effect disappears if one allows for higher scalar masses such that the asymptotic sterile mass at \({\cal O}(1\,{\rm MeV})\) is reached before BBN or neutrino freeze out or if \(m_{s0}\) is too \begin{table} \begin{tabular}{c|c c c c} \(b\) & 0 & \(10^{-17}\) & \(10^{-15}\) & \(10^{-12}\) \\ \hline \(\Delta N_{\rm eff}\) & 0.25 & 0.25 & 0.25 & 0.26 \\ \end{tabular} \end{table} Table 6: Estimated additional light degrees of freedom \(\Delta N_{\rm eff}\) at \(x=50\) for different ADR parameters \(b\) and \(m_{\phi}\sim 10^{-22}\,{\rm eV}\) and \(m_{s0}\sim 250\,{\rm eV}\). Figure 8: As Fig. 4, for ALP only scenarios after neutron freeze out. small in the first place. Consider for example the benchmark points \[p_{1} :=(m_{\phi},m_{s0},b)=(10^{-22}\,\mathrm{eV},\mathbf{50}\,\mathrm{eV},10^{-17})\,, \tag{36}\] \[p_{2} :=(m_{\phi},m_{s0},b)=(\mathbf{10^{-12}}\,\mathrm{eV},250\, \mathrm{eV},10^{-17})\,, \tag{37}\] Figure 10: As Fig. 1 for the combined ADR and ALP scenario, for two different ADR parameters. The shown \(b\) values are excluded in the ADR only scenario but are reconciled with experiment due to the addition of the scalar field. Figure 9: Comparison of the evolution of \(|\varrho_{es}(y=5)|\) for the ADR only scenarios \(b\in\{10^{-17},10^{-6}\}\) versus the combined scenario with \(m_{\phi}=10^{-22}\,\mathrm{eV}\), \(m_{s0}=250\,\mathrm{eV}\) and \(b=10^{-17}\). The ADR only plots are shown as dashed lines, while the line corresponding to the combined scenario is solid. Compare Figs. 3 and 7 for the individual ADR and scalar field cases. here we adopted a smaller value for \(m_{s0}\) for \(p_{1}\) and a bigger value for \(m_{\phi}\) for \(p_{2}\), respectively. Integrating the QKEs for the first configuration yields \[\Delta N_{\rm eff}(p_{1})\approx 1.23\,, \tag{38}\] which is slightly smaller than the original result \(\Delta N_{\rm eff}(b=10^{-17})\approx 1.36\) in the pure ADR case, but is still much larger than for \(m_{s0}=250\,{\rm eV}\). Hence, as expected, a smaller effective sterile mass contribution leads to a reduced mixing suppression. Increasing the scalar field mass as specified in Eq. (37) for the second benchmark point, we get \[\Delta N_{\rm eff}(p_{2})\approx 0.30\,, \tag{39}\] which is slightly higher than the value we obtained for \(m_{\phi}=10^{-22}\,{\rm eV}\) but still in agreement with three out of four Planck bounds. 
This is because after neutrino freeze out \(\Delta N_{\rm eff}\) remains constant and \(\phi(t)\) only starts oscillating shortly before. Thus, we obtain a similar result for \(\Delta N_{\rm eff}(p_{2})\) as for \(m_{\phi}=10^{-22}\,{\rm eV}\). For \(m_{\phi}\gg 10^{-12}\,{\rm eV}\) the scalar field oscillates long enough before neutrino freeze out to imply \(\Delta N_{\rm eff}(p_{2})\to\Delta N_{\rm eff}(b)\). After having discussed the combined scenario for \(\Delta N_{\rm eff}\), we now turn towards our estimate of the helium abundance. Here, we expect the same mechanism to apply as in Sec. 4.2: assuming a scalar field mass of \(m_{\phi}\lesssim 10^{-18}\,{\rm eV}\), the oscillatory behavior of the scalar field starts after \(t_{\rm BBN}\sim 300\,{\rm s}\). Hence, active-sterile oscillations are suppressed, depending on the additional mass \(m_{s0}\), during the process of neutron freeze out. We can observe this effect in Fig. 11, showing the evolution of \(X_{n}(T)\) for \(m_{\phi}=10^{-22}\,{\rm eV}\), \(m_{s0}=250\,{\rm eV}\) and \(b=10^{-17}\) compared to the evolution in the pure ADR case with the same \(b\) value. After adding the scalar field to the pure ADR model, one can no longer distinguish the SM from the model curve within the given margin of error \(\sigma_{{}^{4}{\rm He}}=0.004\). This holds for all chosen ADR parameter configurations with low \(m_{\phi}\) and high \(m_{s0}\), cf. Tab. 7. Hence, in the combined scenario the deviation from the SM value is even smaller than the experimental uncertainty, \(\Delta Y_{{}^{4}\text{He}}\sim 0.002<\sigma_{{}^{4}\text{He}}\). Figure 11: As Fig. 4, comparing the pure ADR case with \(b=10^{-17}\) with the combined ADR and ALP scenario. This effect gets weaker if we choose a lower \(m_{s0}\) or increase the mass of the scalar field up to values of \(m_{\phi}\gg 10^{-18}\,\text{eV}\). For higher scalar masses, the ALP condensate already oscillates at times before neutron-proton interactions freeze out, and hence active-sterile oscillations are not suppressed anymore regardless of the value of \(m_{s0}\). In order to demonstrate this effect, we again consider the benchmark points \(p_{1}\) and \(p_{2}\) as in the \(\Delta N_{\text{eff}}\) analysis. For the first point with lower \(m_{s0}\), we again expect a less efficient mixing suppression within the whole integration interval. This is indeed what we get after integrating the QKEs for the neutrino density matrix and the neutron fraction. The final helium abundance for \(p_{1}\) amounts to \[Y_{{}^{4}\text{He}}(p_{1})\approx 0.234>0.229\,, \tag{40}\] which is larger than the value for \(m_{s0}=250\,\text{eV}\) and would be observable in experiments. For the second benchmark point with the higher scalar mass, we again obtain a lower value \[Y_{{}^{4}\text{He}}(p_{2})\approx 0.229\,. \tag{41}\] By adopting even larger scalar mass parameters, we expect this value to approach the pure ADR scenario. As a consequence, we would end up with more helium-4. This effect can be seen in Fig. 12. As expected, the \(\nu_{e}\) density decreases after \(t\sim m_{\phi}^{-1}=10^{14}\,\text{eV}^{-1}\) while the sterile neutrino density increases, compared to the benchmark point \(p_{\text{ref}}\) with \(m_{\phi}=10^{-22}\,\text{eV}\) and all other parameters equal to those of \(p_{2}\).
As a consequence, the higher \(\nu_{s}\) density leads to an earlier departure of \(X_{n}\) from equilibrium, while the depleted \(\nu_{e}\) density causes an even bigger discrepancy between the SM curve and the corresponding model curve due to the smaller \(n-p\) reaction rates. For higher \(m_{\phi}\) this happens even earlier leading to a more significant increase in \(f_{\nu_{s}}\) (and a corresponding decrease in \(f_{\nu_{e}}\)). ## 5 Conclusions In this paper we have analyzed the impact of altered dispersion relations (ADRs) and couplings to an axion-like scalar field on cosmological bounds for sterile neutrinos. Both effects have the potential to ameliorate such bounds, depending on the concrete choice of parameters. In particular, we conclude that ADR parameters in the range needed to give an explanation for _short baseline experiments_, i.e \(b=\mathcal{O}(10^{-17})\), alone are not sufficient to suppress \(\nu_{s}\) population in the early universe. We show this by calculating the effective number of additional light degrees of freedom for these parameters and by estimating the amount of helium produced during BBN. This estimate results in the values \[\Delta N_{\text{eff}}(b =10^{-17}) \approx 1.36\,, \tag{42}\] \[Y_{{}^{4}\text{He}}(b =10^{-17}) \approx 0.235\,. \tag{43}\] \begin{table} \begin{tabular}{c|c c c c} \(b\) & 0 & \(10^{-17}\) & \(10^{-15}\) & \(10^{-12}\) \\ \hline \(Y_{{}^{4}\text{He}}\) & 0.229 & 0.229 & 0.229 & 0.229 \\ \end{tabular} \end{table} Table 7: Estimated helium abundances for different ADR parameters \(b\) and \(m_{\phi}\sim 10^{-22}\,\text{eV}\) and \(m_{s0}\sim 250\,\text{eV}\) compared to the standard value \(Y_{{}^{4}\text{He}}^{\text{SM}}=0.227\pm 0.004\). Both quantities adopt values higher than allowed by experimental observations, i.e. \(\Delta N_{\rm eff}\stackrel{{\rm Planck}}{{\leq}}0.33,0.57\) and \(Y_{\rm 4He}=0.227_{\rm SM\ pred.}\pm 0.004\). In contrast, much larger ADR parameters, like \(b\gtrsim 10^{-6}\), can indeed suppress the \(\nu_{\rm s}\) population sufficiently to make light sterile neutrinos compatible with early universe cosmology. For this range of parameters most momentum modes experience mixing suppression because propagation and flavor eigenstates almost coincide with each other resulting in the absence of active sterile oscillations. Hence, we obtain \[\Delta N_{\rm eff}(b \gtrsim 10^{-6}){<}0.04\,, \tag{44}\] \[Y_{\rm 4He}(b \gtrsim 10^{-6}){=}0.227\,. \tag{45}\] Considering \(b\)-values differing from the short baseline anomaly scenario may be, for example, motivated by effects making the ADR parameter dependent of the cosmic evolution. Moreover by adding the influence of an axion-like scalar field \(\phi\) changing the sterile neutrino mass \(m_{ss}\to m_{ss}+m_{s0}\eta(m_{\phi},t)\) via a Yukawa coupling, even very small ADR parameter values can be brought into agreement with experiment. For a scalar field mass \(m_{\phi}=10^{-22}\,\)eV and additional mass amplitude \(m_{s0}=250\,\)eV, we obtain \[\Delta N_{\rm eff}(m_{\phi} =10^{-22}\,\text{eV},m_{s0}=250\,\text{eV},b=10^{-17}) \approx 0.26\,, \tag{46}\] \[Y_{\rm 4He}(m_{\phi} =10^{-22}\,\text{eV},m_{s0}=250\,\text{eV},b=10^{-17}) \approx 0.229\,, \tag{47}\] which is compatible with observations. While this analysis was carried out for a two neutrino generations framework, i.e. one active and one sterile neutrino, our findings are expected to hold even in scenarios involving greater numbers of generations. 
This expectation is justified as it has been found that the active-sterile decoupling can be a generic effect of the model [21] at energies higher than the resonance energies. Also the additional sterile mass from the \(\nu_{s}\)-\(\phi\) coupling is expected to have the same effect if more neutrino generations are present. Increasing the diagonal elements of the mass matrix corresponding to the sterile species makes it dominantly diagonal resulting in suppressed active-sterile mixing. Furthermore, in this analysis we have neglected parameter configurations leading to an early equilibrated species, i.e. at around \(T=\mathcal{O}(100\,\mathrm{MeV})\), for which the integration of the QKEs had to be started much earlier at \(T=\mathcal{O}(1\,\mathrm{GeV})\) as well as finite temperature QED corrections. The former is justified since the corresponding parameter configurations are not of interest to us since they violate cosmological bounds by definition and would be excluded anyway. Moreover, due to our findings described in Sec. 3.2 and 4, we don't expect that our conclusions will be different if such an analysis is carried out. Finite temperature QED corrections are important for precision predictions of the total number of ultra relativistic degrees of freedom [39] since they can lead to an increase in \(\Delta N_{\mathrm{eff}}\) on the order of magnitude of \(\Delta N_{\mathrm{eff}}=\mathcal{O}(0.1)\). Here we were solely interested in the impact of light sterile species on the number of additional neutrino generations being mainly influenced by the oscillation Hamiltonian. Finite temperature QED corrections only have a subleading influence on sterile neutrinos because they do not interact directly with the electromagnetic plasma. In summary we find that the ADR-only scenario can only explain cosmological observations if one assumes \(b\gtrsim 10^{-6}\). The ALP only scenario works well for \(m_{s0}\gtrsim 100\,\mathrm{eV}\) and \(m_{\phi}\lesssim 10^{-14}\,\mathrm{eV}\), whereas in the combined case also ADR parameters compatible with SBL anomalies (\(b\sim 10^{-17}\)) can be brought into agreement with experimental data for the same (\(m_{\phi},m_{s0}\)) configuration as in the ALP only case. Thus, if sterile neutrinos are discovered at future experiments, ADR effects or a Yukawa coupling to a scalar condensate provide a promising explanation why they did not reach thermal equilibrium in the early universe. By choosing sufficiently high ADR parameters, high \(m_{s0}\) and low \(m_{\phi}\) these effects lead to a suppression of \(\nu_{s}\) population regardless of the strength of vacuum mixing between active and sterile neutrinos.
2305.04103
How mature are survival data at the time of an interim analysis in a clinical trial with a survival outcome?
In a clinical trial with a survival outcome, an interim analysis is often performed to allow for early stopping for efficacy. If the interim analysis is early in the trial, one might conclude that a new treatment is more effective (compared to e.g.\ a placebo) and stop the trial, whereas the survival curves in the trial arms are not mature for the research question under investigation, for example because the curves are still close to 1 at that time. This means that the decision is based on a small percentage of the events in the long run only; possibly the events of the more frail patients in the trial who may not be representative for the whole group of patients. It may not be sensible to conclude effectiveness based on so little information. Criteria to determine the moment the interim analysis will be performed, should be chosen with care, and include the maturity of the data at the time of the interim analysis. Here, the expected survival rates at the interim analysis play a role. In this paper we will derive the asymptotic distribution of the Kaplan-Meier curves at the (random) moment the interim analysis will be performed for a one and two arm clinical trial. Based on this distribution, an interval in which the Kaplan Meier curves will fall into (with probability 95\%) is derived and could be used to plan the moment of the interim analysis in the design stage of the trial, so before the trial starts.
Marianne A Jonker, Steven Teerenstra
2023-05-06T17:39:45Z
http://arxiv.org/abs/2305.04103v1
# How mature are survival data at the time of an interim analysis in a clinical trial with a survival outcome? ###### Abstract In a clinical trial with a survival outcome, an interim analysis is often performed to allow for early stopping for efficacy. If the interim analysis is early in the trial, one might conclude that a new treatment is more effective (compared to e.g. a placebo) and stop the trial, whereas the survival curves in the trial arms are not mature for the research question under investigation, for example because the curves are still close to 1 at that time. This means that the decision is based on a small percentage of the events in the long run only; possibly the events of the more frail patients in the trial who may not be representative for the whole group of patients. It may not be sensible to conclude effectiveness based on so little information. Criteria to determine the moment the interim analysis will be performed, should be chosen with care, and include the maturity of the data at the time of the interim analysis. Here, the expected survival rates at the interim analysis play a role. In this paper we will derive the asymptotic distribution of the Kaplan-Meier curves at the (random) moment the interim analysis will be performed for a one and two arm clinical trial. Based on this distribution, an interval in which the Kaplan Meier curves will fall into (with probability 95%) is derived and could be used to plan the moment of the interim analysis in the design stage of the trial, so before the trial starts. **Keywords: study design, interim analysis, time-to-event endpoint, overall survival (OS), progression-free survival (PFS)** ## 1 Introduction Suppose we plan a clinical trial with two arms, arm \(A\) and arm \(B\), and a survival (time-to-event) outcome. For example, cancer patients in arm \(A\) get a new treatment and those in arm \(B\) a placebo or "treatment as usual". The effect of the new treatment is studied by comparing the overall survival or progression-free survival in the two arms. We consider trials where the aim is confirmatory testing (e.g., with the log-rank test). An example is a phase 3 trial since the focus is on confirming efficacy (and safety). The test for efficacy may require a substantial number of patients or substantial follow-up and therefore an interim analysis is often planned to allow for early stopping for efficacy. The moment the interim analysis is performed is determined based on one or more criteria which should be fully specified before the trial starts. For instance, the timing could be based on the number of patients enrolled, or the number of events (Floriani et al., 2008). For event-based interim analyses, group-sequential methodology is typically used (Jennison and Turnbull, 2000) and comes down to choosing an alpha-spending function (e.g., O'Brien-Fleming type or Pocock type) and an information fraction which play a similar role to the significance level and the sample size in power calculations for designs with only one test (Lan et al., 1994). The amount of statistical information available at a time-point comes down to the number of observed events or the information fraction at that moment, where the latter is defined as the number of observed events divided by the total number of events planned for the final analysis. The interim analysis is performed once the information fraction equals a pre-specified value. This value can be chosen freely and should be determined in the design stage.
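Because the interim analysis is triggered by an event count, the calendar time at which it happens, and the fraction of patients with an event at that moment, depend strongly on the total number of patients recruited. A minimal Python sketch of this point is given below; the exponential survival means, the accrual period and the interim event count are illustrative assumptions only (chosen close to the example discussed next), not a prescription from this paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def interim_snapshot(n_patients, n_events_interim, mean_a=52.0, mean_b=82.0,
                     accrual_months=24.0):
    """Simulate one 1:1 trial and locate the calendar time of the interim analysis.

    Illustrative assumptions: exponential survival with the given means (months),
    uniform accrual over accrual_months, no loss to follow-up.
    """
    arm = rng.integers(0, 2, n_patients)                        # 0 = arm A, 1 = arm B
    mean = np.where(arm == 0, mean_a, mean_b)
    entry = rng.uniform(0.0, accrual_months, n_patients)        # calendar entry times
    event_calendar = entry + rng.exponential(mean)              # event times in calendar time
    t_interim = np.sort(event_calendar)[n_events_interim - 1]   # time of the d-th event
    return t_interim, n_events_interim / n_patients

for n in (200, 1000):
    t, p = interim_snapshot(n, n_events_interim=100)
    print(f"n = {n:4d}: interim after ~{t:5.1f} months, "
          f"fraction of patients with an event p = {p:.2f}")
```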
However, the information fraction determines only the power of the log-rank/Cox regression test and the survival data at the interim analysis do not have to be mature in terms of Kaplan-Meier curves. Here "data maturity" is meant as in the European Medicines Agency guideline for oncology (EMA): "the distribution of events over time (early - late) makes it feasible to estimate the treatment effect in the full study population". As an example, to detect a hazard ratio of 0.63 in a 1:1 randomized controlled trial with 80% power at a two-sided significance level of 0.05, 147 events are needed (Schoenfeld, 1983). The interim analysis could be planned at an information fraction of 68%, i.e., 100 events. If the trial has 200 patients, much more of the Kaplan-Meier curves in each arm will be observed than when the trial has 1000 patients. This can be clearly seen in Figure 1 where the Kaplan-Meier curves have been plotted (based on simulated data) at the time of the interim analysis for the trial with 200 patients (left) and for the trial with 1000 patients (right). Figure 1: Kaplan-Meier curves at time of the interim analysis (after 100 observed events) for a trial with 200 patients (left) and 1000 patients (right). In both trials the true survival distributions in arms \(A\) and \(B\) are exponential with mean 51.9 and 82.4. More generally, if the fraction of _patients_ \(p\) with an event at the interim analysis is low, it might happen that the number of events at the interim analysis is sufficient for the statistical test to be statistically significant and to conclude that the new treatment is more effective than the placebo, while the survival curves are still close to 100%, say both above 80%. In that case, this conclusion of a statistically significant difference is based merely on patients who die early after the start of treatment; possibly the patients with poor prognosis. Then, it would not be sensible to claim efficacy for the whole population based on the conclusions from the interim analysis. So, when planning the moment the interim analysis will take place, it is important to have a rough idea about the survival at that moment, so that the interim analysis will not be too early "in the survival curves". This motivated the main question in our paper: "Can we derive a formula for the prediction interval for the survival rate at the time of interim analysis that can be used before the trial starts?". This is not a trivial question as the time of the interim analysis is a random moment in calendar time and possibly not all patients have entered the study before the time point of the interim analysis. To investigate what is known regarding this question, we performed a title-abstract search in Pubmed. As there are no words that enable a direct search on the timing of an interim analysis, we decided on a broad search using the words "interim analysis", "interim analyses", "interim look", or "interim looks" in Biometrical Journal, Biometrics, Biometrika, Clinical Trials, Contemporary Clinical Trials, Controlled Clinical Trials, Journal of the American Statistical Association, Lifetime Data Analysis, Pharmaceutical Statistics, Statistical Methods in Medical Research, and Statistics in Medicine (458 hits on 22 April 2022). Most of the literature found dealt with critical values for testing at interim analyses in specific situations (e.g.
if the hypotheses relate to a survival probability at a fixed follow-up time (Lin et al., 1996), Bayesian adaptive designs, estimation and bias correction, treatment arm selection, optimal designs, and sample size adaption). Papers treating questions related to the timing of an interim were found. For example: when the interim analysis based on number of events occurs in calendar time (Bagiella and Heitjan, 2001) or conversely, how to update predictions regarding the number of events and thus power at fixed (non-stochastic) calendar times (Royston and Barthel, 2010); the problem how to translate the calendar-time scale into information-time scale, i.e., the fraction of events (Lan and Lachin, 1990); or estimation of percentiles at a given calendar time (safety for medical devices (Murray et al., 2013)). We found one concrete advice in Van Houwelingen et al. (2005): they advise in case of non-proportional hazards to only plan an interim analysis if a final time horizon for the final analysis is specified and at the time of the interim analysis sufficient information is present over the whole time interval up to that horizon. However, their arguments and results do not involve the Kaplan-Meier estimates at the interim analysis. In conclusion, it seems that our question has not been addressed in the literature. Therefore, in this paper we will consider the following situation. Suppose the interim analysis is conducted once \(p100\%\) of the patients has had an event. We will derive intervals that will contain the Kaplan-Meier curves at the moment of the interim analysis with high probability. The boundaries of these intervals are dependent of the expected shape of the survival curves in both arms, the accrual rate, and the fraction of patients with an event (\(p\)). Although the accrual rate can be influenced by opening more study sites, it is generally a given logistical restraint. Therefore, only the fraction \(p\) can be freely chosen by the researcher in the design stage. By knowing the relationship between \(p\) and the boundaries of the prediction interval during the design phase of the trial, choices for the fraction \(p\) can be made to be sure that the Kaplan-Meier curves at the time of the interim analysis are far enough below \(100\%\). The boundaries of the prediction intervals are derived from the asymptotic distribution of the Kaplan-Meier curves at the random time point the interim analysis takes place. The paper is outlined as follows. In Section 2 the research aims are made specific by describing them in mathematical terms. In order to do this, notation will be introduced. Next, in Section 3 the asymptotic distributions of the Kaplan-Meier curves at the time of the interim analysis are given in a two-arm and a single arm study. The proofs of these theorems are given in the Appendix. Then, in Section 4, the results of simulation studies for a range of settings are described to confirm that the asymptotic theory can be used for finite samples. Results are tabulated, to make the theory directly available for planning and R-code is provided in the appendix. Further, in Section 5, it is explained how the asymptotic theory can be applied in practice when planning a trial. The paper ends with a discussion in Section 6. ## 2 Notation and specific aim ### Notation We consider a clinical trial with two arms (\(A\) and \(B\)), with a survival outcome and an interim analysis. Two time lines are important: the follow-up and the calendar time. 
In survival analysis, the follow-up time is usually the time line researchers are interested in as it describes the time to an event of interest from a pre-specified starting point, for instance start of treatment. The calendar time line starts at the moment the trial is started (time zero) until it is stopped. This time line is important since the moment the interim analysis is performed is defined in calendar time. For estimating the survival curve at the interim analysis only information that is available at that moment can be used. In a clinical trial, the effect of a new treatment is studied by comparing a survival outcome (e.g., overall survival) between the arms \(A\) and \(B\). To distinguish between the two arms in the notation of the observations and their distribution functions a subscript \(A\) or \(B\) is used. For a patient in arm \(A\), define \(T_{A}\) and \(C_{A}\) as the time from entering the study to the event of interest (the survival time) and censoring, both in follow-up time. It is assumed that \(T_{A}\) and \(C_{A}\) are stochastically independent. Denote the distribution of the survival time \(T_{A}\) by \(F_{A}\), with continuous density \(f_{A}\), and survival function \(S_{A}=1-F_{A}\). The hazard function for \(T_{A}\) is defined as \(\lambda_{A}(t)=f_{A}(t)/(1-F_{A}(t))=f_{A}(t)/S_{A}(t)\), and its corresponding cumulative hazard function as \(\Lambda_{A}(t)=\int_{0}^{t}\lambda_{A}(s)\mathrm{d}s\). Similar notation is used for arm \(B\). In calendar time the start of the study is time zero. Let \(E_{A}\) be the time (since the start of the study) a patient from arm \(A\) enters the study with distribution function \(G_{Acc}\). In calendar time, the event-of-interest takes place at time \(E_{A}+T_{A}\) (time since the start of the study) or the patient is censored at time \(E_{A}+C_{A}\), whichever comes first. For \(L\) the moment (in calendar time) the study is temporary stopped (in case of an interim analysis) or definitely ended, \(E_{A}+C_{A}=L\) by definition (\(L\) will be specified later on). Here and later on, we will use phrases like "temporarily stopped" and "the time of the interim analysis" interchangeably, because from an analysis point of view, patients recruited after or events occurring after the interim analysis do not play a role in the analysis. From a trial logistics point of view, recruitment and follow-up typically continue (unless there is a safety concern that has to be sorted out). The number of patients in the arms \(A\) and \(B\) are \(n\) and \(m\), respectively. The observations for patient \(i\) in arm \(A\) are given by \((E_{A,i},T_{A,i}\wedge C_{A,i},\Delta_{A,i})\) if \(E_{A,i}<L\), where \(T_{A,i}\wedge C_{A,i}=\min\{T_{A,i},C_{A,i}\}\) and \(\Delta_{A,i}=1\{T_{A_{i}}\leq C_{A,i}\}\) equals \(1\) if \(T_{A,i}\leq C_{A,i}\) and \(0\) otherwise. If \(E_{A,i}>L\), patient \(i\) did not enter the study before it was stopped (temporary) at time \(L\) and there are no observations available. The observations of different patients are assumed to be independent. A similar notation is used in arm \(B\). ### Specific aim As was illustrated in the introduction, the Kaplan-Meier curve at the time of the interim analysis will not depend on the (statistical) information fraction, but rather on the fraction of patients with an event. Therefore, suppose the interim analysis is performed once \(100\,p\%\) of the patients has had an event, no matter whether these patients are from arm \(A\) or arm \(B\). 
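To make this notation and the timing of the interim analysis concrete, the sketch below generates single-arm observations \((E_{i},T_{i}\wedge C_{i},\Delta_{i})\), stops follow-up at the calendar time at which a fraction \(p\) of the patients has an event, and evaluates a hand-rolled Kaplan-Meier estimator at that time minus \(\delta\). It is a toy illustration in Python (the full computations of this paper are available as R code in the appendix); the uniform accrual, the exponential survival distribution, and the values of \(n\), \(p\) and \(\delta\) are assumptions made only for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

def km_estimate(time, event, t_eval):
    """Product-limit (Kaplan-Meier) estimate of S(t_eval) for right-censored data."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    at_risk = len(time)
    surv = 1.0
    for t_i, d_i in zip(time, event):
        if t_i > t_eval:
            break
        if d_i:                                  # observed event at t_i
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1                             # leaves the risk set (event or censored)
    return surv

# Illustrative single-arm trial: uniform accrual, exponential survival times.
n, p, delta = 300, 0.30, 2.0                     # hypothetical design choices
entry = rng.uniform(0.0, 30.0, n)                # E_i: calendar entry times (months)
surv_time = rng.exponential(36.0 / np.log(2), n) # T_i: median survival ~36 months

# Interim analysis once 100p% of the patients has an event (a calendar time).
t_interim = np.sort(entry + surv_time)[int(np.ceil(p * n)) - 1]

# Observed data at the interim: only patients who entered before it contribute,
# and every patient still event-free is administratively censored at the interim.
in_study = entry < t_interim
follow_up = np.minimum(surv_time, t_interim - entry)[in_study]   # T_i ^ C_i
observed = (surv_time <= t_interim - entry)[in_study]            # Delta_i

print(f"interim at calendar time {t_interim:.1f} months")
print(f"Kaplan-Meier estimate at (interim - delta): "
      f"{km_estimate(follow_up, observed, t_interim - delta):.3f}")
```

The administrative censoring at the interim is exactly the mechanism that prevents any patient from having a follow-up time equal to the interim time or more, which is why the curves are evaluated slightly earlier, at the interim time minus \(\delta\).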
The fraction \(p\) is a direct consequence of the number of events (typically chosen based on power considerations) and the total number of patients (typically chosen on logistic feasibility). Note that the number of events and number of patients are chosen during the design stage of the trial, whence one can steer these in the design stage towards a value of \(p\) which gives meaningful data maturity at the time of the interim analysis. The stochastic moment of the interim analysis is denoted as \(\hat{t}_{p,n+m}\), and is a moment in calendar time. At the interim analysis, only observations up to that moment can be used for estimation. Specifically, only if a patient enters the study before the moment the interim analysis takes place the patient will be included in the analysis. In Figure 2 the follow-up time for six patients who entered the study before the interim analysis is shown. In the left plot, the events are given in calendar time. In the right plot, the follow-up times of these patients are shown. The time-point \(\hat{t}_{p,n+m}\) is given as well. From the figure it is immediately clear that none of the patients can have a follow-up time of at least \(\hat{t}_{p,n+m}\) (almost surely). That means that estimates of the survival curves \(S_{A}\) and \(S_{B}\) at the (stochastic) point \(\hat{t}_{p,n+m}\) (in follow-up time) are unreliable even if the sample size is high. Therefore, the aim is to consider the Kaplan-Meier curves in both arms at the time point \(\hat{t}_{p,n+m}-\delta\) for \(\delta>0\) a pre-specified value. When designing the study different values of \(\delta\) can be considered. If the sample sizes \(n\) and \(m\) increase to infinity, \(\hat{t}_{p,n+m}\) converges in probability to a value \(t_{p}\) defined as the \(p\)th quantile of a mixture of distributions (that will be defined later) and for large, but finite \(n\) and \(m\) a positive fraction of the patients will have a follow-up time that is larger than \(\hat{t}_{p,n+m}-\delta\) under the assumption that this mixture distribution is strictly increasing, at least in a neighborhood of \(t_{p}\). In Section 3 (and the appendix) it will be proven that \(\sqrt{n}(\tilde{S}_{A,n}(\hat{t}_{p,n+m}-\delta)-S_{A}(t_{p}-\delta))\) is asymptotically normal with mean zero and a variance \(\sigma_{A,\delta}^{2}\), with \(\tilde{S}_{A,n}\) the Kaplan-Meier estimator in arm \(A\) based on observations up to the interim analysis at time \(\hat{t}_{p,n+m}\). The asymptotic variance \(\sigma_{A,\delta}^{2}\) is a function of the survival functions \(S_{A},S_{B}\), of \(G_{Acc}\) (for the definition of \(G_{Acc}\), see the beginning of Section 2) and of the fraction \(p\) (see Section 3 for the expression of \(\sigma_{A,\delta}\)). From this asymptotic behavior it follows that \[\mathrm{P}\Big{(}S_{A}(t_{p}-\delta)-\xi_{\alpha/2}\frac{\sigma_{A,\delta}}{ \sqrt{n}}\ \leq\ \tilde{S}_{A,n}(\hat{t}_{p,n+m}-\delta)\ \leq\ S_{A}(t_{p}-\delta)+\xi_{\alpha/2}\frac{\sigma_{A,\delta}}{\sqrt{n}} \Big{)}\approx 1-\alpha,\] for \(\xi_{\alpha/2}\) the upper \(\alpha/2\)-quantile of the standard normal distribution. Therefore, the probability \(\tilde{S}_{A,n}(\hat{t}_{p,n+m}-\delta)\) will be in the interval \[\left[S_{A}(t_{p}-\delta)-\xi_{\alpha/2}\frac{\sigma_{A,\delta}}{ \sqrt{n}}\ ;\ S_{A}(t_{p}-\delta)+\xi_{\alpha/2}\frac{\sigma_{A,\delta}}{ \sqrt{n}}\right] \tag{1}\] is approximately equal to \(1-\alpha\). 
The boundaries of the interval depend on the true survival functions \(S_{A},S_{B}\), of \(G_{Acc}\) and of the fraction \(p\) (via \(\sigma_{A,\delta}\)). The same boundaries hold for the Breslow estimator \(\tilde{S}_{A,n}\). For the survival curves in arm \(B\) a similar interval can be constructed. When planning a trial, the chosen distribution functions for \(S_{A},S_{B}\) and \(G_{Acc}\) play an important role for determining the sample size. The value \(t_{p}\) is a function of, among others, these curves as well and \(S_{A}(t_{p}-\delta)\) and \(S_{B}(t_{p}-\delta)\) can be computed when designing the study. That means that in the designing stage of the trial \(p\) (and \(t_{p}\)) can be chosen so that \(S_{A}(t_{p}-\delta)\) and \(S_{B}(t_{p}-\delta)\) and the corresponding boundaries are sufficiently below \(1\) so that the survival curves are sufficiently mature for the aim at hand. In Section 5 it is explained how to use this interval in a practical setting. ## 3 Asymptotic results In this section the asymptotic distribution of the Kaplan-Meier and the Breslow estimators evaluated at the stochastic time \(\hat{t}_{p,n+m}-\delta\) (in follow-up time) and based on all observations that occurred before the moment of the interim analysis at \(\hat{t}_{p,n+m}\) (in calendar time) in a two arm clinical trial (Theorem 1) and a single arm trial (Theorem 2) is given. The proofs of the theorems are given in the Appendix. Notation that is used in the theorem, especially in the definition of the asymptotic variance \(\sigma^{2}_{A,\delta}\), is given below the theorem. **Theorem 1: Clinical trial with two arms** Suppose the interim analysis is performed once \(100p\%\) of the patients has had an event, irrespective of the arm in which the events took place, at time-point \(\hat{t}_{p,n+m}\). Let \(\hat{S}_{A,n}\) and \(\tilde{S}_{A,n}\) be the Breslow and the Kaplan-Meier estimators for \(S_{A}\). Let \(\delta>0\) be pre-specified, \(\hat{t}^{\delta}_{p,n+m}=\hat{t}_{p,n+m}-\delta\) and its limit \(t^{\delta}_{p}=t_{p}-\delta\). The asymptotic distribution of the Breslow estimator is given by \[\sqrt{n}(\hat{S}_{A,n}(\hat{t}^{\delta}_{p,n+m})-S_{A}(t^{\delta} _{p}))\leadsto\mathcal{N}(0,\sigma^{2}_{A,\delta}),\] and of the Kaplan-Meier estimator the asymptotic distribution is the same: \[\sqrt{n}(\tilde{S}_{A,n}(\hat{t}^{\delta}_{p,n+m})-S_{A}(t^{\delta} _{p}))\leadsto\mathcal{N}(0,\sigma^{2}_{A,\delta}),\] Figure 2: Six patients enter the study at a random moment before the interim analysis. Left plot: six patients in calendar time. Right plot: the same six patients in follow-up time. An event of interest is represented as a cross, censoring as a circle. Although, some of the patients are at risk during the interim analysis (so in calendar time), they do not have a follow-up time of \(\hat{t}_{p,n+m}\). 
(where \(\leadsto\) denotes convergence in distribution) as \(n,m\rightarrow\infty\), and \[\sigma^{2}_{A,\delta}=S_{A}(t_{p}^{\delta})^{2}\int_{0}^{t_{p}^{\delta}}\frac{\mathrm{d}\Lambda_{A}(s)}{1-H_{A,t_{p}}(s)}\ +\ \frac{q_{A}f_{A}(t_{p}^{\delta})^{2}}{(h_{mix,t_{p}}^{uc,\star}(t_{p}))^{2}}\ p(1-p)\] \[\qquad\qquad-2S_{A}(t_{p}^{\delta})\frac{q_{A}\sqrt{q_{A}}f_{A}(t_{p}^{\delta})}{h_{mix,t_{p}}^{uc,\star}(t_{p})}\bigg{(}(1-H_{A,t_{p}}^{uc,\star}(t_{p}))\Lambda_{A}(t_{p}^{\delta})\ +\ \int_{0}^{t_{p}^{\delta}}\frac{(H_{A,t_{p}}^{uc}(s)-H_{A,t_{p}}^{uc,\star}(t_{p})H_{A,t_{p}}(s))\mathrm{d}\Lambda_{A}(s)}{1-H_{A,t_{p}}(s)}\bigg{)}.\] The proof of this theorem is given in Appendix B. For the Breslow estimator \(\hat{S}_{B,m}\) and the Kaplan-Meier estimator \(\tilde{S}_{B,m}\) in arm \(B\), a similar result holds. The variance \(\sigma^{2}_{A,\delta}\) depends on multiple distribution functions and parameters. The notation will be explained below. As in calendar time only observations up to the interim analysis are used, asymptotically \(L=t_{p}\), and the observations are censored in calendar time at time \(t_{p}\): so \(E_{A}+C_{A}=t_{p}\). The (sub)distribution functions \(H_{A,t_{p}}\) and \(H_{A,t_{p}}^{uc}\) are defined in follow-up time as \(H_{A,t_{p}}(t)=\mathrm{P}(T_{A}\wedge C_{A}\leq t)\) and \(H_{A,t_{p}}^{uc}(t)=\mathrm{P}(T_{A}\wedge C_{A}\leq t,\Delta_{A}=1)=\mathrm{P}(T_{A}\leq t,\Delta_{A}=1)\), where the latter one is for uncensored observations (the superscript "uc" stands for "uncensored"). With similar notation for arm \(B\), the mixture of the distributions in the two arms is defined as \[H_{mix,t_{p}}(t)=q_{A}H_{A,t_{p}}(t)+q_{B}H_{B,t_{p}}(t),\qquad H_{mix,t_{p}}^{uc}(t)=q_{A}H_{A,t_{p}}^{uc}(t)+q_{B}H_{B,t_{p}}^{uc}(t),\] with \(q_{A}=\lim_{n,m\rightarrow\infty}n/(n+m)\) and \(q_{B}=\lim_{n,m\rightarrow\infty}m/(n+m)\). In calendar time the definitions are very similar: define the (sub-)distribution functions \(H_{A,t_{p}}^{\star}(t)=\mathrm{P}(E_{A}+(T_{A}\wedge C_{A})\leq t)\) and \(H_{A,t_{p}}^{uc,\star}(t)=\mathrm{P}(E_{A}+T_{A}\leq t,\Delta_{A}=1)\). Their densities are denoted as \(h_{A,t_{p}}^{\star}\) and \(h_{A,t_{p}}^{uc,\star}\), respectively. The mixtures of the distributions in the two arms are defined as \(H_{mix,t_{p}}^{\star}(t)=q_{A}H_{A,t_{p}}^{\star}(t)\ +\ q_{B}H_{B,t_{p}}^{\star}(t)\) and \(H_{mix,t_{p}}^{uc,\star}(t)=q_{A}H_{A,t_{p}}^{uc,\star}(t)\ +\ q_{B}H_{B,t_{p}}^{uc,\star}(t)\). Remember that the interim analysis is performed once \(100p\%\) of all patients has had an event, at time point \(\hat{t}_{p,n+m}\). This stochastic time point \(\hat{t}_{p,n+m}\) converges in probability to \(t_{p}\), defined as the \(p\)th quantile of the mixture \(H_{mix,t_{p}}^{uc,\star}\). In the next theorem the single arm setting is considered. Since we do not have to distinguish between arms, the subscripts \(A\) and \(B\) are omitted from the notation. **Theorem 2: Clinical trial with single arm** Define \(\hat{t}_{p,n}\) as the moment \(100p\%\) of the patients had an event. Let \(\hat{S}_{n}\) and \(\tilde{S}_{n}\) be the Breslow and the Kaplan-Meier estimators based on the observations up to time \(\hat{t}_{p,n}\). Let \(\delta>0\) be fixed, \(\hat{t}_{p,n}^{\delta}=\hat{t}_{p,n}-\delta\) and its limit \(t_{p}^{\delta}=t_{p}-\delta\).
For the Breslow-estimator it holds that \[\sqrt{n}(\hat{S}_{n}(\hat{t}_{p,n}^{\delta})-S(t_{p}^{\delta}))\ \leadsto\ \mathcal{N}(0,\sigma_{\delta}^{2}),\] and for the Kaplan-Meier estimator that \[\sqrt{n}(\tilde{S}_{n}(\hat{t}_{p,n}^{\delta})-S(t_{p}^{\delta}))\ \leadsto\ \mathcal{N}(0,\sigma_{\delta}^{2}),\] as \(n\rightarrow\infty\), with \[\sigma_{\delta}^{2}=S(t_{p}^{\delta})^{2}\int_{0}^{t_{p}^{\delta }}\frac{\mathrm{d}\Lambda(s)}{1-H_{t_{p}}(s)}\ +\ \frac{f(t_{p}^{\delta})^{2}}{(h_{t_{p}}^{uc,\star}(t_{p}))^{2}}\ p(1-p)\] \[\qquad-2S(t_{p}^{\delta})\frac{f(t_{p}^{\delta})}{h_{t_{p}}^{uc, \star}(t_{p})}\bigg{(}(1-p)\Lambda(t_{p}^{\delta})+\int_{0}^{t_{p}^{\delta}} \frac{(H_{t_{p}}^{uc}(s)-H_{t_{p}}^{uc,\star}(t_{p})H_{t_{p}}(s))\mathrm{d} \Lambda(s)}{1-H_{t_{p}}(s)}\bigg{)}. \tag{2}\] Theorem 2 follows from Theorem 1 by taking \(m=0\) and, thus, \(q_{A}=1\) and \(q_{B}=0\). The asymptotic variance in (2) is a sum of three terms. The first term can be seen as the variance due to the estimation of the survival function \(S\), the second term due to the estimation of the time point \(t_{p}\) and the last term comes from the covariance between the two terms. This covariance is negative as \(\tilde{S}_{n}\) and \(\tilde{t}_{p,n}\) are negatively correlated. If the sum of the second and third term in the display is negative, the estimator \(\tilde{S}_{n}(\tilde{t}_{p,n})\) for estimating \(S(t_{p})\) has an asymptotically smaller variance than the estimator \(\tilde{S}_{n}(t_{p})\), even though the second estimator is determined at a fixed time point \(t_{p}\). This is not surprising, because of the following example. Consider the situation in which there is no censoring. In that case the Kaplan-Meier curve equals the empirical survival curve and \(\tilde{S}_{n}(\tilde{t}_{p,n})\) equals \(1-p\) by definition and the asymptotic variance will be equal to zero. The asymptotic variance of \(\tilde{S}_{n}(t_{p})\) equals \(p(1-p)\) which is larger than zero. ## 4 Simulation studies ### Comparison of asymptotic versus simulation results In Theorem 1 the asymptotic distributions of the Kaplan-Meier and the Breslow estimators are given in a two arm trial. Based on this asymptotic distribution it follows that the probability \(\tilde{S}_{A,n}(\tilde{t}_{p,n+m})\) will be in the interval \[\left[S_{A}(\tilde{t}_{p})-\xi_{\alpha/2}\frac{\sigma_{A,\delta}}{\sqrt{n}}:S_ {A}(\tilde{t}_{p}^{\delta})+\xi_{\alpha/2}\frac{\sigma_{A,\delta}}{\sqrt{n}}\right]\] is approximately equal to \(1-\alpha\). In this subsection the aim is to study the accuracy of the asymptotic interval by comparing it to the interval obtained by Monte Carlo simulations. In all scenarios it is assumed that in both arms the time to the event of interest follows an exponential distribution. In total eight different settings are considered, obtained by varying the hazard ratio for the two arms, the severity of the disease (in terms of the median survival time) and the rarity of the disease (in terms of the accrual period that is necessary to include the patients). More specifically, we consider * Effect of the treatment in terms of the hazard ratio: we consider two situations: * Strong effect: hazard ratio equals \(0.65\). This implies that \(\theta_{B}/\theta_{A}=0.65\). * Median effect: hazard ratio equals \(0.75\). This implies that \(\theta_{B}/\theta_{A}=0.75\). * Severity of the disease: * Aggressive: the median survival time is \(6\) months; \(\theta_{A}=-\log(0.5)/6=0.12\). 
* Indolent: the median survival time is \(36\) months; \(\theta_{A}=-\log(0.5)/36=0.019\). * Rarity of the disease: * Rare: accrual is \(4\) patients per month; the accrual time equals \((n+m)/4\) months. * Frequent: accrual is \(20\) patients per month; the accrual time equals \((n+m)/20\) months. In a clinical trial the sample size is usually determined to have sufficient power for the log-rank test at the end of the study. In fact, what counts is that the number of patients together with the follow-up ensures a sufficient (expected) number of events. The required expected number of events for a log-rank test (Schoenfeld, 1991) or Cox regression (Schoenfeld, 1983) is calculated via Schoenfeld's formula: \(d=\#\text{events}=(\xi_{\alpha/2}+\xi_{\beta})^{2}/((\log\text{HR})^{2}q_{A}q_{B})\), for HR the hazard ratio. In case of a 1:1 randomisation, \(\alpha=0.05\) (two-sided) and 80% power, \(\#\text{events}=31.4/(\log\text{HR})^{2}\). A common approach is to choose a combination of number of patients \(n+m\), accrual distribution function \(G_{Acc}\), and follow-up duration after the recruitment of the last patient, \(FU\), such that the expected number of events (after the last recruited patient has \(FU\) follow-up) equals the required expected number of events \(d\). If the recruitment is uniform over a recruitment period from \(0\) to \(R\) in calendar time (i.e., \(G_{Acc}\) has density \(1/R\) on the interval from \(0\) to \(R\) and equals \(0\) elsewhere), then the duration of the trial is \(L=R+FU\) and the expected number of events in the period from \(0\) to \(L\) is in arm \(A\): \[n\ H_{A,L}^{uc,\star}(L) =n\,\text{P}(T_{A}\leq L-E_{A}\,,\,\Delta_{A}=1)=n/R\ \int_{0}^{R}\text{P}(T_{A}\leq L-s)\,\text{d}s\] \[=n/R\int_{0}^{R}\left(1-\exp(-\lambda_{A}(L-s))\right)\text{d}s=n\,\left[1-\frac{\exp(-\lambda_{A}L)}{\lambda_{A}R}\,\left(\exp(\lambda_{A}R)-1\right)\right], \tag{3}\] and a similar expression holds for arm \(B\). The combination of sample size, recruitment time \(R\), and follow-up \(FU\) is chosen such that \(n\,H_{A,L}^{uc,\star}(L)+m\,H_{B,L}^{uc,\star}(L)=d\). In order to reduce the number of different settings, the following situation is considered. First, randomization is 1:1: \(n=m\) in the formula. Also, the follow-up time after the last patient is recruited, \(FU\), is fixed at 6 months. Then still there are several combinations of recruitment period \(R\) and total sample size \(2n\) that can provide the required number of events at the end of follow-up. Depending on the rarity of the disease, recruitment rates of 4 or 20 patients per month are considered. The rates fix the ratio of sample size \(2n\) and recruitment period \(R\) and result in one combination of \(2n\) and \(R\). For the severity of the disease (i.e., the survival curves), aggressive and indolent diseases are considered. For the two scenarios for the treatment effect (HR \(=0.65\) or HR \(=0.75\)), the numbers of events needed are 170 and 380, respectively. The interim analysis is often performed once a fraction of the required number of events for the final analysis is observed. This fraction is called the information fraction (IF). If (and only if) the total number of patients is decided on, the IF is one-to-one related to the fraction \(p\), the fraction of patients with an event at the interim analysis (the fraction \(p\) could be called the patient fraction to contrast it with the information fraction).
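The two design quantities used here, the required number of events from Schoenfeld's formula and the expected number of events by calendar time \(L\) from Eq. (3), are straightforward to compute, and choosing the accrual period \(R\) that matches them is a one-dimensional root-finding problem. The Python sketch below illustrates this for the indolent/rare combination (median survival 36 months, 4 patients per month, HR \(=0.65\), 6 months follow-up after accrual); it is an illustration of the calculation, not the authors' code (their R code is provided in the appendix), and the bracketing interval for the root search is an arbitrary choice.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def schoenfeld_events(hr, alpha=0.05, power=0.80, q_a=0.5, q_b=0.5):
    """Required events d = (z_{alpha/2} + z_beta)^2 / ((log HR)^2 * q_A * q_B)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z**2 / (np.log(hr) ** 2 * q_a * q_b)

def expected_events(n, lam, R, L):
    """Expected number of events in one arm by calendar time L, Eq. (3):
    uniform accrual on [0, R] and an exponential hazard lam."""
    return n * (1.0 - np.exp(-lam * L) * (np.exp(lam * R) - 1.0) / (lam * R))

# Illustrative design: HR = 0.65, indolent disease (median survival 36 months),
# rare disease (4 patients per month), 1:1 randomisation, FU = 6 months.
hr, median_a, rate, fu = 0.65, 36.0, 4.0, 6.0
lam_a = np.log(2.0) / median_a
lam_b = hr * lam_a
d = schoenfeld_events(hr)                                  # ~170 events

def excess_events(R):
    """Expected events at the end of follow-up minus the required number d."""
    n = rate * R / 2.0                                     # patients per arm (1:1)
    L = R + fu
    return expected_events(n, lam_a, R, L) + expected_events(n, lam_b, R, L) - d

R = brentq(excess_events, 10.0, 300.0)                     # accrual period in months
print(f"required events: {np.ceil(d):.0f}, accrual period R ~ {R:.0f} months, "
      f"total patients ~ {rate * R:.0f}")
```

Up to rounding, the resulting accrual period and total sample size are close to one of the configurations tabulated in the next subsection.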
In the simulation study we consider two moments of interim analysis: * Early interim analysis: after 40% of the events: IF\(=0.40\). * Late interim analysis: after 60% of the events: IF\(=0.60\). The different settings that are considered are given in Table 1. The asymptotic interval is found by computing its bounds based on the chosen setting. The finite-sample intervals are found as follows. Data are sampled for the \(n+m\) patients in the study. Based on the sampled data the estimate \(\hat{S}_{A,n}(\hat{t}_{p,n+m}^{\delta})\) is computed. This is repeated 1000 times. The sample mean and the 2.5% and 97.5% quantiles of the estimates are used to construct a 95% prediction interval. The results of the simulation study (Table 2) show that the asymptotic intervals are accurate, as they closely resemble the intervals constructed from the Monte Carlo simulations. Further, in all settings the intervals are sufficiently narrow to be useful for planning an interim analysis. ### The value \(\delta\) and the width of the prediction interval Recall that the interim analysis is performed at time \(\hat{t}_{p,n+m}\) in calendar time. Since no patient can have a follow-up time of at least \(\hat{t}_{p,n+m}\) (see Figure 2), the survival curves in the two arms are estimated at time \(\hat{t}_{p,n+m}-\delta\) in follow-up time, with \(\delta>0\). If \(\delta\) is small, only a few patients may have a follow-up time of at least \(\hat{t}_{p,n+m}-\delta\) and \begin{table} \begin{tabular}{|c|c|c||c||c|c|c|c||c|c|c|c|} \hline & \multicolumn{3}{c||}{disease/treatment} & \multicolumn{4}{c||}{trial} & \multicolumn{4}{c|}{interim} \\ nr. & effect & severity & rarity & \(n+m\) & total & accrual & FU & IF & \# events & \(p\) & \(t_{p}\) \\ & (HR) & (median) & (pat/mon.) & patients & \# events & (mon.) & (mon.) & & & & \\ \hline 1. & 0.65 & 6 & 4 & 196 & 170 & 49 & 6 & 0.40 & 68 & 0.35 & 27 \\ 2. & 0.65 & 6 & 20 & 260 & 170 & 13 & 6 & 0.40 & 68 & 0.26 & 10 \\ 3. & 0.65 & 36 & 4 & 344 & 170 & 86 & 6 & 0.40 & 68 & 0.20 & 53 \\ 4. & 0.65 & 36 & 20 & 620 & 170 & 31 & 6 & 0.60 & 102 & 0.16 & 27 \\ \hline 5. & 0.75 & 6 & 20 & 480 & 380 & 24 & 6 & 0.40 & 152 & 0.32 & 16 \\ 6. & 0.75 & 36 & 4 & 580 & 380 & 145 & 6 & 0.40 & 152 & 0.26 & 82 \\ 7. & 0.75 & 36 & 20 & 1000 & 380 & 50 & 6 & 0.40 & 152 & 0.15 & 33 \\ 8. & 0.75 & 36 & 20 & 1000 & 380 & 50 & 6 & 0.60 & 228 & 0.23 & 41 \\ \hline \end{tabular} \end{table} Table 1: Different scenarios for the simulation study. In all cases \(n=m\); the numbers of patients in the two arms are equal. The variable “total # events” indicates the total number of events needed based on Schoenfeld’s formula for \(\alpha=0.05\) and a power of 0.80. The variable “# events” (for the interim analysis) indicates the number of observed events at the time of the interim. “FU” is the follow-up time after the last patient was accrued. the Kaplan-Meier estimator at \(\hat{t}_{p,n+m}-\delta\) may be inaccurate, which will lead to a wide(r) interval for \(S_{A}(t_{p}-\delta)\) and \(S_{B}(t_{p}-\delta)\). On the other hand, if \(\delta\) is large, the Kaplan-Meier (or Breslow) estimator is evaluated earlier in follow-up time and may not represent all the information that is available at the interim analysis. These opposing considerations are similar to a bias-variance trade-off. The value \(\delta\) can be chosen by the researcher and, therefore, it is interesting to study the effect of \(\delta\) on the width of the prediction interval. 
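A minimal sketch of this finite-sample simulation (our own illustration, not the authors' R-code) is shown below. It assumes exponential survival and uniform accrual, approximates \(\delta\) as a fraction of the observed interim time \(\hat{t}_{p,n+m}\), and uses numbers roughly corresponding to setting 3 of Table 1:

```python
import numpy as np

rng = np.random.default_rng(2023)

def km_at(time, event, t_eval):
    """Kaplan-Meier estimate of S(t_eval) from follow-up times and event indicators."""
    order = np.argsort(time)
    surv, at_risk = 1.0, len(time)
    for t, d in zip(time[order], event[order]):
        if t > t_eval:
            break
        if d:                                      # event (not censored)
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
    return surv

def one_trial(n, m, lam_A, hr, accrual_rate, p, x_delta):
    """Simulate one trial and return the arm-A Kaplan-Meier estimate at the interim."""
    N = n + m
    entry = rng.uniform(0.0, N / accrual_rate, size=N)      # uniform accrual
    lam = np.r_[np.full(n, lam_A), np.full(m, hr * lam_A)]  # exponential hazards per arm
    t_event = rng.exponential(1.0 / lam)                    # event times on the follow-up scale
    # Calendar time at which a fraction p of all patients has had an event
    t_hat_p = np.sort(entry + t_event)[int(np.ceil(p * N)) - 1]
    keep = (np.arange(N) < n) & (entry < t_hat_p)           # arm A, already recruited
    fu = np.minimum(t_event, t_hat_p - entry)[keep]         # follow-up time at the interim
    ev = (t_event <= t_hat_p - entry)[keep]                 # True = event, False = censored
    # Evaluate at t_hat_p - delta, here approximating delta = x_delta * t_hat_p
    return km_at(fu, ev, (1.0 - x_delta) * t_hat_p)

# Roughly setting 3 of Table 1: HR = 0.65, indolent (median 36 months), 4 patients/month, p = 0.20
estimates = [one_trial(172, 172, -np.log(0.5) / 36, 0.65, 4, 0.20, 0.10) for _ in range(1000)]
print(np.mean(estimates), np.quantile(estimates, [0.025, 0.975]))
```

The mean and quantiles of the resulting estimates play the role of the simulated 95% prediction interval reported in Table 2.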
The settings 3 and 7 (see Table 1) are considered for different values of \(\delta\). From Table 3 it can be seen that for an increasing value of \(\delta\) the value of \(t_{p}-\delta\) becomes smaller and the survival curve in \(t_{p}^{\delta}=t_{p}-\delta\) increases (by definition). Moreover, it can be seen that the widths of the prediction intervals decrease with increasing \(\delta\). The latter is a direct consequence of the fact that the survival curves can be estimated more accurately if more patients are still at risk. More settings have been considered. The conclusions are the same (results not shown). ## 5 In practice A researcher who aims to design a confirmatory clinical trial with the log-rank test or Cox regression analysis is typically first interested in obtaining sufficient power. In group-sequential methodology, the power of such tests at the final or the interim analysis depends on the number of events and the critical values (alpha-spending function) at these analyses. In practice, the interim analysis is often conducted after a certain percentage of the required events that is necessary for sufficient power at the final analysis (this is called the information fraction). To illustrate that information fraction relates (only) to power, consider the following. When the power (for a certain effect size) at the final analysis is 80% (with a two-sided significance level of 0.05) and one interim analysis is planned using an O'Brien-Fleming boundary, then the power (for achieving a statistically significant test statistic already at that one interim analysis) is 6% for 40% information fraction (IF), 18% for IF=50%, 34% for IF=60%, 51% for IF=70%, 66% for IF=80%, and 76% for IF=90%. Conversely, 80% power for the interim analysis can only be achieved if the \begin{table} \begin{tabular}{|c||c|c|c|c||c|c|c|} \hline & \multicolumn{4}{c||}{Arm \(A\)} & \multicolumn{4}{c|}{Arm \(B\)} \\ nr. & \(\hat{S}_{A,\pi}(t_{p,n+m}^{\delta})\) & 95\% int & \(S_{A}(t_{p}^{\delta})\) & asymp int & \(\hat{S}_{B,m}(t_{p,n+m}^{\delta})\) & 95\% int & \(S_{B}(t_{p}^{\delta})\) & asymp int \\ \hline 1. & 0.07 & [0.00 ; 0.20] & 0.06 & [-0.06 ; 0.18] & 0.17 & [0.00 ; 0.36] & 0.16 & [-0.02 ; 0.34] \\ 2. & 0.36 & [0.20 ; 0.50] & 0.36 & [0.20 ; 0.52] & 0.52 & [0.35 ; 0.66] & 0.52 & [0.36 ; 0.68] \\ 3. & 0.39 & [0.23 ; 0.53] & 0.40 & [0.25 ; 0.55] & 0.55 & [0.38 ; 0.69] & 0.55 & [0.40 ; 0.70] \\ 4. & 0.63 & [0.53 ; 0.70] & 0.63 & [0.54 ; 0.72] & 0.74 & [0.66 ; 0.81] & 0.74 & [0.66 ; 0.82] \\ \hline 5. & 0.20 & [0.07 ; 0.30] & 0.20 & [0.09 ; 0.31] & 0.30 & [0.16 ; 0.41] & 0.30 & [0.17 ; 0.42] \\ 6. & 0.24 & [0.13 ; 0.36] & 0.24 & [0.13 ; 0.35] & 0.34 & [0.21 ; 0.46] & 0.34 & [0.22 ; 0.46] \\ 7. & 0.57 & [0.48 ; 0.65] & 0.57 & [0.48 ; 0.65] & 0.65 & [0.56 ; 0.73] & 0.65 & [0.57 ; 0.73] \\ 8. & 0.49 & [0.41 ; 0.56] & 0.49 & [0.41 ; 0.57] & 0.58 & [0.50 ; 0.66] & 0.58 & [0.51 ; 0.66] \\ \hline \end{tabular} \end{table} Table 2: Results of the simulation results. The number in the first column refers to the simulation scenario in Table 1. In all cases \(\delta\) was taken equal to \(\delta=0.1\,t_{p}\). 
\begin{table} \begin{tabular}{|c||c|c|c||c||c|c|c|} \hline & \multicolumn{4}{c||}{Arm \(A\)} & \multicolumn{4}{c|}{Arm \(B\)} \\ \(\delta\) & \(\hat{S}_{A,\pi}(t_{p,n+m}^{\delta})\) & 95\% int & \(S_{A}(t_{p}^{\delta})\) & asymp int & \(\hat{S}_{B,m}(t_{p,n+m}^{\delta})\) & 95\% int & \(S_{B}(t_{p}^{\delta})\) & asymp int \\ \hline 0.01 & 0.37 & [0.00 ; 0.53] & 0.36 & [0.14 ; 0.58] & 0.51 & [0.26 ; 0.67] & 0.52 & [0.30 ; 0.73] \\ 0.10 & 0.40 & [0.22 ; 0.53] & 0.40 & [0.25 ; 0.55] & 0.55 & [0.37 ; 0.69] & 0.55 & [0.40 ; 0.70] \\ 0.25 & 0.47 & [0.34 ; 0.58] & 0.46 & [0.34 ; 0.59] & 0.61 & [0.50 ; 0.72] & 0.61 & [0.49 ; 0.73] \\ \hline 0.01 & 0.54 & [0.39 ; 0.64] & 0.54 & [0.41 ; 0.66] & 0.62 & [0.49 ; 0.72] & 0.63 & [0.51 ; 0.74] \\ 0.10 & 0.57 & [0.48 ; 0.65] & 0.57 & [0.48 ; 0.65] & 0.65 & [0.57 ; 0.73] & 0.65 & [0.57 ; 0.73] \\ 0.25 & 0.62 & [0.56 ; 0.69] & 0.62 & [0.56 ; 0.69] & 0.70 & [0.65 ; 0.76] & 0.70 & [0.64 ; 0.77] \\ \hline \end{tabular} \end{table} Table 3: Results of the simulation results. The value of \(\delta\): \(\delta=x\,t_{p}\) with \(x\) the value in the first column. true effect is larger than what was supposed for the final analysis. The factor by which this should be larger (on log hazard ratio scale) is 2.136 for IF=40%, 1.864 for IF=50%, 1.554 for IF=60%, 1.330 for IF=70%, 1.166 for IF=80% and 1.051 for IF=90%. Besides power, also the maturity of the survival data at the time of the interim analysis plays a role. In the formula derived to estimate the prediction interval of the Kaplan-Meier curves at the interim analysis (Section 3), the input parameters are: the survival distribution in each arm, the accrual distribution function \(G_{Acc}\), the relative sample sizes in each arm, the fraction \(p\), and the closeness parameter \(\delta\). The choice of the distributions \(S_{A},S_{B}\) are ideally based on results from a similar (e.g., explorative) study which was performed earlier or otherwise on clinical reasoning. The accrual rate and accrual time (summarized in \(G_{Acc}\)), the number of patients \(n\) and \(m\) in each arm, and the minimum follow-up time are more in the researcher's control by the choice of recruitment sites and/or determined by logistical feasibility. The parameter \(p\) does not have to be specified for the power analysis, but can be calculated once the total number of patients has been selected. Its value reflects the moment the interim analysis will be performed in terms of follow-up time and consequently \(p\) determines the expected survival curve up to that point. The higher the value of \(p\), the later the interim analysis will be performed and the more information on the survival curves will be available. Often it is important that enough information is available at the interim analysis and \(p\) should be chosen accordingly. Sufficient information could for example mean that the survival curve up to the interim analysis can be seen as a reasonable representation of the survival for the whole patient population. On the other hand, the interim analysis should not be too close to the end of the trial for acting on an interim analysis to be meaningful. Different values of \(\delta\) could and should be considered by the researcher. By varying \(\delta\), one can trade off between being close to the interim analysis versus obtaining a precision that is meaningful for the aim at hand. Summarised, we envision the following strategy to determine the timing of an interim analysis. 
First, power (i.e., number of events) is considered as usual. To assess the maturity of the survival curves, the expected survival rates close to the time of the interim analysis are estimated using, for instance, our methodology. The timing can then be adjusted such that not only sufficient power but also sufficient maturity of the data for the purpose at hand is to be expected at the interim analysis. It is advised to investigate this for a range of plausible survival curves for both arms. ### An application With the proposed methodology it is assessed whether the preplanned interim analysis for progression-free survival (PFS) in the Keynote 204 study would a priori have been expected to give mature data. Keynote 204 was a study investigating pembrolizumab versus a control of brentuximab vedotin in patients with relapsed or refractory classical Hodgkin lymphoma. The relevant design parameters were: 300 patients randomized 1:1; a 12-month accrual period; exponential survival assumed with a median progression-free survival of 5.6 months in the brentuximab vedotin arm; a hazard ratio of 0.622; and 176 events planned for the efficacy (i.e., confirmatory) interim analysis (see protocol pages 93 and 96 in the online Appendix of Kuruvilla et al. (2021)). The protocol did not specify the shape of the accrual rate over time, so we will assume a uniform accrual of the 300 patients over 12 months. Also, a minimum follow-up was not specified. Therefore, we will assume that the interim analysis takes place after recruitment of all patients; this is the case for \(t_{p}=14.87\) months. We now evaluate the Kaplan-Meier curves at different time points close to the expected time of the interim analysis. Starting with \(\delta=0.1t_{p}\) (this comes down to 6.4 weeks before the expected time of the interim analysis), the expected Kaplan-Meier estimates (95%-prediction interval) are 19% (9%-29%) in the brentuximab vedotin arm and 36% (24%-47%) in the pembrolizumab arm. For \(\delta=0.05t_{p}\) (3.2 weeks before the interim), this is 17% (6%-29%) in the brentuximab vedotin arm and 34% (21%-47%) in the pembrolizumab arm. When looking at \(\delta=0.01t_{p}\) (so 4.5 days before the interim analysis), we expect to see 16% (2%-30%) for the brentuximab vedotin arm and 32% (16%-49%) for the pembrolizumab arm. Thus, a substantial part of the survival curves is expected to be observed in both arms. In particular, both early and late events are expected to be seen, and the planned interim analysis allows assessment of the survival benefit (if any) for a large majority of the population. ## 6 Summary and Discussion In this paper we derived the asymptotic boundaries of the prediction interval for the survival curve at the (random) time that a prespecified total number of events has been reached, also including the situation that this number has been reached before the planned total number of patients has been recruited. The input parameters for the derived formula are the supposed survival curves, the patient accrual, the prespecified number of total events, and the total number of patients. These are typical parameters used to plan a trial with a survival outcome anyway, so no new information is needed. Thus, the derived formula can be easily implemented in typical planning practice. Although we focused on an interim analysis in a two-arm trial, the expected survival estimates in the final analysis can be estimated as well. 
A typical application of the proposed theory is a phase 3 randomized clinical trial where an interim analysis is planned for early stopping for efficacy and is tested using a log-rank test or a test for the Cox model. Often the interim analysis is planned when a given number of events has occurred. This will only cover whether sufficient power is expected for the statistical test. As the test typically relates to equality of whole curves, statistical significance at an interim analysis could be due to (only) early differences in the survival curves. It is not guaranteed that the survival data are sufficiently mature for drawing conclusions on the aim of the study. For example, the European Medicines Agency (EMA) oncology guideline states that interim analyses "should be undertaken only when available survival data provide the information needed for a proper evaluation of benefit/risk". The same guideline states "If a clear majority of the total number of expected events in the long term has been observed and a difference has been documented, this is normally accepted as an indicator that the study is reasonably mature and that the study results will remain stable over prolonged follow-up." Of interest is that the wording "the total number of expected events" relates to the long run (large follow-up times), not the total number of events at the final analysis. Despite this, maturity of the survival curves at the time of the interim analysis is often not considered explicitly in the planning stage, or only qualitatively. Our results make it possible to plan for mature survival data quantitatively. They also give insight into the relation between the patient fraction \(p\) (which is chosen by the researcher) and the amount of information that is expected to be available at the moment of the interim analysis: the Kaplan-Meier survival estimates and their corresponding prediction interval. Thus the results in this paper can help in designing the trial. In the simulation studies, the survival times are assumed to come from an exponential distribution and the accrual times from a uniform distribution. This is the most common setting that is assumed when designing a trial with a survival outcome. However, the asymptotic distribution was derived for an arbitrary distribution. The R-code is available from the corresponding author upon request and can be easily adapted for other distributions (e.g., Weibull or Royston-Parmar models (Royston and Parmar, 2002)) to make the application more general. **CONFLICT OF INTEREST** The authors declare no potential conflict of interest. **DATA AVAILABILITY STATEMENT** No real-life data have been used in this publication. **ORCID** Marianne Jonker: [https://orcid.org/0000-0003-0134-8482](https://orcid.org/0000-0003-0134-8482)
2310.09849
Solid state defect emitters with no electrical activity
Point defects may introduce defect levels into the fundamental band gap of the host semiconductors that alter the electrical properties of the material. As a consequence, the in-gap defect levels and states automatically lower the threshold energy of optical excitation associated with the optical gap of the host semiconductor. It is, therefore, a common assumption that solid state defect emitters in semiconductors ultimately alter the conductivity of the host. Here we demonstrate on a particular defect in 4H silicon carbide that a yet unrecognized class of point defects exists which are optically active but electrically inactive in the ground state.
Pei Li, Song Li, Péter Udvarhelyi, Bing Huang, Adam Gali
2023-10-15T14:43:18Z
http://arxiv.org/abs/2310.09849v2
# Solid state defect emitters with no electrical activity ###### Abstract Point defects may introduce defect levels into the fundamental band gap of the host semiconductors that alter the electrical properties of the material. As a consequence, the in-gap defect levels and states automatically lower the threshold energy of optical excitation associated with the optical gap of the host semiconductor. It is, therefore, a common assumption that solid state defect emitters in semiconductors ultimately alter the conductivity of the host. Here we demonstrate on a particular defect in 4H silicon carbide that a yet unrecognized class of point defects exists which are optically active but electrically inactive in the ground state. Point defects in semiconductors play a pivotal role in determining the electrical and optical properties of the host material. Understanding the physical fundamental of point defects in semiconductors was a key to arrive at the concept of opto-electronics devices, photovoltaics and energy storage devices, and very recently, state-of-the-art quantum information processing realisations [1; 2; 3; 4; 5] which have been shaped the socio-economical environment at global scale. Point defects may introduce defect levels (DLs) within the host semiconductor's fundamental band gap, thereby influencing its electrical conductivity, i.e., electrically active point defects [6; 7]. Notably, these in-gap DLs and associated states also impact the optical properties of the material by reducing the optical excitation threshold energy compared to the perfect semiconductor's optical gap [8; 9; 10; 11]. As a consequence, a common assumption is that solid-state defect emitters modify the host material's electrical conductivity [12; 13; 14; 15]. We show below that this common assumption is not generally valid. In Fig. 1, we depict the possible optical transition mechanisms within semiconductors. Many point defects introduce multiple deep DLs into the fundamental band gap that could dramatically modify the electrical properties of the host because these deep levels often participate in carrier trapping and recombination events. In these defects, optical transition could occur between the occupied and unoccupied DLs in the gap [see Fig. 1(a)]. Alternatively, the optical transition can occur between localized DLs and the band edge, either valence band maximum (VBM) or conduction band minimum (CBM) (e.g. Ref. [16]). The respective excited states may be called pseudo-acceptor and pseudo-donor states as they show a Rydberg-series of the excited states converging towards the ionization threshold [see Fig. 1(b)]. We note that the optical excitation threshold energies are lower than the electrical gap between the occupied and unoccupied states participating in the optical transition because of the attracting electron-hole interaction in the excited state. By harnessing this excitonic effect, we suggest a category of point defects that act as emitters and are electrically inactive at the same time. A defect may introduce just one occupied DL below VBM without disturbing the bands close to VBM, so the defect is electrically inactive in the ground state and its positive charge state is not stable. This defect can be optically excited where the hole is localized in the resonant DL whereas the electron occupies a state split from CBM that builds up a pseudo-donor excited state. 
The exciton binding energy in the excited state could shift the excitation energy below the optical band gap of the host semiconductor, thereby establishing a solid-state defect emitter [see Fig. 1(c)]. We label these defects EIDEs, short for electrically inactive defect emitters. In this study, we demonstrate the principles of EIDEs on a tri-carbon interstitial cluster in 4H silicon carbide (SiC), which produces an ultraviolet emission below the gap of 4H-SiC. We show by first-principles calculations that the given defect has a zero-phonon-line (ZPL) emission with characteristic local vibration modes in the photoluminescence (PL) spectrum, which agrees well with a previously reported defect emitter, the so-called \(\mathrm{D}_{\mathrm{II}}\) center in 4H-SiC [17; 18; 19]. The optical excited state is of pseudo-donor type, and the defect shows no electrical activity. The effect is mediated by the attractive electron-hole interaction in the optical excited state of the defect, enhanced by the resonant defect states, which is particularly prominent in indirect semiconductors. This example unveils the EIDE category of point defects in solids. We discuss potential dopants to engineer such defects in semiconductors. ## Results 4H-SiC is a wide-band-gap indirect semiconductor that serves as a platform for high-power, high-temperature electronic devices [20; 21; 22] as well as quantum information processing [23; 24], which makes it unique among the technologically mature semiconductors. The band gap of 4H-SiC is slightly reduced at elevated temperatures (see Ref. [25] and references therein). The VBM and CBM are located at the \(\Gamma\)-point and the \(M\)-point, respectively. The excitonic band gap of the material is 3.265 eV at 2 K [26], where the crystal phonon replica dominates the PL spectrum upon above-band-gap illumination; nevertheless, the ZPL of the free exciton can be weakly observed too because the free exciton may gain some momentum from the defects in 4H-SiC [27]. The PL spectrum of the bound exciton of the shallow donors (nitrogen substituting carbon in the lattice at the so-called quasicubic and hexagonal sites) can be well observed at 2 K, where the respective ZPL emissions at 3.243 eV and 3.256 eV are more pronounced as the defect potential breaks the translational symmetry of the crystal. Nevertheless, the phonon sideband still dominates in the respective PL spectrum, as expected for an indirect semiconductor [27]. In the following, we focus our attention on the ultraviolet \(\mathrm{D_{II}}\) color center in 4H-SiC recorded at near-cryogenic temperatures [19]. The \(\mathrm{D_{II}}\) center shows a sharp ZPL at 3.20 eV, which is only \(\approx 60\) meV lower than that of the free exciton in 4H-SiC [27; 28; 29]. A characteristic phonon sideband was also observed in the \(\mathrm{D_{II}}\) PL spectrum with sharp features that were associated with local vibration modes (LVMs) of the underlying defect [19]. We note that various reports on the \(\mathrm{D_{II}}\) PL spectrum exhibit different numbers of sharp features associated with LVMs [17; 19]. No single-defect observation has yet been carried out for the \(\mathrm{D_{II}}\) center [30; 31; 32; 33], thus some sharp features may not belong to the defect and could overlap with the PL spectrum of other defects. Nevertheless, previous studies attempted to identify this color center by calculating the LVMs of the defect models and comparing them to the observed features in the phonon sideband [31; 32; 34]. 
The most recent study proposed the tri-carbon interstitial cluster as the origin of the \(\mathrm{D_{II}}\) center, which was corroborated by molecular dynamics calculations revealing the high-temperature stability of the defect, in line with the observations [31]. However, the nature of the optical transition has not yet been discussed for any of the models. Furthermore, it has been recently shown that not all the calculated LVMs show up in the observed phonon sideband of the PL spectrum of other carbon clusters in 4H-SiC [16], thus the identification of this ultraviolet color center has not yet been established. We use the tri-carbon interstitial cluster as a working model for the \(\mathrm{D_{II}}\) PL center in 4H-SiC. First, the electronic structure and possible optical transitions of the host semiconductor are analyzed. The band structure of 4H-SiC is depicted in Fig. 2(a). Our HSE06 calculations yield an electronic band gap of 3.17 eV for the host 4H-SiC, which underestimates the low-temperature data by about 0.1 eV. The electron-hole pair in a free exciton has a crystal momentum \(\mathrm{k}_{M}\), where \(\mathrm{k}_{M}\) is the momentum corresponding to the \(M\)-point. Consequently, direct recombination of the electron at the CBM and the hole at the VBM is forbidden because of the crystal-momentum conservation law. To conserve the crystal momentum, free exciton recombination is only possible with the assistance of another particle (or quasiparticle). Therefore, the minimum direct optical transition in 4H-SiC occurs between the band edges at the \(M\)-point, with a gap of approximately 4.41 eV, which is larger than the indirect band gap. The tri-carbon interstitial cluster is composed of three carbon interstitials bridging three adjacent on-axis Si-C bonds at the hexagonal site [see Fig. 2(b)], which is the most stable among all possible configurations [16]. The calculated Kohn-Sham DLs for the neutral tri-carbon interstitial cluster are shown in Fig. 2(b). In the ground state, a doubly degenerate \(e\) state (VBM\(-0.15\) eV) and an \(a_{1}\) state (VBM\(-0.20\) eV) appear that, unlike the host bands, show no dispersion. We also find resonant states in this energy region which show quasilocalization where they cross the host bands but are heavily mixed with them. No further DLs emerge in the fundamental band gap and the VBM is basically unaffected by the presence of DLs (see Supplementary Fig. 1), implying that the defect is electrically inactive [see Figs. 2(c,d)]. Indeed, we find in our density functional theory (DFT) calculations that the positively charged defect is not stable (see Supplementary Fig. 2). It can be recognized that the DLs are well isolated from the bands around the \(M\)-point, and it may be expected that an electron from the DLs can be photoexcited to the CBM, constituting a pseudo-donor excitonic state. Figure 1: The possible optical transition mechanisms within indirect band gap semiconductors. (a) Occupied and unoccupied deep DLs. (b) Deep DL and band edge. (c) Electrically inactive DL and band edge: establishing EIDE. (d) Comparison between the values of the zero-phonon line (\(E_{\mathrm{ZPL}}\)) and the band gap (\(E_{\mathrm{g}}\)) for the defect category in (c). \(E_{\mathrm{ZPL}}\) is the energy difference between the optical excited state (ES) and the ground state (GS) for transitions with no phonons involved, whereas the thinner lines are the optical transition energies when phonons participate in the optical transition. 
We first studied the nature of the exciton and the oscillator strength of the optical transitions by a many-body perturbation method (see Methods) with that the excitonic effects can be accurately calculated [see Fig. 3(a)]. Here we focus to the lowest-energy bright transitions that only play the role in the PL emission of the defect. Bright transitions occur at around 3.2 eV that we label by P\({}_{1}\) and P\({}_{2}\) in Fig. 3(a). In both peaks the hole part of the exciton wavefunction is dominantly built up from the DLs and the electron part is located in the conduction bands at the \(M\)-point. The contribution is about 95% and 99% for P\({}_{1}\) and P\({}_{2}\) peaks, respectively. In contrast to P\({}_{1}\) and P\({}_{2}\) peaks, the P\({}_{3}\) and P\({}_{4}\) peaks in Fig. 3(a) are mainly caused by the excitation from DLs to the conduction bands at the \(L\)-point, collectively constituting approximately 87% and 82% of the total intensity. We here analyze the origin of the P\({}_{1}\) peak in detail whereas the analysis of the other peaks can be found in Supplementary Note 3. The calculated many-body perturbation theory band gap between the double degenerate \(e\) state (DLe) and the CBM at the \(M\)-point is 3.57 eV, thus the exciton binding energy is about 0.4 eV. The contribution of the electron-hole pairs to the P\({}_{1}\) exciton is depicted in Fig. 3(b). The foremost contribution arises from excitation originating from the degenerate DLe to the CBM, constituting approximately 93.71% of the total intensity. The excitation from the \(a\) state (DLa) to CBM contributes by about 1.40%. Thus, it can be concluded that the vast majority of the hole wavefunction has a localized nature whereas the electron wavefunction is from the CBM in the low-energy bound exciton. Based on our findings about the lowest energy bright excited state in the BSE spectrum, we employed HSE06 \(\Delta\)SCF method where an electron was promoted from the DLe to the CBM, in order to take into account the geometry relaxation in the optical excited state. This calculation makes it possible to simulate the ZPL energy and the phonon sideband in the PL spectrum. We assume that the vertical excitation energy can be well calculated by the many-body perturbation method and then we apply finite size error corrections of the supercell method, and then the reorganization energy is subtracted as obtained by the HSE06 \(\Delta\)SCF method to arrive at the final ZPL energy. We note that the \(\Delta\)SCF procedure will self-consistently change the Kohn-Sham states and levels Figure 3: (a) The optical transition intensity calculated within BSE. (b) The contributions of excitations originating from VBM, DLs, and resonances state (R) to CBM and conduction band in the \(L\)-point (CB-L) for the peak P\({}_{1}\) in (a). Inset: The diagram of excitation. Figure 2: (a) The band structure of 4H-SiC. (b) The calculated Kohn-Sham DLs of the ground state (GS) and excited state (ES) for tri-carbon interstitial cluster. The occupied and unoccupied DLs are labelled by filled and empty triangles, respectively. Inset: the configuration of tri-carbon interstitial cluster. (c) The band structure of 4H-SiC with tri-carbon interstitial cluster. (d) The local enlarged drawing of (c). The degenerate \(e\) states (DLe) and \(a_{1}\) state (DLa) are labelled in green and orange, respectively. accordingly. In the excited state, the hole DL pops up in the band gap close to VBM by 0.07 eV [see Fig. 2(b)]. 
The redistribution in the localization of the defect wavefunctions results in relatively high forces on the ions, and the defect reconstructs in the optical excited state with a reorganization energy of about 0.13 eV, slightly reducing the symmetry of the defect. We considered the P\({}_{1}\) peak in the BSE spectrum as the vertical excitation energy, to which we applied finite-size error corrections (adding 0.10 eV) that account for the pseudo-donor nature of the excited state, as explained in Supplementary Note 4. The final simulated ZPL is at 3.13 eV, in good agreement with experimental data. We then computed the phonon sideband of the PL spectrum within Huang-Rhys theory using the excited-state geometry obtained by the HSE06 \(\Delta\)SCF method [35; 36; 37]. The PL spectrum of the tri-carbon interstitial cluster is shown in Fig. 4 and the respective vibration frequencies are listed in Supplementary Table 1. Because of the symmetry reduction, both symmetry-breaking \(E\) and symmetry-conserving \(A_{1}\) vibration modes participate in the phonon sideband of the PL spectrum. The doubly degenerate highest LVMs (\(E\)-modes) at 162 meV are stretching modes of two of the three approximately vertical C-C bonds. The third LVM is a stretching mode of the three C-C bonds, which preserves the symmetry of the defect (\(A_{1}\)-mode). The fourth LVM is a breathing mode of the triangle formed by the three carbon interstitials. The respective fifth and sixth LVMs result from the axial vibration of the carbon atoms right below and above the center of the tri-carbon interstitial cluster. We conclude that the calculated ZPL energy and the LVM structure agree well with those of the D\({}_{\text{II}}\) color center in 4H-SiC (see Supplementary Note 5). ## Discussion We demonstrated that the D\({}_{\text{II}}\) color center is a point defect which belongs to the EIDE category. In this particular case, the excited state has a pseudo-donor nature. This explains the fact that the ZPL energy of the D\({}_{\text{II}}\) center scales with the width of the band gap of the SiC polytypes, cf. Refs. [17; 19; 33]. One can generalize the feasible electronic structure of an EIDE as follows: (i) an occupied DL should lie below but not too far from the VBM, yet deep enough not to realize a stable positive charge state; (ii) an empty DL should lie above but not too far from the CBM, yet high enough not to realize a stable negative charge state; or (iii) a combination of both. The bound exciton state may be more easily realized in an indirect than in a direct semiconductor, where the attractive electron-hole interaction of the bound exciton and the geometry relaxation in the excited state can result in a ZPL emission below the optical gap of the host semiconductor. The wavelength of these emitters depends on the fundamental band gap of the host semiconductor and the strength of the exciton binding energy in the excited state of the defect. The latter depends on the screening in the material. Semiconductors with moderate and low screening can host defects with exciton binding energies that may exceed hundreds of millielectronvolts, which establishes a considerable playground for realizing EIDEs. Now one may ask how one can engineer EIDEs in various materials. 
Electrical inactivity of a defect in a semiconductor strongly implies that (i) vacancies or vacancy complexes can be disregarded, as dangling bonds introduce levels in the fundamental band gap, (ii) a substitutional defect or dopant should be isovalent with the host atom to avoid donating electrons to, or accepting electrons from, the crystal, and (iii) interstitial or interstitial-cluster defects should introduce an even number of electrons to realize a closed-shell singlet ground state, and they should possess stronger chemical bonds than those of the host crystal so that the occupied and unoccupied defect levels presumably lie outside the fundamental band gap. Certainly, candidate EIDE defect models should be checked one by one in a given semiconductor. This is not surprising, as the same holds, e.g., for the effective-mass donor (n-type doping) or acceptor (p-type doping) models. Although the usual recipe for realizing donor and acceptor states is to substitute the host atom with a dopant possessing one more or one fewer electron, respectively, this recipe does not always work. As an example, nitrogen, with five valence electrons, can replace a silicon atom with four valence electrons in the silicon crystal; however, one chemical bond is broken in the defect and the silicon dangling bond produces a deep level instead of a shallow effective-mass donor. On the other hand, a boron atom with three valence electrons substituting silicon acts as an effective-mass acceptor in the silicon crystal, despite the fact that boron and nitrogen atoms sit in the same row of the periodic table. In contrast, nitrogen substituting carbon in SiC does act as an effective-mass donor, as anticipated earlier in this paper. These examples highlight the need for accurate _ab initio_ predictions of defect states, because simple rules cannot be trusted even for cases that are thought to be easy, such as defect engineering of donors and acceptors in semiconductors. This study instead establishes the search criteria for electrically inactive defect emitters in semiconductors, as stated above. Recent advances in the theoretical spectroscopy of defects in semiconductors [11] make it possible to systematically search for defects with target properties, which may lead to the discovery of EIDEs, n-type and p-type dopants, or defect qubits in semiconductors. This study uncovers a class of defects that decouple optical and electrical effects. These findings propose an unexplored avenue for engineering semiconductor devices, suggesting the feasibility of creating defect species that offer independent control over optical and electrical functionalities within the same platform. The implications of this work extend to the design and optimization of highly integrated and miniaturized semiconductor devices that leverage both optical and electrical attributes for enhanced functionality. Because the modularity of opto-electronic devices can be simplified, this may lead to reduced production costs and contribute to green technology by lowering the energy consumption of these devices in operation. Figure 4: The PL spectrum of the tri-carbon interstitial cluster. The \(E\) and \(A_{1}\) modes are marked with green and red vertical lines below the corresponding LVMs. The LVMs (blue [17] and orange [19] vertical lines) of the D\({}_{\text{II}}\) PL spectrum with frequencies higher than those of the 4H-SiC bulk phonon spectrum are also shown for comparison. 
## Conclusion This study has revealed a phenomenon that expands our understanding of the interplay between defect-induced optical and electrical effects, exemplified here by a tri-carbon interstitial defect in 4H-SiC. Unlike the conventional assumption that solid-state defect emitters in semiconductors necessarily affect the electrical conductivity of the host, we demonstrate the existence of defects that introduce optically active but electrically inactive states. This phenomenon is more likely to be observed in indirect than in direct band gap semiconductors, where all the defect levels lie outside the fundamental band gap but reside very close to the band edges, which may alter the minimal optical excitation energy without influencing the electrical conductivity. By exploiting the attractive interaction between electrons and holes in the optical excited state, these defects can show significant optical activity without exerting any discernible impact on the electrical conductivity. This finding might pave the way to realizing a new generation of opto-electronic devices. ## Methods All the first-principles calculations are performed using density functional theory (DFT), as implemented in the Vienna _ab initio_ simulation package (VASP) [38], with the projector augmented wave method [39]. The electron wave functions are expanded in a plane-wave basis set limited by a cutoff of 420 eV. The fully relaxed geometries were obtained by minimizing the quantum mechanical forces on the ions until they fell below the threshold of 0.01 eV/Å, and the self-consistent calculations were converged to \(10^{-5}\) eV. The screened hybrid density functional of Heyd, Scuseria, and Ernzerhof (HSE06) [40] is employed to calculate the electronic structure. In this approach, a fraction of nonlocal Hartree-Fock exchange is admixed to the generalized gradient approximation of Perdew, Burke, and Ernzerhof (PBE) [41] with the default fraction (\(\alpha=0.25\)) and an inverse screening length of 0.2 Å\({}^{-1}\). The calculated band gap is 3.17 eV. We embedded the tri-carbon interstitial cluster in a 576-atom 4H-SiC supercell, which is sufficient to minimize the periodic defect-defect interaction. The single \(\Gamma\)-point scheme is convergent for the k-point sampling of the Brillouin zone (BZ). The \(M\)-point and \(L\)-point in the BZ are projected onto the \(\Gamma\)-point in this supercell due to band folding; the lowest-energy conduction bands occur at these k-points. The excited states were calculated by the \(\Delta\)SCF method [37]. We note that the reorganization energy and the optimized geometry of the optical excited state can be calculated with the \(\Delta\)SCF method, which are both important to predict the photoluminescence spectrum including the phonon sideband. For the phonon modes, we calculated the corresponding dynamical matrix containing the second-order derivatives of the total energy by means of the PBE [41] functional, where all the atoms in the supercell were allowed to vibrate. In this case, we apply strict threshold parameters for the convergence of the electronic structure (\(10^{-6}\) eV) and atomic forces (\(10^{-3}\) eV/Å) in the geometry optimization procedure. These vibration modes are applied together with the HSE06 ground-state and excited-state geometries to simulate the PL spectrum of the given defects within Huang-Rhys theory [36; 37]. This strategy worked well for deep defects in diamond (e.g., Refs. [42; 43]). 
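To illustrate how the relaxed ground- and excited-state geometries and the phonon modes enter such a Huang-Rhys simulation of the phonon sideband, the following minimal sketch computes partial Huang-Rhys factors from the mass-weighted displacement projected onto each mode. It is our own illustration, not the authors' code, and the array shapes and function name are assumptions:

```python
import numpy as np

HBAR = 1.054571817e-34   # J*s
AMU = 1.66053906660e-27  # kg

def partial_hr_factors(masses_amu, dR_ang, modes, omegas_radps):
    """Partial Huang-Rhys factors S_k for each phonon mode.

    masses_amu   : (N,) atomic masses in amu
    dR_ang       : (N, 3) excited-state minus ground-state positions in Angstrom
    modes        : (3N, N, 3) orthonormal mass-weighted phonon eigenvectors
    omegas_radps : (3N,) angular phonon frequencies in rad/s
    """
    dq = np.sqrt(masses_amu * AMU)[:, None] * dR_ang * 1e-10   # mass-weighted displacement
    q_k = np.einsum('kna,na->k', modes, dq)                     # projection onto each mode
    return omegas_radps * q_k ** 2 / (2.0 * HBAR)               # dimensionless S_k

# The total Huang-Rhys factor S = sum_k S_k sets the weight of the phonon sideband
# relative to the ZPL (the ZPL fraction is approximately exp(-S) at low temperature).
```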
To accurately consider the excitonic effect, many-body perturbation theory based on GW approximation with Bethe-Salpeter equation (BSE) [44; 45] are used. We here use the supercell with 576-atom in order to reach close-to-converged calculation for the GW method which is very computationally demanding in the VASP implementation as more than 9000 bands were included in the single-shot G\({}_{0}\)W\({}_{0}\) calculation. We note that the CBM at the \(M\)-point and the conduction band at the \(L\)-point are projected into the \(\Gamma\)-point for this supercell too. The energy cutoff for the response function is set to be 150 eV. The Tamm-Dancoff approximation was used to solve BSE. The highest one hundred valence bands and one hundred lowest conduction bands are considered as the basis for the excited state in the BSE procedure. The calculations are based on the optimized HSE06 functional which resulted in the optimized geometry and the electronic structure of the neutral tri-carbon interstitial cluster in 4H-SiC. ## Author contribution PL carried out the calculations. All authors contributed to the discussion and writing the manuscript. AG developed the concept of electrically inactive defect emitters and led the entire scientific project. ## Competing interests The authors declare that there are no competing interests. ## Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request. ## Acknowledgement Support by the National Excellence Program for the project of Quantum-coherent materials (NKFIH Grant No. KKP129866) as well as by the Ministry of Culture and Innovation and the National Research, Development and Innovation Office within the Quantum Information National Laboratory of Hungary (Grant No. 2022-2.1.1-NL-2022-00004) is much appreciated. AG acknowledges the high-performance computational resources provided by KIFU (Governmental Agency for IT Development) institute of Hungary and the European Commission for the project QuMicro (Grant No. 101046911). BH acknowledges the NSFC (Grants Nos. 12088101 and 12174404), National Key Research and Development of China (Grant No. 2022YFA1402400), NSAF (Grant No. U2230402).
2303.09971
Estimating Censored Spatial-Temporal Demand with Applications to Shared Micromobility
In shared micromobility networks, such as bike-share and scooter-share networks, using trip data to accurately estimate demand in docked and dockless systems is critical to analyzing how the system is operating, such as identifying the number of dissatisfied users, operational costs, and equity in access, especially for city officials. However, the distribution of available bikes affects the distribution of observed trips. Users may walk from an unobserved cell location to an available bike masking the true location of user demand, and users may look for a bike and not find one, which is unobserved user demand. In collaboration with city planners from Providence, R.I., we present a flexible and interpretable framework to estimate spatial-temporal demand as a spatial non-homogeneous Poisson process that explicitly models how users choose a bike, bridging the gap between the docked and dockless methodology. Further, we present computational experiments highlighting that our method provides more accurate estimates of demand when there is incomplete availability compared to previous methods, and we comment on the results of our algorithm on data from Providence's dockless scooter-share network. Our estimation algorithm is publicly available through an efficient and user-friendly application designed for other city planners and organizations to help inform system planning.
Alice Paul, Kyran Flynn, Cassandra Overney
2023-03-17T13:44:08Z
http://arxiv.org/abs/2303.09971v2
# Estimating Censored Spatial-Temporal Demand with Applications to Shared Micromobility ###### Abstract In shared micromobility networks, such as bike-share and scooter-share networks, operators and city planners are interested in understanding user demand. However, observed trips do not equate directly to demand. The distribution of available bikes affects the distribution of observed trips both through the censoring of potential users who cannot find a nearby bike and the spatial dependence between where a user originates and where a trip begins. The ability to use trip data to accurately estimate demand in both docked and dockless systems is key to analyze the number of dissatisfied users, operational costs, and equity in access, especially for city officials. In this paper, we present a flexible and interpretable framework to estimate spatial-temporal demand by explicitly modeling how users interact with the system. This choice model and algorithm was informed by our collaboration with city planners from Providence, RI, and we demonstrate our algorithm on data from Providence's dockless scooter-share network. Our estimation algorithm is publicly available to use through an efficient and user-friendly application designed for other city planners and organizations to help inform system planning. ## 1 Introduction Shared micromobility networks, including bike-share and scooter-share networks, are becoming an established part of urban environments. Access to these networks expands the modes of transportation available and enables connections to existing public transportation. When checking in or checking out a bike or scooter, users directly impact the state of the system and the distribution of available vehicles. This leads to a wealth of individual-level data that city officials can use to extract usage patterns, understand how the population interacts with the network, and inform contracts with the available operators to better serve the city. While initial shared micromobility networks had a docked structure in which users checked in and out bikes at a set of available stations, more recently there has been an increase in dockless and hybrid systems in which users can return a bike or scooter anywhere by simply checking it back in through the mobile app and enabling the locking mechanism. Dockless systems increase the flexibility and efficiency of the network [2, 17], allowing the state of the system to adapt to demand. In theory, this leads to increased equity in access. However, there are additional challenges as operators tend to prioritize repositioning scooters or bikes to areas with low idle times over maintaining access across the service area [2, 17]. This makes it even more important for city officials to understand how availability is affecting observed demand to inform the regulations on operators, which can vary significantly between cities. For example, the City of Providence requires each operator to have at least 5% of trips occur in each of five designated regions, shown in Figure 1[4]. Because availability affects demand itself, equitable access can be defined as availability that is correlated with the true underlying demand when there is dependable and nearby access. This underlying demand naturally varies across the service area given the underlying population (e.g. number of commuters or students) and built environment (e.g. distance to public transportation, access to bike lanes) [2]. 
Therefore, it is important to adjust for censoring due to unavailability when estimating demand from past usage -- the observed data will not contain events for users who wanted to use the system but did not find an available bike or scooter. In this paper, we present a new framework for estimating spatial-temporal demand from past usage using an Expectation-Maximization (EM) algorithm to estimate the underlying parameters. Our demand estimation algorithm incorporates a probability model on whether a user at a particular location will find an available bike or scooter to explicitly model the observed censoring. Additionally, this model extends to both docked and dockless system by directly incorporating the spatial dependence and censoring on a user's initial location. Estimating user location through this choice model is important because this data is not generally made available to city officials. To illustrate the use of our algorithm, we present a specific user choice model based on the Figure 1: Service regions for Providence’s Shared Micromobility Program [4]. distance between an arriving user and the available scooters or bikes. This model can be specified by two interpretable parameters, but our algorithm can be extended to other user choice models. Our estimation algorithm is incorporated into a publicly available application that allows users to efficiently analyze their own data. Additionally, because the output from the algorithm is a matrix of estimated rates of arrival by location and time, the application visualizes this distribution for the user and highlights areas with potential unmet demand. By focusing on the usefulness and ease of interpretation, we hope to increase the audience for these tools and allow more people without data analysis training to gain insight from publicly available data. In Section 2, we place our work in the context of past literature on estimating demand in shared micromobility networks. We then introduce our general framework and user choice model in Section 3 and present our estimation algorithm in Section 4. In Section 5, we demonstrate the use of our publicly available tool using data from Providence, Rhode Island and comment on the results. Additionally, a small test data set, modified from Kansas City's Microtransit network [3], is available for users to test our application. Last, in Section 6, we perform sensitivity analysis on both a simple simulated example and the Providence data. While our tool gives useful insight into usage patterns and highlights potential areas to increase service levels, future work will focus on how to use the results from this estimation to explicitly analyze how changing access affects user behavior and overall observed demand. We discuss these possible extensions in Section 7. ## 2 Literature Review Initial research in analyzing bike-share data focused on estimating demand in a docked system. In these cases, the demand at station \(j\) is often modeled as a Poisson arrival process where the arrival rate at station \(j\) in time period \(t\) is given by the number of trips observed during that time period divided by the proportion of time at least one bike was available [9, 10]. The latter part adjusts the estimate to account for potential censoring. In a related paper, [11] introduce a censored likelihood function within a Gaussian process model. However, these estimates ignore the spatial dependence between stations, which can often be quite close together. 
When one station does not have available bikes, users may choose a nearby station. In docked systems, another stream of research has focused on predicting trips. These have ranged from regression-based methods [6, 19] to simulation-based methods [20]. To incorporate spatial dependence, some of these methods have used clustering to define neighborhoods to predict the number of trips or trip destinations [5, 8]. More recently, machine learning methods such as gradient-boosted regression trees [13] and attention-based networks [18, 21] have also been applied to improve prediction of trips in docked systems. Additionally, researchers have focused on using the demand estimates above to inform decisions such as placement and redistribution of bikes [9, 10]. Dockless systems bring additional flexibility to users and require different methods to predict demand and usage. A dockless system also allows operators to adjust availability without having to change the station structure. This leads to the potential for increased equity [17, 2]. However, there is also potentially large overestimation of service levels when censoring is not considered given the spread of availability [1]. Predicting trips or demand in dockless systems requires a spatial-temporal representation of the system such as representing the network as a grid or graph. Models predicting trips have incorporated this structure and utilized different machine learning methods such as random forests [24], gradient boosted decision trees [15], convolutional neural networks [22], and long short-term memory neural networks [23, 14]. The complexity of these methods increase prediction accuracy but give less informative outcomes to city planners. [12] aims to find a sparse representation of the travel between different regions in the service area. To our knowledge, there has been limited work on directly estimating demand in dockless systems beyond using the grid system to have each grid point represent a'station' and applying the methods mentioned above [1]. This again ignores the spatial dependence between grid points. While we also use a grid to represent the service area, this paper directly models users arriving in one cell and choosing a bike or scooter in another grid cell, meaning that as we decrease the size of each grid cell we can estimate a very granular picture of demand. ## 3 Framework and User Choice Model In this section, we define our general framework and our model for how users arrive and make decisions. For ease of presentation, we consider our shared micromobility network to only have available bikes. We first discretize the space by inducing a grid \(\mathcal{G}\) over the network and indexing the grid cells \(i=\{1,2,\ldots,m\}\). Further, we discretize the times of day into discrete time periods \(t=1,2,\ldots,\mathcal{T}\). In our app, these time periods are set to the hours of the day and the granularity of this grid is specified by the user -- smaller grid cells give more detailed demand estimates but require more computation time. Our framework makes the following three assumptions. 1. All bikes and users are located at the center point of the corresponding grid cell. 2. For each time period \(t=1,2,\ldots,\mathcal{T}\) and grid cell \(i\in\{1,2,\ldots,m\}\), users arrive in the center of grid cell \(i\) following an independent Poisson process with rate parameter \(\mu_{t,i}\). 3. 
Each arriving user \(j\) observes the available demand and either picks a bike from grid cell \(i\in\{1,2,\ldots,m\}\) with probability \(\text{prob}_{i}\) or leaves the system without using a bike with probability \(\text{prob}_{0}\), where \(\text{prob}_{0}+\sum_{i=1}^{m}\text{prob}_{i}=1\). If a user decides to choose a bike from grid cell \(i\), they choose one at random from the available bikes in that cell. The framework above assumes that the rate \(\mu_{t,i}\) is fixed for each time period and grid cell. That is, that the variance in actual demand can be accounted for by the estimated Poisson distribution Figure 2: Visualization of blank grid discretization (400m grid cells, Providence). itself. Dependency on other covariates such as the day of week, season, or other characteristics is discussed in Section 4.3. The probabilities \(\text{prob}_{i}\) incorporate the censoring and spatial dependence between a user's location and any generated trip. Note that our estimation algorithm is agnostic to the distribution of these probabilities and can be generalized to any choice model. For example, a user choice model may take into account the direction of travel. We consider a choice model in which users have a threshold distance they are willing to travel from where they arrive to where they pick up a bike. Users then greedily choose an available bike within that threshold, breaking ties randomly. If no bike is within the user's threshold, they leave without generating a trip. For example, in Figure 3 if a user arrives and has a threshold of 250m, they would only consider bikes within their same grid cell since all other grid cells are at least 400m away. To model the distribution of user thresholds, we use a discretized version of a half-normal distribution as shown below. In particular, for a half-normal distribution with standard deviation \(\sigma\), let \(\mathrm{f}_{\sigma}(x)\) be the corresponding cumulative distribution function. We consider all possible distances \(\mathcal{D}=\{\mathrm{dist}_{0},\mathrm{dist}_{1},\ldots,\mathrm{dist}_{\max}\}\) between the center points of two grid cells up to distance \(\mathrm{dist}_{\max}\), the maximum distance a user would be willing to walk to find a bike, and define the probability a user has a threshold in the range \([\mathrm{dist}_{i},\mathrm{dist}_{i+1})\) to be \[\Pr(\mathrm{dist}_{i}\leq\mathrm{threshold}<\mathrm{dist}_{i+1}):=(f_{\sigma} (\mathrm{dist}_{i+1})-f_{\sigma}(\mathrm{dist}_{i}))/(1-f_{\sigma}(\mathrm{ dist}_{\max})).\] Given the assumption that all bikes and users are located at the center point of a grid cell, this is the probability a user considers bikes up to a distance \(\mathrm{dist}_{i}\) away. Figure 3 shows how the grid induces the set of possible travel distances, and Figure 4 demonstrates how the folded normal distribution leads to the discretized threshold probabilities for a set maximum distance and \(\sigma\). To set \(\sigma\), we let \(p_{0}\) be the probability a user is only willing to consider bikes within their own grid cell and use bisection search to find \(\sigma\) such that \(\Pr(\mathrm{dist}_{0}\leq\mathrm{threshold}<\mathrm{dist}_{1})\approx p_{0}\). Specifying a higher \(p_{0}\) reduces \(\sigma\) and leads to a sharper decrease in probabilities whereas a small \(p_{0}\) can lead to an almost uniform distribution on threshold up to the maximum distance, showing the flexibility in this distribution. Figure 5 illustrates this for \(p_{0}=0.1\) and \(p_{0}=0.7\). 
Our user threshold distribution is informed by past research on user behavior, which indicates that a maximum travel distance of 800m or 1000m is reasonable to assume and that users are more likely to use bikes closer to them [25, 7]. The distribution is also easy to visualize and adjust. In our public application, \(\mathrm{dist}_{\max}\) and \(p_{0}\) are user-specified parameters with default values of 1000m and 0.7, respectively, for a grid with 400m wide grid cells. These default values and our user choice model overall were also informed by our discussions with city planners from Providence.
Figure 3: Visualization of relative grid cell distances in a 400m grid with a maximum distance of 1000m.
Figure 4: The truncated probability density function for the folded half-normal distribution with variance \(\sigma^{\mathbf{2}}\) is given by the blue curve and the corresponding user threshold distribution is given by the bar plot.
Figure 5: Example of how \(\mathbf{p_{0}}\) affects the estimated distribution. The user threshold distribution with \(\mathbf{p_{0}=0.7}\) (\(\sigma=\mathbf{392m}\)) on the left and with \(\mathbf{p_{0}=0.1}\) (\(\sigma=\mathbf{2000m}\)). A smaller \(\mathbf{p_{0}}\) shifts the distribution towards a uniform distribution.

## 4 Estimation Algorithm

To estimate the rates \(\mu_{t,i}\), we assume we have data on the observed trips and the location of available bikes across \(k\) consecutive days indexed by \(d=1,2,\ldots,k\). Suppose there are \(n\) observed trips and that each trip \(j=1,2,\ldots,n\) occurs at cell \(c_{j}\) in time period \(t_{j}\) on day \(d_{j}\). Our goal is to set \(\mu_{t,i}\) to maximize the likelihood of the observed data. As mentioned, estimating the rates directly from the trips has the following limitations.
1. Users walk from an unobserved cell location to an available bike. Thus, the true location of user demand is not included in the trip origin data.
2. Users may look for a bike and not find one within their user threshold, which is unobserved user demand.
These two limitations require us to back-solve from the data to estimate true demand in each cell. To do so, we define an indicator latent variable \(z_{j,i}\) representing whether the user from trip \(j\) comes from cell \(i\) and use an EM algorithm to infer the Poisson arrival rates of each cell during each time period. Before defining the EM algorithm, we define two probabilities necessary for the calculations.

### Algorithm Notation

First, we define \(\pi_{j,i}\) to be the probability a user arriving in cell \(i\) at the time of trip \(j\) would choose a bike in cell \(c_{j}\), where \(c_{j}\) is the cell from which trip \(j\) begins. Let \(\text{dist}_{j,i}\) be the distance between grid cell \(i\) and \(c_{j}\), \(S_{i}\) be the set of bikes closest to grid cell \(i\), and \(S_{j,i}\) be the set of bikes within grid cell \(c_{j}\). There are two cases.
1. There exists another available bike in a cell closer to \(i\) than \(c_{j}\). That is, \(S_{j,i}\not\subseteq S_{i}\). In this case, we have \(\pi_{j,i}=0\).
2. The bikes in grid cell \(c_{j}\) are part of the set of closest bikes to the user. Then the probability a user in grid cell \(i\) would choose a bike in grid cell \(c_{j}\) is based on whether the user's threshold is at least \(\text{dist}_{j,i}\) and the fraction of bikes in \(S_{i}\) that are in grid cell \(c_{j}\).
\[\pi_{j,i}=\Pr(\text{threshold}\geq\text{dist}_{j,i})\,\frac{|S_{j,i}|}{|S_{i}|}.\] Second, we define \(\alpha_{t,i}\) to be the probability a user arriving in cell \(i\) at a uniformly distributed time within time period \(t\) and with a random threshold will find a bike within their distance threshold. To estimate this probability, we consider each possible threshold value range. Suppose that the user's threshold is in the range \([\text{dist}_{l},\text{dist}_{l+1})\) and consider the availability during time period \(t\). Using the bike locations during that time period, we can find the percent of time that the closest bike to cell \(i\) is at most relative cell distance \(\text{dist}_{l}\) away from cell \(i\). Let this value be \(\text{perc}_{t,i,l}\). Then, to find \(\alpha_{t,i}\), we consider all possible ranges for the user's threshold. Summing these probabilities over all feasible distances and then averaging across the days, we obtain an empirical estimate for \(\alpha_{t,i}\): \[\hat{\alpha}_{t,i}=\sum_{l=0}^{|\mathcal{D}|-2}\text{perc}_{t,i,l}\cdot\Pr(\text{dist}_{l}\leq\text{threshold}<\text{dist}_{l+1}).\]

### EM Algorithm

As mentioned earlier, we introduce the indicator latent variable \(z_{j,i}\) representing whether the user from trip \(j\) comes from cell \(i\). Since each trip can come from only one cell, we have the constraint that \(\sum_{i=1}^{m}z_{j,i}=1\). We now consider the full data \((x,z)\), where \(x\) is the observed trip data and \(z\) is the unobserved data. The log-likelihood of the observed data with arrival rates \(\mu\) can be written as \[\ell(\mu;x) =\sum_{j=1}^{n}\log p(c_{j}|\mu)\] \[=\sum_{j=1}^{n}\log\sum_{i=1}^{m}p(c_{j},z_{j,i}=1|\mu).\] We use an EM algorithm to maximize the log-likelihood function by alternating between two steps: an Expectation Step (E-Step) and a Maximization Step (M-Step). In the E-Step, we maximize the expectation of the log-likelihood with respect to the latent indicator variables \(z\) given the data \(x\) and current estimates for \(\mu\), and in the M-Step, we maximize the log-likelihood estimate with respect to the parameters \(\mu\) given the data \(x\) and the current probability estimates for \(z\). The algorithm is guaranteed to converge to a local optimum [16]. In the expectation step of the algorithm, we maximize the log-likelihood with respect to the distribution of \(z\) given \(x\) and \(\mu\). Since \[p(c_{j},z_{j,i}=1|\mu)=p(z_{j,i}=1|c_{j},\mu)\cdot p(c_{j}|\mu),\] we can maximize the log-likelihood by estimating the first term, the posterior probability that \(z_{j,i}=1\) given the data and arrival rates \(\mu\). Now \(p(z_{j,i}=1|c_{j},\mu)\) is the probability that the user from the \(j^{th}\) trip starting in cell \(c_{j}\) during time period \(t_{j}\) comes from cell \(i\). We define this posterior probability to be the membership weight \(w_{j,i}\) of trip \(j\) to cell \(i\), which can be derived as follows. Given a mixture of \(m\) independent Poisson processes with average rates \(\mu_{t_{j},1},\mu_{t_{j},2},\ldots,\mu_{t_{j},m}\) during time period \(t_{j}\), the probability that a user is from Poisson process \(i\) is given by \(\mu_{t_{j},i}/\sum_{l=1}^{m}\mu_{t_{j},l}\).
Using the probability \(\pi_{j,i}\) that an arriving user in cell \(i\) at the time of trip \(j\) chooses the bike from trip \(j\) in cell \(c_{j}\), we find that the overall probability is given by \[\pi_{j,i}\cdot\frac{\mu_{t_{j},i}}{\sum_{l=1}^{m}\mu_{t_{j},l}}.\] We then normalize this probability distribution over all cells to obtain the likelihood membership weights \(w_{j,i}\): \[w_{j,i}=\frac{\left(\frac{\pi_{j,i}\mu_{t_{j},i}}{\sum_{l=1}^{m}\mu_{t_{j},l}}\right)}{\sum_{i^{\prime}=1}^{m}\left(\frac{\pi_{j,i^{\prime}}\mu_{t_{j},i^{\prime}}}{\sum_{l=1}^{m}\mu_{t_{j},l}}\right)}=\frac{\pi_{j,i}\mu_{t_{j},i}}{\sum_{i^{\prime}=1}^{m}\pi_{j,i^{\prime}}\mu_{t_{j},i^{\prime}}} \tag{1}\] Intuitively, the membership weights balance the arrival rates by the likelihood a user would actually travel to cell \(c_{j}\) from cell \(i\). In the maximization step of the EM algorithm, we maximize the estimate of the log-likelihood with respect to \(\mu\) given the data and the current distribution of \(z\) found in the E-Step. Estimating the likelihood as \(p(c_{j},z_{j,i}=1|\mu)=p(z_{j,i}=1|c_{j},\mu)\cdot p(c_{j}|z_{j,i}=1,\mu)\), we get \[\hat{\mu}=\arg\max_{\mu}\sum_{j=1}^{n}\log\sum_{i=1}^{m}w_{j,i}\,p(c_{j}|z_{j,i}=1,\mu).\] Since the arrivals in cell \(i\) follow a Poisson distribution, given a fixed number of arrivals, the distribution of those arrivals over time period \(t\) is uniform. Recall the probability \(\alpha_{t,i}\) that a uniformly distributed user arriving in cell \(i\) during time period \(t\) finds a bike within their threshold. Then the number of observed trips follows a Poisson distribution with mean \(\alpha_{t,i}\cdot\mu_{t,i}\). Therefore, setting \(\mu_{t,i}\) to maximize the likelihood of observing \(x\) yields the estimate \[\hat{\mu}_{t,i}=\frac{\frac{1}{k}\sum_{j:t_{j}=t}\hat{w}_{j,i}}{\hat{\alpha}_{t,i}}, \tag{2}\] where \(\hat{w}_{j,i}\) is calculated from Equation 1 using the current estimate of \(\hat{\mu}\). In summary, we can find estimated rates \(\hat{\mu}\) by alternating between the following two steps.
1. E-Step: Maximize the expectation of the log-likelihood with respect to the latent indicator variables \(z_{j}\) given the data \(x\) and current arrival rates \(\hat{\mu}\) by using Equation 1 to update the estimated membership weights \(\hat{w}_{j,i}\).
2. M-Step: Maximize the log-likelihood estimate with respect to the parameters \(\mu\) given the data \(x\) and the current membership weights \(\hat{w}_{j,i}\) by using Equation 2 to update the estimated arrival rates \(\hat{\mu}_{t,i}\).

### Extensions

We conclude this section by describing extensions to the above framework. In Section 3, we assumed the Poisson arrival rates are fixed from day to day. If we want to allow certain factors to impact arrival rates, such as the weather, season, or day of the week, we can either filter the data prior to analysis or extend the model to estimate a rate \(\hat{\mu}_{t,i}\) for each day by allowing for a weighted average in the M-step, with weights reflecting the similarity between days using the specified characteristics. Additionally, the model can be extended to predict demand for trips rather than just starting location by using the distribution of drop-off locations. Drop-off locations have the added benefit of being uncensored in the case of dockless systems.
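For concreteness, a single EM iteration of Section 4.2 (Equations 1 and 2) amounts to the following computation. This is an illustrative sketch rather than our released implementation: it assumes the choice probabilities \(\pi_{j,i}\) and availability estimates \(\hat{\alpha}_{t,i}\) have already been computed and stored as arrays, and the variable names are placeholders.

```python
import numpy as np

def em_iteration(mu, pi, trip_periods, alpha, n_days):
    """One E-step + M-step update of the Poisson arrival rates.

    mu           : array (T, m)  current rate estimates mu[t, i]
    pi           : array (n, m)  choice probabilities pi[j, i] for each observed trip j
    trip_periods : array (n,)    time period t_j of each observed trip
    alpha        : array (T, m)  estimated availability alpha[t, i]
    """
    # E-step (Eq. 1): membership weights proportional to pi[j, i] * mu[t_j, i].
    unnorm = pi * mu[trip_periods, :]               # shape (n, m)
    w = unnorm / unnorm.sum(axis=1, keepdims=True)  # normalize over candidate cells

    # M-step (Eq. 2): average attributed trips per day, corrected for censoring.
    mu_new = np.zeros_like(mu)
    for t in range(mu.shape[0]):
        counts = w[trip_periods == t].sum(axis=0) / n_days
        mu_new[t] = counts / np.maximum(alpha[t], 1e-12)  # guard against tiny alpha
    return mu_new

# Typical usage: start from a uniform initialization and iterate until stable.
# mu = np.full((T, m), trips_per_day / m)
# for _ in range(100):
#     mu = em_iteration(mu, pi, trip_periods, alpha_hat, n_days=k)
```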
Last, the algorithm above is also flexible enough to extend beyond bike-share or scooter-share programs to any setting in which we are interested in estimating rates of events where the observed events depend on some underlying availability. For example, it could be used to model demand for bus travel routes when users may switch between buses depending on the schedule and time of day.

## 5 Implementation, Public Application, and Example

Our code is available to use through a live R Shiny application. The demand estimation algorithm was implemented in Python 3.10.9. The grid discretization of the map is incorporated through a grid object, which keeps track of the set of grid cell objects it contains. Each grid cell is efficiently indexed by the grid, and keeps track of its location and nearby bike availability. A data processing class then parses the input trip data sequentially to simultaneously update the relative grid cell availabilities of the grid, and uses the grid cells' availability values to compute the appropriate \(\pi\) and \(\alpha\) values. Once computed, the \(\pi\) and \(\hat{\alpha}\) values are fed into an EM class to iteratively update the estimated membership weights \(\hat{w}\) and arrival rates \(\hat{\mu}\) until a stable set of demand rates \(\hat{\mu}\) is obtained or a set number of iterations is reached. Our code is available at [https://github.com/KyFlynn/shared-mobility-research](https://github.com/KyFlynn/shared-mobility-research). To create a user-friendly interaction with our model and to visualize the results, we host a live R Shiny application that communicates with our Python demand model through the reticulate package. The app allows users to easily upload their cleaned trip data, select the demand model parameter values of choice (grid cell width, probability \(p_{0}\), maximum user distance \(\mathrm{dist}_{\max}\)), and run the model for demand visualization with a click of a button. Once processed, we display side-by-side colored leaflet maps that allow users to analyze and compare the findings of our model in their city. From a drop-down menu, users can choose to visualize the estimated demand rates, the estimated or observed availability, the rate at which trips took place, and a summary visualization that categorizes service levels to highlight where the rate of observed trips is much lower than the estimated demand. These findings allow city planners to determine regions of unmet demand due to low availability that are not directly inferable from the raw trip data. The data for these findings produced by our model can be downloaded in the app for further data analysis and/or re-uploaded for re-visualization. Our application can be accessed at [https://alicejpaul.shinyapps.io/shared-mobility/](https://alicejpaul.shinyapps.io/shared-mobility/). A local version of the app can also be downloaded through the GitHub repository.
Figure 6: Screenshot of the application visualizing Providence demand data.
For Providence data with approximately 100k trips (200k data points) spanning three months distributed over a 276km\({}^{2}\) (\(\sim\)107mi\({}^{2}\)) area, with default parameter settings (grid cell width of 400m, probability \(p_{0}\) of 0.7, maximum walking distance of 1km), our application takes approximately 30s locally and 2min on our live app.\({}^{1}\) In Table 1, we report the runtime of our app both locally and on the published Shiny app for different grid cell sizes and numbers of trips, keeping all other input parameters equal.
\begin{table} \begin{tabular}{|c|c|c|c|} \hline **Grid Cell Width (m)** & **Number of Trips** & **Local Runtime (s)** & **Website Runtime (s)** \\ \hline \multirow{3}{*}{200} & 100k & 107 & - \\ \cline{2-4} & 200k & 460 & - \\ \cline{2-4} & 300k & 871 & - \\ \hline \multirow{3}{*}{400} & 100k & 34 & 144 \\ \cline{2-4} & 200k & 63 & 598 \\ \cline{2-4} & 300k & 87 & - \\ \hline \multirow{3}{*}{600} & 100k & 16 & 44 \\ \cline{2-4} & 200k & 32 & 130 \\ \cline{1-1} \cline{2-4} & 300k & 46 & 293 \\ \hline \end{tabular} \end{table} Table 1: Runtime of our demand model locally versus on our public application for different grid cell sizes and number of trips in input trip data.
Entries without data in the website column mean the website ran out of memory running the demand model. For large amounts of data and/or for high granularity of estimated demand, we recommend downloading the application from the GitHub repo and running it locally.
Footnote 1: Locally: Apple 2022 16” MacBook Pro, M1 Pro chip: 32GB unified memory; 10-core CPU (8 performance, 2 efficiency); 16-core GPU; 16-core Neural Engine; 200GB/s memory bandwidth.

### Results

We now present results from scooter data from Providence, RI from June 1, 2019 to August 31, 2019. In Figure 7, the plots display an estimation of the service level and the estimated demand rates \(\hat{\mu}\) averaged over the operating hours (6 am to 10 pm) of the day. The requirement for a grid cell to appear in both of these plots is that the estimated availability \(\hat{\alpha}\) is at least \(10^{-2}\). Defining the service-level cutoff as estimated demand being at most twice the observed trip rate, we observe that demand is not being met on the outskirts of downtown Providence. On the right, we observe higher estimated demand in downtown Providence and College Hill, as expected, but find that user demand is spread throughout the city. In Figure 8, the plots display the estimated bike availability \(\hat{\alpha}\) and the observed trip rates. Even with the default value of 0.7 for \(p_{0}\), which corresponds to a sharp decrease in acceptance probability with distance, our model estimates availability in much of wider Providence. As expected, we find that most of the trips occur in downtown Providence and College Hill. Overall, the plots together indicate that availability is somewhat extended from these high-demand areas, but that more consistent availability might better serve estimated demand in these areas. Therefore, incorporating estimated availability and demand gives important context when considering the distribution of trips. In our GitHub repository, we have a test data set available for people to try out the application. The trip data was downloaded from Kansas City, MO's Microtransit Network [3]. We first filter the data to only consider trips from May 1, 2021 to May 6, 2021. The public data does not contain any operator rebalancing. To account for this, we assume perfect rebalancing each day. That is, at the end of each day, bikes are removed and the minimum number of bikes needed to make the observed trips on the next day feasible is relocated. Our R script for processing the data is also available in our repository.

## 6 Sensitivity Analysis

The EM algorithm in Section 4 is guaranteed to converge to a local optimum. Therefore, our initial guesses of the rates \(\hat{\mu}\) can impact the returned estimate. We explore this dependence on the Providence data and a simple simulated case.
For our simulated data, we generate 50 days of trip data with 10 instantaneous trips per hour starting and ending at the same place, in two locations 600m apart longitudinally. Our model creates a small grid of 7\(\times\)8 grid cells around these two active central grid cells, and an initial trip at time zero in both of these cells creates a constant availability of one bike in each of these grid cells for all 50 days. A visualization of the resulting trip rates and estimated availability is given in Figure 9. For each data set, we consider two extremes: (1) setting the rates uniformly so all grid cells start with the same guess of demand \(\hat{\mu}_{\mathrm{unif}}\) and (2) setting the rates to the average rate of observed trips in that cell and time period \(\hat{\mu}_{\mathrm{trip}}\). While the former corresponds to an uninformed prior on location, the latter ignores any potential travel of users. To analyze the sensitivity of our algorithm to the initial conditions, we consider setting the initial \(\hat{\mu}\) values to \(\hat{\mu}_{\gamma}=\gamma\hat{\mu}_{\mathrm{unif}}+(1-\gamma)\hat{\mu}_{\mathrm{trip}}\) for \(\gamma\) between 0 and 1. In Figure 11, we plot the largest difference between the \(\hat{\mu}_{\mathrm{trip}}\) initialization and the \(\hat{\mu}_{\gamma}\) initialization versus \(\gamma\). In the simulated data, we observe that the largest difference increases steadily as \(\gamma\) increases, shown in Figure 10. This is expected because we have a limited distribution on availability. As \(\gamma\) increases, the estimated demand spreads from the central cells to the surrounding cells until uniform at \(\gamma=1\). Hence, the estimated demand is highly sensitive to the initialization.
Figure 7: Visualization of Providence data summer 2019 service level (left) and estimated demand (right).
By contrast, in the Providence data, we observe an immediate increase to a largest difference of around 2, and a subsequent stabilization of the largest difference at this value. The 99th percentile stays relatively low throughout, and the median difference was observed to be 0 for all \(\gamma\). To generate these results, we did not include grid cells in which the estimated \(\hat{\alpha}\) is less than \(10^{-2}\), the same threshold used in our application. We consider these cells not to have enough availability to infer demand. Note that for regions that did not meet this threshold, the maximum observed difference between \(\hat{\mu}_{\text{trip}}\) and \(\hat{\mu}_{\text{unif}}\) was 103. This is caused by numerical instability in the division by small \(\pi\) and \(\hat{\alpha}\) values in the EM algorithm iterations. Overall, these results indicate that when we have a more complex and realistic distribution of availability, the results are far more robust to our initialization. For this reason, we choose to initialize with \(\hat{\mu}_{\text{unif}}\) in our implementation to avoid biasing the results.

## 7 Future Work

In this paper, we present a flexible and interpretable framework to estimate spatial-temporal demand in the form of estimated Poisson arrival rates. Some possible extensions to this model include incorporating the direction of travel and the built environment into our user choice model. Additionally, the model can be extended to allow the rates to depend on other factors such as day of the week, season, and weather. Last, the results provide insight into user behavior and can be used to inform future decisions such as redistribution and identifying areas with unmet demand.
This focus on prescriptive analytics using the estimated model is a promising direction.
Figure 8: Visualization of Providence 2019 summer data estimated availability (left) and trip rates (right).

## 8 Acknowledgements

The authors would like to acknowledge principal planner Alex Ellis and curbside administrator Liza Farr with the City of Providence for their help defining our model and framework and their insight into the data and results.
2304.13756
Simulations of the dynamics of quantum impurity problems with matrix product states
The Anderson impurity model is a paradigmatic example in the study of strongly correlated quantum systems and describes an interacting quantum dot coupled to electronic leads. In this work, we characterize the emergence of the Kondo effect by investigating the model dynamics following a quantum quench based on matrix product state simulations. The relaxation of the impurity magnetization allows for the estimate of the predicted universal scaling of the Kondo temperature as a function of the impurity-lead hybridization and quantum dot repulsion. Additionally, our simulations permit us to evaluate the current in the nonequilibrium quasi-steady state appearing after the quench. Through their values, we examine the dependence of the conductance on the voltage bias $V_b$ and on the impurity chemical potential $V_g$, which displays a zero-bias Kondo peak. Our results are relevant for transport measurements in Coulomb blockaded devices, and, in particular, in quantum dots induced in nanowires.
Matteo M. Wauters, Chia-Min Chung, Lorenzo Maffi, Michele Burrello
2023-04-26T18:00:13Z
http://arxiv.org/abs/2304.13756v2
# Simulations of the dynamics of quantum impurity problems with matrix product states

###### Abstract

The Anderson impurity model is a paradigmatic example in the study of strongly correlated quantum systems and describes an interacting quantum dot coupled to electronic leads. In this work, we characterize the emergence of the Kondo effect by investigating the model dynamics following a quantum quench based on matrix product state simulations. The relaxation of the impurity magnetization allows for the estimate of the predicted universal scaling of the Kondo temperature as a function of the impurity-lead hybridization and quantum dot repulsion. Additionally, our simulations permit us to evaluate the current in the nonequilibrium quasi-steady state appearing after the quench. Through their values, we examine the dependence of the conductance on the voltage bias \(V_{b}\) and on the impurity chemical potential \(V_{g}\), which displays a zero-bias Kondo peak. Our results are relevant for transport measurements in Coulomb blockaded devices, and, in particular, in quantum dots induced in nanowires.

Introduction. The Kondo effect is the most emblematic embodiment of strong correlations in condensed matter systems. The advances in the fabrication and measurement techniques of nanostructures allowed us to observe its distinctive zero-bias conductance peak in a wide class of systems, including gate-defined quantum dots [1; 2], nanotubes [3] and semiconducting nanowires [4]. In these mesoscopic systems, however, the dynamics of the quantum impurities at the basis of the Kondo effect is typically too fast to be observed. A complementary experimental platform has been recently offered by quantum simulators of ultracold fermionic Yb atoms [5]. In these setups, the characteristic time scales are much longer than in their solid state counterpart, thus enabling the analysis of the dynamics of the spin impurities at the basis of the Kondo effect in out-of-equilibrium transient states [6]. Inspired by these developments, in this work we analyze the dynamics of the Anderson impurity model after a quantum quench through matrix product state (MPS) simulations. By studying the transient behavior of its impurity magnetization, we provide a numerical verification of the Kondo time scale consistent with previous renormalization group results [7]. We derive the conductance of the corresponding two-terminal problem, relevant for the experimental study of Coulomb blockaded nanowires with induced quantum dots. The out-of-equilibrium properties of quantum impurity models following quantum quenches are considered a paradigmatic playground to observe how strong correlations develop through time evolution in many-body quantum systems and have been recently studied by means of a vast set of analytical and numerical techniques [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18]. In the following, we will apply the MPS algorithm described in Ref. [19] to simulate the time evolution of the two-terminal Anderson impurity model (AIM).

The model. The AIM represents an electronic environment coupled with an interacting magnetic impurity; it is one of the most popular yet simple models that display the Kondo effect, and it constitutes the central element of dynamical mean field theory methods for studying correlated materials, making it a fundamental problem for many numerical algorithms [20].
Its Hamiltonian reads \[\widehat{H}=\widehat{H}_{\text{leads}}+\widehat{H}_{\text{tunn}}+\widehat{H}_{\text{AIM}}\,, \tag{1}\] where \[\widehat{H}_{\text{AIM}}=U\hat{n}_{\uparrow}\hat{n}_{\downarrow}+V_{g}(\hat{n}_{\uparrow}+\hat{n}_{\downarrow})\,, \tag{2}\] with \(\hat{n}_{\sigma}=\hat{d}_{\sigma}^{\dagger}\hat{d}_{\sigma}\) describing the occupation of the two spin states of a single-level quantum dot, which, in turn, plays the role of the magnetic impurity and is characterized by the Hubbard repulsion \(U\) and the chemical potential \(V_{g}\). Unless otherwise stated, we will focus on the particle-hole symmetric point \(V_{g}=-0.5U\). The lead Hamiltonian \[\widehat{H}_{\text{leads}}=-\sum_{\alpha,\sigma,l}t_{\alpha,\sigma,l}\left(\hat{c}_{\alpha,\sigma,l+1}^{\dagger}\hat{c}_{\alpha,\sigma,l}+\text{H.c.}\right)+\sum_{\alpha,\sigma,l}\mu_{\alpha,\sigma}\hat{n}_{\alpha,\sigma,l} \tag{3}\] describes two spinful fermionic chains (\(\alpha=L,R\)) with a spin-dependent chemical potential \(\mu_{\alpha,\sigma}\) and a hopping amplitude \(t_{l}=t_{0}\mathrm{e}^{-(l-1)/\xi}\) that decreases exponentially as a function of the distance from the site \(l=1\), with a decay length \(\xi\). This is known as the Wilson construction and it is commonly used in numerical renormalization group approaches to impurity problems. Moreover, it has been shown effective to increase both the resolution at small voltage bias, namely by mimicking an effectively larger system, and the stability of the time evolution in MPS simulations of transport problems [8; 19]. Indeed, given the finite size \(L\), the density of states around the Fermi energy depends on the hopping decay length \(\xi\): the smaller \(\xi\), the more states are shifted toward the Fermi energy, leading to a smaller energy level spacing. Therefore, a strong decay of the tunneling provides higher energy resolution to accurately determine the dynamics for states close to zero energy (thus at small bias voltages) [8]. Finally, the quantum dot and the leads are coupled with a standard tunneling Hamiltonian \[\widehat{H}_{\rm tunn}=-\sum_{\alpha,\sigma}J_{\alpha,\sigma}\left(\hat{c}^{\dagger}_{\alpha,\sigma,1}\hat{d}_{\sigma}+{\rm H.c.}\right)\;, \tag{4}\] where \(\hat{d}_{\sigma}\) destroys an electron with spin \(\sigma\) on the impurity level. Throughout this paper, we consider a uniform tunneling strength between the quantum dot and the leads \(J_{\alpha,\sigma}=J\) and we denote by \(\Gamma=2J^{2}/t_{0}\) the effective tunneling rate in the limit of infinite bandwidth (constant density of states). To bring the system out of equilibrium, we adopt two different quantum quench protocols [19; 21]: _(i)_ in the _zero-bias quench_, we initialize the system with \(J=0\) and \(\mu_{L}=\mu_{R}\), thus preparing a product state between the impurity and the leads; at time \({\tt t}>0\), the leads are connected to the quantum dot (\(J>0\)) and the system equilibrates towards a stationary state. _(ii)_ In the \(\mu\)-_quench_, the system is initialized in the ground state at half filling, i.e. with uniform chemical potential \(\mu_{L}=\mu_{R}\), and then it evolves in time after a voltage bias is turned on. The first protocol is more useful to study the relaxation of the impurity magnetization and extract the Kondo temperature, while the second leads to a fast convergence of the current to a nonequilibrium quasi-steady state.
Indeed, in the \(\mu\)-quench the initial state already captures some of the non-perturbative Kondo correlations and therefore is closer to the Kondo-like quasi-steady state that arises in transport measurements.

Matrix product state simulations. Tensor networks offer a powerful framework to simulate the real-time evolution of quantum impurity models [8; 9; 11; 12; 13; 15; 17; 18]. To simulate the post-quench dynamics, we model the system with the MPS depicted in Fig. 1(b): each site represents a single-particle _energy_ orbital of the non-interacting and decoupled system (\(U,J=0\)), and we compute the unitary time evolution of the closed system with the time-dependent variational principle (TDVP) [22; 23]. In particular, we expand the construction presented in Ref. [19] with the addition of the spin degrees of freedom; the MPS "sites" are ordered based on their energies [24], such that the entropy growth during the time evolution is restricted to an energy window (and thus to a segment of the MPS) corresponding to the voltage bias. Since the basis states (MPS "sites") are ordered by their energies regardless of the number of leads, introducing multiple leads (or the spin degrees of freedom) is straightforward. The interaction \(U\) is introduced by including an auxiliary MPS site that represents the charge \(\widehat{N}=\hat{n}_{\uparrow}+\hat{n}_{\downarrow}\) of the dot [19]. Tunneling events increase or decrease this charge by one. This construction is not strictly necessary for a single impurity site as in the AIM in Eq. (1), but it can easily allow for generalizations to multilevel dots with a uniform all-to-all Coulomb repulsion described by an effective charging energy. In the chosen single-particle eigenstate basis, the dynamics is dictated only by the tunneling Hamiltonian coupling the leads with the quantum dot. \(\widehat{H}_{\rm tunn}\) is non-local in this basis, but it can be described by a matrix product operator (MPO) with limited bond dimension, such that TDVP is not hampered by the presence of these long-range interactions and can be efficiently used to simulate the dynamics for long evolution times. The method is implemented using the ITensor library [25]. The source code can be found in Ref. [26].

Figure 1: (a) Sketch of the AIM: a single-level quantum dot with Hubbard repulsion \(U\) is tunnel-coupled to two non-interacting leads with chemical potentials \(\mu_{L}\) and \(\mu_{R}\). (b) Schematic representation of the MPS describing the system [19]. The sites of the chain represent single-particle orbitals and are ordered by their energy. To account for the interaction, we include an auxiliary bosonic charge site (represented by a square) which counts the number of particles inside the dot. This construction introduces long-range couplings (arrows) in the Hamiltonian MPO, which however do not constitute an obstacle for the TDVP algorithm used for the time evolution.

Results. We first focus on the equilibration of the impurity after it is coupled to the _unbiased_ leads (zero-bias quench), i.e., here we set the bias to zero (\(\mu_{L}=\mu_{R}\)). The dynamics of the impurity magnetization is predicted to be characterized by two rates [27]: \(\Gamma\), which determines the short-time and nonuniversal evolution; and the Kondo temperature \(T_{K}\), whose inverse, the Kondo time \({\tt t}_{K}=T_{K}^{-1}\), defines the time scale required for the formation of the Kondo screening cloud. In the renormalization group sense, the evolution for time \(\Gamma^{-1}<{\tt t}<{\tt t}_{K}\) is
governed by the weak-coupling fixed point of the Kondo problem [16], and t\({}_{K}\) constitutes the decay time of the magnetization in this intermediate regime towards the formation of a spin singlet with the conduction electrons. Therefore, we aim to get an estimate of the Kondo temperature as a function of the ratio between the interaction strength \(U\) and the effective tunneling rate \(\Gamma\) from the dynamics of the impurity magnetization \(\langle\sigma^{z}\rangle=\langle\hat{n}_{\uparrow}\rangle-\langle\hat{n}_{\downarrow}\rangle\). We prepare the quantum dot in the polarized state \(|\hat{n}_{\uparrow}=1,\hat{n}_{\downarrow}=0\rangle\) and measure its evolution in time after a zero-bias quench. For this analysis, we choose \(L=64\) as the lead length and the hopping decay length between \(\xi=8\) and \(\xi=32\), depending on the energy resolution needed to accurately measure the magnetization up to times of the order of t\({}_{K}\). We consider two values for the interaction strength, \(U=t_{0}\) and \(U=0.4t_{0}\), and we examine the particle-hole symmetric point \(V_{g}=-0.5U\). To extract the predicted exponential dependence of the Kondo temperature on \(U/\Gamma\) [7; 12; 16], we vary the hybridization strength \(\Gamma\) between \(\sim U/20\) (\(J\sim 0.15U\)) and \(\sim U/2\) (\(J\sim 0.5U\)). Figure 2(a) shows the decay in time of the magnetization for different values of \(U/\Gamma\) while we fix \(U=t_{0}\). We can easily identify three regimes: at short times \({\tt t}\lesssim\Gamma^{-1}\), the different curves collapse on each other as the relevant time scale for the relaxation of the impurity is set only by \(\Gamma\) (dashed black line). Indeed, notice that time is measured in units of \(\Gamma^{-1}\). At longer times, the relaxation rate depends on the ratio \(U/\Gamma\), with a slower decay the further the system lies in the strongly interacting/weak-coupling regime. For these intermediate values of \({\tt t}\), we can extract the relaxation time by exponential fits of the data (dot-dashed gray lines). Finally, the impurity approaches a steady state with a finite magnetization; in Fig. 2(a) this last regime is visible only for \(U=2\Gamma\). Due to the unitary dynamics, the system keeps the memory of its initial state and a complete relaxation to an \(SU(2)\)-invariant state cannot be reached. Comparable results have been obtained in Ref. [12] with real-time density matrix renormalization group (DMRG) applied to a similar MPS construction. Figure 2(b) illustrates the inverse of the relaxation times t\({}_{K}(U/\Gamma)\) extracted from the magnetization decay at intermediate times [gray lines of panel (a)] as a function of \(U/\Gamma\) and for two values of the Hubbard interaction \(U\). We interpret this quantity as the Kondo temperature \(T_{K}\sim\mbox{t}_{K}^{-1}\). A comparison with the renormalization group prediction \[T_{K}\sim\sqrt{U\Gamma}\mbox{e}^{-\frac{\pi U}{8\Gamma}} \tag{5}\] (solid black line) shows excellent agreement with our data for both values of the interaction strength. In the inset the same data are displayed in logarithmic scale to emphasize the exponential dependence of the Kondo temperature on \(U/\Gamma\). Moreover, the two datasets perfectly collapse on top of each other, highlighting the universal character of the exponential decay linked to the Kondo temperature.
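As an illustration of this analysis, the intermediate-time fit and the comparison with Eq. (5) can be reproduced with a few lines of Python. The snippet below operates on synthetic data and is not the actual simulation output; the fit window, initial guesses, and parameter values are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def kondo_temperature_rg(U, Gamma):
    """Renormalization-group estimate of Eq. (5), up to an overall prefactor."""
    return np.sqrt(U * Gamma) * np.exp(-np.pi * U / (8.0 * Gamma))

def extract_relaxation_rate(t, sz, t_min, t_max):
    """Fit <sigma_z>(t) ~ a * exp(-t / t_K) + c in an intermediate-time window."""
    mask = (t > t_min) & (t < t_max)
    model = lambda t, a, t_K, c: a * np.exp(-t / t_K) + c
    (a, t_K, c), _ = curve_fit(model, t[mask], sz[mask], p0=(1.0, 10.0, 0.0))
    return 1.0 / t_K  # interpreted as T_K

# Synthetic magnetization decay for illustration only.
t = np.linspace(0.0, 200.0, 400)
sz = 0.9 * np.exp(-t / 40.0) + 0.1
print(extract_relaxation_rate(t, sz, t_min=10.0, t_max=150.0))
print(kondo_temperature_rg(U=1.0, Gamma=0.25))
```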
At very weak coupling (\(U/\Gamma\gg 1\)) the long evolution time needed for an accurate estimate of the relaxation time cannot be reached and our data deviate from the analytical prediction. Although the entanglement growth ultimately limits our ability to simulate the evolution of large systems at long times, thus preventing the observation of Kondo correlations for very weak coupling, our method allows us to observe nonperturbative effects emerging directly from the nonequilibrium properties of the AIM. The data in Fig. 2 are obtained with a zero-bias quench, i.e., with a vanishing bias voltage. When dealing with transport properties, we evolve instead the system with a voltage bias \(V_{b}=\mu_{L}-\mu_{R}\) between the two leads in order to observe a quasi-stationary current. Here we use the \(\mu\)-quench protocol: in this scenario the initial state is the correlated ground state of the Hamiltonian \(\widehat{H}\) (obtained through the DMRG), quenched at \({\tt t}=0\) to a Hamiltonian with a finite voltage bias. To simulate the quench dynamics at finite bias, we also need to adjust the decay length of the hopping amplitude in the lead, \(\xi\), such that the density of states in the leads is approximately constant in the energy interval between \(\mu_{L}\) and \(\mu_{R}\). The convergence in the simulation parameters (\(\xi,\ L\), and the TDVP time step discretization) is reached when the current signal displays a plateau in time long enough to reliably extract its expectation value in the quasi-steady state that develops after the quantum quench. The maximum bond dimension adopted is \(\chi=2000\) with a truncation error \(O(10^{-8})\). In Fig. 3(a) we plot the quasi-steady current as a function of the voltage bias for different values of the effective tunneling rate \(\Gamma\), while keeping the Hubbard interaction \(U\) fixed, at the particle-hole symmetric point \(V_{g}=-0.5U\). As we approach the strong-coupling regime \(\Gamma\sim U\), the current tends toward a linear response with a quantized differential conductance \(\frac{\mathrm{d}I}{\mathrm{d}V_{b}}=2\frac{e^{2}}{h}\), i.e. there are two perfectly transmitting channels (dashed black line). The Kondo temperature \(T_{K}\) sets the extension of the bias window in which this quantization occurs [28]. In particular, as shown in Fig. 2(b), the Kondo energy scale drops exponentially at weak coupling, \(U/\Gamma\gg 1\), and can become smaller than the values of voltage bias we can resolve with the chosen lead length \(L=100\) and hopping decay length \(\xi=30\). This explains the apparent deviation from the quantized conductance at weak coupling \(U/\Gamma=12.5\) in Fig. 3(a). Similar results are shown in Ref. [15]. We remark that, away from the strong coupling regime, we can simulate transport for voltages larger than the tunneling rate \(\Gamma\). The main limitation comes from the fast entanglement growth when states in a large energy window contribute significantly to transport, which happens when \(V_{b}\) covers a significant fraction of the leads' bandwidth. In our model, this limitation becomes particularly relevant when \(V_{b}\sim U,\ t_{0}\) is large enough to excite the quantum dot and there is a strong current flow due to sequential tunneling resonances at finite bias. Fig.
3(b) illustrates the differential conductance in the weak-coupling regime (\(U/\Gamma=12.5\)) as a function of the bias \(V_{b}\) and the induced charge parameter \(n_{g}\), which is linked to the chemical potential as \(V_{g}=\frac{U}{2}(1-2n_{g})\) and determines the expectation value of the total occupation of the quantum dot. We derive the differential conductance in Fig. 3(b) from the simulation of a \(\mu\)-quench protocol in which the system is initialized in the ground state of \(\widehat{H}\) at half filling, thus for the particle-hole symmetric point (\(n_{g}=1,V_{g}=-0.5U\)). At time \({\tt t}=0\), both the induced charge \(n_{g}\) and the bias voltage \(V_{b}\) are quenched to their final value [horizontal and vertical axis of Fig. 3(b)]. At \(n_{g}=0.5\) and \(n_{g}=1.5\) we observe two bright zero-bias sequential tunneling resonances, corresponding to the degeneracies between the empty and singly-occupied dot (\(n_{g}=0.5\)) or between the singly and doubly occupied dot (\(n_{g}=1.5\)). At finite voltage, the conductance peaks are prolonged along the lines \(V_{b}=\pm U(1-2n_{g})\) and \(V_{b}=\pm U(3-2n_{g})\), following the resonances between each biased lead and the quantum dot. Between the two charge-degeneracy points, an extended zero-bias peak indicates the onset of the Kondo effect, although for strong interaction and weak coupling we cannot see the quantization of the conductance. This limitation originates mainly from the high voltage resolution needed to sample the current at energies below the Kondo temperature, which for \(U/\Gamma=12.5\) is of the order \(T_{K}\sim 10^{-3}U\), where we expect the quantized linear response. To reach such a resolution in \(V_{b}\), we need either a larger system size or a shorter decay length \(\xi\). The former makes the simulations computationally more expensive, while the latter induces nonphysical effects in simulations at higher energy, preventing the calculation of the differential conductance in a wide bias range.

Figure 3: (a) Current vs voltage bias in the symmetric point \(V_{g}=-0.5U\) for three different values of the hybridization strength \(\Gamma\) and \(U=t_{0}\). The dashed line corresponds to the quantized current \(I=2\frac{e^{2}}{h}V_{b}\). (b) Differential conductance as a function of the induced charge \(n_{g}\) and of the voltage bias \(V_{b}\) between the left and right leads, in the strongly interacting/weak-coupling regime \(U/\Gamma=12.5\). The zero-bias peak extending between the two sequential tunneling resonances at \(n_{g}=0.5\) and \(n_{g}=1.5\) signals the onset of the Kondo effect.

As common in nanostructure experiments (see, for instance, Ref. [4]), this zero-bias peak does not extend to \(n_{g}<0.5\) or \(n_{g}>1.5\), where the ground state of the quantum dot becomes, respectively, empty or fully occupied, thus losing the doublet degeneracy necessary for the Kondo effect.

Conclusions. In this work we applied the tensor network method introduced in Ref. [19] to study the Kondo effect in the Anderson impurity model. In particular, we used an MPS+TDVP approach to study the dynamics of a single-level interacting quantum dot coupled to two fermionic leads after quantum quenches of the Hamiltonian parameters. We examined both the out-of-equilibrium evolution of the quantum dot magnetization and the electric transport features emerging in a nonequilibrium quasi-steady state after the quench.
The magnetization dynamics allows us to obtain a good estimate of the Kondo temperature as the inverse of its relaxation time when the quantum dot is coupled to unbiased leads. Such an estimate is in agreement with renormalization group results [7]. In particular, the magnetization decay displays two typical time scales: the effective coupling rate with the leads and the Kondo time scale. The appearance of these two decay regimes for short and intermediate times is reminiscent of the experimental results concerning the evolution of the spin population of impurities in 1D ultracold Yb gases [5]. Concerning the study of the conductance of the system, relevant for transport measurements in nanostructures, our simulations allow us to study its evolution when a voltage bias is applied between the two leads. By looking at the emergent quasi-steady state, we can reconstruct its Coulomb blockade properties as well as the emergence of a Kondo peak at zero bias. The latter appears when the impurity chemical potential fixes its ground state in the degenerate singly-occupied sector and the related differential conductance approaches the quantized value \(G=2\frac{e^{2}}{h}\) in the strong-coupling regime. We can simulate the system dynamics in a broad parameter range, from a strongly interacting/weak-coupling regime to a strong-coupling one, well beyond the applicability of standard perturbative master-equation approaches. Moreover, our method is not limited to single-site or small impurities but can be easily extended to multilevel quantum dots or nanowires with long-range Coulomb repulsion. Additionally, our approach can address superconducting systems, opening the path for the study of the out-of-equilibrium dynamics of the topological Kondo effect [29; 30; 31], which arises in multiterminal impurities with p-wave superconducting coupling. This kind of system can be easily described by identifying the spin degrees of freedom of the AIM as a label for different spinless leads. In general, our method can thus be used to investigate transport phenomena in hybrid superconducting-semiconducting multiterminal devices with strong Coulomb interactions (see, for instance, Refs. [32; 33]), without being limited to a weak-coupling regime. This offers the possibility of investigating the variety of subgap states [34] that can appear in these platforms, thus providing important details towards the realization of Majorana-Cooper pair boxes and other building blocks for quantum devices [35; 36]. _Acknowledgements._ We thank J. Paaske and V. Baran for fruitful discussions. M.W., L.M. and M.B. are supported by the Villum Foundation (Research Grant No. 25310). This project has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie Grant Agreement No. 847523 "INTERACTIONS." C.-M.C. acknowledges the support by the Ministry of Science and Technology (MOST) under Grant No. 111-2112-M-110-006-MY3, and by the Yushan Young Scholar Program under the Ministry of Education (MOE) in Taiwan.
2306.14898
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
2023-06-26T17:59:50Z
http://arxiv.org/abs/2306.14898v3
# InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback ###### Abstract Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create two interactive code environments with Bash and SQL as action spaces, leveraging data from the static Spider [51] and NL2Bash [29] datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct [48] and Plan & Solve [40]. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to incorporate new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. ## 1 Introduction The art of computer programming is naturally an interactive process. When a human programmer writes code, she relies on several iterations of a 'write-execute-test' loop in order to iteratively refine solutions, plan changes, test sub-modules, and solve ambiguities by checking execution behavior. While this is reminiscent of other human endeavors like writing, code compilation and execution produce exact results that provide a deterministic form of feedback to make the refinement process more straightforward. Depending on the observed results, programmers perform various levels of debugging and rewriting, and continue the process until their code satisfies the requirements. There has been increasing interest in recent years around the development of models that can automatically generate code given a specification in natural language [17; 43; 13; 27; 24]. Powered by large-scale pre-training over thousands of codebases [2; 21; 18], these models have shown solid performance on static benchmarks like HumanEval [9], APPS [19], MBPP [4], CodeXGLUE [30]. However, generating code in a static, sequence-to-sequence or auto-regressive fashion has several drawbacks: 1) simple errors (even typos) can propagate and there is no chance for recovery or revision, 2) there is a disconnect between the code generation process and its downstream execution on the desired software and hardware environment, and 3) there is little room for human intervention or collaboration in the code generation process. Recently, some works have proposed the use of execution feedback or interaction [44] to benefit code generation models [23; 20; 45; 19]. 
However, these papers consider their own individual setup and are difficult to compare with one another due to the use of different compilers, execution environments, feedback signals, and assumptions on the interactive process such as human participation to create task descriptions or provide natural language feedback. This makes it difficult to compare existing methods for code generation and to clearly understand the benefits of interactive generation. To address these issues, we propose InterCode, the first standard coding benchmark designed natively with an interactive execution environment. Closely mimicking the human decision-making process, InterCode allows a coding agent to interactively receive feedback from compilers/interpreters that execute its code, and to submit further refinements. We design InterCode to be like a standard reinforcement learning (RL) environment that requires minimal human intervention and one in which generated code is treated as actions, which are executed to reveal observations. Our framework is (1) language and platform agnostic and can easily be used for new coding problems, (2) uses self-contained Docker environments to provide safe execution, and (3) is compatible out-of-the-box with traditional seq2seq generation methods, while also enabling and empowering the development of new interactive techniques. We demonstrate the power of the framework by implementing SQL and bash tasks within InterCode, building on pre-existing static datasets [57; 29]. We perform experiments across diverse models and prompting methods, including ReAct [48] and Plan & Solve [40]. Our findings concretely showcase the benefits of interaction towards solving coding tasks, discuss the distribution of distinct code understanding challenges across different task settings, and explore the ease with which new tasks and datasets can be defined using InterCode.
Figure 1: Overview of InterCode. Setting up an interactive code environment with InterCode requires a Dockerfile, dataset, reward function definition, and a small amount of subclass implementation. The interactive loop between agent and environment closely mirrors real world software development processes. While InterCode task performance is generally quantified as a binary 0/1 completion score, InterCode allows for the design of more complex evaluation criteria that can incorporate execution output and the effects of interaction on the state space.
To summarize, our paper makes the following contributions:
* We develop InterCode, a new, universal framework for interactive code generation, which provides ease of use, extensibility, and safety.
* Using InterCode, we perform a comprehensive evaluation of state-of-the-art models and identify several avenues for improvements.
* We release our framework as a new benchmark along with useful empirical tools to customize any new static code datasets into interactive tasks.

## 2 Related Work

**Interactive environments for coding.** Most coding benchmarks (e.g. SQL - Spider [51], KaggleDBQA [25]; Bash - NLC2CMD [1], NL2Bash [29]; Python - HumanEval [9], APPS [19], MBPP [4], CodeXGLUE [30], CodeNet [35]) frame the coding problem as a sequence transduction problem (from instruction to code), rather than an interactive decision making problem with an execution environment. Attempts have been made to simulate interaction by developing conversational, dialogue-style [53; 52], multi-step problem solving [33] datasets, which involve pre-annotated human-designed queries.
The work closest to InterCode has been recent explorations of Python Jupyter Notebooks as a natural choice for interactive coding [20; 23; 50]. However, task data and settings often constrain allowed actions to a closed domain of code and libraries [23; 50], use evaluation procedures or metrics that may not generalize [20], require human-in-the-loop participation (i.e., creating task contexts, writing problems, evaluating execution per task instance) [23], or are Python-exclusive [20; 23; 50; 45]. InterCode provides a more general-purpose foundation for defining interactive coding tasks that enables easy construction of diverse task settings, can have any programming language(s) as the action space, and has automatic, execution-based evaluation. **Execution-based evaluation for coding.** Evaluation for NL-to-code generation models has recently shifted away from surface-form similarity metrics (BLEU [34; 2], ROUGE [28], Exact Match) towards execution-oriented ratings (unit tests [4; 9; 20; 23; 19], output matching [15; 20; 57]). The rigidity of surface-form analysis overlooks code syntax features, ignores execution effects, or over-penalizes alternative solutions [58]. On the contrary, execution-based assessment is a more thorough and comprehensive score of code functionality [19] and is a more natural fit for open-domain program usage that does not constrain code generation to a subset of the language space [45]. However, for newer benchmarks and datasets that put forth task definitions incorporating execution-based evaluation (APPS [19], ExeDS [20], ODEX [45]), the fundamental code generation task (Context + Code \(\rightarrow\) Execution \(\rightarrow\) Score) is still devoid of interaction. InterCode combines execution-based evaluation with flexible task construction, enabling more diverse problem-solving paradigms within a unified coding task formulation. InterCode's use of virtual containers as execution sandboxes protects against harmful actions and allows for advanced evaluation criteria beyond the aforementioned ones. **Methods for interactive or execution-based coding.** The value of generative code models and interactive problem solving has motivated a recent proliferation of work to augment the reasoning capabilities of existing language models [48; 37; 40; 47; 55; 11] or propose new modeling techniques to tackle coding as a sequential decision-making and reasoning task [6; 10; 16; 27; 8; 24], where evaluation is unit-test based. Approaches that leverage execution typically use re-ranking [56; 32; 49; 54] or majority vote [10; 27; 36] to decide on a final prediction. Additional work also explores incorporating human-in-the-loop [7; 22], compiler [41], and text [42] feedback. A common thread among these contributions is that 1) the task setting can only provide the investigated form of feedback and 2) sought-after capabilities are exemplified by strong performance on favorably curated tasks and datasets, rendering comparisons across benchmarks tedious. InterCode has the potential to standardize the evaluation of these methods because 1) the interactive coding task is a conglomeration of many interesting interaction, reasoning, and decision-making challenges and 2) InterCode's task construction makes it possible to incorporate a wide variety of sources of feedback.
## 3 The InterCode Benchmark

### Formulation

The InterCode benchmark formalizes interactive coding with execution feedback as a partially observable Markov decision process (POMDP) \((\mathcal{U},\mathcal{S},\mathcal{A},\mathcal{O},\mathcal{T},\mathcal{R})\) with instruction space \(\mathcal{U}\), state space \(\mathcal{S}\), action space \(\mathcal{A}\), observation space \(\mathcal{O}\), transition function \(\mathcal{T}:\mathcal{S}\times\mathcal{A}\rightarrow\mathcal{S}\), and reward function \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\). Given a coding instruction \(u\in\mathcal{U}\) in natural language, an agent issues code or a special submit keyword as an action \(a_{t}\in\mathcal{A}\). An action is _admissible_ [46] if it can be parsed and executed in the compiler/interpreter environment; an admissible action induces a change to the latent state \(s_{t+1}\in\mathcal{S}\) and yields execution feedback as an observation \(o_{t+1}\in\mathcal{O}\). The interaction loop repeats until the submit action is issued, at which point the task episode ends and a reward \(r=\mathcal{R}(s_{T},\texttt{submit})\in[0,1]\) is computed, with \(1\) representing task completion. We use the **Success Rate (SR)** metric, defined as the proportion of task episodes where \(r=1\). We also define the **Error %** metric, which is the percentage of _non-admissible_ actions across task episodes.

### Construction pipeline

At a high level, InterCode decomposes the construction of an interactive coding task into three **modular** parts: (1) environment construction, (2) data collection, and (3) reward design. This workflow allows for the safe execution of transition functions, flexible reward design, and convenient adaptation of existing instructions to an interactive setting.

**Docker-based environments.** InterCode uses Docker [31] virtual containers as a general-purpose execution sandbox. Given a Dockerfile that defines a system and execution entrypoint, InterCode creates a corresponding, stateful virtual container that hosts the desired state space and transition function. We choose Docker as the basis of InterCode's environment construction for its safe execution in virtual containers, the reproducibility of a Dockerfile across any Docker-equipped machine, and the excellent coverage of application code, libraries, and dependencies offered by the Dockerfile DSL.

**Data collection.** InterCode requires that a dataset has at minimum two fields: query, a natural language instruction \(u\in\mathcal{U}\), and gold, an answer or code block that is a procedure for generating the correct answer. We define these conditions to make it easy to adapt existing text-to-code datasets to an interactive setting while also leaving plenty of bandwidth for constructing new tasks and datasets.

**Reward design.** Across a single task episode, the action, observation, and state modification (if any) per interaction loop are implicitly logged by InterCode. InterCode's default reward function determines task completion via an exact match of the agent's execution output (observation and state modifications) against the gold command, where \(1\) is awarded only if all components match. Since exact match is usually too stringent an evaluation criterion, InterCode exposes a reward function endpoint that has access to both the interaction history and the execution container, allowing for custom reward function definitions that can incorporate multiple signals.
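To make the episode structure concrete, the following is a minimal sketch of one task episode from the agent's side, assuming a gym-style `reset`/`step` interface; the exact class and method names of the released InterCodeEnv API may differ.

```python
# Minimal sketch of one InterCode task episode (interface names are assumptions).
MAX_TURNS = 10

def run_episode(env, agent):
    """env: an InterCode-style environment; agent: any object exposing .act(obs) -> str."""
    obs = env.reset()                    # observation carries the natural language instruction u
    reward, done = 0.0, False
    for _ in range(MAX_TURNS):
        action = agent.act(obs)          # a code snippet, or the special "submit" keyword
        obs, reward, done, info = env.step(action)   # code executes inside the Docker container
        if done:                         # the episode terminates once "submit" is issued
            break
    return reward                        # 1.0 indicates task completion
```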
### Implementations

Following the procedure discussed in Section 3.2, we create two separate InterCode-based environments where Bash and SQL are the action spaces, respectively. Table 1 summarizes them.

| Action Space | Environment | Dataset | Reward Function |
| --- | --- | --- | --- |
| Bash | Ubuntu Terminal | NL2Bash [29] (200) | Latest Std. Output + File System \(\Delta\) |
| SQL | MySQL Database | Spider 1.0 [51] (1034) | Latest Std. Output |

Table 1: Overview of the two environments with Bash and SQL as action spaces developed using the InterCode framework. The numbers in parentheses refer to the number of task instances adopted from each dataset. Each environment is defined in under 200 lines of code total. Specific discussion of the environment construction and reward function can be found in § A.2 and § A.3.

**InterCode-Bash.** We define a bash shell within an Ubuntu operating system as the task setting. To evaluate an agent's ability to adapt generations to different situations, we architect four distinct file systems that can be swapped into the Bash environment by changing a single line in the Dockerfile.

We bootstrap the NL2Bash [29] dataset (which lacks specificity in queries and grounding to any underlying file system, preventing it from being used directly for interactive evaluations) to create an interactive coding task where an agent completes an instruction via bash actions. Transferring NL2Bash to the interactive task setting requires simple transformations to ground instructions and gold code blocks in the file system. First, we consider a subset of 1000 commands, each having \(\geq\) 4 utilities. We then filter out commands that are non-UNIX, non-Linux, or use utilities we currently do not support (e.g., "ssh", "sudo", "time", and GUI-dependent utilities). Finally, we enhance under-specified commands with specific file names/directory names/paths and update deprecated utilities/flags. The resulting 200 commands are grouped into 4 disjoint sets, 3 of which are grounded to custom-designed file systems, while one set is file-system agnostic. This categorization allows for a comprehensive evaluation of different command-grounding scenarios.

The InterCode-Bash dataset instructions typically make one or both of the following two types of requests: (1) requests for information that can be answered via execution output (i.e., "How many files...", "What is the size of...", "Where is <file> stored?"), or (2) requests for a change to the location/configuration/content of a file or folder (i.e., "Move dir1 folder...", "Set permissions of...", "Append a line to..."). Therefore, we define a custom reward function that evaluates an agent's performance against file system modifications and the latest execution output. Execution output is graded with a simple lexical similarity function. File system assessment is done in two parts. First, a comparison of the agent's and gold command's lists of file system changes (lists of [path, modification type \(\in\) [added, changed, deleted]] entries) reveals any extraneous or missing changes. Second, md5sum hashes of each commonly edited file path are compared to determine whether an added or changed file was altered correctly. A max score of 1 is achieved only if the correct file paths are changed, the changes are correct, and the latest execution output matches the gold command output exactly. Additional Bash statistics and design details are discussed in § A.2.
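As a rough illustration of this scoring scheme, the sketch below combines a file-system diff with a lexical similarity over the latest output; the helper functions, data structures, and the way the two components are combined are simplifying assumptions rather than the released implementation.

```python
import difflib
import hashlib

def file_md5(path):
    """md5sum of a file, used to check whether an added/changed file was altered correctly."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

def bash_reward(agent_changes, gold_changes, agent_out, gold_out):
    """Simplified InterCode-Bash reward sketch (illustrative only).

    agent_changes / gold_changes: {path: (modification_type, md5_hash)} describing the
        file-system changes caused by the agent's commands vs. the gold command.
    agent_out / gold_out: the latest standard-output strings.
    """
    paths = set(agent_changes) | set(gold_changes)
    if paths:
        correct = sum(1 for p in paths if agent_changes.get(p) == gold_changes.get(p))
        fs_score = correct / len(paths)          # penalizes extraneous or missing changes
    else:
        fs_score = 1.0                           # no file-system change expected or made
    out_score = difflib.SequenceMatcher(None, agent_out, gold_out).ratio()  # lexical similarity
    return min(fs_score, out_score)              # 1.0 only when both components are perfect
```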
**InterCode-SQL.** We write a Dockerfile that defines a SQL interpreter within a MySQL database as the task setting. To create the databases and tables necessary for the task dataset, we write type resolution scripts and perform database conversions using the sqlite3mysql [38] Python library to adapt the Spider [51] database and table schema to a MySQL format. We then consolidate all setup code into a single, unified MySQL .sql dump that contains the complete set of schemas for all tables across 20 different databases. On container start-up, this file is invoked automatically, creating and populating databases with tables and tables with records.

The Spider [51] dataset is a large-scale cross-domain dataset originally meant for evaluating SQL query generation from natural language questions. We adapt the development set, which contains \(1034\) task instances, and remove all extraneous columns aside from the natural language questions and the gold SQL command. The instruction and gold values do not require any additional pre-processing to be compatible with the MySQL task environment.

Finally, we employ Intersection over Union (_IoU_), or more formally the Jaccard Index, to quantify the correctness of the latest execution output generated by the agent against the gold output, where both outputs are lists of records. A non-tabular execution output receives a reward of 0 by default. Among the items that lie in the intersection of the agent and gold execution outputs, we also apply a penalty if the records are in the incorrect order. To quantify how sorted the agent output is relative to the gold output, we lean on Kendall's \(\tau\) and adjust its output range to \([0,1]\). The _IoU_ score is then directly scaled by this coefficient. All in all, only a correctly ordered list with the exact set of records found in the gold output receives a score of 1. Visualizations like Figure 1 for SQL, along with a more extensive implementation discussion for this environment, are in § A.3.

**Validations.** To verify the functionality of action execution in the task environment and the correctness of custom reward functions, we write testing scripts for both Bash and SQL that pass the gold command in as a dummy agent's action, to ensure that the command is admissible and executes without error, and to verify that the reward received by the command is \(1\). To confirm that InterCode's dataset specification is enforced across multiple accepted file formats, we define a custom InterCode data loader class, which is then rigorously unit tested.

## 4 Methods

We perform preliminary experiments to gauge the proficiency and behavior of current large language models on interactive coding tasks with Bash and SQL. To observe and elicit relevant reasoning skills, we draw on several existing prompting strategies that have been put forth to augment language models' reasoning and problem-solving skills. We apply these prompting strategies to models across the following three families: OpenAI (text-davinci-003, gpt-3.5-turbo, gpt-4), PaLM-2 (text-bison-001, chat-bison-001) [3], and Open Source (Vicuna-13B [12], StarChat-16B [26]). Figure 2 visualizes the four adjusted prompting strategies we evaluate on InterCode.

**Single Turn** is a zero-shot attempt. A model is given a simple description of the task setting and asked to generate code in a specific programming language that would address the query. The first generation in response to the user's question is then evaluated in the InterCode environment.
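For concreteness, the record-level reward used by InterCode-SQL described above can be sketched as follows; the use of `scipy.stats.kendalltau`, the edge-case handling, and the assumption that result rows are hashable (e.g., tuples) are illustrative choices rather than the released implementation.

```python
from scipy.stats import kendalltau

def sql_reward(agent_records, gold_records):
    """Simplified InterCode-SQL reward sketch: record IoU scaled by an order coefficient."""
    if not isinstance(agent_records, list):
        return 0.0                                    # non-tabular output receives 0 by default
    agent_set, gold_set = set(agent_records), set(gold_records)
    union = agent_set | gold_set
    if not union:
        return 1.0                                    # both outputs empty: trivially correct
    common = [r for r in gold_records if r in agent_set]   # intersection, kept in gold order
    iou = len(set(common)) / len(union)
    if len(set(common)) > 1:
        agent_rank = {r: i for i, r in enumerate(agent_records)}
        tau, _ = kendalltau(range(len(common)), [agent_rank[r] for r in common])
        order = (tau + 1) / 2                         # rescale Kendall's tau from [-1, 1] to [0, 1]
    else:
        order = 1.0                                   # a single shared record cannot be mis-ordered
    return iou * order                                # 1.0 only for the exact, correctly ordered set
```

A correct but reversed result list, for example, keeps its full IoU but is scaled down by the order coefficient.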
**"Try Again"** is an iterative feedback set up. In the initial message, the agent is informed of the task setting and its interactive nature; an agent has multiple turns to interact with the system, wherein each turn, upon generating an action, the execution output of the action is fed back as an observation. This continues until a reward of 1 (task completion) is achieved or the number of turns (\(n\)) is exhausted. The agent's position in this approach is meant to mirror human software development as closely as possible. The goal of this method is to probe language models' raw interactive coding abilities in addition to illustrating the benefits and different challenges that arise in interactive coding tasks. **ReAct and Plan & Solve.** We write prompts and design workflows that follow the text and task configurations described in ReAct [48] and Plan & Solve [40] as faithfully as possible. For these two approaches, the termination of a task episode is conditioned upon the agent's own judgment, as our goal with these methods is to gauge the transferability to and efficacy of existing reasoning frameworks with respect to the interactive coding task. Full prompt templates are included in SSB.7. ## 5 Experiments ### Base models comparison **Task performances.** We first compare the success rate of models in the Single Turn and Try Again settings for both the InterCode-Bash and SQL datasets. From Table 2 and Table 3, we observe that performance across different levels of task difficulty (SQL) and different file systems (Bash) is superior in the interactive setting for all models, with a notable multi-fold increase for GPT-4 (\(9.1\%\to 73.7\%\)) on the InterCode-SQL task. Figure 2: Overview of Prompting Strategies adjusted for evaluation on InterCode. The “Try Again” termination constraint is conditioned on reward = 1, while ReAct [48] and Plan & Solve [40] are determined by the agent itself. This is because the purpose of the “Try Again” method is to explore how capable agents are at error correction from feedback, while the other two are more concerned with the overall success of general problem-solving strategies. **Analysis of interactions.** Manual inspection of trajectory logs indicates that models actively exercise later turns for discovering relevant context, correcting errors via execution feedback as observations, and solving problems via iteratively constructing and editing actions as affirmed by Figure 3. In addition, models also demonstrate a level of planning and modular problem solving; for instructions with gold commands that chain multiple commands together (i.e. with \(\mathsf{l}\), \(\mathsf{>}\), or \(\mathsf{;}\) in \(\mathsf{bash}\)) or consist of multiple sub-problems (i.e. subqueries in \(\mathsf{SQL}\)), models will use observations from solving smaller sub-problems in earlier turns to compose the higher-order action. Trajectories that exhibit these phenomena are in SS B.4 **Failure cases.** With that said, both Figure 3 exhibits a plateauing in Success Rate and and Error %. This suggests that as the amount of context and feedback builds up, models are less capable of discerning relevant past history toward future actions. In late-turn scenarios, task episode trajectories often reveal repetition of earlier actions, a failure to effectively use recent observations towards deciding an appropriate next action, or an inability to recognize that a current problem-solving chain of thought is inconclusive or futile. 
This is particularly evident for hard and extra level InterCode-SQL task instructions that require context spanning across several tables and actions that incorporate multiple clauses. A larger context window size, retrieval of useful memory, and more adaptive reasoning paradigms are just a handful of potential solutions to overcoming such challenges. ### Prompting strategy comparison Initiating language agents with prompting strategies that encourage different forms of reasoning toward problem-solving improves performance on the interactive coding task to varying degrees. Table 4 presents side-by-side comparisons of the success rate, number of turns, and error rate per strategy. Compared to Try Again, which lacks specific guidance on leveraging multiple turns, more explicit reasoning frameworks such as ReAct and Plan & Solve policies generally achieve higher success rates (SQL: \(47.3\%\to 58.7\%\)) with fewer turns and a higher rate of admissible commands. **Different tasks present different learning challenges.** An important skill to solving the InterCode-SQL task is the ability to discover context and construct actions conditionally based on information revealed in prior observations. Given that InterCode-SQL task instructions are phrased most commonly as questions, adapting to the task setting and new information discovered along the way puts \begin{table} \begin{tabular}{l c c c c c|c c c c c} \hline \hline InterCode-SQL & \multicolumn{4}{c}{Single Turn} & \multicolumn{4}{c}{Try Again (\(n\) = 10)} \\ Model / Hardness & Easy & Med & Hard & Extra & All & Easy & Med & Hard & Extra & All \\ \hline text-davinci-003 & 20.6 & 4.9 & 1.7 & 0.0 & 7.4 & 32.4 & 14.6 & 5.2 & 4.2 & 15.6 \\ gpt-3.5-turbo & 22.6 & 8.3 & **5.7** & **3.6** & 10.5 & 72.5 & 44.3 & 43.7 & 21.1 & 47.3 \\ gpt-4 & 19.8 & 7.2 & 4.6 & 3.0 & 9.1 & **87.5** & **76.7** & **66.7** & **52.4** & **73.7** \\ text-bison-001 & **23.8** & **10.9** & **5.7** & 0.6 & **11.5** & 27.0 & 12.3 & 5.7 & 0.6 & 12.9 \\ chat-bison-001 & 18.5 & 6.5 & 4.0 & 0.0 & 7.9 & 22.2 & 7.8 & 6.9 & 0.0 & 9.9 \\ Vicuna-13B & 8.1 & 1.3 & 0.6 & 0.0 & 2.6 & 18.9 & 3.4 & 1.7 & 0.0 & 6.3 \\ StarChat-16B & 21.8 & 7.4 & 2.9 & 0.0 & 8.9 & 22.3 & 8.5 & 2.9 & 1.2 & 9.7 \\ \hline \hline \end{tabular} \end{table} Table 2: Success Rate for single vs. multi turn evaluation on InterCode-SQL (refer §A.3). Query difficulty is adopted from Spider [51]. Best metrics are in **bold**. \begin{table} \begin{tabular}{l c c c c c|c c c c c} \hline \hline InterCode-Bash & \multicolumn{4}{c}{Single Turn} & \multicolumn{4}{c}{Try Again (\(n\) = 10)} \\ Model / File System & 1 & 2 & 3 & 4 & All & 1 & 2 & 3 & 4 & All \\ \hline text-davinci-003 & 10.0 & 32.1 & 28.8 & 33.3 & 24.6 & 30.0 & **52.8** & 32.2 & 44.4 & 38.7 \\ gpt-3.5-turbo & **30.0** & **39.6** & 33.3 & 37.0 & **34.5** & **45.0** & 49.1 & 45.0 & 48.1 & 46.5 \\ gpt-4 & 25.0 & 37.7 & **36.7** & **40.7** & 34.0 & 41.7 & 47.2 & **51.7** & **59.2** & **48.5** \\ text-bison-001 & 15.0 & 22.6 & 11.7 & 22.2 & 17.0 & 23.3 & 28.3 & 16.7 & 22.2 & 22.5 \\ chat-bison-001 & 12.1 & 22.5 & 16.7 & 22.2 & 17.7 & 13.8 & 24.5 & 18.3 & 22.2 & 19.2 \\ Vicuna-13B & 10.0 & 24.5 & 18.3 & 7.4 & 16.0 & 15.0 & 35.8 & 25.0 & 22.2 & 24.5 \\ StarChat-16B & 15.5 & 22.6 & 13.3 & 22.2 & 17.7 & 17.2 & 30.2 & 21.7 & 29.6 & 23.7 \\ \hline \hline \end{tabular} \end{table} Table 3: Success Rate across file systems for single vs. multi-turn evaluation on InterCode-Bash (refer §A.2). 
To evaluate models’ ability to interact with different task settings, we evaluate disjoint sets of Bash instructions across four different file systems. Best metrics are in **bold**. more emphasis on error correction and context discovery. On the other hand, the more declarative and multi-step nature of the InterCode-Bash task instructions is more aptly solved by planning and modular task completion. These distinctions manifest in the Plan & Solve strategy's performance gap between the InterCode-SQL and InterCode-Bash tasks; while Plan & Solve encourages a model to decompose problems into more manageable steps, the strategy is less favorable towards adjusting on the fly in response to execution feedback. Example trajectories supporting these claims are in SS B.4. **More adaptive reasoning is favorable.** Compared to "imperative" reasoning paradigms such as Plan & Solve which prescribe a relatively rigid procedure, more flexible frameworks like ReAct, which do not enforce any particular logical formula or roadmap, are more conducive to eliciting a broader set of reasoning capabilities. However, while ReAct's performance is generally superior to Plan & Solve, tasks solved by _both_ strategies with gpt-3.5-turbo make up \(57\%\) (\(407/708\)) and \(27.6\%\) (\(21/76\)) of the union of all successfully solved InterCode-SQL and InterCode-Bash tasks respectively. This discrepancy highlights a trade-off between the guidance and structural constraints that are inherent to prompting strategies; schemes that draw out specific reasoning patterns often overlook other equally useful capabilities. InterCode's interactive coding task can serve as a strong litmus test toward more adaptable, variegated model reasoning. ### New tasks & datasets opportunities InterCode's task formulation, modular design, flexible task construction, and use of virtual containers enable task designers to manifest new, complex, code-driven tasks, where completion is much more attainable through interaction. We draw inspiration from Capture the Flag (CTF) [14], a competitive cybersecurity game that requires expertise in coding, cryptography (i.e. binary exploitation, forensics), reverse engineering, and recognizing security vulnerabilities to accomplish the primary objective of discovering encrypted "flags" concealed within code snippets or file systems. Compared to InterCode-Bash & -SQL, CTF is much more complicated, requiring an agent to exercise knowledge of multiple \begin{table} \begin{tabular}{l l l l|l l l|l l l} \hline \hline & \multicolumn{3}{c}{Try Again (\(n\) = 10)} & \multicolumn{3}{c}{ReAct (\(n\) = 10)} & \multicolumn{3}{c}{Plan \& Solve} \\ & SR & Turns & Error \% & SR & Turns & Error \% & SR & Turns & Error \% \\ \hline SQL & 47.3 & 7.25 & 46.4 & **58.7** & 5.30 & **6.94** & 49.1 & **4.29** & 16.2 \\ Bash & **46.5** & 6.15 & 24.9 & 20.5 & **4.40** & **20.4** & 28.0 & 6.65 & 53.3 \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of different prompting strategies across the entire InterCode-SQL and InterCode-Bash datasets using gpt-3.5-turbo as the base model. _Turns_ refers to the average number of turns taken for a single task episode. For Try Again and ReAct, the max number of turns \(n=10\). The highest Success Rate, fewest Turns, and lowest Error % are highlighted per dataset since they reflect more accuracy and efficient task solving. Best metrics are in **bold**. 
Figure 3: Growth in Success Rate with increase in number of interaction turns across models configured with Try Again prompting strategy for InterCode-Bash and SQL tasks. coding languages, modularize a higher-order objective into sub-problems, construct multi-step plans towards solving each problem, and adjust strategy when a plan fails to yield any useful insights. We curate a toy dataset of easy CTF objectives from picoCTF [39], where each task instance is a <challenge description, hidden flag> pair. Following Section 3.3, we construct a Ubuntu OS file system with a Bourne Shell bash interpreter as the task environment. InterCode's use of virtual containers is crucial, as necessary actions can be irreversibly damaging on real systems (i.e. rm -rf, sudo access). Figure 4 spotlights the diverse skills needed for a simple CTF task. ## 6 Discussion **Conclusion.** We have developed InterCode, a novel lightweight framework that facilitates interaction between Language Models and the underlying environment, enabling them to mimic the human approach to language-to-code generation. Our framework has shown promising results when applied to state-of-the-art models using different prompting styles. It effectively leverages the capabilities of LMs to break down complex tasks and recover from errors within a secure and isolated environment. The ability to seamlessly convert existing datasets into the interactive format using InterCodeEnv API, and furthermore, the Bash and SQL environments, empowers task designers to construct new tasks to unlock the plethora of challenges that await in the space of interactive coding. **Limitations and future directions.** We point out several current limitations of InterCode. At this time, the number of InterCode based environments is limited to SQL and Bash action spaces and datasets; within the near future, we plan to expand the number of offerings to cover a wider set of programming languages and datasets that should further deliver on InterCode's purported promises of efficient and expressive task construction. Second, the CTF dataset is limited to just four task instances due to our manual curation procedure. We hope to release more formal work soon that provides a more thorough analysis of the reasoning and collaboration challenges of the CTF task along with a more extensive dataset for evaluation purposes. ## Acknowledgements We thank Xiao Liu for the Vicuna/Alpaca APIs, Carlos Jimenez and Yuhan Liu for trying our code, and Princeton NLP Group for helpful discussion and feedback in general. We acknowledge support from the National Science Foundation under Grant No. 2107048. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
2303.17597
Robo3D: Towards Robust and Reliable 3D Perception against Corruptions
The robustness of 3D perception systems under natural corruptions from environments and sensors is pivotal for safety-critical applications. Existing large-scale 3D perception datasets often contain data that are meticulously cleaned. Such configurations, however, cannot reflect the reliability of perception models during the deployment stage. In this work, we present Robo3D, the first comprehensive benchmark heading toward probing the robustness of 3D detectors and segmentors under out-of-distribution scenarios against natural corruptions that occur in real-world environments. Specifically, we consider eight corruption types stemming from severe weather conditions, external disturbances, and internal sensor failure. We uncover that, although promising results have been progressively achieved on standard benchmarks, state-of-the-art 3D perception models are at risk of being vulnerable to corruptions. We draw key observations on the use of data representations, augmentation schemes, and training strategies, that could severely affect the model's performance. To pursue better robustness, we propose a density-insensitive training framework along with a simple flexible voxelization strategy to enhance the model resiliency. We hope our benchmark and approach could inspire future research in designing more robust and reliable 3D perception models. Our robustness benchmark suite is publicly available.
Lingdong Kong, Youquan Liu, Xin Li, Runnan Chen, Wenwei Zhang, Jiawei Ren, Liang Pan, Kai Chen, Ziwei Liu
2023-03-30T17:59:17Z
http://arxiv.org/abs/2303.17597v4
# Robo3D: Towards Robust and Reliable 3D Perception against Corruptions

###### Abstract

The robustness of 3D perception systems under natural corruptions from environments and sensors is pivotal for safety-critical applications. Existing large-scale 3D perception datasets often contain data that are meticulously cleaned. Such configurations, however, cannot reflect the reliability of perception models during the deployment stage. In this work, we present **Robo3D**, the first comprehensive benchmark heading toward probing the robustness of 3D detectors and segmentors under out-of-distribution scenarios against natural corruptions that occur in real-world environments. Specifically, we consider eight corruption types stemming from severe weather conditions, external disturbances, and internal sensor failure. We uncover that, although promising results have been progressively achieved on standard benchmarks, state-of-the-art 3D perception models are at risk of being vulnerable to corruptions. We draw key observations on the use of data representations, augmentation schemes, and training strategies, that could severely affect the model's performance. To pursue better robustness, we propose a density-insensitive training framework along with a simple flexible voxelization strategy to enhance the model resiliency. We hope our benchmark and approach could inspire future research in designing more robust and reliable 3D perception models. Our robustness benchmark suite is publicly available1.

Footnote 1: [https://github.com/ldkong1205/Robo3D](https://github.com/ldkong1205/Robo3D).

## 1 Introduction

3D perception aims to detect and segment the accurate position, orientation, semantics, and temporal relation of objects and backgrounds around the ego-vehicle in the three-dimensional world [3, 20]. With the emergence of large-scale autonomous driving datasets, various approaches in the fields of LiDAR semantic segmentation and 3D object detection emerge each year, with record-breaking performances on the mainstream benchmarks [19, 4, 7, 18, 61]. Despite the great success achieved on the "clean" evaluation sets, the models' robustness against out-of-distribution (OoD) scenarios remains obscure. Recent attempts mainly focus on probing OoD robustness from two aspects. The first line focuses on the transfer of 3D perception models to unseen domains, _e.g._, sim2real [72], day2night [26], and city2city [30] adaptations, to probe the model's generalizability. The second line aims to design adversarial examples which can cause the model to make incorrect predictions while keeping the attacked input close to its original format, _i.e._, to test the model's worst-case scenarios [49, 8, 65]. In this work, different from the above two directions, we aim at understanding the causes of model deterioration under real-world corruption and sensor failure. Current 3D perception models learn point features from LiDAR sensors or RGB-D cameras, where data corruptions are inevitable due to issues of data collection, processing, weather conditions, and scene complexity [48]. While recent works target creating corrupted point clouds from indoor scenes [28] or object-centric CAD models [60, 88, 2], we simulate corruptions on large-scale LiDAR point clouds from complex outdoor driving scenes [19, 4, 7, 61]. As shown in Fig.
1, we consider three distinct corruption sources that are highly likely to occur in deployment: _1) Severe weather_ (_fog_, _rain_, and _snow_), which causes back-scattering, attenuation, and reflection of the laser pulses [22, 21, 59]; _2) External disturbances_, _e.g._, bumpy surfaces, dust, and insects, which often lead to nonnegligible _motion blur_ and LiDAR _beam missing_ issues [44]; and _3) Internal sensor failure_, such as the _incomplete echo_ or missed detection of instances with a dark color (_e.g._, a black car) and _crosstalk_ among multiple sensors, which likely deteriorates the perception accuracy [82, 6]. Besides the environmental factors, it is also important to understand the _cross-sensor_ discrepancy to avoid sudden failure caused by sensor configuration changes. To properly fulfill such pursuits, we simulate physically-principled corruptions on the _val_ sets of KITTI [19], SemanticKITTI [4], nuScenes [7], and Waymo Open [61], forming our corruption suite dubbed _Robo3D_. Analogous to the popular 2D corruption benchmarks [24, 80, 39], we create three severity levels for each corruption and design suitable metrics as the main indicator for robustness comparisons. Finally, we conduct exhaustive experiments to understand the pros and cons of different designs from existing models. We observe that modern 3D perception models are at risk of being vulnerable even though their performance on standard benchmarks is improving. Through fine-grained analyses on a wide range of 3D perception datasets, we diagnose that: _1) Sensor setups have direct impacts on feature learning_. Models trained on data collected with different sensor configurations and protocols yield inconsistent resilience. _2) 3D data representations are often coupled with the model's robustness_. The voxel and point-voxel fusion approaches exhibit clear superiority over the projection-based methods. _3) Detectors and segmentors are sensitive to different corruption types_. A sophisticated combination of both tasks is a viable way to achieve robust and reliable 3D perception. _4) Out-of-context augmentation (OCA) and flexible rasterization strategies can improve the model's robustness_. We thus propose a solution to enhance the robustness of 3D perception models, which consists of a density-insensitive training framework and a simple flexible voxelization strategy.

The key contributions of this work are summarized as:

* We introduce Robo3D, the first systematically-designed robustness evaluation suite for LiDAR-based 3D perception under corruptions and sensor failure.
* We benchmark 34 perception models for LiDAR-based semantic segmentation and 3D object detection tasks on their robustness against corruptions.
* Based on our observations, we draw in-depth discussions on the design recipe and propose novel techniques for building more robust 3D perception models.

## 2 Related Work

**LiDAR-based Semantic Segmentation**. The design choice of 3D segmentors often correlates with the LiDAR representations, which can be categorized into point [64], range view [69, 40, 29], bird's eye view [84], voxel [11], and multi-view fusion [37, 74] methods. The projection-based approaches rasterize irregular point clouds into 2D grids, which avoids the need for 3D operators and is thus more hardware-friendly for deployment [12, 85, 10]. The voxel-based methods, which retain the 3D structure, achieve better performance than other single modalities [89, 79].
Efficient operators like sparse convolution are widely adopted to reduce the memory footprint [63, 62]. Most recently, some works have started to explore possible complementarity between two views [37, 75, 46] or even more views [74]. Although promising results have been achieved, the robustness of 3D segmentors against corruptions remains obscure. As we will discuss in the next sections, these methods have a tendency to be less robust, mainly due to the lack of a comprehensive robustness evaluation benchmark.

**LiDAR-based 3D Object Detection**. Sharing similar basics with LiDAR segmentation, modern 3D object detectors also adopt various data representations. Point-based methods [56, 58, 78, 77] implicitly capture local structures and fine-grained patterns without any quantization to retain the original point cloud geometry. Voxel-based methods [76, 86, 13, 81, 57, 38, 36, 34] transform irregular points into compact grids, while only the non-empty voxels are stored and utilized for feature extraction through sparse convolution [76]. Recently, some works [38, 51, 87] have started to explore long-range contextual dependencies among voxels with self-attention [66]. Pillar-based methods [32, 52] better balance accuracy and speed by controlling the resolution along the vertical axis. Point-voxel fusion methods [54, 53] can integrate the merits of both representations to learn more discriminative features. The above methods, however, mainly focus on obtaining better performance on clean point clouds, while paying much less attention to model robustness. As we will show in the following sections, these models are prone to degradation under data corruptions and sensor failure.

**Common Corruptions**. ImageNet-C [24] is the pioneering work in this line of research, which benchmarks image classification models against common corruptions and perturbations. Follow-up studies extend this aspect to other perception tasks, _e.g._, object detection [39], image segmentation [27], navigation [9], video classification [80], and pose estimation [67]. The importance of evaluating model robustness has been constantly proven. Since we are targeting a different sensor, _i.e._, LiDAR, most of the well-studied corruption types are no longer realistic or suitable for such a data format. This motivates us to explore a new taxonomy for defining more proper corruption types for the 3D perception tasks in autonomous driving scenarios.

**3D Perception Robustness**. Several recent studies have proposed to investigate the vulnerability of point cloud classifiers and detectors in indoor scenes [28, 48, 60, 88, 2]. Recently, works have started to explore the robustness of 3D object detectors under adversarial attacks [43, 65, 73]. In the context of corruption robustness, we notice three concurrent works [82, 33, 1]. These works, however, all consider 3D object detection alone and might be constrained by either a limited number of corruption types or datasets. Our benchmark properly defines a more diverse range of corruption types for the general 3D perception task and includes significantly more models from both LiDAR-based semantic segmentation and 3D object detection tasks.

## 3 Robo3D Benchmark

Tailored for LiDAR-based 3D perception tasks, we summarize eight corruption types commonly occurring in real-world deployment in our benchmark, as shown in Fig. 1.
### Corruption Types

Given a point \(\mathbf{p}\in\mathbb{R}^{4}\) in a LiDAR point cloud with coordinates \((p^{x},p^{y},p^{z})\) and intensity \(p^{i}\), our goal is to simulate a corrupted point \(\hat{\mathbf{p}}\) via a mapping \(\hat{\mathbf{p}}=\mathcal{C}(\mathbf{p})\), with rules constrained by _physical principles_ or _engineering experiences_. Due to space limits, we present more detailed definitions and implementation procedures in the Appendix.

_1) Fog_. The LiDAR sensor emits laser pulses for accurate range measurement. Back-scattering and attenuation of LiDAR points tend to happen in foggy weather, since the water particles in the air cause inevitable pulse reflection [5]. In our benchmark, we adopt the physically valid fog simulation method [22] to create fog-corrupted data. For each \(\mathbf{p}\), we calculate its attenuated response \(p^{i_{\text{att}}}\) and the maximum fog response \(p^{i_{\text{fog}}}\) as follows:

\[\hat{\mathbf{p}}=\mathcal{C}_{\text{fog}}(\mathbf{p})=\begin{cases}(\hat{p}^{x},\hat{p}^{y},\hat{p}^{z},p^{i_{\text{fog}}}),&\text{if }\ p^{i_{\text{fog}}}>p^{i_{\text{att}}},\\ (p^{x},p^{y},p^{z},p^{i_{\text{att}}}),&\text{else}.\end{cases} \tag{1}\]

_2) Wet Ground_. The emitted laser pulses will likely lose certain amounts of energy when hitting wet surfaces, which causes significantly attenuated laser echoes depending on the water height \(d_{w}\) and the mirror refraction rate [59]. We follow [21] to model the attenuation caused by ground wetness. A pre-processing step is taken to estimate the ground plane with existing semantic labels or RANSAC [17]. Next, the measured intensity \(\hat{p}^{i}\) of a ground plane point is obtained based on the modified reflectivity, and the point is only kept if its intensity is greater than the noise floor \(i_{n}\), via the mapping:

\[\mathcal{C}_{\text{wet}}(\mathbf{p})=\begin{cases}(p^{x},p^{y},p^{z},\hat{p}^{i}),&\text{if}\quad\hat{p}^{i}>i_{n}\ \&\ \mathbf{p}\in\text{ground},\\ \text{None},&\text{elif}\ \hat{p}^{i}<i_{n}\ \&\ \mathbf{p}\in\text{ground},\\ (p^{x},p^{y},p^{z},p^{i}),&\text{elif}\ \mathbf{p}\notin\text{ground}.\end{cases} \tag{2}\]

_3) Snow_. For each laser beam in snowy weather, the set of particles in the air will intersect with it, and we derive the angle of the beam cross-section that is reflected by each particle, taking potential occlusions into account [50]. We follow [21] to simulate snow-corrupted data \(\mathcal{C}_{\text{snow}}(\mathbf{p})\), which is similar to the fog simulation. This physically-based method samples snow particles in the 2D space and modifies the measurement for each LiDAR beam in accordance with the induced geometry, where the number of sampled snow particles is set according to a given snowfall rate \(r_{s}\).

_4) Motion Blur_. Since the LiDAR sensor is often mounted on the rooftop or side of the vehicle, it inevitably suffers from blur caused by vehicle movement, especially on bumpy surfaces or during U-turning. To simulate blur-corrupted data \(\mathcal{C}_{\text{motion}}(\mathbf{p})\), we add jittering noise to each coordinate \((p^{x},p^{y},p^{z})\), with the translation value sampled from a Gaussian distribution with standard deviation \(\sigma_{t}\).

_5) Beam Missing_. Dust and insects tend to form agglomerates in front of the LiDAR surface and will not likely disappear without human intervention, such as drying and cleaning [44]. This type of occlusion causes zero readings in masked areas and results in the loss of certain light impulses.
To mimic such behavior, we randomly sample a total number of \(m\) beams and drop the points on these beams from the original point cloud to generate \(\mathcal{C}_{\text{beam}}(\mathbf{p})\).

_6) Crosstalk_. Considering that the road is often shared by multiple vehicles, the time-of-flight of light impulses from one sensor might interfere with impulses from other sensors within a similar frequency range [6]. Such a crosstalk phenomenon often creates noisy points within the mid-range areas in between two (or multiple) sensors. To simulate this corruption \(\mathcal{C}_{\text{cross}}(\mathbf{p})\), we randomly sample a subset of \(k_{t}\) percent points from the original point cloud and add large jittering noise, with the translation value sampled from a Gaussian distribution with standard deviation \(\sigma_{c}\).

_7) Incomplete Echo_. The near-infrared spectrum of the laser pulse emitted from the LiDAR sensor is vulnerable to vehicles or other instances with dark colors [82]. The LiDAR readings are thus incomplete in such scan echoes, resulting in significant point miss detection. We simulate this corruption, denoted \(\mathcal{C}_{\text{echo}}(\mathbf{p})\), by randomly querying \(k_{e}\) percent points for the _vehicle_, _bicycle_, and _motorcycle_ classes, via either semantic masks or 3D bounding boxes. Next, we drop the queried points from the original point cloud, along with their point-level semantic labels. Note that we do not alter the ground-truth bounding boxes since they should remain at their original positions in the real world.

_8) Cross-Sensor_. Due to the large variety of LiDAR sensor configurations (_e.g._, beam number, FOV, and sampling frequency), it is important to design robust 3D perception models that are capable of maintaining satisfactory performance under cross-device cases [78]. While previous works directly form such settings with two different datasets, the domain idiosyncrasy in between (_e.g._, different label mappings and data collection protocols) further hinders the direct robustness comparison. In our benchmark, we follow [68] and generate cross-sensor data \(\mathcal{C}_{\text{sensor}}(\mathbf{p})\) by first dropping points of certain beams from the point cloud and then sub-sampling \(k_{c}\) percent points from each beam.

### Corruption Sets

Following the above taxonomy, we create new robustness evaluation sets upon the _val_ sets of existing large-scale 3D perception datasets [19, 4, 7, 18, 61], yielding _SemanticKITTI-C_, _KITTI-C_, _nuScenes-C_, and _WOD-C_. They are constructed with eight corruption types under three severity levels, resulting in a total number of 97704, 90456, 144456, and 143424 annotated LiDAR point clouds, respectively. Kindly refer to the Appendix for more details.

### Evaluation Metrics

**Corruption Error (CE)**. We follow [24] and use the mean CE (mCE) as the primary metric for comparing models' robustness. To normalize the severity effects, we choose CenterPoint [81] and MinkUNet [63] as the baseline models for 3D detectors and segmentors, respectively. The CE and mCE scores are calculated as follows:

\[\text{CE}_{i}=\frac{\sum_{l=1}^{3}(1-\text{Acc}_{i,l})}{\sum_{l=1}^{3}(1-\text{Acc}_{i,l}^{\text{baseline}})}\,\quad\text{mCE}=\frac{1}{N}\sum_{i=1}^{N}\text{CE}_{i}\, \tag{3}\]

where \(\text{Acc}_{i,l}\) denotes the task-specific accuracy score, _i.e._, mIoU, AP, NDS, or APH(L2), on corruption type \(i\) at severity level \(l\), and \(N=8\) is the total number of corruption types.

**Resilience Rate (RR)**.
We define the mean RR (mRR) as the relative robustness indicator for measuring how much accuracy a model can retain when evaluated on the corruption sets. The RR and mRR scores are calculated as follows:

\[\text{RR}_{i}=\frac{\sum_{l=1}^{3}\text{Acc}_{i,l}}{3\times\text{Acc}_{\text{clean}}}\,\quad\text{mRR}=\frac{1}{N}\sum_{i=1}^{N}\text{RR}_{i}\, \tag{4}\]

where \(\text{Acc}_{\text{clean}}\) denotes the task-specific accuracy score on the "clean" evaluation set.

## 4 Experimental Analysis

### Benchmark Configuration

**3D Perception Models**. We benchmark 34 LiDAR-based detection and segmentation models and variants. _Detectors:_ SECOND [76], PointPillars [32], PointRCNN [56], Part-A\({}^{2}\) [57], PV-RCNN [53], CenterPoint [81], and PV-RCNN++ [55]. _Segmentors:_ SqueezeSeg [69], SqueezeSegV2 [70], RangeNet++ [40], SalsaNext [12], FIDNet [85], CENet [10], PolarNet [84], KPConv [64], PIDS [83], WaffleIron [45], MinkUNet [11], Cylinder3D [89], SPVCNN [63], RPVNet [74], CPGNet [35], 2DPASS [75], and GFNet [46]. We also include three recent 3D augmentation methods, _i.e._, Mix3D [42], LaserMix [31], and PolarMix [71].

**Evaluation Protocol**. Most of the benchmarked models follow similar data augmentation, pre-training, and validation configurations. We thus directly use public checkpoints for evaluation whenever applicable, or re-train the model following its default settings. We notice that some models use extra tricks on the original validation sets, _e.g._, test-time augmentation, model ensembling, _etc_. For such cases, we re-train their models with conventional settings and report the reproduced results. Kindly refer to the Appendix for more details.

### Benchmark Analysis

We draw the following observations based on the benchmark results and analyze the potential causes behind them.

**O-1: 3D Perception Robustness** - _existing 3D detectors and segmentors are vulnerable to real-world corruptions_. As shown in Fig. 2, although the models' corruption errors often correlate with the task-specific accuracy (first row), their resilience scores are rather flat or even descending toward vulnerability (second row). The per-corruption errors shown in Tab. 1 to Tab. 6 further verify this issue. Taking 3D segmentors as an example: although very recent state-of-the-art methods [75, 46, 45] have achieved promising results on the standard benchmark, they are actually less robust than the baseline, _i.e._, their mCE scores are higher than that of MinkUNet [11]. A similar trend appears for the 3D detectors, _e.g._, Fig. 2(c), where models with higher NDS are becoming less resilient. Due to the lack of a robustness evaluation benchmark, 3D perception models tend to overfit the "clean" data rather than realistic ones.

**O-2: Sensor Configurations** - _models trained with LiDAR data from different sources exhibit inconsistent sensitivities to each corruption type_. As shown in the third row of Fig. 2, the same corruption applied to different datasets shows diverse behaviors. Different data collection protocols and sensor setups have a direct impact on model representation learning. For example, 3D detectors trained on 64-beam datasets (KITTI, WOD) are less robust to _motion blur_ and _snow_, compared to their counterparts trained on the sparser dataset (nuScenes). We conjecture that low-density inputs endow models with a certain resilience against noise that occurs locally, but such models might become fragile in scenarios that lose points in a global manner, _i.e._, _cross-sensor_.
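For reference, the CE/mCE and RR/mRR scores of Eqs. (3) and (4), which underpin the comparisons throughout this section, can be computed as in the short sketch below; the array layout of the accuracy scores is an assumption made for illustration.

```python
import numpy as np

def compute_mce_mrr(acc, acc_baseline, acc_clean):
    """Corruption Error and Resilience Rate, following Eqs. (3) and (4).

    acc, acc_baseline: arrays of shape (N, 3) with task-specific accuracy (e.g., mIoU in [0, 1])
        of the evaluated model and the baseline, per corruption type (N = 8) and severity level (3).
    acc_clean: scalar accuracy of the evaluated model on the "clean" evaluation set.
    """
    acc = np.asarray(acc)
    acc_baseline = np.asarray(acc_baseline)
    ce = (1.0 - acc).sum(axis=1) / (1.0 - acc_baseline).sum(axis=1)   # CE_i, Eq. (3)
    rr = acc.sum(axis=1) / (3.0 * acc_clean)                          # RR_i, Eq. (4)
    return ce.mean(), rr.mean()                                       # mCE, mRR
```

Multiplying these fractions by 100 yields the percentage-style numbers reported in the tables, where the baseline model has an mCE of 100.0 by construction.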
**O-3: Data Representations** - _representing the LiDAR data as raw points, sparse voxels, or a fusion of them tends to yield better robustness_. It can be easily seen from Fig. 3 that the corruption errors of projection-based methods (range view and BEV) are much higher than those of other modalities, for almost every corruption type. Such disadvantages also hold for fusion-based models that use a 2D branch, _e.g._, RPVNet [74] and GFNet [46]. In general, the point-based methods [64, 45, 83] are more robust in situations where a significant amount of points are missing, while suffering from translations, jittering, and outliers. We conjecture that the sub-sampling and local aggregation widely used in point-based architectures are natural rescues for point drops and occlusions. Among all representations, voxel/pillar and point-voxel fusion exhibit a clear superiority under various corruption types, as verified in Tab. 1, Tab. 2, and Tab. 3. The voxelization process that quantizes the irregular points is conducive to mitigating local variations and often yields a steadier representation for feature learning.

**O-4: Task Particularity** - _3D detectors and segmentors show different sensitivities to corruption scenarios_. The detection task only targets classification and localization at the object level; corruptions that occur at points inside an instance range have less impact on detecting the object. The segmentation task, however, is to identify the semantic meaning of each point in the point cloud. Such a task discrepancy affects the model's robustness across different corruptions. From Fig. 2 we find that 3D detectors tend to be more robust to point-level variations, such as _motion blur_ and _crosstalk_. These two corruptions likely yield noise offsets that exceed the grid size, and such point translations can easily be misclassified by the segmentation models. On the contrary, the 3D segmentors are steadier under environmental changes like _fog_, _wet ground_, and _snow_. In hindsight, we believe that a sophisticated combination of detection and segmentation tasks would be a viable solution for robust and reliable 3D perception.

Figure 3: The robustness comparisons among different LiDAR representations (modalities) on _SemanticKITTI-C_.

Figure 2: Benchmarking results of _34_ LiDAR-based detection and segmentation models on the _six_ robustness sets in Robo3D. Figures from top to bottom: the task-specific accuracy (mAP, mIoU, NDS, mAPH) _vs._ **[first row]** mean corruption error (mCE), **[second row]** mean resilience rate (mRR), and **[third row]** sensitivity analysis among different corruption types.

**O-5: Augmentation & Regularization Effects** - _recent out-of-context augmentation (OCA) techniques improve 3D robustness by large margins; flexible rasterization strategies help learn more robust features_. The in-context augmentations (ICAs), _i.e._, flip, scale, and rotation, are commonly used in 3D detectors and segmentors. Although these techniques help boost perception accuracy, they are less effective in improving robustness. Recent works [42, 31, 71] proposed OCAs with the goal of further enhancing model performance on the "clean" sets. We implement these augmentations on baseline models and test their effectiveness on our evaluation sets, as shown in Fig. 4 (b) & (d). Since corrupted data often deviate from the training distribution, the model will inevitably degrade under OoD scenarios.
OCAs that mix and swap regions without maintaining the consistency of scene layouts are yielding much lower CE scores across all corruptions, except _wet ground_, where the loss of ground points restricts the effectiveness of scene mixing. Another key factor that influences the robustness (for voxel- and point-voxel fusion-based methods) is representation capacity, _i.e._, voxel size. As shown in Fig. 4 (a) & (c), the 3D segmentors under translations within small regions (_motion blur_) favor a larger voxel size to suppress global translations; conversely, they are more robust against outliers (_fog_, _snow_, and _crosstalk_) given more fine-grained voxelizations to eliminate local variations. For 3D detectors, a consensus is formed toward using a higher voxelization resolution, and improvements are constantly achieved across all corruption types. ## 5 Boosting Corruption Robustness Motivated by our observations, we propose two novel techniques to enhance the robustness against corruptions. We conduct experiments on _SemanticKITTI-C_ without loss of generality and include more details in the Appendix. **Flexible Voxelization**. The widely used sparse convolution [62] requires the formal transformation of the point coor \begin{table} \begin{tabular}{c|c c c c c c c c c} \hline \hline **Method** & **mCE\(\pm\)** & **Fog** & **Wet** & **Snow** & **Move** & **Beam** & **Cross** & **Echo** & **Sensor** \\ \hline \hline MinkU\({}_{\text{ms}}\)[11] & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 \\ \hline SqSeg [69] & 164.9 & 183.9 & 158.0 & 165.5 & 122.4 & 171.7 & 188.1 & 158.7 & 170.8 \\ SqSegV2 [70] & 152.5 & 165.5 & 141.2 & 165.4 & 155.2 & 155.2 & 176.0 & 10.3 & 16.5 \\ RGN\({}_{\text{ms}}\)[40] & 136.3 & 156.3 & 128.5 & 133.9 & 109.2 & 141.6 & 148.9 & 128.3 & 150.6 \\ RGN\({}_{\text{ms}}\)[40] & 130.7 & 144.3 & 123.7 & 128.4 & 104.2 & 135.5 & 129.4 & 125.8 & 153.9 \\ SalusNet [12] & 116.1 & 147.5 & 112.1 & 16.6 & 77.6 & 115.3 & 143.5 & 141.0 & 10.5 \\ FIDNet [15] & 113.8 & 127.7 & 105.1 & 107.7 & 88.9 & 116.0 & 121.3 & 113.7 & 130.0 \\ CENe [10] & 103.4 & 129.8 & 92.7 & 99.2 & 70.5 & 101.2 & 131.1 & 102.3 & 100.4 \\ \hline PolarNet [84] & 118.6 & 138.8 & 107.1 & 108.3 & 86.8 & 105.1 & 178.1 & 112.0 & 112.3 \\ \hline FCNorm [49] & 109.2 & 103.2 & 91.0 & 88.1 & 101.7 & 97.6 & 111.9 & 97.3 & 85.4 \\ PPLS\({}_{\text{ms}}\)[83] & 100.1 & 11.8 & 98.9 & 109.5 & 114.8 & 103.2 & 103.9 & 97.0 & 87.6 \\ PDS\({}_{\text{ms}}\)[83] & 101.2 & 110.6 & 95.7 & 104.6 & 115.6 & 98.6 & 102.2 & 97.5 & 84.8 \\ Wuffle [45] & 109.5 & 123.5 & 90.1 & 100.5 & 99.9 & 93.2 & 186.1 & 91.0 & 84.1 \\ \hline MultiU\({}_{\text{ms}}\)[11] & 100.6 & 105.3 & 99.4 & 106.7 & 98.7 & 97.6 & 99.9 & 99.0 & 98.3 \\ CyDxDx [89] & 103.4 & 142.5 & 92.5 & 113.6 & 70.9 & 97.0 & 105.7 & 104.2 & 99.7 \\ CyDxDx [89] & 103.1 & 142.5 & 101.3 & 116.9 & 61.7 & 98.9 & 111.4 & 99.0 & 93.4 \\ \hline SPV\({}_{\text{s}}\)[83] & 100.3 & 101.2 & 100.0 & 100.4 & 97.6 & 99.2 & 100.6 & 96.0 & 102.0 \\ SPV\({}_{\text{ms}}\)[63] & **99.2** & **98.5** & 100.7 & 102.0 & 97.8 & 99.0 & 98.4 & 98.8 & 98.1 \\ RPN\({}_{\text{ms}}\)[46] & 111.7 & 118.7 & 101.0 & 104.6 & 78.6 & 106.4 & 185.7 & 99.2 & 99.8 \\ CPCNe [35] & 107.3 & 141.0 & 92.6 & 104.3 & **61.1** & **90.9** & 1965.6 & 55.0 & **78.2** \\ ZDARS [75] & 106.1 & 134.9 & **95.5** & 110.2 & 62.9 & 94.4 & 171.7 & 96.9 & 92.7 \\ GFNet [46] & 108.7 & 131.3 & 94.4 & **92.7** & 617.7 & 98.6 & 198.9 & 98.2 & 93.6 \\ \hline \hline \end{tabular} \end{table} Table 1: The **Corruption Error (CE)** 
of _22 segmentors_ on _SemanticKITTI-C_. **Bold**: Best in col. Underline: Second best in col. Dark : Best in row. \(\mathsf{Red}:\) Worst in row. \begin{table} \begin{tabular}{c|c|c c c c c c c c} \hline \hline **Method** & **mCE\(\pm\)** & **Fog** & **Wet** & **Snow** & **Move** & **Beam** & **Cross** & **Echo** & **Sensor** \\ \hline \hline MinkU\({}_{\text{ms}}\)[11] & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 \\ \hline FIDNet [95] & 122.4 & 75.9 & 122.6 & 68.8 & 182.0 & 164.8 & 58.0 & 141.7 & 155.6 \\ CENe [10] & 112.8 & 71.2 & 115.5 & 64.3 & 156.7 & 150.0 & 92.3 & 129.1 & 153.4 \\ \hline PolarNet [84] & 115.1 & 90.1 & 11.5 & 59.0 & 200.8 & 121.1 & 81.7 & 128.2 & 118.2 \\ \hline Wuffle [45] & 106.7 & 94.7 & 99.9 & 84.5 & 152.4 & 101.7 & 91.1 & 106.4 & 114.2 \\ \hline MiniU\({}_{\text{ms}}\)[11] & 96.4 & 93.0 & 96.1 & 104.8 & **93.1** & **95.0** & 96.3 & **96.9** & **95.9** \\ Cy3Dx [89] & 111.8 & 86.6 & 104.7 & 70.3 & 217.5 & 113.0 & 75.7 & 109.2 & 117.8 \\ Cy3Dx [89] & 105.6 & 83.2 & 11.1 & 69.7 & 165.3 & 114.0 & 74.4 & 110.7 & 116.2 \\ \hline SPV\({}_{\text{ms}}\)[63] & 106.7 & 88.4 & 105.6 & 98.8 & 156.5 & 101.1 & 880.0 & 104.3 & 103.6 \\ SPV\({}_{\text{ms}}\)[63] & 97.5 & 95.2 & 90.5 & 97.3 & 95.2 & 98.7 & 97.9 & **96.9** & dinates \(\mathbf{p}_{k}=(p_{k}^{x},p_{k}^{y},p_{k}^{z})\) into a sparse voxel as follows: \[\mathbf{v}_{k}=(v_{k}^{x},v_{k}^{y},v_{k}^{z})=\texttt{floor}((\frac{p_{k}^{x}}{ l^{x}}),(\frac{p_{k}^{y}}{ly}),(\frac{p_{k}^{z}}{l^{z}}))\, \tag{5}\] where \(l^{x}\), \(l^{y}\), and \(l^{z}\) denote the voxel size along each axis and are often set as fixed values. As discussed in Fig. 4 (a) & (c), the model tends to show an erratic resilience under different corruptions, _e.g._, favor a larger voxel size for _motion blur_ while is more robust against _fog_, _snow_, and _crosstalk_ with a smaller voxel size. To pursue better generalizability among all corruptions, we switch the naive constant into a dynamic alternative \(l_{\text{d}\mathbf{v}}=(l^{x}\pm\text{d}\mathbf{v}^{x},l^{y}\pm\text{d} \mathbf{v}^{y},l^{z}\pm\text{d}\mathbf{v}^{z})\), where \(\text{d}\mathbf{v}^{x}\), \(\text{d}\mathbf{v}^{y}\), \(\text{d}\mathbf{v}^{z}\) are the offsets sampled from the continuous uniform distribution with an interval \(\gamma\). **Density-Insensitive Training**. The natural corruptions often cause severe occlusion, attenuation, and reflection of light impulses, resulting in the unavoidable loss of LiDAR points in certain regions around the ego-vehicle [59]. For example, the _wet ground_ absorbs energy and loses points on the surfaces [21]; the potential _incomplete echo_ and _beam missing_ caused by reflection or dust and insects occlusion may lead to serious object failure [82]. The 3D perception models that suffer from such OoD scenarios bear the risk of being involved in safety-critical issues. It is worth noting that such degradation is not compensable via either adjusting the voxel size or applying OCA (see Fig. 4). Inspired by recent masking-based representation methods [23, 16, 41, 25], we propose a robust finentuning framework (see Fig. 5) that tends to be less sensitive to density variations. 
Specifically, we design a two-branch structure - a teacher net \(\mathcal{G}_{\theta}^{\text{tea}}\) and a student net \(\mathcal{G}_{\theta}^{\text{stu}}\) - that takes a pair of high- and low-density point clouds (\(x\) and \(\tilde{x}\)) as the input, where the sparser one is generated by randomly masking the points from the original point cloud with a ratio \(\beta\). Note that here we use the random mask to sub-sample the given point clouds rather than simulating a specific corruption type defined in our benchmark, since the corruption "pattern" in the actual scenario is often hard to predict. The loss functions of the \(k\)-th sample from the "full" view and the "partial" view are calculated as follows: \[\mathcal{L}_{\text{full}}=\mathcal{L}_{\text{task}}(y_{k},\mathcal{G}_{\theta }^{\text{tea}}(x_{k}))\,\ \ \mathcal{L}_{\text{part}}=\mathcal{L}_{\text{task}}(\tilde{y}_{k},\mathcal{G}_{ \theta}^{\text{stu}}(\tilde{x}_{k})), \tag{6}\] where \(y_{k}\) and \(\tilde{y}_{k}\) are original and masked ground-truths, respectively. \(\mathcal{L}_{\text{task}}\) denotes the task-specific loss, _e.g._, RPN loss for detection and cross-entropy loss for segmentation. To encourage cross-consistency between the high- and low-density branches, we calculate \(\mathcal{L}_{\text{part2full}}\) and \(\mathcal{L}_{\text{full2part}}\), where the former is to mimic dense representations from sparse inputs (completion) and the latter is to pursue local agreements (confirmation). The completion loss is calculated as the distance between the teacher net's prediction of the "full" input and the interpolated student net's prediction \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \hline **Method** & **mCE\(\,\)** & **Fog** & **Wet** & **Snow** & **Move** & **Beam** & **Cross** & **Echo** & **Sensor** \\ \hline \hline CenterPP [81] & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 \\ \hline \hline SCOD [70] & 124.1 & 117.9 & 136.5 & 127.5 & 1134.1 & 121.3 & 127.8 & 123.7 & 113.5 \\ \hline \hline \end{tabular} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \hline **ScoreOD [70]** & 127.5 & 120.8 & 135.2 & 129.7 & 115.2 & 123.0 & 151.7 & 131.6 & 113.1 \\ PVRCNN [53] & 104.9 & 110.1 & 104.2 & **95.7** & 101.3 & 110.7 & 101.8 & 106.0 & 109.4 \\ PVV++[55] & **91.6** & **95.7** & **88.3** & **90.1** & **93.2** & **92.5** & **88.9** & **90.8** & **93.2** \\ \hline \hline \end{tabular} \end{table} Table 6: The **Corruption Error (CE)** of _5 detectors_ on _WOD-C (Det3D)_. **Bold**: Best in col. Underline: Second best in col. Dark : Best in row. Red : Worst in row. Figure 4: Corruption sensitivity analysis on _voxel size_ (a & c) and _augmentation_ (b & d) for the baseline LiDAR semantic segmentation and 3D object detection models [11, 81]. Different corruptions exhibit variances under certain configurations. of the "partial" input, which can be calculated as follows: \[\mathcal{L}_{\text{part2full}}=||\;\mathcal{G}_{\theta}^{\text{tea}}(x),\;\texttt{interp }(\mathcal{G}_{\theta}^{\text{stu}}(\tilde{x}))\;||_{2}^{2}\;. \tag{7}\] Similarly, the confirmation loss for pursuing local agreements can be calculated as follows: \[\mathcal{L}_{\text{full2part}}=||\;\texttt{subsample}(\mathcal{G}_{\theta}^{ \text{tea}}(x)),\;\mathcal{G}_{\theta}^{\text{stu}}(\tilde{x})\;||_{2}^{2}\;. 
\tag{8}\] The final objective is to optimize the summation of the above loss functions, _i.e._, \(\mathcal{L}=\mathcal{L}_{\text{full}}+\mathcal{L}_{\text{part}}+\alpha_{1} \mathcal{L}_{\text{part2full}}+\alpha_{2}\mathcal{L}_{\text{full2part}}\), where \(\alpha_{1}\) and \(\alpha_{2}\) are the weight coefficients. **Implementation Details**. We ablate each component and show the results in Tab. 7. Specifically, \(\gamma\) is set as \(0.02\) in our experiments, along with a mask ratio \(\beta=0.4\) for models _w/_ ICA and \(\beta=0.6\) for models _w/_ OCA. We initialize both teacher and student networks with the same baseline model and finetune our framework for \(6\) epochs in total. The weight coefficients are set as \(50\) and \(100\), respectively. **Experimental Analysis**. Despite its simplicity, we found this framework is conducive to mitigating robustness degradation from corruptions. The simple modification on voxel partition can boost the corruption robustness by large margins; it reduces \(2.6\%\) mCE and \(1.5\%\) mCE upon the two baselines, respectively. Then, we incorporate the cross-consistency learning between "full" and "partial" views. Among all variants, the one with both completion (\(\mathcal{L}_{\text{part2full}}\)) and confirmation (\(\mathcal{L}_{\text{full2part}}\)) objectives achieves the best possible results in terms of mCE and mRR. We also show an ablation study of the masking ratio \(\beta\) in Fig. 6. We observe that there is a trade-off between the model's robustness and the proportion of information occlusion; a ratio between \(0.3\) to \(0.6\) tends to yield lower mCE (better robustness). It is worth noting that both flexible voxelization and density-insensitive training will slightly lower the task-specific accuracy on the "clean" sets, as shown in the last column of Tab. 7. We conjecture that such an out-of-context consistency regularization will likely relieve the model from overfitting the training distribution and in return, become more robust against unseen scenarios from the OoD distribution. ## 6 Discussion and Conclusion In this work, we establish a comprehensive evaluation benchmark dubbed _Robo3D_ for probing the robustness of LiDAR-based 3D perception models. We define eight distinct corruption types with three severity on four large-scale datasets. We systematically benchmarked and analyzed representative 3D detectors and segmentors to understand their resilience under real-world corruptions and sensor failure. Several key insights are drew from aspects including sensor \begin{table} \begin{tabular}{l|c c|c c|c c c} \hline \hline **Method** & **ICA** & **OCA** & **Size** & \(\mathcal{L}_{\text{part2full}}\) & \(\mathcal{L}_{\text{medium}}\) & **mCE** & **mRR**\(\uparrow\) & **Ocan** \\ \hline \hline Base [11] & ✓ & Fixed & & & & 100.0 & 81.9 & 62.8 \\ \hline Ours - (1) & ✓ & Flexible & & & & 97.4 & 84.2 & **62.9** \\ Ours - (2) & ✓ & Flexible & ✓ & & **96.4** & **85.1** & 62.7 \\ Ours - (3) & ✓ & Flexible & ✓ & ✓ & **96.1** & **85.6** & 62.7 \\ \hline Base [11] & ✓ & Fixed & & & & 86.0 & 84.7 & **69.2** \\ \hline Ours - (4) & ✓ & Flexible & & & & 84.5 & 86.8 & 68.2 \\ Ours - (5) & ✓ & Flexible & ✓ & & **83.8** & **85.1** & 67.9 \\ Ours - (6) & ✓ & Flexible & ✓ & ✓ & **83.2** & **89.7** & 68.1 \\ \hline \hline \end{tabular} \end{table} Table 7: Ablation study on: **[left]** in-context (ICA) and out-of-context (OCA) augmentations; **[middle]** voxelization strategies; and **[right]** density-insensitive training losses. 
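For reference, the combined objective of Eqs. (6)-(8) can be written down compactly as a training-loss sketch. The snippet below is schematic rather than the exact training code: `interp` and `subsample` stand in for the interpolation and sub-sampling operators that align the two branches' predictions, mean-squared error replaces the squared \(\ell_{2}\) distances, and the default weights follow the coefficients of \(50\) and \(100\) reported above. Whether gradients flow through the teacher branch is left open here, since the text only specifies that both networks are initialised from the same baseline.

```python
import torch.nn.functional as F

def density_insensitive_loss(teacher, student, x_full, y_full, x_part, y_part,
                             task_loss, interp, subsample, a1=50.0, a2=100.0):
    """Combined objective L = L_full + L_part + a1 * L_part2full + a2 * L_full2part."""
    pred_full = teacher(x_full)    # "full" view through the teacher net, Eq. (6)
    pred_part = student(x_part)    # masked "partial" view through the student net, Eq. (6)

    l_full = task_loss(pred_full, y_full)
    l_part = task_loss(pred_part, y_part)

    # Eq. (7): completion - the interpolated student prediction should match the dense teacher one
    l_part2full = F.mse_loss(interp(pred_part), pred_full)
    # Eq. (8): confirmation - the sub-sampled teacher prediction should agree with the student one
    l_full2part = F.mse_loss(subsample(pred_full), pred_part)

    return l_full + l_part + a1 * l_part2full + a2 * l_full2part
```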
Figure 5: The proposed density-insensitive training framework. The “full” and “partial” point clouds are fed into the teacher branch and student branch, respectively, for feature learning, while the latter is generated by randomly masking the original point cloud. To encourage cross-density consistency, we calculate the _completion_ and _confirmation_ losses which measure the distances of sub-sampled teacher’s prediction and interpolated student’s prediction between the other branch’s outputs. Figure 6: Ablation study on the masking ratio \(\beta\) for models: **[top]** trained _w/_ OCA and **[bottom]** trained _w/_ ICA. setups, data representations, task particularity, and augmentation effects. To pursue better robustness, we proposed a cross-density consistency training framework and a simple yet effective flexible voxelization strategy. We hope this work could lay a solid foundation for future research on building robust and reliable 3D perception models. **Potential Limitation**. Although we benchmarked a wide range of corruptions that occur in the real world, we do not consider cases that are coupled with multiple corruptions at the same time. Besides, we do not include models that take multi-modal inputs, which could form future directions. **Acknowledgement**. We sincerely thank Jiangmiao Pang and Tai Wang for their insightful discussions and feedback. ## Appendix In this appendix, we supplement more materials to support the findings and conclusions in the main body of this paper. Specifically, this appendix is organized as follows. * Section 7 provides a comprehensive case study for analyzing each of the eight corruption types defined in the Robo3D benchmark. * Section 8 elaborates on additional implementation details for the generation of each corruption type. * Section 9 includes additional (complete) experimental results and discussions for the 3D detectors and segmentors benchmarked in Robo3D. * Section 10 attaches qualitative results for the benchmarked methods under each corruption type. * Section 11 acknowledges the public resources used during the course of this work. ## 7 Case Study: 3D Natural Corruption The deployment environment of an autonomous driving system is diverse and complicated; any disturbances that occur in the sensing, transmission, or processing stages will cause severe corruptions. In this section, we provide concrete examples of the _formation_ and _effect_ of the eight corruption types defined in the main body of this paper, _i.e._, _fog, wet ground_, _snow_, _motion blur_, _beam missing_, _crosstalk_, _incomplete echo_, and _cross-sensor_. Similar to the main body, we denote a point in a LiDAR point cloud as \(\mathbf{p}\in\mathbb{R}^{4}\), which is defined by the point coordinates \((p^{x},p^{y},p^{z})\) and point intensity \(p^{i}\). We aim to simulate a corrupted point \(\mathbf{\hat{p}}\) via a mapping \(\mathbf{\hat{p}}=\mathcal{C}(\mathbf{p})\), with rules constrained by _physical principles_ or _engineering experiences_. The detailed case study for each corruption type defined in the Robo3D benchmark is illustrated as follows. ### Fog The weather phenomena are inevitable in driving scenarios [47]. Among them, foggy weather mainly causes back-scattering and attenuation of LiDAR pulse transmissions and results in severe shifts of both range and intensity for the points in a LiDAR point cloud, as shown in Fig. 7. In this work, we follow Hahner _et al._[22] to generate physically accurate fog-corrupted data using "clean" datasets. 
This approach uses a standard linear system [47] to model the light pulse transmission under foggy weather. For each \(\mathbf{p}\), we calculate its attenuated response \(p^{\textit{i}\textit{int}}\) and the maximum fog response \(p^{\textit{i}\textit{int}}\) as follows: \[p^{\textit{i}\textit{int}}=p^{i}e^{-2\alpha\sqrt{(p^{x})^{2}+(p^{y})^{2}+(p^{ x})^{2}}}, \tag{9}\] \[p^{\textit{i}\textit{int}}=p^{i}\frac{(p^{x})^{2}+(p^{y})^{2}+(p^{z})^{2}}{ \beta_{0}}\beta\times p^{i}_{\textit{tmp}} \tag{10}\] \[\mathbf{\hat{p}}=\mathcal{C}_{\text{fog}}(\mathbf{p})=\begin{cases}(\hat{p}^{ x},\hat{p}^{y},\hat{p}^{z},p^{\textit{i}\textit{int}}),&\text{if}\;\;p^{ \textit{i}\textit{int}}>p^{\textit{i}\textit{int}},\\ (p^{x},p^{y},p^{z},p^{\textit{i}\textit{int}}),&\text{else}.\end{cases} \tag{11}\] where \(\alpha\) is the attenuation coefficient, \(\beta\) denotes the back-scattering coefficient, \(\beta_{0}\) describes the differential reflectivity of the target, and the \(p^{i}_{\textit{tmp}}\) is the received response for the soft target term. ### Wet Ground As introduced in the main body, the emitted laser pulses from the LiDAR sensor tend to lose certain amounts of energy when hitting wet surfaces, which will cause significantly attenuated laser echoes depending on the water height \(d_{w}\) and mirror refraction rate [21], as shown in Fig. 8. In this work, we follow [21] to model the attenuation caused by ground wetness. A pre-processing step is taken to estimate the ground plane with existing semantic labels or RANSAC [17]. Next, a ground plane point of its measured intensity \(\hat{p}^{i}\) is obtained based on the modified reflectivity, and the point is only kept if its intensity is greater than the noise floor \(i_{n}\) via mapping: \[\mathcal{C}_{\text{wet}}(\mathbf{p})=\begin{cases}(p^{x},p^{y},p^{z},\hat{p} ^{i}),&\text{if}\quad\hat{p}^{i}>i_{n}\ \&\ \mathbf{p}\in\text{ground}\;,\\ \text{None},&\text{elif}\ \hat{p}^{i}<i_{n}\ \&\ \mathbf{p}\in\text{ ground}\;,\\ (p^{x},p^{y},p^{z},p^{i}),&\text{elif}\ \mathbf{p}\notin\text{ground}\;.\end{cases} \tag{12}\] ### Snow Snow weather is another adverse weather condition that tends to happen in the real-world environment. For each laser beam in snowy weather, the set of particles in the air will intersect with it and derive the angle of the beam cross-section that is reflected by each particle, taking potential occlusions into account [50]. Some typical examples of the snow-corrupted data are shown in Fig. 9. In this work, we follow [21] to simulate these snow-corrupted data \(\mathcal{C}_{\text{snow}}(\mathbf{p})\), which is similar to the fog simulation. This physically-based method samples snow particles in the 2D space and modify the measurement for each LiDAR beam in accordance with the induced geometry, where the number of sampling snow particles is set according to a given snowfall rate \(r_{s}\). ### Motion Blur As one of the common in-vehicle sensors, LiDAR is often mounted on the rooftop or side of the vehicle and inevitably suffers from the blur caused by vehicle movement, especially on bumpy surfaces or during U-turning. A typical example of the effect brought by _motion blur_ is shown in Fig. 12. In this work, to simulate blur-corrupted data \(\mathcal{C}_{\text{motion}}(\mathbf{p})\), we add a jittering noise to each coordinate \((p^{x},p^{y},p^{z})\) with a translation value sampled from the Gaussian distribution with standard deviation \(\sigma_{t}\). 
The \(\mathcal{C}_{\text{motion}}(\mathbf{p})\) is shown as: \[\mathcal{C}_{\text{motion}}(\mathbf{p})=(p^{x}+o_{1},p^{y}+o_{2},p^{z}+o_{3},p ^{i})\, \tag{13}\] where \(o_{1},o_{2},o_{3}\) are the random offsets sampled from Gaussian distribution \(N\in\{0,{\sigma_{t}}^{2}\}\) and \(\{o_{1},o_{2},o_{3}\}\in\mathbb{R}^{1\times 1}\). ### Beam Missing As shown in Fig. 10, the dust and insect tend to form agglomerates in front of the LiDAR surface and will not likely disappear without human intervention, such as drying and cleaning [44]. This type of occlusion causes zero readings on masked areas and results in the loss of certain light impulses. In this work, to mimic such a behavior, we randomly sample a total number of \(m\) beams and drop points on these beams from the original point cloud to generate \(\mathcal{C}_{\text{beam}}(\mathbf{p})\): \[\mathcal{C}_{\text{beam}}(\mathbf{p})=\begin{cases}(p^{x},p^{y},p^{z},p^{i}), &\text{if}\quad\mathbf{p}\notin m\,\\ \text{None},&\text{else}\.\end{cases} \tag{14}\] ### Crosstalk Considering that the road is often shared by multiple vehicles (see Fig. 11), the time-of-flight of light impulses from one sensor might interfere with impulses from other sensors within a similar frequency range [6]. Such a crosstalk phenomenon often creates noisy points within the mid-range ar Figure 8: An example of the geometrical optical model of the light pulse reflection in the _wet ground_ corruption. Depending on the water height and mirror refraction rate, the pulses emitted by the LiDAR sensor will lose certain amounts of energy when hitting wet surfaces. _Image credit:_ Hahner _et al._[21]. Figure 7: Examples of the data corruptions introduced by _fog_, where the range (bottom left) and intensity (bottom right) distributions are shifted from the uniform distribution of the ego-vehicle. _Image credit:_ Hahner _et al._[22]. Figure 9: Examples of the data corruptions introduced by _snow_. As shown in the top-left, the particles brought by snowfall will likely cause false predictions for the objects in the 3D scene. _Image credit:_ Hahner _et al._[21]. eas in between two (or multiple) sensors. Fig. 13 shows two real-world examples of crosstalk-corrupted point clouds. In this work, to simulate \(\mathcal{C}_{\text{cross}}(\mathbf{p})\), we randomly sample a subset of \(k_{t}\) percent points from the original point cloud and add large jittering noise with a translation value sampled from the Gaussian distribution with standard deviation \(\sigma_{c}\). \[\mathcal{C}_{\text{cross}}(\mathbf{p})=\begin{cases}(p^{x},p^{y},p^{z},p^{i}),& \text{if }\ \mathbf{p}\notin\text{set of }\{k_{t}\}\;,\\ (p^{x},p^{y},p^{z},p^{i})+\xi_{c},&\text{else },\end{cases} \tag{15}\] where \(\xi_{c}\) is the random offset sampled from Gaussian distribution \(N\in\{0,{\sigma_{c}}^{2}\}\) and \(\xi_{c}\in\mathbb{R}^{1\times 4}\). ### Incomplete Echo The near-infrared spectrum of the laser pulse emitted from the LiDAR sensor is vulnerable to vehicles or other instances with dark colors [82]. The LiDAR readings are thus incomplete in such scan echoes, resulting in significant point miss detection (see Fig. 14 for a real-world example). In this work, we simulate this corruption which denotes \(\mathcal{C}_{\text{echo}}(\mathbf{p})\) by randomly querying \(k_{e}\) percent points for _vehicle_, _bicycle_, and _motorcycle_ classes, via either semantic masks or 3D bounding boxes. Next, we drop the queried points from the original point cloud, along with their point-level semantic labels. 
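The two jitter-style corruptions above, _motion blur_ (Eq. 13) and _crosstalk_ (Eq. 15), can be reproduced with a few lines of NumPy; the per-dataset values of \(\sigma_{t}\) and \(k_{t}\) are listed in Sec. 8, and the snippet assumes points are stored as an \((N,4)\) array of coordinates plus intensity. The drop-based corruptions, including the _incomplete echo_ formalised next, follow an analogous pattern of removing queried points instead of perturbing them.

```python
import numpy as np

def motion_blur(points, sigma_t, rng=None):
    """Eq. (13): jitter the xyz coordinates with Gaussian noise of std sigma_t."""
    rng = np.random.default_rng() if rng is None else rng
    out = points.copy()
    out[:, :3] += rng.normal(0.0, sigma_t, size=(points.shape[0], 3))
    return out                                  # intensity channel left untouched

def crosstalk(points, k_t, sigma_c, rng=None):
    """Eq. (15): add large Gaussian noise to a random subset of k_t * N points,
    applied to all four channels as in the text."""
    rng = np.random.default_rng() if rng is None else rng
    n = points.shape[0]
    idx = rng.choice(n, size=int(k_t * n), replace=False)
    out = points.copy()
    out[idx] += rng.normal(0.0, sigma_c, size=(idx.size, points.shape[1]))
    return out
```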
Note that we do not alter the ground-truth bounding boxes since they should remain at their original positions in the real world. This can be formed as: \[\mathcal{C}_{\text{echo}}(\mathbf{p})=\begin{cases}(p^{x},p^{y},p^{z},p^{i}),& \text{if }\ \mathbf{p}\notin\text{set of }\{k_{e}\}\;,\\ \text{None},&\text{else }.\end{cases} \tag{16}\] ### Cross-Sensor A typical _cross-sensor_ example is shown in Fig. 15. Due to the large variety of LiDAR sensor configurations (_e.g._, beam number, FOV, and sampling frequency), it is important to design robust 3D perception models that are capable of maintaining satisfactory performance under cross-device cases [78]. While previous works directly form such settings with two different datasets, the domain idiosyncrasy in Figure 11: An Illustration of potential _crosstalk_ scenarios in a multi-LiDAR system. [Left] The basic principle of range detection in the LiDAR sensing cycle. [Middle] The direct crosstalk scenario in a dual-LiDAR system. [Right] The indirect crosstalk scenario caused by reflection in a dual-LiDAR system. _Image credit:_ Diehm _et al._[15]. Figure 12: Examples of the effect brought by _motion blur_ on the registration of a square room. The blue trajectory denotes the globally consistent map of the environment; the yellow points were acquired while the LiDAR sensor was moving from pose \(a\) to pose \(b\), resulting in a heavily skewed point cloud. _Image credit:_ Descènes _et al._[14]. Figure 10: Typical range measurement behaviors that will likely cause _beam missing_. (a) Echoes return from the target (“clean” scenarios). (b) Echoes return from a dusty cloud between the sensor and the target (partial beam missing). (c) No echo returns from either the dusty cloud or the target (complete beam missing). _Image credit:_ Phillips _et al._[44]. between (_e.g._, different label mappings and data collection protocols) further hinders the direct robustness comparison. In our benchmark, we follow [68] and generate cross-sensor data \(\mathcal{C}_{\text{sensor}}(\mathbf{p})\) by first dropping points of certain beams from the point cloud and then sub-sample \(k_{c}\) percent points from each beam: \[\mathcal{C}_{\text{sensor}}(\mathbf{p})=\begin{cases}\text{None},&\text{if }\ \mathbf{p}\in\text{set of }\{k_{c}\}\,\\ (p^{x},p^{y},p^{z},p^{i}),&\text{else }.\end{cases} \tag{17}\] ## 8 Additional Implementation Detail In this section, we provide additional implementation details to enable the reproduction of the corruption generations in the Robo3D benchmark. Note that our physically principled corruption creation procedures can also be used on other LiDAR-based point cloud datasets with minimal modifications. ### Fog Simulation Following [22], we uniformly sample the attenuation coefficient \(\alpha\) from \([0,0.005,0.01,0.02,0.03,0.06]\). For the _SemanticKITTI-C_, _KITTI-C_, _nuScenes-C_, and _WOD-C_ datasets, we set the back-scattering coefficient \(\beta\) to \(\{0.008,0.05,0.2\}\) to split severity levels into light, moderate, and heavy levels. The semantic classes of _fog_ are \(21\), \(41\), and \(23\) for _SemanticKITTI-C_, _nuScenes-C_, and _WOD-C_, respectively. And \(\mathbf{p}\) belongs to fog class will be mapped to class 0 or 255 (_i.e._, the _ignored_ label). ### Wet Ground Simulation We follow [21] and set the parameter of water height \(d_{w}\) to \(\{0.2\ mm,1.0\ mm,1.2\ mm\}\) for different severity levels of _wet ground_. Note that the ground plane estimation method is different across four benchmarks. 
We estimate the ground plane via RANDSAC [17] for the _KITTI-C_ Figure 14: Examples of the data corruptions introduced by _incomplete echo_. The black car on the left has nearly zero pulse return, due to the destroyed echo cycle. _Image credit:_ Yu _et al._[82]. Figure 13: Examples of the data corruptions introduced by _crosstalk_. The point clouds are acquired by a Velodyne HDL-64 with interference from another sensor of the same type in close vicinity. The crosstalk points are shown in blue. _Image credit:_ Diehm _et al._[15]. Figure 15: Examples of the data distribution discrepancy brought by _cross-sensor_ effect. (a) A typical point cloud acquired by a 64-beam LiDAR sensor. (b) A simulated point cloud from 64 beams to 32 beams. (c) A typical point cloud acquired by a 32-beam LiDAR sensor. _Image credit:_ Wei _et al._[68]. since it only provides detection labels. For _SemanticKITTI-C_, we use semantic classes of _road_, _parking_, _sidewalk_, and _other ground_ to build the ground plane. The _driveable surface_, _other flat_, and _sidewalk_ classes are used to construct the ground plane in _nuScenes-C_. For _WOD-C_, the ground plane is estimated by _curb_, _road_, _other ground_, _walkable_, and _sidewalk_ classes. ### Snow Simulation We use the method proposed in [21] to construct _snow_ corruptions. The value of snowfall rate parameter \(r_{s}\) is set to \(\{0.5,1.0,2.5\}\) to simulate light, moderate, and heavy snowfall for the _SemanticKITTI-C_, _KITTI-C_, _nuScenes-C_, and _WOD-C_ datasets, and the ground plane estimation is the same as the _wet ground_ simulation. The semantic class of snow is \(22\), \(42\), and \(24\) for the _SemanticKITTI-C_, _nuScenes-C_, and _WOD-C_ datasets, respectively. And \(\mathbf{p}\) belongs to snow class will also be mapped to class 0 or 255 (_i.e._, the _ignored_ label). ### Motion Blur Simulation We add jittering noise from Gaussian distribution with standard deviation \(\sigma_{t}\) to simulate motion blur. The \(\sigma_{t}\) is set to \(\{0.20,0.25,0.30\}\), \(\{0.04,0.08,0.10\}\), \(\{0.20,0.30,0.40\}\) and \(\{0.06,0.10,0.13\}\) for the _SemanticKITTI-C_, _KITTI-C_, _nuScenes-C_, and _WOD-C_ datasets, respectively. ### Beam Missing Simulation The value of parameter \(m\) (number of beams to be dropped) is set to \(\{48,32,16\}\) for the benchmark of _SemanticKITTI-C_, _KITTI-C_ and _WOD-C_, respectively, while set as \(\{24,16,8\}\) for the _nuScenes-C_ dataset. ### Crosstalk Simulation We set the parameter of \(k_{t}\) to \(\{0.006,0.008,0.01\}\) for the _SemanticKITTI-C_, _KITTI-C_, and _WOD-C_ datasets, respectively, and \(\{0.03,0.07,0.12\}\) for _nuScenes-C_ dataset. The semantic class of crosstalk is assigned to \(23\), \(43\), and \(25\) for _SemanticKITTI-C_, _nuScenes-C_, and _WOD-C_ datasets, respectively. Meanwhile, the \(\mathbf{p}\) belongs to crosstalk class will also be mapped to class 0 or 255 (_i.e._, the _ignored_ label). ### Incomplete Echo Simulation For _SemanticKITTI-C_, the point labels of classes _car_, _bicycle_, _motorcycle_, _truck_, _other-vehicle_ are used as the semantic mask. For _nuScenes-C_, we include _bicycle_, _bus_, _car_, _construction vehicle_, _motorcycle_, _truck_ and _trailer_ class label to build semantic mask. For _WOD-C_, we adopt the point labels of classes _car_, _truck_, _bus_, _other-vehicle_, _bicycle_, _motorcycle_ as the semantic mask. For _KITTI-C_, we use 3D bounding box labels to create the semantic mask. 
The value of parameter \(k_{e}\) is set to \(\{0.75,0.85,0.95\}\) for the four corruption sets during the _incomplete echo_ simulation. ### Cross-Sensor Simulation The value of parameter \(m\) is set to \(\{48,32,16\}\) for the _SemanticKITTI-C_, _KITTI-C_, and _WOD-C_ datasets, respectively, and \(\{24,16,12\}\) for the _nuScenes-C_ dataset. Based on [68], we then sub-sample 50\(\%\) points from the remaining point clouds with an equal interval. ## 9 Additional Experimental Result In this section, we provide the complete experimental results for each of the 3D detectors and segmentors benchmarked in Robo3D. ### SemanticKITTI-C The complete results in terms of corruption error (CE), resilience rate (RR), and task-specific accuracy (IoU) on the _SemanticKITTI-C_ dataset are shown in Tab. 8, Tab. 9, and Tab. 10, respectively. ### Kitti-C The complete results in terms of corruption error (CE), resilience rate (RR), and task-specific accuracy (AP) on the _KITTI-C_ dataset are shown in Tab. 11, Tab. 12, and Tab. 13, respectively. ### nuScenes-C (Seg3D) The complete results in terms of corruption error (CE), resilience rate (RR), and task-specific accuracy (IoU) on the _nuScenes-C_ (_Seg3D_) dataset are shown in Tab. 14, Tab. 15, and Tab. 16, respectively. In addition to the benchmark results, we also show the voxel size analysis results of _nuScenes-C_ (_Seg3D_) in Fig. 18 (a). ### nuScenes-C (Det3D) The complete results in terms of corruption error (CE), resilience rate (RR), and task-specific accuracy (NDS) on the _nuScenes-C_ (_Det3D_) dataset are shown in Tab. 17, Tab. 18, and Tab. 19, respectively. ### WOD-C (Seg3D) The complete results in terms of corruption error (CE), resilience rate (RR), and task-specific accuracy (APH) on the _WOD-C_ (_Det3D_) dataset are shown in Tab. 23, Tab. 24, and Tab. 25, respectively. ### Density-Insensitive Training As stated in the main body, the corruptions in the real-world environment often cause severe occlusion, attenuation, and reflection of LiDAR impulses, resulting in the unavoidable loss of points in certain regions around the ego-vehicle. To better handle such scenarios, we design a density-insensitive training framework, with realizations on both detection (see Fig. 16) and segmentation (see Fig. 17). Since detection and segmentation have different optimization objectives, we design different loss computation strategies within these two frameworks. Specifically, the _completion_ and _confirmation_ losses for the detection framework are calculated at the BEV feature maps; while for the segmentation framework, these two losses are computed at the logits level. Our experimental results in Tab. 26 verify the effectiveness of this approach on both tasks. Although we use a random masking strategy to avoid information leaks, we observe overt improvements in a wide range of corruption types that contain point loss scenarios, such as _beam missing_, _incomplete echo_, and _cross-sensor_. We believe more sophisticated designs based on our framework could further boost the corruption robustness of 3D perception models. ## 10 Qualitative Experiment In this section, we provide extensive qualitative examples for illustrating the proposed corruption types and for comparing representative models benchmarked in Robo3D. ### Corruption Types We show visualizations of the eight corruption types under three severity levels (light, moderate, and heavy) in Fig. 19 and Fig. 20. 
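To close the loop between the corruption definitions in Sec. 7 and the parameters above, the two beam-level corruptions can be sketched as follows. The snippet assumes that every point carries a beam (ring) index; taking every second remaining point is one simple way to realise the \(50\%\) equal-interval sub-sampling of the _cross-sensor_ protocol and is not necessarily the exact implementation.

```python
import numpy as np

def beam_missing(points, beam_ids, m, rng=None):
    """Eq. (14): drop all points that belong to m randomly chosen beams."""
    rng = np.random.default_rng() if rng is None else rng
    dropped = rng.choice(np.unique(beam_ids), size=m, replace=False)
    keep = ~np.isin(beam_ids, dropped)
    return points[keep], beam_ids[keep]

def cross_sensor(points, beam_ids, m, rng=None):
    """Eq. (17): first drop m beams, then keep every second remaining point
    (the 50% equal-interval sub-sampling described above)."""
    pts, ids = beam_missing(points, beam_ids, m, rng)
    return pts[::2], ids[::2]
```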
Figure 16: The **3D object detection realization** of the proposed density-insensitive training framework. The “full” and “partial” point clouds are fed into the teacher branch and student branch, respectively, for feature learning, while the latter is generated by randomly masking the original point cloud. To encourage cross-density consistency, we calculate the _completion_ and _confirmation_ losses which measure the distances of the teacher’s prediction (BEV feature map) and the student’s prediction (BEV feature map) between the other branch’s outputs. Figure 17: The **3D semantic segmentation realization** of the proposed density-insensitive training framework. The “full” and “partial” point clouds are fed into the teacher branch and student branch, respectively, for feature learning, while the latter is generated by randomly masking the original point cloud. To encourage cross-density consistency, we calculate the _completion_ and _confirmation_ losses which measure the distances of the sub-sampled teacher’s prediction and interpolated student’s prediction between the other branch’s outputs. ### Visual Comparisons For 3D object detection, we attach the qualitative results of SECOND [76] and CenterPoint [81] under each of the eight corruption types in the _WOD-C (Det3D)_ dataset. The results are shown in Fig. 21 and Fig. 22. For 3D semantic segmentation, we attach qualitative results of six segmentors, _i.e_., RangeNet++ [40], PolarNet [84], Cylinder3D [89], RPVNet [74], SPVCNN [63], and Wafflelron [45], under each of the eight corruption types in the _SemanticKITTI-C_ dataset. The results are shown in Fig. 23, Fig. 24, Fig. 25, and Fig. 26. ### Video Demos In addition to the figures shown in this file, we have included four video demos on our project page. Each of these demos consists of hundred of frames that provide a more comprehensive evaluation of our proposed benchmark. ## 11 Public Resources Used In this section, we acknowledge the use of the following public resources, during the course of this work: * SemanticKITTI2 \(\qquad\qquad\qquad\qquad\qquad\qquad\qquad\) CC BY-NC-SA 4.0 Footnote 2: [http://semantic-kitti.org](http://semantic-kitti.org). * SemanticKITTI-API3 \(\qquad\qquad\qquad\qquad\qquad\qquad\) MIT License Footnote 3: [https://github.com/PRBonn/semantic-kitti-api](https://github.com/PRBonn/semantic-kitti-api). * nuScenes4 \(\qquad\qquad\qquad\qquad\qquad\qquad\) CC BY-NC-SA 4.0 Footnote 4: [https://www.nuscenes.org/nuscenes](https://www.nuscenes.org/nuscenes). * nuScenes-devkit5 \(\qquad\qquad\qquad\qquad\qquad\) Apache License 2.0 Footnote 5: [https://github.com/nutonomy/nuscenes-devkit](https://github.com/nutonomy/nuscenes-devkit). * Waymo Open Dataset6 \(\qquad\qquad\qquad\qquad\) Waymo Dataset License Footnote 6: [https://waymo.com/open](https://waymo.com/open). * RangeNet++7 \(\qquad\qquad\qquad\qquad\qquad\qquad\) MIT License Footnote 7: [https://github.com/PRBonn/lidar-bonnetal](https://github.com/PRBonn/lidar-bonnetal). * SalsaNext8 \(\qquad\qquad\qquad\qquad\qquad\) MIT License Footnote 8: [https://github.com/TiagoCorthinal/SalsaNext](https://github.com/TiagoCorthinal/SalsaNext). Footnote 9: [https://github.com/placerofyming/IROS21-FIDNet-SemanticKITTI](https://github.com/placerofyming/IROS21-FIDNet-SemanticKITTI). * FIDNet9 \(\qquad\qquad\qquad\qquad\qquad\qquad\qquad\) Unknown Footnote 9: [https://github.com/huixiancheng/CENet](https://github.com/huixiancheng/CENet). 
* CENet10 \(\qquad\qquad\qquad\qquad\qquad\qquad\) MIT License Footnote 10: [https://github.com/Huixiancheng/CENet](https://github.com/Huixiancheng/CENet). * KPConv-PyTorch11 \(\qquad\qquad\qquad\qquad\qquad\) MIT License Footnote 11: [https://github.com/Huixiancheng/CENet](https://github.com/Huixiancheng/CENet). * PIDS12 \(\qquad\qquad\qquad\qquad\qquad\qquad\) MIT License Footnote 12: [https://github.com/lordzh666/WACV23_PIDS-Joint](https://github.com/lordzh666/WACV23_PIDS-Joint). * Wafflelron13 \(\qquad\qquad\qquad\qquad\qquad\qquad\) Apache License 2.0 Footnote 13: [https://github.com/valeeai/WaffleIron](https://github.com/valeeai/WaffleIron). * PolarSeg14 \(\qquad\qquad\qquad\qquad\qquad\qquad\) BSD 3-Clause License Footnote 14: [https://github.com/dwardzhoui30/PolarSeg](https://github.com/dwardzhoui30/PolarSeg). Footnote 15: [https://github.com/NVIDIA/MinkowskiEngine](https://github.com/NVIDIA/MinkowskiEngine). Footnote 16: [https://github.com/xinge08/Cylinder3D](https://github.com/xinge08/Cylinder3D). Footnote 17: [https://github.com/rusty1s/pytorch_scatter](https://github.com/rusty1s/pytorch_scatter). Footnote 18: [https://github.com/traveler59/spconv](https://github.com/traveler59/spconv). Footnote 19: [https://github.com/mit-han-lab/torchsparse](https://github.com/mit-han-lab/torchsparse). Footnote 20: [https://github.com/mit-han-lab/ypmas](https://github.com/mit-han-lab/ypmas). Footnote 21: [https://github.com/TiaogCorthinal/SalsaNext](https://github.com/TiaogCorthinal/SalsaNext). Footnote 22: [https://github.com/placerofyming/IROS21-FIDNet-SemanticKITTI](https://github.com/placerofyming/IROS21-FIDNet-SemanticKITTI). [MISSING_PAGE_POST] Footnote 40: [https://www.nuscenes.org/nuscenes](https://www.nuscenes.org/nuscenes). [MISSING_PAGE_POST] \begin{table} \begin{tabular}{c|c|c c c c c c c c|c} \hline \hline \multicolumn{1}{c|}{**Method**} & **mCE**\(\downarrow\) & **Fog** & **Wet** & **Snow** & **Motion** & **Beam** & **Cross** & **Echo** & **Sensor** & **mIoU**\(\uparrow\) \\ \hline \hline MinkUNet\({}_{18}\)\({}^{\dagger}\)[11] & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(62.76\) \\ \hline SqueezeSeg [69] & \(164.87\) & \(183.89\) & \(158.01\) & \(165.45\) & \(122.35\) & \(171.68\) & \(188.07\) & \(158.74\) & \(170.81\) & \(31.61\) \\ SqueezeSegV2 [70] & \(152.45\) & \(168.50\) & \(141.23\) & \(154.64\) & \(115.16\) & \(155.24\) & \(176.00\) & \(145.27\) & \(163.52\) & \(41.28\) \\ RangeNet\({}_{21}\)[40] & \(136.33\) & \(156.27\) & \(128.49\) & \(133.93\) & \(102.62\) & \(141.58\) & \(148.87\) & \(128.29\) & \(150.58\) & \(47.15\) \\ RangeNet\({}_{33}\)[40] & \(130.66\) & \(144.28\) & \(123.73\) & \(128.38\) & \(104.20\) & \(135.53\) & \(129.43\) & \(125.81\) & \(153.88\) & \(50.29\) \\ SalsaNext [12] & \(116.14\) & \(147.54\) & \(112.06\) & \(116.55\) & \(77.62\) & \(115.32\) & \(143.52\) & \(114.04\) & \(102.47\) & \(55.80\) \\ FIDNet [85] & \(113.81\) & \(127.67\) & \(105.13\) & \(107.71\) & \(88.88\) & \(116.03\) & \(121.32\) & \(113.74\) & \(130.03\) & \(58.80\) \\ CENet [10] & \(103.41\) & \(129.84\) & \(92.72\) & \(99.23\) & \(70.50\) & \(101.24\) & \(131.13\) & \(102.26\) & \(100.39\) & \(62.55\) \\ \hline PolarNet [84] & \(118.56\) & \(138.82\) & \(107.09\) & \(108.26\) & \(86.81\) & \(105.08\) & \(178.13\) & \(112.00\) & \(112.25\) & \(58.17\) \\ \hline KPConv [64] & \(\underline{99.54}\) & \(103.20\) & \(\underline{91.94}\) & \(98.14\) & \(110.76\) & \(97.64\) & \(111.91\) & \(97.34\) & \(85.43\) & 
\(62.17\) \\ PIDS\({}_{1.2\times}\)[83] & \(104.13\) & \(118.06\) & \(98.94\) & \(109.46\) & \(114.83\) & \(103.18\) & \(103.94\) & \(96.97\) & \(87.64\) & \(63.25\) \\ PIDS\({}_{2.0\times}\)[83] & \(101.20\) & \(110.61\) & \(95.70\) & \(104.64\) & \(115.55\) & \(98.56\) & \(102.23\) & \(97.54\) & \(84.76\) & \(64.55\) \\ WaffleIn [45] & \(109.54\) & \(123.45\) & \(90.09\) & \(108.52\) & \(99.85\) & \(93.22\) & \(186.08\) & \(\mathbf{90.96}\) & \(\underline{84.11}\) & \(\mathbf{66.04}\) \\ \hline MinkUNet\({}_{34}\)[11] & \(100.61\) & \(105.28\) & \(99.39\) & \(106.66\) & \(98.69\) & \(97.64\) & \(\underline{99.09}\) & \(99.01\) & \(98.33\) & \(63.78\) \\ Cylinder3Dspc [89] & \(103.25\) & \(142.53\) & \(92.48\) & \(113.57\) & \(70.89\) & \(96.98\) & \(105.66\) & \(104.21\) & \(99.68\) & \(63.42\) \\ Cylinder3Dspc [89] & \(103.13\) & \(142.51\) & \(101.28\) & \(116.89\) & \(61.66\) & \(98.88\) & \(111.40\) & \(90.91\) & \(93.38\) & \(61.00\) \\ \hline SPVCNN\({}_{18}\)[63] & \(100.30\) & \(101.25\) & \(100.02\) & \(103.98\) & \(97.60\) & \(99.20\) & \(100.58\) & \(99.63\) & \(100.19\) & \(62.47\) \\ SPVCNN\({}_{34}\)[63] & \(\mathbf{99.16}\) & \(\mathbf{98.50}\) & \(100.67\) & \(101.99\) & \(97.81\) & \(98.99\) & \(\mathbf{98.42}\) & \(98.82\) & \(98.11\) & \(63.22\) \\ RPVNet [74] & \(111.74\) & \(118.65\) & \(100.98\) & \(104.60\) & \(78.58\) & \(106.43\) & \(185.69\) & \(99.21\) & \(99.78\) & \(63.75\) \\ CPGNet [35] & \(107.34\) & \(140.97\) & \(92.61\) & \(104.32\) & \(\mathbf{61.05}\) & \(\mathbf{90.91}\) & \(195.63\) & \(94.97\) & \(\mathbf{78.24}\) & \(61.50\) \\ 2DPASS [75] & \(106.14\) & \(134.92\) & \(\mathbf{85.46}\) & \(110.17\) & \(62.91\) & \(94.37\) & \(171.72\) & \(96.91\) & \(92.66\) & \(\underline{64.61}\) \\ GFNet [46] & \(108.68\) & \(131.34\) & \(94.39\) & \(\mathbf{92.66}\) & \(61.73\) & \(98.56\) & \(198.90\) & \(98.24\) & \(93.64\) & \(63.00\) \\ \hline \hline \end{tabular} \end{table} Table 8: [Complete Results] The **Corruption Error (CE)** of each method on _SemanticKITTI-C_. **Bold**: Best in column. Underline: Second best in column. All scores are given in percentage (\(\%\)). **Dark** : Best in row. **Red** : Worst in row. Symbol \({}^{\dagger}\) denotes the baseline model adopted in calculating the CE scores. 
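When reading the tables in this section, it may help to recall how the per-corruption scores relate to raw accuracies. The sketch below assumes the ImageNet-C-style convention that the baseline symbol \({}^{\dagger}\) in the caption above suggests, namely errors averaged over the three severity levels and normalised by the baseline model, and resilience normalised by the model's accuracy on the clean set; the exact definitions are given in the main body of the paper and may differ in detail.

```python
def corruption_error(acc_levels, baseline_acc_levels):
    """CE for one corruption type: mean error over the severity levels,
    normalised by the baseline model's mean error (accuracies in [0, 1])."""
    err = sum(1.0 - a for a in acc_levels)
    err_base = sum(1.0 - a for a in baseline_acc_levels)
    return 100.0 * err / err_base

def resilience_rate(acc_levels, clean_acc):
    """RR for one corruption type: mean accuracy over the severity levels,
    relative to the same model's accuracy on the clean set."""
    return 100.0 * sum(acc_levels) / (len(acc_levels) * clean_acc)

# mCE and mRR then average these per-corruption scores over the eight corruption types
```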
\begin{table} \begin{tabular}{c|c|c c c c c c c|c} \hline \hline \multicolumn{1}{c|}{**Method**} & **mRR**\(\uparrow\) & **Fog** & **Wet** & **Snow** & **Motion** & **Beam** & **Cross** & **Echo** & **Sensor** & **mIoU**\(\uparrow\) \\ \hline \hline SqueezeSeg [69] & \begin{table} \begin{tabular}{r|c|c|c|c|c|c|c|c|c|c} \hline \hline **Method** & **mCE \(\downarrow\)** & **mRR \(\uparrow\)** & **mIoU \(\uparrow\)** & **Fog** & **Wet** & **Snow** & **Motion** & **Beam** & **Cross** & **Echo** & **Sensor** \\ \hline \hline SqueezeSeg [69] & \(164.87\) & \(66.81\) & \(31.61\) & \(18.85\) & \(27.30\) & \(22.70\) & \(17.93\) & \(25.01\) & \(21.65\) & \(27.66\) & \(7.85\) \\ SqueezeSegV2 [70] & \(152.45\) & \(65.29\) & \(41.28\) & \(25.64\) & \(35.02\) & \(27.75\) & \(22.75\) & \(32.19\) & \(26.68\) & \(33.80\) & \(11.78\) \\ RangeNet\({}_{21}\)[40] & \(136.33\) & \(73.42\) & \(47.15\) & \(31.04\) & \(40.88\) & \(37.43\) & \(31.16\) & \(38.16\) & \(37.98\) & \(41.54\) & \(18.76\) \\ RangeNet\({}_{53}\)[40] & \(130.66\) & \(73.59\) & \(50.29\) & \(36.33\) & \(43.07\) & \(40.02\) & \(30.10\) & \(40.80\) & \(46.08\) & \(42.67\) & \(16.98\) \\ SalsaNext [12] & \(116.14\) & \(80.51\) & \(55.80\) & \(34.89\) & \(48.44\) & \(45.55\) & \(47.93\) & \(49.63\) & \(40.21\) & \(48.03\) & \(44.72\) \\ FIDNet [85] & \(113.81\) & \(76.99\) & \(58.80\) & \(43.66\) & \(51.63\) & \(49.68\) & \(40.38\) & \(49.32\) & \(49.46\) & \(48.17\) & \(29.85\) \\ CENet [10] & \(103.41\) & \(81.29\) & \(62.55\) & \(42.70\) & \(57.34\) & \(53.64\) & \(52.71\) & \(55.78\) & \(45.37\) & \(53.40\) & \(45.84\) \\ \hline PolarNet [84] & \(118.56\) & \(74.98\) & \(58.17\) & \(38.74\) & \(50.73\) & \(49.42\) & \(41.77\) & \(54.10\) & \(25.79\) & \(48.96\) & \(39.44\) \\ \hline KPConv [64] & \(99.54\) & \(82.90\) & \(62.17\) & \(54.46\) & \(57.70\) & \(54.15\) & \(25.70\) & \(57.35\) & \(53.38\) & \(55.64\) & \(53.91\) \\ PIDS\({}_{1.2\times}\)[83] & \(104.13\) & \(77.94\) & \(63.25\) & \(47.90\) & \(54.48\) & \(48.86\) & \(22.97\) & \(54.93\) & \(56.70\) & \(55.81\) & \(52.72\) \\ PIDS\({}_{2.0\times}\)[83] & \(101.20\) & \(78.42\) & \(64.55\) & \(51.19\) & \(55.97\) & \(51.11\) & \(22.49\) & \(56.95\) & \(57.41\) & \(55.55\) & \(54.27\) \\ WaffleInform [45] & \(109.54\) & \(72.18\) & \(\mathbf{66.04}\) & \(45.52\) & \(58.55\) & \(49.30\) & \(30.32\) & \(59.28\) & \(22.48\) & \(\mathbf{58.55}\) & \(\underline{54.62}\) \\ \hline MinkUNet\({}_{18}\)[11] & \(100.00\) & \(81.90\) & \(62.76\) & \(55.87\) & \(53.99\) & \(53.28\) & \(32.92\) & \(56.32\) & \(58.34\) & \(54.43\) & \(46.05\) \\ MinkUNet\({}_{34}\)[11] & \(100.61\) & \(80.22\) & \(63.78\) & \(53.54\) & \(54.27\) & \(50.17\) & \(33.80\) & \(57.35\) & \(58.38\) & \(54.88\) & \(46.95\) \\ Cylinder3Dpx [89] & \(103.25\) & \(80.08\) & \(63.42\) & \(37.10\) & \(57.45\) & \(46.94\) & \(52.45\) & \(57.64\) & \(55.98\) & \(52.51\) & \(46.22\) \\ Cylinder3Dpx [89] & \(103.13\) & \(\mathbf{83.90}\) & \(61.00\) & \(37.11\) & \(53.40\) & \(45.39\) & \(58.64\) & \(56.81\) & \(53.59\) & \(54.88\) & \(49.62\) \\ \hline SPVCNN\({}_{18}\)[63] & \(100.30\) & \(82.15\) & \(62.47\) & \(55.32\) & \(53.98\) & \(51.42\) & \(34.53\) & \(56.67\) & \(58.10\) & \(54.60\) & \(45.95\) \\ SPVCNN\({}_{34}\)[63] & \(\mathbf{99.16}\) & \(82.01\) & \(63.22\) & \(\mathbf{56.53}\) & \(53.68\) & \(52.35\) & \(34.39\) & \(56.76\) & \(\mathbf{59.00}\) & \(54.97\) & \(47.07\) \\ RPVNet [74] & \(111.74\) & \(73.86\) & \(63.75\) & \(47.64\) & \(53.54\) & \(51.13\) & \(47.29\) & \(53.51\) & \(22.64\) & \(54.79\) & 
\(46.17\) \\ CPGNet [35] & \(107.34\) & \(81.05\) & \(61.50\) & \(37.79\) & \(57.39\) & \(51.26\) & \(\mathbf{59.05}\) & \(\mathbf{60.29}\) & \(18.50\) & \(\underline{56.72}\) & \(\mathbf{57.79}\) \\ 2DPASS [75] & \(106.14\) & \(77.50\) & \(\underline{64.61}\) & \(40.46\) & \(\mathbf{60.68}\) & \(48.53\) & \(57.80\) & \(58.78\) & \(28.46\) & \(55.84\) & \(50.01\) \\ GFNet [46] & \(108.68\) & \(77.92\) & \(63.00\) & \(42.04\) & \(56.57\) & \(\mathbf{56.71}\) & \(58.59\) & \(56.95\) & \(17.14\) & \(55.23\) & \(49.48\) \\ \hline \hline \end{tabular} \end{table} Table 10: [Complete Results] The **Intersection-over-Union (IoU)** of each method on _SemanticKITTI-C_. **Bold**: Best in column. _Underline_: Second best in column. All scores are given in percentage (\(\%\)). _Dark_ : Best in row. _Red_ : Worst in row. \begin{table} \begin{tab \begin{table} \begin{tabular}{r|c|c|c|c|c|c|c|c|c|c|c} \hline \hline **Method** & **mCE \(\downarrow\)** & **Fog** & **Wet** & **Snow** & **Motion** & **Beam** & **Cross** & **Echo** & **Sensor** & **mIoU \(\uparrow\)** \\ \hline \hline MinkUNet\({}_{34}\)[11] & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(75.76\) \\ \hline FIDNet [85] & \(122.42\) & \(75.93\) & \(122.58\) & \(68.78\) & \(192.03\) & \(164.84\) & \(57.95\) & \(141.66\) & \(155.56\) & \(71.38\) \\ CENet [10] & \(112.79\) & \(71.16\) & \(115.48\) & \(64.31\) & \(156.67\) & \(159.03\) & \(53.27\) & \(129.08\) & \(153.35\) & \(73.28\) \\ \hline PolarNet [84] & \(115.09\) & \(90.10\) & \(115.33\) & \(58.98\) & \(208.19\) & \(121.07\) & \(80.67\) & \(128.17\) & \(118.23\) & \(71.37\) \\ \hline WaffleIron [45] & \(106.73\) & \(94.76\) & \(99.92\) & \(84.51\) & \(152.35\) & \(110.65\) & \(91.09\) & \(106.41\) & \(114.15\) & \(76.07\) \\ \hline MinkUNet\({}_{34}\)[11] & \(96.37\) & \(92.95\) & \(96.09\) & \(104.78\) & \(93.05\) & \(95.04\) & \(96.27\) & \(96.88\) & \(95.90\) & \(76.90\) \\ Cylinder3D\({}_{\text{PSC}}\)[89] & \(111.84\) & \(86.60\) & \(104.68\) & \(70.29\) & \(217.47\) & \(113.00\) & \(75.67\) & \(109.21\) & \(117.78\) & \(76.15\) \\ Cylinder3D\({}_{\text{PSC}}\)[89] & \(105.56\) & \(83.22\) & \(111.08\) & \(69.74\) & \(165.28\) & \(113.95\) & \(74.42\) & \(110.67\) & \(116.15\) & \(73.54\) \\ \hline SPVCNN\({}_{18}\)[63] & \(106.65\) & \(88.42\) & \(105.56\) & \(98.78\) & \(156.48\) & \(110.11\) & \(86.04\) & \(104.26\) & \(103.55\) & \(74.40\) \\ SPVCNN\({}_{34}\)[63] & \(97.45\) & \(95.21\) & \(99.50\) & \(97.32\) & \(95.34\) & \(98.73\) & \(97.92\) & \(96.88\) & \(98.74\) & \(76.57\) \\ 2DPASS [75] & \(98.56\) & \(76.57\) & \(\mathbf{89.08}\) & \(76.35\) & \(142.65\) & \(102.23\) & \(89.39\) & \(101.77\) & \(110.44\) & \(\mathbf{77.92}\) \\ GFNet [46] & \(\mathbf{92.55}\) & \(\mathbf{65.60}\) & \(\underline{93.83}\) & \(\mathbf{47.23}\) & \(152.46\) & \(112.94\) & \(\mathbf{45.25}\) & \(105.45\) & \(117.64\) & \(76.79\) \\ \hline \hline \end{tabular} \end{table} Table 14: [Complete Results] The **Corruption Error (CE)** of each method on _nuScenes-C (Seg3D)_. **Bold**: Best in column. **Underline**: Second best in column. All scores are given in percentage (\(\%\)). **Dark**: Best in row. **Red**: Worst in row. 
\begin{table} \begin{tabular}{r|c|c|c|c|c|c|c|c|c|c} \hline \hline **Method** & **mRE \(\downarrow\)** & **Fog** & **Wet** & **Snow** & **Motion** & **Beam** & **Cross** & **Echo** & **Sensor** & **mIoU \(\uparrow\)** \\ \hline \hline FIDNet [85] & \(73.33\) & \(90.78\) & \(95.29\) & \(82.61\) & \(68.51\) & \(67.44\) & \(80.48\) & \(68.31\) & \(33.20\) & \(71.38\) \\ CENet [10] & \(76.04\) & \(\mathbf{91.44}\) & \(95.35\) & \(84.12\) & \(79.57\) & \(68.19\) & \(\underline{83.09}\) & \(72.75\) & \(33.82\) & \(73.28\) \\ \hline PolarNet [84] & \(76.34\) & \(81.59\) & \(97.95\) & \(90.82\) & \(62.49\) & \(86.75\) & \(57.12\) & \(75.16\) & \(58.86\) & \(71.37\) \\ \hline WaffleIron [45] & \(72.78\) & \(73.71\) & \(97.19\) & \(65.19\) & \(78.16\) & \(85.70\) & \(43.54\) & \(80.86\) & \(57.85\) & \(76.07\) \\ \hline MinkUNet\({}_{18}\)[11] & \(74.44\) & \(70.80\) & \(97.56\) & \(53.26\) & \(96.87\) & \(90.47\) & \(35.08\) & \(84.25\) & \(67.25\) & \(75.76\) \\ \hline MinkUNet\({}_{34}\)[11] & \(75.08\) & \(74.01\) & \(97.44\) & \(48.76\) & \(\mathbf{97.84}\) & \(\mathbf{91.16}\) & \(38.13\) & \(84.47\) & \(\mathbf{68.87}\) & \(76.90\) \\ Cylinder3D\({}_{\text{PSC}}\)[89] & \(72.94\) & \(78.59\) & \(95.46\) & \(76.26\) & \(55.33\) & \(84.64\) & \(58.36\) & \(79.45\) & \(55.46\) & \(76.15\) \\ Cylinder3D\({}_{\text{PSC}}\)[89] & \(78.08\) & \(83.52\) & \(96.57\) & \(79.41\) & \(76.18\) & \(87.23\) & \(61.68\) & \(81.55\) & \(85.51\) & \(73.54\) \\ \hline SPVCNN\({}_{18}\)[63] & \(74.70\) & \(79.31\) & \(97.39\) & \(55.22\) & \(78.44\) & \(87.85\) & \(49.50\) & \(83.72\) & \(66.14\) & \(74.40\) \\ SPVCNN\({}_{34}\)[63] & \(75.10\) & \(72.95\) & \(96.70\) & \(54.79\) & \(97.47\) & \(90.04\) & \(36.71\) & \(\mathbf{84.84}\) & \(67.35\) & \(76.57\) \\ 2DPASS [75] & \(75.24\) & \(82.78\) & \(\mathbf{98.51}\) \begin{table} \begin{tabular}{r|c|c|c|c|c|c|c|c|c|c} \hline \hline **Method** & **mCE**\(\downarrow\) & **mRR**\(\uparrow\) & **mIoU**\(\uparrow\) & **Fog** & **Wet** & **Snow** & **Motion** & **Beam** & **Cross** & **Echo** & **Sensor** \\ \hline \hline FIDNet [85] & \(122.42\) & \(73.33\) & \(71.38\) & \(64.80\) & \(68.02\) & \(58.97\) & \(48.90\) & \(48.14\) & \(57.45\) & \(48.76\) & \(23.70\) \\ CENet [10] & \(112.79\) & \(76.04\) & \(73.28\) & \(67.01\) & \(69.87\) & \(61.64\) & \(58.31\) & \(49.97\) & \(60.89\) & \(53.31\) & \(24.78\) \\ \hline PolarNet [84] & \(115.09\) & \(76.34\) & \(71.37\) & \(58.23\) & \(69.91\) & \(64.82\) & \(44.60\) & \(61.91\) & \(40.77\) & \(53.64\) & \(42.01\) \\ \hline WaffleInron [45] & \(106.73\) & \(72.78\) & \(76.07\) & \(56.07\) & \(73.93\) & \(49.59\) & \(59.46\) & \(65.19\) & \(33.12\) & \(61.51\) & \(44.01\) \\ \hline MinkUNet\({}_{18}\)[11] & \(100.00\) & \(74.44\) & \(75.76\) & \(53.64\) & \(73.91\) & \(40.35\) & \(73.39\) & \(68.54\) & \(26.58\) & \(63.83\) & \(50.95\) \\ MinkUNet\({}_{34}\)[11] & \(96.37\) & \(75.08\) & \(76.90\) & \(56.91\) & \(74.93\) & \(37.50\) & \(75.24\) & \(70.10\) & \(29.32\) & \(\mathbf{64.96}\) & \(\mathbf{52.96}\) \\ Cylinder3Dpsc [89] & \(111.84\) & \(72.94\) & \(76.15\) & \(59.85\) & \(72.69\) & \(58.07\) & \(42.13\) & \(64.45\) & \(44.44\) & \(60.50\) & \(42.23\) \\ Cylinder3Dpsc [89] & \(105.56\) & \(78.08\) & \(73.54\) & \(61.42\) & \(71.02\) & \(58.40\) & \(56.02\) & \(64.15\) & \(45.36\) & \(59.97\) & \(43.03\) \\ \hline SPVCNN\({}_{18}\)[63] & \(106.65\) & \(74.70\) & \(74.40\) & \(59.01\) & \(72.46\) & \(41.08\) & \(58.36\) & \(65.36\) & \(36.83\) & \(62.29\) & \(49.21\) \\ SPVCNN\({}_{34}\)[63] & \(97.45\) & 
\(75.10\) & \(76.57\) & \(55.86\) & \(74.04\) & \(41.95\) & \(74.63\) & \(68.94\) & \(28.11\) & \(\mathbf{64.96}\) & \(51.57\) \\ 2DPASS [75] & \(98.56\) & \(75.24\) & \(\mathbf{77.92}\) & \(64.50\) & \(\mathbf{76.76}\) & \(54.46\) & \(62.04\) & \(67.84\) & \(34.37\) & \(63.19\) & \(45.83\) \\ GFNet [46] & \(\mathbf{92.55}\) & \(\mathbf{83.31}\) & \(76.79\) & \(\mathbf{69.59}\) & \(75.52\) & \(\mathbf{71.83}\) & \(59.43\) & \(64.47\) & \(\mathbf{66.78}\) & \(61.86\) & \(42.30\) \\ \hline \hline \end{tabular} \end{table} Table 16: [Complete Results] The **Intersection-over-Union (IoU)** of each method on _nuScenes-C (Seg3D)_. **Bold**: Best in column. Underline: Second best in column. All scores are given in percentage (\(\%\)). Dark : Best in row. Red : Worst in row. \begin{table} \begin{tabular}{r|c|c|c|c|c|c|c|c|c} \hline \hline **Method** & **mCE**\(\downarrow\) & **Fog** & **Wet** & **Snow** & **Motion** & **Beam** & **Cross** & **Echo** & **Sensor** & **NDS**\(\uparrow\) \\ \hline \hline CenterPoint-PP\({}^{\dagger}\)[81] & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(45.99\) \\ \hline SECOND-MH [76] & \(97.50\) & \(95.40\) & \(96.01\) & \(96.09\) & \(100.81\) & \(99.26\) & \(92.16\) & \(97.64\) & \(102.64\) & \(47.87\) \\ PointPillars-MH [32] & \(102.90\) & \(102.85\) & \(104.56\) & \(102.53\) & \(106.44\) & \(102.39\) & \(100.94\) & \(102.42\) & \(\mathbf{101.05}\) & \(43.33\) \\ CenterPoint-LR [81] & \(98.74\) & \(97.88\) & \(96.46\) & \(97.70\) & \(102.15\) & \(101.06\) & \(95.54\) & \(\mathbf{95.60}\) & \(103.53\) & \(49.72\) \\ CenterPoint-HR [81] & \(\mathbf{95.80}\) & \(\mathbf{93.01}\) & \(\mathbf{92.01}\) & \(\mathbf{94.91}\) & \(\mathbf{97.56}\) & \(\mathbf{98.38}\) & \(\mathbf{91.11}\) & \(96.21\) & \(103.23\) & \(\mathbf{50.31}\) \\ \hline \hline \end{tabular} \end{table} Table 17: [Complete Results] The **Corruption Error (CE)** of each method on _nuScenes-C (Det3D)_. **Bold**: Best in column. All scores are given in percentage (\(\%\)). Dark : Best in row. Red : Worst in row. 
\begin{table} \begin{tabular}{r|c|c|c|c|c|c|c|c|c|c} \hline \hline **Method** & **mCE**\(\downarrow\) & **mRR**\(\uparrow\) & **NDS**\(\uparrow\) & **Fog** & **Wet** & **Snow** & **Motion** & **Beam** & **Cross** & **Echo** & **Sensor** \\ \hline \hline PointPillars-MH [32] & \(102.90\) & \(\mathbf{77.24}\) & \(43.33\) & \(33.16\) & \(42.92\) & \(29.49\) & \(38.04\) & \(33.61\) & \(34.61\) & \(30.90\) & \(25.00\) \\ SECOND-MH [76] & \(97.50\) & \(76.96\) & \(47.87\) & \(38.00\) & \(47.59\) & \(33.92\) & \(41.32\) & \(35.64\) & \(40.30\) & \(34.12\) & \(23.82\) \\ CenterPoint-PP [81] & \(100.00\) & \(76.68\) & \(45.99\) & \(35.01\) & \(45.41\) & \(31.23\) & \(41.79\) & \(35.1 \begin{table} \begin{tabular}{c|c|c c c c c c c c|c} \hline \hline **Method** & **mCE \(\downarrow\)** & **Fog** & **Wet** & **Snow** & **Motion** & **Beam** & **Cross** & **Echo** & **Sensor** & **mIoU \(\uparrow\)** \\ \hline \hline MinkUNet\({}_{18}\)[11] & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(69.06\) \\ \hline MinkUNet\({}_{34}\)[11] & \(\mathbf{96.21}\) & \(\mathbf{96.00}\) & \(\mathbf{94.90}\) & \(99.53\) & \(\mathbf{96.20}\) & \(\mathbf{95.43}\) & \(\mathbf{96.79}\) & \(\mathbf{96.75}\) & \(\mathbf{94.08}\) & \(\mathbf{70.15}\) \\ Cylinder3DTrSc [89] & \(106.02\) & \(111.81\) & \(104.08\) & \(\mathbf{98.39}\) & \(110.30\) & \(105.77\) & \(106.87\) & \(108.24\) & \(102.69\) & \(65.93\) \\ \hline SPVCNN\({}_{18}\)[63] & \(103.60\) & \(105.63\) & \(104.79\) & \(\mathbf{99.17}\) & \(105.41\) & \(104.85\) & \(\mathbf{99.74}\) & \(104.28\) & \(104.91\) & \(67.35\) \\ SPVCNN\({}_{34}\)[63] & \(98.72\) & \(99.67\) & \(\mathbf{96.36}\) & \(100.43\) & \(100.00\) & \(98.55\) & \(101.93\) & \(97.87\) & \(94.97\) & \(69.01\) \\ \hline \hline \end{tabular} \end{table} Table 21: [Complete Results] The **Resilience Rate (RR)** of each method on _WOC-C (Seg3D)_. **Bold**: Best in column. Underline: Second best in column. All scores are given in percentage (\(\%\)). \(\box \begin{table} \begin{tabular}{c|c|c c c c c c c|c|c} \hline \hline **Method** & **mCE \(\downarrow\)** & **Fog** & **Wet** & **Snow** & **Motion** & **Beam** & **Cross** & **Echo** & **Sensor** & **Acc** \\ \hline \hline MinkUNet\({}_{18}\)[11], ICA & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(62.76\) \\ \hline **Ours**, ICA & \(96.10\) & \(104.44\) & \(97.09\) & \(106.31\) & \(85.42\) & \(97.34\) & \(97.30\) & \(95.33\) & \(86.08\) & \(62.70\) \\ \hline \hline MinkUNet\({}_{18}\)[11], OCA & \(86.09\) & \(87.67\) & \(77.90\) & \(82.62\) & \(73.82\) & \(87.66\) & \(97.88\) & \(95.99\) & \(85.15\) & \(69.21\) \\ \hline **Ours**, OCA & \(83.23\) & \(77.01\) & \(101.23\) & \(72.53\) & \(75.73\) & \(79.11\) & \(97.78\) & \(87.56\) & \(66.85\) & \(68.13\) \\ \hline \hline CenterPoint [81] & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(100.00\) & \(63.59\) \\ \hline **Ours** & \(99.05\) & \(99.77\) & \(99.52\) & \(100.00\) & \(100.00\) & \(97.26\) & \(99.14\) & \(99.70\) & \(97.02\) & \(63.56\) \\ \hline \hline \end{tabular} \end{table} Table 26: The **Corruption Error (CE)** comparisons between the proposed density-insensitive training framework and the baseline models [11, 81], on _SemanticKITTI-C_ and _WOD-C (Det3D)_, respectively. 
The task-specific accuracy is mean Intersection-over-Union (mIoU) for 3D semantic segmentation and mean Average Precision (mAPH) for 3D object detection. \begin{table} \begin{tabular}{c|c|c|c c c c c c c c} \hline \hline **Method** & **mCE \(\downarrow\)** & **mRR \(\uparrow\)** & **mAPH \(\uparrow\)** & **Fog** & **Wet** & **Snow** & **Motion** & **Beam** & **Cross** & **Echo** & **Sensor** \\ \hline \hline PointPillars [32] & \(127.53\) & \(81.23\) & \(50.17\) & \(31.24\) & \(49.75\) & \(46.07\) & \(34.93\) & \(43.93\) & \(39.80\) & \(43.41\) & \(36.67\) \\ SECOND [76] & \(121.43\) & \(81.12\) & \(53.37\) & \(32.89\) & \(52.99\) & \(47.20\) & \(35.98\) & \(44.72\) & \(49.28\) & \(46.84\) & \(36.43\) \\ PV-RCNN [53] & \(104.90\) & \(82.43\) & \(61.27\) & \(37.32\) & \(61.27\) & \(60.38\) & \(42.78\) & \(49.53\) & \(59.59\) & \(54.43\) & \(38.73\) \\ CenterPoint [81] & \(100.00\) & \(83.30\) & \(63.59\) & \(43.06\) & \(62.84\) & \(58.59\) & \(43.53\) & \(54.41\) & \(60.32\) & \(57.01\) & \(43.98\) \\ PV-RCNN++ [55] & \(\mathbf{91.60}\) & \(\mathbf{84.14}\) & \(\mathbf{67.45}\) & \(\mathbf{45.50}\) & \(\mathbf{67.18}\) & \(\mathbf{62.71}\) & \(\mathbf{47.35}\) & \(\mathbf{57.83}\) & \(\mathbf{64.71}\) & \(\mathbf{60.96}\) & \(\mathbf{47.77}\) \\ \hline \hline \end{tabular} \end{table} Table 25: [Complete Results] The **Average Precision (APH)** of each method on _WOD-C (Det3D)_. **Bold**: Best in column. Underline: Second best in column. All scores are given in percentage (\(\%\)). \(|\)**Dark**: Best in row. Red : Worst in row. Figure 18: Corruption sensitivity analysis of the _voxel size_ for the baseline LiDAR semantic segmentation model [11]. The experiments are conducted on: a) the _nuScenes-C (Seg3D)_ dataset; and b) the _WOD-C (Seg3D_) dataset. Different corruptions exhibit variances under certain configurations. Figure 19: Visual examples of each corruption type under three severity levels in our _SemanticKITTI_-C dataset. Figure 20: Visual examples of each corruption type under three severity levels in our _nuScenes-C_ dataset. Figure 21: **Qualitative results of SECOND [76] under each of the eight corruptions in _WOD-C (Det3D)_. The green boxes represent the groundtruth, while the red boxes are the predictions. Best viewed in colors.** Figure 22: **Qualitative results of CenterPoint [81] under each of the eight corruptions in _WOD-C (Det3D)_. The green boxes represent the groundtruth, while the red boxes are the predictions. Best viewed in colors. Figure 23: **Qualitative comparisons (error maps) of three LiDAR segmentation models (RPVNet [74], SPVCNN [63], Wafflelron [45]) under the _fog_, _wet ground_, _snow_, and _motion blur_ corruptions, in _SemanticKITTI-C_. To highlight the differences, the correct / **incorrect** predictions are painted in gray / **red**, respectively. Each scene is visualized from the LiDAR bird’s eye view and covers a \(50\)m by \(50\)m region, centered around the ego-vehicle. Best viewed in colors. ## Appendix A Figure 24: **Qualitative comparisons (error maps) of three LiDAR segmentation models (RPVNet [74], SPVCNN [63], Waffelron [45]) under the _beam missing_, _crosstalk_, _incomplete echo_, and _cross-sensor_ corruptions, in _SemanticKITTI-C_. To highlight the differences, the correct / **incorrect** predictions are painted in gray / **red**, respectively. Each scene is visualized from the LiDAR bird’s eye view and covers a \(50\)m by \(50\)m region, centered around the ego-vehicle. Best viewed in colors. 
Figure 25: Qualitative comparisons (error maps) of three LiDAR segmentation models (RangeNet++ [40], PolarNet [84], Cylinder3D [89]) under the _fog_, _wet ground_, _snow_, and _motion blur_ corruptions, in _SemanticKITTI-C_. To highlight the differences, the correct / **incorrect** predictions are painted in gray / **red**, respectively. Each scene is visualized from the LiDAR bird’s eye view and covers a \(50\)m by \(50\)m region, centered around the ego-vehicle. Best viewed in colors. Figure 26: Qualitative comparisons (error maps) of three LiDAR segmentation models (RangeNet++ [40], PolarNet [84], Cylinder3D [89]) under the _beam missing_, _crosstalk_, _incomplete echo_, and _cross-sensor_ corruptions, in _SemanticKITTI-C_. To highlight the differences, the correct / **incorrect** predictions are painted in gray / **red**, respectively. Each scene is visualized from the LiDAR bird’s eye view and covers a \(50\)m by \(50\)m region, centered around the ego-vehicle. Best viewed in colors.
2306.12231
Predicting protein variants with equivariant graph neural networks
Pre-trained models have been successful in many protein engineering tasks. Most notably, sequence-based models have achieved state-of-the-art performance on protein fitness prediction while structure-based models have been used experimentally to develop proteins with enhanced functions. However, there is a research gap in comparing structure- and sequence-based methods for predicting protein variants that are better than the wildtype protein. This paper aims to address this gap by conducting a comparative study between the abilities of equivariant graph neural networks (EGNNs) and sequence-based approaches to identify promising amino-acid mutations. The results show that our proposed structural approach achieves a competitive performance to sequence-based methods while being trained on significantly fewer molecules. Additionally, we find that combining assay labelled data with structure pre-trained models yields similar trends as with sequence pre-trained models. Our code and trained models can be found at: https://github.com/semiluna/partIII-amino-acid-prediction.
Antonia Boca, Simon Mathis
2023-06-21T12:44:52Z
http://arxiv.org/abs/2306.12231v2
# Predicting protein variants with ###### Abstract Pre-trained models have been successful in many protein engineering tasks. Most notably, sequence-based models have achieved state-of-the-art performance on protein fitness prediction while structure-based models have been used experimentally to develop proteins with enhanced functions. However, there is a research gap in comparing structure- and sequence-based methods for predicting protein variants that are better than the wildtype protein. This paper aims to address this gap by conducting a comparative study between the abilities of equivariant graph neural networks (EGNNs) and sequence-based approaches to identify promising amino-acid mutations. The results show that our proposed structural approach achieves a competitive performance to sequence-based methods while being trained on significantly fewer molecules. Additionally, we find that combining assay labelled data with structure pre-trained models yields similar trends as with sequence pre-trained models. Our code and trained models can be found at: [https://github.com/semiluna/partIII-amino-acid-prediction](https://github.com/semiluna/partIII-amino-acid-prediction). Machine Learning, ICML ## 1 Introduction In recent years, pre-trained models have garnered significant attention in the field of protein representation. Notably, models have been developed to deal with both the sequence and structure modalities of proteins (Rives et al., 2021; Elnaggar et al., 2022; Zhang et al., 2023). These models have demonstrated their potential in various applications such as protein fitness prediction (Meier et al., 2021; Notin et al., 2022) while being employed in a "zero-shot" manner, without the need for additional training data. Their success has also shown promising experimental results in protein engineering (Shroff et al., 2020; Lu et al., 2022). Additionally, Hsu et al. (2021) have observed that augmenting simple models for assay labelled data with fitness predictions extracted from pre-trained sequence models can enhance their performance. Despite the experimental success of pre-trained structural methods for protein engineering, particularly those based on predicting residues given local atom environments (Torng and Altman, 2017; Lu et al., 2022), several crucial aspects remain unexplored. Firstly, these methods have not been systematically compared with sequence-based approaches using the same datasets. Secondly, their potential to augment assay labelled data, when available, has not been evaluated. This paper aims to fill this research gap by conducting a study of the comparative performance of structure-based and sequence-based methods on predicting variants _that are better than the wildtype protein_. We compare representatives of the most successful equivariant graph neural networks (EGNNs) on the task of residue identity prediction, namely GVP (Jing et al., 2021) and EQGAT (Le et al., 2022), with representatives of the most successful sequence-based approaches: Tranception (Notin et al., 2022), ESM-1v (Meier et al., 2021) and the MSA Transformer (Rao et al., 2021). By undertaking this comparative analysis, we aim to provide insights into the performance and suitability of geometric GNNs in protein engineering, specifically in the context of predictions based on the local atomic environment. 
Our contributions are as follows: * We apply the most successful pre-training approach for structural methods (Shroff et al., 2020) to equivariant GNNs by using the ATOM3D RES dataset (Townshend et al., 2022) for residue identity prediction (Table 1); * We benchmark the resulting structure-based pre-trained models against the most successful zero-shot sequence-based approaches (Table 2). We observe that structure does not trump sequence in downstream tasks when used in this way, although the amount of available structures used during pre-training is significantly lower than the number of sequences used in training large language models; * We extend the simple combination approach for assay labelled data and pre-trained model outputs (Hsu et al., 2021) to the structure pre-trained domain. We find the same general trends as with sequence pre-trained models, as assay-labelled data quickly allows us to surpass zero-shot pre-trained sequence-based models with as few as 100 datapoints (Figure 2). ## 2 Methodology We pre-train two equivariant graph neural networks on the task of residue identity prediction, also known as the RES task (Townshend et al., 2022). We choose the Geometric Vector Perceptron (Jing et al., 2021) and the Equivariant Graph Attention Network (Le et al., 2022). While Lu et al. (2022) used 3D-CNNs to engineer plastic enzymes, Jing et al. (2021) benchmark 3D-CNNs on the RES task and show that the GVP outperforms them, so we choose to focus on this structural method instead. Table 1 shows a comparison between the reported accuracies of the two models and the accuracies achieved in this paper. We achieve a higher performance on the GVP model than originally reported in Jing et al. (2021). This jump in performance can be explained by the fact that Jing et al. (2021) only use a third of the original training dataset to train the GVP, possibly due to computational constraints. More details on our training parameters can be found in A.9. ### RES task formalism We formalise the RES classification task as follows. Consider a point-cloud atomic graph \(G=(V,E)\) with nodes \(i,j\in V\) and edges \((i\to j)\in E\). Given a node \(t\in V\) representing the \(\text{C}_{\alpha}\) of a residue in the atomic graph, we can define the _node classification function_ \(\text{RES}:\mathcal{V}\times\mathcal{G}\rightarrow\mathbb{R}^{20}\) that takes as input node \(t\) and a _masked_ atomic graph \(G_{t}\) from which we have removed the side-chain atoms of node \(t\) and returns the likelihood scores of each of the 20 naturally occurring amino-acids to be part of the side-chain of node \(t\). A more extended version of this formalism can be found in A.1. ### The scoring function We now formalise the function we use to score each amino-acid mutation in a sequence. For a wildtype protein sequence \(x_{1}\dots x_{n}\) with \(x_{i}\in\mathcal{A}=\{1,2,\dots,20\}\) we associate the point-cloud atomic graph \(G=(V,E)\) corresponding to the protein's structure. Edges are drawn between any two atoms that are less than 4.5 Å apart. Then, using the formalism defined in 2.1, the score associated with the presence of amino-acid \(a\in\mathcal{A}\) at position \(i\) can be defined as: \[S(i,a)=[\text{RES}(g(i),G_{g(i)})]_{a} \tag{1}\] where \(g:\{1,2,\dots,n\}\rightarrow\{1,2,\dots,|V|\}\) is a mapping function from positions to the index of the node representing the central \(\text{C}_{\alpha}\) of the amino-acid residue present at each position.
Here, \(G_{g(i)}\) denotes the masked graph from which we removed the side-chain attached to node \(g(i)\). Equation 1 essentially represents the score of amino-acid \(a\) for target position \(i\), associated with node \(g(i)\) in the atomic graph. Note that the true amino-acid at the same position is denoted by \(x_{i}\). ### Mutation generation Once the equivariant models have been trained on the RES task, we use them to inform the generation of single-point mutations in monomers and homo-oligomers from the ProteinGym substitutions dataset (Notin et al., 2022). For each wildtype sequence we recover its structure, mask each amino-acid residue in turn, and retrieve the scores generated by the EGNN model for each of the 20 naturally occuring amino-acids. These scores are then ranked according to two strategies to determine the most promising single-point mutations. Figure 1 illustrates this approach visually. Structure recovery.The ProteinGym substitutions dataset contains 87 molecular sequences; for each of these sequences, a number of experimentally tested mutations are scored according to their _fitness_. We evaluate our methods \begin{table} \begin{tabular}{l c c} \hline \hline Model & Reported & Our \\ & test accuracy & test accuracy \\ \hline EQGAT & 0.540 & 0.524 \\ GVP & 0.527 & **0.580** \\ \hline \hline \end{tabular} \end{table} Table 1: Classification accuracies on the ATOM3D RES dataset. Figure 1: For every sequence, we recover the structure from the PDB and mask each amino-acid in turn. We pass the masked graph through a pre-trained EGNN model to recover the score associated with each amino-acid, which we then rank. The key idea is that this pre-training allows the model to identify amino acids which seem “unusual” given their local environment and propose better fitting candidates instead. on a subset of the original dataset for which we could find either monomeric or homo-oligomeric structures. For each wildtype sequence, we recover the corresponding biological assembly from the Protein Data Bank (Berman et al., 2000). When multiple assemblies are available, we choose one at random. When assemblies are incomplete, we instead use the monomeric AlphaFold prediction (Jumper et al., 2021) if available. Otherwise, we discard the sequence. ### Mutation ranking Our approach allows us to score every possible residue mutation for each position in a sequence. Our goal is to generate meaningful mutations that have a higher chance of being bio-physically relevant, so we discard positions where the equivariant model makes the wrong prediction. A more detailed analysis of this design choice can be found in Appendix A.4. Global ranking.We rank the remaining mutations according to two strategies: _global_ and _positional_. When performing global ranking, we sort mutations in descending order of their score, regardless of their position. If we denote the single-point mutation to amino-acid \(a\) at position \(i\) by \(\mathbf{m}_{i}^{a}\), then \(\forall i,j\) and \(\forall a,b\in\mathcal{A}\) s.t. \(a\neq x_{i}\) and \(b\neq x_{j}\), we say that: \[\mathbf{m}_{i}^{a}\text{ is better than }\mathbf{m}_{j}^{b}\iff S(i,a)>S(j,b) \tag{2}\] Positional ranking.The second approach follows when we prioritise the positions we want to mutate instead of the amino-acids we mutate to. 
Formally, this can be quantified as: \[\begin{split}&\mathbf{m}_{i}^{a}\text{ is better than }\mathbf{m}_{j}^{b}\\ \iff&\Big{(}S(i,x_{i})<S(j,x_{j})\Big{)}\vee\\ &\Big{(}S(i,x_{i})=S(j,x_{j})\wedge S(i,a)>S(j,b)\Big{)}\end{split} \tag{3}\] Note that when we perform positional ranking, we only keep the 3 top mutations for each position. ### Protein fitness prediction The GVP and EQGAT trained on the ATOM3D RES task can be thought of as unsupervised models that can suggest amino-acid mutations. We extend our original approach to perform fitness prediction using a ridge regression model augmented with the positional scores generated by equivariant GNNs, in a similar manner to that introduced by Hsu et al. (2021). For a given sequence of amino-acids \(x_{1}\dots a_{i}\dots x_{n}\) with a single-point mutation at position \(i\), we embed each amino-acid using either the one-hot encoding or _AAIndex_ embeddings (Kawashima et al., 1999) on which we perform PCA to render 19-dimensional features per amino-acid. We flatten and concatenate these encodings to render feature vectors \(\mathbf{h}_{\text{one-hot}}\in\mathbb{R}^{20\times n}\) and \(\mathbf{h}_{\text{aa-index}}\in\mathbb{R}^{19\times n}\). To this feature vector we concatenate the score predicted by the GNN model for amino-acid \(a_{i}\) at position \(i\): \[\mathbf{x}_{\text{one-hot}} =[\mathbf{h}_{\text{one-hot}}\mid\mid S(i,a_{i})] \tag{4}\] \[\mathbf{x}_{\text{aa-index}} =[\mathbf{h}_{\text{aa-index}}\mid\mid S(i,a_{i})] \tag{5}\] Here, \(S(i,a_{i})\) is the same scoring function defined in Equation 1. These features are then used to train a ridge regression model to predict protein fitness using subsets of single-point mutated sequences for each of the ProteinGym DMS assays we have model scores for. ## 3 Results ### Mutation generation We generate single-point mutations for 49 out of the 87 DMS assays in the ProteinGym substitutions dataset (Notin et al., 2022). When we generate mutations, we discard any that we cannot find in the experimental dataset of the target sequence. We are interested in understanding how good our models are at suggesting mutations that are _better than the wildtype_ sequence, hence we propose three metrics through which to perform comparisons: (1) Spearman's rank correlation restricted to better than wildtype sequences, (2) the precision of the top 10 mutations, and (3) the recall of the top 10 mutations. To compute the last two metrics we only considered whether a mutation proposed by the model is better than the wildtype, disregarding its actual score. Table 2 shows the performance of our models, depending on the type of ranking used. We note that the equivariant models have a competitive performance to Tranception (Notin et al., 2022) when ranking mutations that are better than the wildtype, indicating that they represent a viable strategy for aiding the discovery process in protein engineering. Per-dataset performance metrics can be found in A.2. EGNN models require a significantly smaller number of protein structures during training in order to reach a similar ranking correlation coefficient to Tranception for mutations that are better than the wildtype. While Tranception is trained on the UniRef100 database (Suzek et al., 2015), which contains over 4 million source sequences, our models are trained on the ATOM3D RES dataset (Townshend et al., 2022), which contains fewer than 22k molecules from which local environments are sampled. 
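To make the better-than-wildtype evaluation concrete, the sketch below shows one plausible way to compute the three metrics described above, assuming model scores and measured DMS fitness values are aligned per mutation; variable names are illustrative and the exact implementation in our repository may differ.

```python
import numpy as np
from scipy.stats import spearmanr

def better_than_wt_metrics(scores, fitness, wt_fitness, k=10):
    """Illustrative metrics: Spearman restricted to better-than-wildtype
    mutations, plus precision/recall of the top-k model-ranked mutations."""
    scores, fitness = np.asarray(scores), np.asarray(fitness)
    better = fitness > wt_fitness            # ground-truth "better than WT" mask

    # (1) rank correlation on the better-than-wildtype subset only
    rho, _ = spearmanr(scores[better], fitness[better])

    # (2)/(3) top-k precision and recall: only membership in the
    # better-than-WT set matters, not the actual fitness value
    top_k = np.argsort(-scores)[:k]
    hits = better[top_k].sum()
    precision_at_k = hits / k
    recall_at_k = hits / max(better.sum(), 1)
    return rho, precision_at_k, recall_at_k
```

Here `k=10` corresponds to the top-10 precision and recall reported in Table 2.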
Structure vs Sequence. We believe EGNNs may require less training because structure is more informative than sequence for fitness prediction. While sequence-based models attend to the full protein and subsequently learn to focus on the important regions, EGNNs attend only to local environments, thus learning to identify important atoms faster. Further experiments could be run to compare the power of sequence and structure-based models when their level of training is comparable. However, we point out that training EGNNs to the same level as present state-of-the-art sequence models may be infeasible due to both data and computational constraints. Correlation to sequence-based models. As part of our analysis, we also compute the correlation between the better than wildtype predictions made by our EGNN models and Tranception. Per-model and per-dataset statistics can be found in A.3; we note that the highest rank correlation we find is **0.212**, in the case of the EQGAT model. Since these approaches seem to be weakly correlated, we believe there are improvements to be gained from ensembling both structure- and sequence-based approaches. The impact of design choices. As mentioned in Section 2.4, we discard mutations at positions where the EGNNs make the wrong prediction, as we find that incorporating these is detrimental to the overall performance (see A.4). This indicates that these structure-based models are still undertrained, with potential for improvement coming both from larger datasets and more data engineering. ### Protein fitness prediction We train 4 types of ridge regression models on each of the 49 DMS datasets separately. The baseline non-augmented model uses only features \(\mathbf{h}_{\text{one-hot}}\) or \(\mathbf{h}_{\text{aa-index}}\) defined in Section 2.5; the remaining 3 models are augmented with single-point mutation scores from GVP, EQGAT, and Tranception, respectively. For each model type and each DMS assay we first set aside 20% of the single-point mutated sequences for testing; we train the regression on increasingly larger training subsets. We repeat the process 20 times with different random subsets and report the average Spearman rank correlation on better than wildtype sequences, as seen in Figure 2. The performance on other metrics can be found in A.8. Similar to the results reported by Hsu et al. (2021), the augmented linear models allow us to surpass the baseline zero-shot fitness prediction models with as few as 100 datapoints in the case of the model augmented with EQGAT scores. While the linear model augmented with Tranception scores performs best overall, we point out that Tranception is fine-tuned to predict protein fitness, while the scores retrieved from our models merely represent the confidence in a certain amino-acid for a target position.
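As a rough illustration of the augmented regression described in Section 2.5, the sketch below builds the feature vector of Equations 4-5 from a flattened one-hot encoding concatenated with the pre-trained model score and fits a ridge regression; it is a minimal sketch under simplifying assumptions, omitting the AAIndex variant and the repeated random splits behind the reported curves.

```python
import numpy as np
from sklearn.linear_model import Ridge

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_TO_IDX = {a: i for i, a in enumerate(AMINO_ACIDS)}

def one_hot(seq):
    """Flattened 20 x n one-hot encoding of an amino-acid sequence."""
    h = np.zeros((len(seq), 20))
    h[np.arange(len(seq)), [AA_TO_IDX[a] for a in seq]] = 1.0
    return h.ravel()

def featurise(seq, gnn_score):
    """x = [h_one-hot || S(i, a_i)]: sequence encoding plus the EGNN score
    of the mutated residue (Equation 4)."""
    return np.concatenate([one_hot(seq), [gnn_score]])

def fit_augmented_ridge(mutant_seqs, gnn_scores, fitness_labels, alpha=1.0):
    """Fit the augmented linear model on assay-labelled single mutants."""
    X = np.stack([featurise(s, g) for s, g in zip(mutant_seqs, gnn_scores)])
    return Ridge(alpha=alpha).fit(X, fitness_labels)
```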
\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Model & Ranking strategy & \begin{tabular}{c} Top 10 \\ precision \\ \end{tabular} & \begin{tabular}{c} Top 10 \\ recall \\ \end{tabular} & \multicolumn{4}{c}{Spearman’s rank correlation} \\ \cline{3-6} & & & Average & Worse than WT & Better than WT \\ \hline EQGAT & Positional & 0.486 & 0.187 & 0.223 & 0.128 & 0.118 \\ EQGAT & Global & 0.491 & 0.072 & 0.262 & 0.154 & 0.157 \\ GVP & Positional & 0.462 & **0.419** & 0.106 & \(-0.009\) & **0.276** \\ GVP & Global & 0.426 & 0.100 & 0.202 & 0.128 & \(-0.011\) \\ \hline Tranception & & 0.619 & 0.012 & 0.429 & 0.299 & 0.143 \\ ESM-1v & & 0.618 & 0.018 & 0.407 & 0.288 & 0.135 \\ MSA Transformer & & **0.638** & 0.018 & **0.434** & **0.327** & 0.135 \\ \hline \hline \end{tabular} \end{table} Table 2: Ranking performance of the models across 49 DMS assays. Numbers in **bold** represent the highest score per column, while numbers with an underline represent the second highest score per column. We note that two equivariant GNNs have the highest rank correlation for better than wildtype mutations. Figure 2: Performance on mutations that are better than the wildtype for four regression models using two types of embeddings. Statistics are aggregated across 49 DMS assays. We note that we can improve the fitness prediction performance above the Tranception baseline (in black) across all regression models by training on as few as 144 data points. ## 4 Limitations and future work We apply pre-trained EGNNs to both mutation generation and protein fitness prediction, and find that structural approaches are a competitive approach to sequence-based language models for the prediction of mutations that are better than the wildtype, while also requiring **181x** fewer molecules to train. While the results look promising, this comparison is limited in scope, as our approach does not deal with more complex (hetero-oligomeric) structures from the ProteinGym dataset. Types of fitness.Additionally, the benchmarking dataset contains a wide range of sequences for which "fitness" can be interpreted in many different ways. DMS assays in the ProteinGym dataset come from humans, viruses, prokaryotes, and eukaryotes. In particular, in the subset of 49 DMS assays used in this paper, **5** come from eukaryotes, **21** from humans, **18** from prokaryotes, and **5** from viruses. Fitness, in the case of viruses, is interpreted as infectivity or the likelihood of mutation. In the rest of the cases, fitness can range from stress resistance to efficiency. For example, in their experimental paper, Lu et al. (2022) focused on improving thermal stability. Hence, the fitness score used by Notin et al. (2022) in the ProteinGym dataset represents a "fuzzy" concept that is context-dependent. Future work could focus more closely on identifying the types of fitness structure-based approaches excel at. AcknowledgementsSVM was supported by the UKRI Centre for Doctoral Training in Application of Artificial Intelligence to the study of Environmental Risks (EP/S022961/1).
2305.14310
Navigating Prompt Complexity for Zero-Shot Classification: A Study of Large Language Models in Computational Social Science
Instruction-tuned Large Language Models (LLMs) have exhibited impressive language understanding and the capacity to generate responses that follow specific prompts. However, due to the computational demands associated with training these models, their applications often adopt a zero-shot setting. In this paper, we evaluate the zero-shot performance of two publicly accessible LLMs, ChatGPT and OpenAssistant, in the context of six Computational Social Science classification tasks, while also investigating the effects of various prompting strategies. Our experiments investigate the impact of prompt complexity, including the effect of incorporating label definitions into the prompt; use of synonyms for label names; and the influence of integrating past memories during foundation model training. The findings indicate that in a zero-shot setting, current LLMs are unable to match the performance of smaller, fine-tuned baseline transformer models (such as BERT-large). Additionally, we find that different prompting strategies can significantly affect classification accuracy, with variations in accuracy and F1 scores exceeding 10\%.
Yida Mu, Ben P. Wu, William Thorne, Ambrose Robinson, Nikolaos Aletras, Carolina Scarton, Kalina Bontcheva, Xingyi Song
2023-05-23T17:48:21Z
http://arxiv.org/abs/2305.14310v3
Navigating Prompt Complexity for Zero-Shot Classification: A Study of Large Language Models in Computational Social Science ###### Abstract Instruction-tuned Large Language Models (LLMs) have exhibited impressive language understanding and the capacity to generate responses that follow specific instructions. However, due to the computational demands associated with training these models, their applications often rely on zero-shot settings. In this paper, we evaluate the zero-shot performance of two publicly accessible LLMs, ChatGPT and OpenAssistant, in the context of Computational Social Science classification tasks, while also investigating the effects of various prompting strategies. Our experiment considers the impact of prompt complexity, including the effect of incorporating label definitions into the prompt, using synonyms for label names, and the influence of integrating past memories during the foundation model training. The findings indicate that in a zero-shot setting, the current LLMs are unable to match the performance of smaller, fine-tuned baseline transformer models (such as BERT). Additionally, we find that different prompting strategies can significantly affect classification accuracy, with variations in accuracy and F1 scores exceeding 10%. ## 1 Introduction Instruction fine-tuning and Reinforcement Learning with Human Feedback (RLHF) Christiano et al. (2017) have facilitated transfer learning for Large Language Models (LLMs) to unseen tasks at scale. To leverage LLMs as versatile natural language processors, there is an immediate effort to ascertain their zero-shot performance on challenging tasks. Social media is an active area of research with a number of complex, domain-specific tasks which can be utilised for harm reduction Waseem et al. (2017) and preventing the spread of misinformation Zubiaga et al. (2018). While LLMs have great potential to assist in this domain, through applications such as automatic data annotation and social media analysis, it is important to understand their capabilities. We focus on modifying prompts in a zero-shot context, since this is a commonly proposed application of LLMs in academic, commercial and public settings Kuzman et al. (2023). The large parameter counts of LLMs enable impressive reasoning ability as well as deep comprehension of their training corpus, making them attractive for natural language tasks Wei et al. (2022). However, fine-tuning these models on downstream tasks is often infeasible due to cost, and closed-source models cannot be easily fine-tuned by end users. In this paper, we evaluate two popular assistant-oriented LLMs, OpenAssistant (OA) and GPT-3.5-turbo (ChatGPT), by instructing them with natural language prompts on six social media-based NLP tasks. We compare against baselines that use standard techniques such as fine-tuning BERT. GPT-3.5-turbo is a commercial implementation of Instruct-GPT Ouyang et al. (2022). It uses RLHF to help align the language model to human preferences, mitigate toxic outputs and improve response usefulness. This builds on the original work in NLP by Stiennon et al. (2020) who used RLHF to improve summarisation performance. The success of GPT-3.5-turbo encouraged rapid competition from the community. OpenAssistant (OA) Kopf et al. (2023) is a major open-source competitor to GPT-3.5-turbo. They released an open-source dataset and framework to fine-tune LLMs as well as a collection of fine-tuned models. In our work we use the fine-tuned OpenAssistant-SFT-7-LLaMA-30B version.
In their paper, the OpenAssistant authors conducted a preference study of 7042 examples, comparing the outputs of ChatGPT and a Pythia-12B OA model. The outputs of the OA model were chosen to be 95.3% as preferable as its competitor. In this work, we aim to examine the use of different prompting strategies for the evaluation of zero-shot performance of LLMs in computational social science. To that end, we conduct a battery of controlled experiments to investigate the zero-shot performance of using basic and complex prompting strategies (e.g., including paper information in the prompt) against supervised approaches. In addition, we explore the possibility of replacing the original labels with synonyms to investigate the generalizability of LLMs. ## 2 Data In this paper, we evaluate the zero-shot classification performance of LLMs on six NLP tasks in computational social science. These datasets are in English with manually annotated class labels. We display dataset specifications and statistics in Table 1. * **Rumour Stance** We first evaluate the RumorEval 2017 dataset which is developed by Derczynski et al. (2017). Here, we use the dataset for the 4-way stance classification, i.e., determining the stance of a reply towards a given source post (i.e. rumour) as either supporting, denying, questioning, or commenting. * **Sarcasm** The sarcasm detection task is to identify whether a given tweet is intended to be sarcastic or not. We evaluate the task on the Semeval-2022 Task 6 dataset Farha et al. (2022), which contains 4,868 tweets labelled as either sarcasm or not sarcasm. * **Vaccine Stance** This task aims to automatically predict the stance of tweets towards the COVID-19 vaccination Cotfas et al. (2021). This dataset provides 2,792 tweets belonging to one of three stance categories: pro vaccine, anti vaccine, or neutral. * **Complaint** This task aims to identify whether a tweet expresses a complaint, which is defined as 'a negative mismatch between reality and expectations in a particular situation' (e.g., customer complaints on Twitter) Olshtain and Weinbach (1987). We use a dataset developed by Preotiuc-Pietro et al. (2019) consisting of 3,449 English tweets annotated with one of two categories, i.e., complaints or not complaints. * **Bragging** This task aims to classify whether a tweet is bragging or not bragging. We evaluate on a dataset developed by Jin et al. (2022) which contains 6,696 tweets labelled as either bragging or not bragging. * **Hate Speech** The task of hate speech detection aims to study anti-social behaviours, e.g., racism and sexism in social media. We evaluate on a dataset developed by Waseem and Hovy (2016) with a binary classification setup, i.e., offensive or non-offensive. ## 3 Experimental Setup ### Large Language Models Our experiment is conducted based on two publicly accessible large language models: GPT-3.5-turbo (GPT) and OpenAssistant-LLaMa (OA). GPT-3.5-turboWe perform experiments with GPT-3.5-turbo 1 which is an enhanced version of the GPT-3 language model with instruction fine-tuning. GPT-3.5 can be employed for a wide range of NLP tasks, including machine translation, common sense reasoning, and question answering. 
We call the GPT-3.5 model via the official OpenAI API.2 Footnote 1: [https://platform.openai.com/docs/models/gpt-3-5](https://platform.openai.com/docs/models/gpt-3-5) Footnote 2: [https://platform.openai.com/docs/api-reference](https://platform.openai.com/docs/api-reference) OpenAssistant-LLaMA We employ the OpenAssistant (OA) model developed by LAIONAI, which fine-tunes the LLaMA Touvron et al. (2023) 30B model using the OA dataset. Since LLaMA is not open-source, LAIONAI could not release the OA weights directly on Hugging Face; instead, they released XOR weights3 that are applied to the original LLaMA weights, with checksum calculations performed to validate the conversion. Running experiments locally was restricted by hardware constraints, so 8-bit quantisation was applied via BitsAndBytes Dettmers et al. (2021) at model load to decrease inference memory requirements. Footnote 3: We use the oASST-sft-7-llama-30b version of the model. The xor weights can be found at: [https://huggingface.co/OpenAssistant/oASst-sft-7-llama-30b-xor](https://huggingface.co/OpenAssistant/oASst-sft-7-llama-30b-xor) ### Baselines We compare the zero-shot performance of LLMs with a weak baseline (Logistic Regression) and a strong baseline (BERT-large): Logistic Regression We represent the text using TF-IDF and consider tokens that appear more than 5 times. Bert-large We fine-tune Bert-large4 Devlin et al. (2019) by adding a linear classifier on top of the 24-layer transformer blocks. The special token '[CLS]' is used as the representation of each text. Footnote 4: [https://huggingface.co/bert-large-uncased](https://huggingface.co/bert-large-uncased) ### Data Splits For each benchmark, we initially split the dataset into training (80%) and test (20%) sets using stratified random splits5. The training set is used for supervised fine-tuning, and it is further divided into train and validation sets (in a ratio of 3:1) for hyper-parameter tuning (e.g., early stopping) purposes. Subsequently, we evaluate the performance of the fine-tuned baselines and zero-shot LLMs on the test set (20%). Footnote 5: To generate class-stratified subsets, we employ a dataset split tool from [https://scikit-learn.org/stable/modules/generated/sklearn.model_selection](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection) ### Prompting Strategies Following the prompting approaches described by Child et al. (2019); Ziems et al. (2023), we develop prompts by (i) adding instructions after the context (e.g., task description) and (ii) using constraints (e.g., 'Only reply with Bragging or Not Bragging.') at the end. We observe that using constraints can effectively avoid cases of model uncertainty (e.g., 'As an AI model, I cannot answer this question.') and guide models to generate the expected outputs. For consistency, we use the same prompts for both GPT and OA. Two examples for Round 1 and Round 2 are displayed in Table 2. To examine the zero-shot predictive performance of LLMs, we design a battery of experiments using three different prompting strategies. Basic Instruction (Basic) We only provide a basic instruction without including detailed task and label descriptions. For example, for the bragging detection task, our prompt is: _'Identify whether or not a tweet includes a bragging statement. + Constraints + Text'_.
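To make the Basic prompt format concrete, the sketch below assembles an instruction with a constraint clause and queries GPT-3.5-turbo at a low temperature; it assumes the pre-v1.0 `openai` Python client with an API key configured, and the exact constraint wording we used may differ slightly.

```python
import openai  # assumes the pre-v1.0 openai client and an API key in the environment

def build_basic_prompt(instruction, labels, text):
    """Basic strategy: instruction + constraint + text."""
    constraint = "Only reply with " + " or ".join(f"'{l}'" for l in labels) + "."
    return f"{instruction} {constraint}\nText: {text}"

def classify_bragging(tweet):
    prompt = build_basic_prompt(
        "Identify whether or not a tweet includes a bragging statement.",
        ["Bragging", "Not Bragging"],
        tweet,
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,  # low temperature for more deterministic outputs
    )
    return response["choices"][0]["message"]["content"].strip()
```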
Task and Label Description (T/L Desc)Building upon the Basic Instruction Round, we provide additional information in the prompt by including \begin{table} \begin{tabular}{|l|c|c|} \hline **Dataset** & **\# of Posts** & **Class (\# of Posts)** \\ \hline _Rumour Stance_ & 5,568 & Support (1,004) / Deny (415) / Query (464) / Comment (3,685) \\ \hline _Vaccine Stance_ & 2,792 & Pro Vaccine (991) / Anti Vaccine (791) / Neutral (1,010) \\ \hline _Complaint_ & 3,449 & Complaint (1,232) / Not Complaint (2,217) \\ \hline _Bragging_ & 6,696 & Bragging (781) / Not Bragging (5,915) \\ \hline _Sarcasm_ & 4,868 & Sarcasm (1,067) / Not Sarcasm (3,801) \\ \hline _Hate speech_ & 16,907 & Offensive (5,348) / Non-offensive (11,559) \\ \hline \end{tabular} \end{table} Table 1: Dataset Specifications. \begin{table} \begin{tabular}{|l|l|} \hline **Round 1** & **Basic** \\ \hline _Bragging_ & Identify whether or not a tweet includes a bragging statement. \\ & **+ Constraints + Text** \\ \hline _Vaccine_ & Annotate a tweet into one of three stance categories: pro vaccine, anti vaccine, or neutral. \\ & **+ Constraints + Text** \\ \hline \multicolumn{2}{|l|}{**Round 2**} & **Basic + T/L Desc** \\ \hline \multirow{4}{*}{_Bragging_} & **Basic Instruction** + Bragging is a speech act which explicitly or implicitly attributes credit to the speaker for some \\ & \(\text{good}^{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{ \text task and label descriptions. Note that we use the same label and task descriptions as mentioned in the original paper. The format of prompt used for the Task and Label Description Round is: _'Basic Instruction + Task and Label Descriptions + Constraints + Text'_. Memory Recall (Recall)We observe that both GPT and OA can recall papers published before September 2021 (prompts and responses see Table 5). Since arXiv papers are part of the training corpus used by LLMs, we also include the title of the source paper in the prompt when evaluating the zero-shot performance of LLMs. For example, we include paper information by using this prompt: _'Recall this paper [Paper Title] + Basic Instruction + Constraints + Text'_. For this round, we only perform experiments on datasets published before September 2021. SynonymsLLMs trained with RLHF can generate different outputs when using prompts which are semantically similar (e.g., synonyms). To test the generalizability of LLMs, we substitute the names of each class with with words that have the same or similar meaning. For example, we test the synonyms 'hateful', 'toxic', and 'abusive') to replace the original category 'offensive'. ### Evaluation Metrics We employ two evaluation metrics in this study: 1) Accuracy - this involves a direct comparison between the model predictions and the ground truth label; and 2) Macro-F1 scores are reported for situations where accuracy may not provide an adequate representation of performance, particularly for certain imbalanced datasets utilised in this paper, such as _Bragging_ and _Rumour Stance_. ### Hyper-parameters During initial explorations, we observed that using a higher temperature (e.g., 0.8 for GPT-3.5 and 2 for OA) results in inadequate classification performance (introduces more randomness to the model's output). This suggests that higher temperature settings cause the model outputs to be non-reproducible. In this work, we use a low temperature (i.e., 0.2)6 for GPT-3.5 to make the model more focused and deterministic. 
Footnote 6: [https://platform.openai.com/docs/api-reference/chat/create](https://platform.openai.com/docs/api-reference/chat/create) For OA, we follow the 'precise hyper-parameter \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c|}{**Complaint**} & \multicolumn{2}{c|}{**Vaccine Stance**} & \multicolumn{2}{c|}{**Bragging**} \\ \cline{2-7} & **Accuracy** & **F1-macro** & **Accuracy** & **F1-macro** & **Accuracy** & **F1-macro** \\ \hline _Logistic Regression_ & 81.4 & 79.7 & 72.8 & 73.1 & 88.6 & 58.8 \\ \hline _BERT-large_ & 89.4 & 88.6 & **81.5** & **81.3** & **91.3** & **76.1** \\ \hline _GPT Basic_ & **89.7** & **88.7** & 73.0 & 73.8 & 85.1 & 67.6 \\ \hline _GPT T/L Desc_ & 89.0 & 88.0 & 73.3 & 73.7 & 84.9 & 67.4 \\ \hline _GPT Memory Recall_ & 87.1 & 86.4 & 66.2 & 66.9 & - & - \\ \hline _OA Basic_ & 72.3 & 72.3 & 61.7 & 60.3 & 89.3 & 57.6 \\ \hline _OA T/L Desc_ & 65.3 & 65.2 & 73.7 & 73.6 & 88.4 & 48.2 \\ \hline _OA Memory Recall_ & 82.6 & 82.1 & 64.2 & 63.8 & - & - \\ \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c|}{**Rumor Stance**} & \multicolumn{2}{c|}{**Sarcasm**} & \multicolumn{2}{c|}{**Hata Speech**} \\ \cline{2-7} & **Accuracy** & **F1-macro** & **Accuracy** & **F1-macro** & **Accuracy** & **F1-macro** \\ \hline _Logistic Regression_ & 68.5 & 40.9 & 76.1 & 53.5 & 83.2 & 79.2 \\ _BERT-large_ & **73.2** & **48.2** & **78.9** & 58.4 & **84.5** & **81.2** \\ \hline _GPT Basic_ & 49.4 & 33.4 & 67.3 & **62.1** & 75.5 & 72.4 \\ \hline _GPT T/L Desc_ & 59.2 & 45.7 & 61.3 & 57.9 & 76.9 & 72.1 \\ \hline _GPT Memory Recall_ & 40.2 & 30.9 & - & - & 71.7 & 69.6 \\ \hline _OA Basic_ & 45.2 & 27.4 & 71.9 & 48.6 & 63.5 & 63.3 \\ \hline _OA T/L Desc_ & 56.2 & 29.0 & 75.9 & 49.9 & 75.5 & 73.3 \\ \hline _OA Memory Recall_ & 52.4 & 34.6 & - & - & 55.4 & 55.4 \\ \hline \end{tabular} \end{table} Table 3: LLMs zero-shot classification results across all prompt settings. All datasets are evaluated with accuracy and macro-F1 scores. Blue highlighted cells denote prompt settings where zero-shot LLMs beat the strong supervised baseline (i.e., Bert-large fine-tuned on the training set). **Bold text** denotes the best result per task. setup7 indicated in the OpenAssistant web interface, where the Temperature is 0.1, Top P is 0.95, Repetition Penalty is 1.2 and Top K is 50. Our early exploratory studies showed that using low temperatures can stabilise the output of the model to facilitate the reproducible results. Footnote 7: [https://open-assistant.io/dashboard](https://open-assistant.io/dashboard) For Bert-large, we set the learning rate as 2e-5, the batch size as 16, and the maximum sequence length as 256. We run all baseline models three times with different random seeds and report average results. We fine-tune Bert-large on an Nvidia RTX Titan GPU with 24GB memory and run OA on an Nvidia A100 GPU with 40GB memory. The inference rates of OA and GPT are approximately 1,200 and 3,000 samples per hour respectively. ## 4 Results Table 3 displays the prediction results of all zero-shot LLMs for all rounds. In general, we observe that supervised baselines still outperform LLMs on the majority of prompt settings (4 out of 6 tasks). Furthermore, we observe that GPT consistently outperforms OA across all prompt settings and tasks when considering only the F1-macro measure. However, our results show that the accuracy of OA is better than GPT on some imbalanced datasets, such as 'Bragging and Sarcasm.' 
This may be due to OA defaulting to the neutral class (labels without any specific speech act, such as 'Not Bragging and Not Sarcastic'). GPT achieves the best predictive performance on two speech act detection downstream tasks, namely _Complaint_ (89.7 accuracy and 88.7 F1-macro) and Sarcasm (62.1 F1-macro). This suggests that LLMs can be employed as strong baseline models for zero-shot classification tasks. When comparing the results of T/L Desc and Memory Recall against Basic Instruction, it is observed that using a more complex prompt (e.g., adding label and paper information) does not necessarily improve model performance and may even introduce additional noise, leading to a degradation in model performance. For speech act detection tasks such as _Complaint and Bragging_, the accuracy of LLMs exceeds 85%, indicating that LLMs can potentially be used for data annotation to reduce the cost of human resources. Standard data annotation tasks typically \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{**Synonyms**}} & \multicolumn{2}{c|}{**GPT**} & \multicolumn{2}{c|}{**OA**} \\ & **Accuracy** & **F1-macro** & **Accuracy** & **F1-macro** \\ \hline **Task 1** & & & & \\ \hline Complaint / not Complaint & **89.7** & **88.7** & 72.3 & 72.3 \\ \hline Grievance / not Grievance & 86.2 & 84.6 & **82.0** & **81.6** \\ \hline Criticism / not Criticism & 80.2 & 77.8 & 76.4 & 76.1 \\ \hline Dissatisfaction / no Dissatisfaction & 82.9 & 82.2 & 66.9 & 66.9 \\ \hline **Task 2** & & & & \\ \hline ProVaccine / Anti Vaccine / Neutral & **73.0** & **73.8** & 61.7 & 60.3 \\ \hline In Favour of the Vaccine / Against the Vaccine / Neutral & 72.7 & 73.5 & **64.2** & **63.8** \\ \hline Positive Sentiment / Negative Sentiment / Neutral & 68.7 & 68.7 & 59.6 & 53.3 \\ \hline **Task 3** & & & & \\ \hline Bragging / not Bragging & **85.1** & **67.6** & **89.3** & **57.6** \\ \hline Boasting / not Boasting & 82.7 & 65.1 & 68.7 & 55.1 \\ \hline Showing off / not Showing off & 78.7 & 63.0 & 62.9 & 52.6 \\ \hline **Task 4** & & & & \\ \hline Support / Deny / Query / Comment & **49.4** & **33.4** & 45.2 & 27.4 \\ \hline Support / Dismiss / Questioning / Comment & 39.9 & 30.7 & **55.5** & **39.1** \\ \hline **Task 5** & & & & \\ \hline Sarcasm / not Sarcasm & 67.3 & 62.1 & 71.9 & 48.6 \\ \hline frontier / not Ironic & 74.4 & **66.0** & 55.7 & 52.5 \\ \hline Insince / Sincere & 72.6 & 63.5 & 66.7 & 41.7 \\ \hline Disingenous / Genuine & **77.1** & 60.5 & 59.5 & 40.6 \\ \hline Saire / not Satire & & & & \\ \hline **Task 6** & & & & \\ \hline Offensive/ Non-offensive & **75.5** & **72.4** & 63.5 & 63.3 \\ \hline Toxic / not Toxic & 70.0 & 67.6 & 60.1 & 60.1 \\ \hline Abusive / not Abusive & 72.6 & 69.4 & **66.3** & **65.6** \\ \hline Hateful / not Hateful & 73.4 & 70.5 & 63.7 & 63.5 \\ \hline \end{tabular} \end{table} Table 4: LLMs zero-shot classification results using synonyms across all tasks. Green highlights are the original class names. Blue highlighted cells denote where synonyms prompt settings beat the original label. **Bold text** denotes the best result per model per task. rely on two annotators in the first round, but one of them can be replaced by LLMs. According to the annotation details8 of the vaccine stance task [13], the agreement rate between the two annotators is approximately 62%. 
Footnote 8: [https://github.com/sohampoddar26/covid-vax-stance/tree/main/dataset](https://github.com/sohampoddar26/covid-vax-stance/tree/main/dataset) Table 4 shows all the zero-shot results of using synonyms across all tasks. We observe that revising prompts with synonyms can substantially improve the zero-shot performance of OA, except for the _Bragging_ dataset (where 5 out of 6 tasks improved). It is worth noting that the Sarcasm dataset is the only one where the prompt using original categories performs worse on both datasets. This suggests that replacing original labels with synonyms allows the OA model to better understand the task requirements. This may be due to the diversity in the distribution of training examples used in the RLHF fine-tuning for both GPT and OA. For example, OA model might be fine-tuned on corpus like: _'[Text including offensive language] + [Category: Abusive]'_. Therefore, we believe that it is important to test similar words in place of the original labels when designing instructions. ## 5 Related Work Ziems et al. (2023) sets a roadmap for employing LLMs as data annotators by establishing prompting best practices and an evaluation of the zero-shot performance of 13 language models on 24 tasks in computational social sciences. Our work is distinct from this piece as we evaluate a different set of benchmarks and models and experiment with different prompt modifications such as using synonyms for class labels and adding arXiv paper titles. To evaluate the zero-shot performance of Chat-GPT for text classification, Kuzman et al. (2023) compares against a fine-tuned XLM-RoBERTa model for the task of automatic genre classification in English and Slovenian. They show that Chat-GPT outperforms the baseline on unseen datasets and doesn't drop performance when provided with Slovenian examples. Since our focus is primarily on out-of-the-box performance, we experiment with simple alterations with the prompt. Other research, Arora et al. (2022) looks at prompt aggregation as well as using LLMs to auto-generate prompts. We also do not explore advanced methods such as chain-of-thought prompting, which improves LM performance by encouraging it to output its intermediate reasoning steps [24]. ## 6 Conclusion In this paper, we delve into the exploration of prompting strategies for the application of Large Language Models (LLMs) in computational social science. We carry out a range of controlled experiments to gauge the efficacy of various prompt configurations across six publicly available datasets. Our conclusions are summarised as follows: * Task-specific fine-tuned models generally tend to outperform LLMs in zero-shot settings. * More detailed and complex prompts do not necessarily enhance classification performance. * The selection of specific words or phrases as the class label can considerably affect classification outcomes. We argue that developing prompts for zero-shot classification presents a significant challenge. We recommend testing different prompt configurations before proceeding with experiments, while keeping in mind the time constraints9 and financial costs associated with LLMs (see Table 6). Footnote 9: [https://platform.openai.com/docs/guides/rate-limits/overview](https://platform.openai.com/docs/guides/rate-limits/overview) ## Ethics Statement Our work has received ethical approval from the Ethics Committee of our university (Reference Number: 053640) and complies with the research policies of Twitter. 
All datasets are obtained through the links provided in the related papers or by requesting them directly from the authors. It is important to note that we do not collect any new data from Twitter for this work. Furthermore, we can confirm that the data has been fully anonymised before being fed to the LLMs for model inference.
2306.02022
ACI-BENCH: a Novel Ambient Clinical Intelligence Dataset for Benchmarking Automatic Visit Note Generation
Recent immense breakthroughs in generative models such as in GPT4 have precipitated re-imagined ubiquitous usage of these models in all applications. One area that can benefit by improvements in artificial intelligence (AI) is healthcare. The note generation task from doctor-patient encounters, and its associated electronic medical record documentation, is one of the most arduous time-consuming tasks for physicians. It is also a natural prime potential beneficiary to advances in generative models. However with such advances, benchmarking is more critical than ever. Whether studying model weaknesses or developing new evaluation metrics, shared open datasets are an imperative part of understanding the current state-of-the-art. Unfortunately as clinic encounter conversations are not routinely recorded and are difficult to ethically share due to patient confidentiality, there are no sufficiently large clinic dialogue-note datasets to benchmark this task. Here we present the Ambient Clinical Intelligence Benchmark (ACI-BENCH) corpus, the largest dataset to date tackling the problem of AI-assisted note generation from visit dialogue. We also present the benchmark performances of several common state-of-the-art approaches.
Wen-wai Yim, Yujuan Fu, Asma Ben Abacha, Neal Snider, Thomas Lin, Meliha Yetisgen
2023-06-03T06:42:17Z
http://arxiv.org/abs/2306.02022v1
Aci-bench: a Novel Ambient Clinical Intelligence Dataset for Benchmarking Automatic Visit Note Generation ###### Abstract Recent immense breakthroughs in generative models such as in GPT4 have precipitated re-imagined ubiquitous usage of these models in all applications. One area that can benefit by improvements in artificial intelligence (AI) is healthcare. The note generation task from doctor-patient encounters, and its associated electronic medical record documentation, is one of the most arduous time-consuming tasks for physicians. It is also a natural prime potential beneficiary to advances in generative models. However with such advances, benchmarking is more critical than ever. Whether studying model weaknesses or developing new evaluation metrics, shared open datasets are an imperative part of understanding the current state-of-the-art. Unfortunately as clinic encounter conversations are not routinely recorded and are difficult to ethically share due to patient confidentiality, there are no sufficiently large clinic dialogue-note datasets to benchmark this task. Here we present the Ambient Clinical Intelligence Benchmark (aci-bench) corpus, the largest dataset to date tackling the problem of AI-assisted note generation from visit dialogue. We also present the benchmark performances of several common state-of-the-art approaches. ## 1 Background & Summary Healthcare needs are an inescapable facet of daily life. Current patient care at the medical facilities requires involvement not only from a primary care provider, but also from pharmacy, billing, imagining, labs, and specialist care. For every encounter, a clinical note is created as documentation of clinician-patient discussions, patient medical conditions. They serve as a vital record for clinical care and communication with patients and other members of the care team, as well as outline future plans, tests, and treatments. Similar to typical meeting summaries, these documents should highlight important points while compressing itemized instances into condensed themes; unlike typical meeting summaries, clinical notes are purposely and technically structured into semi-structured documents, contain telegraphic and bullet-point phrases, use medical jargon that do not appear in the original conversation, and will reference outside information often from the electronic medical record, including prose-written content or injections of structured data. While the widespread adoption of electronic health records (EHR's), spurred by the HITECH Act of 2009, has led to greater health information availability and interoperability, it has also spawned a massive documentation burden shifted to clinicians. Physicians have expressed concerns that writing notes in electronic health records (EHRs) takes more time than using traditional paper or dictation methods. As a result, notes may not be completed and accessible to other team members until long after rounds[1]. Furthermore, as another unintended consequence of EHR use complications, electronic notes have been criticized for their poor readability, completeness, and excessive use of copy and paste[2]. To save time and adequately capture details, clinicians may choose to write their notes during their time with a patient. This may detract from the clinicians' attention toward the patient (e.g. in reading non-verbal cues), and may leave patients feeling a want of empathy[3]. 
Alternatively, some clinicians or provider systems may hire medical assistants or scribes to partake in some or all of the note creation process, which has been linked with improved productivity, increased revenue, and improved patient-clinician interactions[4]. However such systems are both costly and, more importantly, often require a substantial investment in time from the providers in managing and training their scribes[5] - a problem that is often multiplied by the high attrition rates in the field. One promising solution is the use of automatic summarization to capture and draft notes, before being reviewed by a clinician. This technology has attracted increasing attention in the last 5 years as a result of several key factors: (1) the improvement of speech-to-text technology, (2) widespread adoption of electronic medical records in the United States, (3) the rise of transformer models. Several works have adopted early technology in this area, including use of statistical machine translation methods, use of RNNs, transformers, and pre-trained transformer models [6, 7, 8, 9, 10, 11]. However, a massive bottleneck in understanding the state-of-the-art is the lack of publicly share-able data to train and evaluate [12]. This challenge is inherent in the required data's characteristics as (1) meeting audio and transcripts from medical encounters are not typically recorded and saved, and (2) medical information is highly personal and sensitive data and cannot be easily, ethically shared publicly. Private companies may construct or acquire their own private datasets; however, results and algorithms cannot be systematically compared. Recent ground-breaking performances by large language models such as ChatGPT and GPT4 provide promising general model solutions; however without common datasets that may be studied publicly it would be impossible for the scientific community to understand strength, weaknesses, and future directions. In this paper, we present the Ambient Clinical Intelligence Benchmark (aci-bench) corpus. The corpus, created from domain experts, is designed to model three variations of model-assisted clinical note generation from doctor-patient conversations. These include conversations with (a) calls to a virtual assistant (e.g. required use of wake words or prefabricated, canned phrases), (b) unconstrained directions or discussions with a scribe, and (c) natural conversations between a doctor and patient. We also provide data to experiment between using human transcription and automatic speech recognition (ASR); or between ASR and corrected ASR. Table 1 shows a comparison of the 8 corpora described in state-of-the-art work. Only two other similar corpora are publicly available. primock57 [14] contains a small set of 57 encounters. MTS-dialog [13] contains \(\sim\)1700 samples however its focus is on on dialogue snippets rather than full encounters. To our knowledge, aci-bench is the largest and most comprehensive corpus publicly available for model-assisted clinical note generation. In the following sections, we provide details of the aci-bench Corpus. We (1) discuss the dataset construction and cleaning, (2) provide statistics and the corpus structure, (3) describe our content validation methods and comparison with real data, (4) quantify several diverse baseline summarization methods on this corpus. 
## 2 Methods ### Data Creation Clinical notes may be written by the physician themselves or in conjunction with a medical scribe or assistant; alternatively physicians may choose to dictate the contents of an entire note to a human transcriptionists or an automatic dictation tool. In cases with human intervention, scribe-assisted or transcriptionist-assisted cases, physician speech may include a mixture of commands (e.g. "newline", "add my acne template"), free-text requiring almost word-for-word copying (e.g. "To date, the examinee is a 39 year-old golf course maintenance worker") [6], or free-text communication to the medical assistance (e.g. "let's use my normal template, but only keep the abnormal parts", "can you check the date and add that in?"). With trained medical scribes participating in the clinic visit, in addition to directions from the doctor, they are expected to listen in on the patient-doctor dialogue and generate clinical note text independently. To mirror this reality, the aci-bench corpus consists of three subsets representing common modes of note generation from doctor-patient conversations: **virtual assistant (virtual assistst)**: In this mode, the doctor may use explicit terms to activate a virtual assistance device (e.g. "Hey Dragon show me the diabetes labs") during the visit. This necessitates some behavioral changes on the part of the provider. \begin{table} \begin{tabular}{l|l|l|l|l|l} \hline dataset & description & src-len (tok/turns) & target-len (tok/sent) & size & open \\ \hline MTS-dialogue [13] & dialogue-note snippets where conversations are created using & 142/9 & 48/3 & 1701 & Y \\ & clinical note sections & & & & \\ primock57 [14] & role-played dialogue-note pairs & 1489/97 & 161/23 & 57 & Y \\ **aci-bench[this work]** & role-played dialogue-note pairs & 1302/55 & 490/49 & 207 & Y \\ \hline 3M Health [9] & dialogue-note pairs where notes are created using conversations & -/- & -/- (hpi only) & 1342 & N \\ Abridge [8] & dialogue-note pairs where notes are created using conversations & 1500/- & -/27 & 6862 & N \\ Augmedix [11] & real clinical dialogue-note pairs & -/175 & -/47 & 500 & N \\ emr.ai [6] & real clinical dictation-note pairs & 616/1 & 550/- & 9875 & N \\ Nuance [7] & real clinical dialogue-note pairs & 972 avg/- & 452 total/-1 & 802k & N \\ \hline \end{tabular} \end{table} Table 1: Comparable corpora for doctor-patient dialogue2note generation. The majority of datasets are proprietary and unshare-able for community evaluation. (src-len=source/transcript length, target-len=target/note length, =unreported) **virtual scribe (virtscribe)**: In this mode, the doctor may expect a separate scribe entity (automated or otherwise) to help create the clinical note. This subset is characterized by pre-ambles (e.g. short patient descriptions prior to a visit) and after-visit dictations (e.g. used to specify non-verbal parts of the visit such as the physical exam or to dictate the assessment and plan). The rest of the doctor-patient conversation will be natural and undisturbed. **ambient clinical intelligence (aci)**: This data is characterized by natural conversation between a patient and a doctor; without explicit calls to a virtual assistant or additional language addressed to a scribe. Transcripts from subsets **virtassist** and **virtscribe** were created by a team of 5+ medical experts including medical doctors, physician assistance, medical scribes, and clinical informaticians based on experience and studying real encounters. 
Subset **aci** was created with a certified doctor and a volunteer lay person, who must role-play a real doctor-patient encounter, given a list of symptom prompts. Clinical notes were generated using an automatic note generation system and checked and re-written by domain experts (e.g. medical scribes, or physicians). The **virtscribe** dataset includes the human transcription as well as an ASR transcript; meanwhile the **virtassist** and **aci** subsets were created with only a human transcription and ASR transcript available, respectively. #### 2.1.1 Data Cleaning and Annotation Our final dataset was distilled from encounters originally created for marketing demonstration purposes. During this initial dataset creation, imaginary EHR injections were placed within the note to contribute to realism, though many without basis from the conversation. Although EHR inputs, independent from data intake from a conversation, are a critical aspect of real clinical notes, in this dataset we do not model EHR input or output linkages with the clinical note (e.g. smart links to structured data such as vitals values, structured survey data, order codes, and diagnosis codes). In order to identify unsupported information of note text to the transcript, we created systematic annotation guidelines for \begin{table} \begin{tabular}{l|l|l} \hline type & annotated & example \\ \hline dates (none mentioned) & Y & PSA 0.6 ng/mL[, 05/25/2021] \\ exam & Y & [Consutional: Well-developed, well-nourished, in no apparent distress] \\ & Y & Neck: [Supple without thyromegaly or lymphadenopathy.] No carotid fruits appreciable. \\ medication context & Y & 1 tablet [by oral route] daily \\ medical reasoning & Y & recommended that we obtain an MRI of the right shoulder [to evaluate for a possible rotator cuff tear]. \\ & Y & referred her to formal physical therapy [to strengthen her right shoulder] \\ patient acquiescence & Y & [All questions were answered.] \\ & Y & [The patient understands and agrees with the recommended medical treatment plan] \\ review of system & Y & Ears, Nose, Mouth and Throat: [Denies ear pain, hearing loss, or discharge.] Endorses nasal congestion from allergies. \\ vitals & Y & [Blood Pressure:124/82 mmHg] \\ \hline dates (year kept if only month mentioned) & N & 03/[2022] \\ higher granularity problem/test/treatment & N & diabetes [type II] \\ & N & [3 views] of the shoulder \\ & N & MRI [of the head] \\ measurements & N & 25 [mg/dl] \\ patient name/age & N & [John Smith] is a [53]-year-old male \\ other names & N & he was seen by Jane [Smith, PA-C] \\ \hline \end{tabular} \end{table} Table 2: Examples of unsupported text (demarked by square brackets). In the original demo data, these items were added for realism without basis in the source text, the doctor-patient conversation. Some unsupported items were purposely left unmarked in cases where removal would lead to note quality / meaning degradation. After human annotated text-span level identification, these were automatically removed from the clinical note. labeling unsupported note sentences. These unsupported information included items such as reasoning for treatment (which may not be part of the original conversation) or could be information from imaginary EHR inputs (e.g. vitals). Examples of the different types of unsupported information are included in Table 2. We tasked four independent annotators with medical backgrounds to complete this task. The partial span overlap agreement was 0.85 F1. 
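For illustration, the sketch below shows one way a partial-span-overlap agreement F1 could be computed, treating annotations as (start, end) character offsets and counting a span as matched if it overlaps any span from the other annotator; the exact matching criterion behind the 0.85 figure may differ.

```python
def overlaps(a, b):
    """True if two (start, end) character spans share at least one character."""
    return a[0] < b[1] and b[0] < a[1]

def partial_overlap_f1(spans_a, spans_b):
    """Partial-overlap precision/recall/F1 between two annotators' span sets."""
    matched_a = sum(any(overlaps(x, y) for y in spans_b) for x in spans_a)
    matched_b = sum(any(overlaps(y, x) for x in spans_a) for y in spans_b)
    precision = matched_a / len(spans_a) if spans_a else 0.0
    recall = matched_b / len(spans_b) if spans_b else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# example: annotator A vs annotator B on one note
print(partial_overlap_f1([(10, 25), (40, 60)], [(12, 30), (80, 95)]))
```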
Marked text spans were removed during automatic processing. Because the datasets were originally created and demonstrated on a short timeline, these notes were produced under greater time constraints and with less review. To ensure quality, four annotators identified and corrected note errors, such as inconsistent values. Finally, as the aci-bench dataset used ASR transcripts, there were cases where the note and the transcript information would conflict due to ASR errors. For example, "hydronephrosis" in the clinical note may be wrongly automatically transcribed as "high flow nephrosis". Another example is names: "Castillo" may be transcribed as "kastio". As part of this annotation, we tasked annotators with identifying these items and providing corrections. After annotation, the data was processed such that note errors were corrected and unsupported note sentences were removed. To study the effect of ASR errors, ASR transcripts were processed into two versions: (a) original and (b) ASR-corrected (ASR outputs corrected by humans). After automatic processing, encounters were again manually reviewed for additional misspelling and formatting issues.

### Note Division Definition

Motivated by a need to simplify clinical note structure, improve sparsity problems, and simplify evaluation, in this section we describe our system for segmenting a full clinical note into continuous divisions. Clinical notes are semi-structured documents with hierarchical organization. Each physician, department, and institution may have their own set of commonly used formats. However, no universal standard exists [15]. The same content can appear in multiple forms structured under different formats. This is illustrated in the subjective portions of two side-by-side notes in Figure 1. In this example, contextual medical history appears in its own sections (e.g. "chief complaint (cc)", "history of present illness (hpi)", "past medical history") in the report on the left, and is merged into one history section in the report on the right. These variations in structure pose challenges for both generation and evaluation. Specifically, if evaluating by fine-grained sections in the reference, it is possible that generated notes may include the same content in other sections. Likewise, generating with fine-grained sections would require sufficient samples from each section; however, as not every note has every type of section, the sample size becomes sparser. Finally, it is important to note that current state-of-the-art pre-trained embedding-based evaluation metrics (e.g. bertscore, bleurt, bart-score) are limited by their original trained sequence length, which is typically shorter than our full document lengths. This is illustrated in Figure 2, where for one system (Text-davinci-003) the length of the concatenated reference and system summaries will typically far exceed the typical pre-trained BERT-based 512 subtoken limit. To simplify training and evaluation, as well as maintain larger samples of data, we partition notes and group multiple sections together into four divisions, as shown in Figure 1. These divisions were inspired by the SOAP standard, where the subjective includes items taken during the verbal exam and typically written in the chief complaint, history of present illness, and past social history; the objective_exam includes content from the physical examination on the day of the visit; the objective_results includes diagnostics taken prior to the visit, including laboratory or imaging results; and the assessment_and_plan includes the doctor's diagnosis and planned tests and treatments16.

Figure 1: Note division example. The same content in a clinical note can appear under different sections. As an example, in the left note, "past medical history" contents are written in the "history" portion of the note on the right. To separate the full note target into smaller texts and minimize data sparsity problems when modeling by individual sections, notes are partitioned into separate subjective, objective_exam, objective_results, and assessment_and_plan continuous divisions. This also allows evaluation and generation at a higher granularity compared to the full note level.
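The paper's exact rule-based splitter is not reproduced here, but a minimal sketch of regular-expression-based division detection along these lines could look as follows; the header inventory and the defaulting behaviour are assumptions made purely for illustration.

```python
import re

# Illustrative mapping from common section headers to the four divisions; the real
# header inventory used by the authors' rule-based splitter may differ.
DIVISION_HEADERS = {
    "subjective": ["CHIEF COMPLAINT", "HISTORY OF PRESENT ILLNESS", "HPI",
                   "PAST MEDICAL HISTORY", "SOCIAL HISTORY", "REVIEW OF SYSTEMS"],
    "objective_exam": ["PHYSICAL EXAM", "PHYSICAL EXAMINATION"],
    "objective_results": ["RESULTS", "LABS", "IMAGING"],
    "assessment_and_plan": ["ASSESSMENT AND PLAN", "ASSESSMENT", "PLAN"],
}

def split_into_divisions(note: str) -> dict:
    """Assign each note line to one of the four contiguous divisions based on the
    most recently seen section header (lines before any header default to subjective)."""
    header_to_division = {h: d for d, hs in DIVISION_HEADERS.items() for h in hs}
    divisions = {d: [] for d in DIVISION_HEADERS}
    current = "subjective"
    for line in note.splitlines():
        header = re.sub(r"[:\s]+$", "", line).strip().upper()
        if header in header_to_division:
            current = header_to_division[header]
        divisions[current].append(line)
    return {d: "\n".join(lines).strip() for d, lines in divisions.items()}
```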
In our dataset, the divisions are contiguous and appear in the order previously introduced. Another practical benefit of partitioning the note into contiguous divisions is the greater ability to leverage pretrained sequence-to-sequence models, typically trained with shorter sequences. Furthermore, evaluation at a sub-note level allows a greater resolution for assessing performances. Footnote 16: [https://github.com/abachaa/MEDIQA-Chat-2023](https://github.com/abachaa/MEDIQA-Chat-2023) Footnote 17: [https://www.imageclef.org/2023/medical/media](https://www.imageclef.org/2023/medical/media)

### Data Statistics

The full dataset was split into train, validation, and three test sets. Each subset was represented in the splits through randomized stratified sampling. Test sets 1 and 2 correspond to the test sets from ACL ClinicalNLP MEDIQA-Chat 2023 TaskB and TaskC, respectively. Test 3 corresponds to TaskC of CLEF MEDIQA-SUM 2023. The frequency of each data split is shown in Table 3.

Figure 2: BERT subtoken lengths of concatenated gold/system summaries (test1 Text-davinci-003 system) for the doctor-patient dialogue to clinical note generation task. As embedding-based models require encoding the concatenated reference and hypothesis, on this dataset it would be difficult to fairly evaluate the corpus using current pretrained BERT models, which have a 512 subtoken limit.

## 3 Data Records

The aci-bench Corpus can be found at [LINK TO BE UPDATED]. Code for pre-processing, evaluation, and running baselines can be found in [LINK TO BE UPDATED].

### Folder and naming organization

Data used in the ACL-clinicalnlp MEDIQA-CHAT and CLEF MEDIQASUM challenges are located in the _challenge_data_ folder, whereas ASR experiment data is located in the _src_experiment_data_ folder. Each data split has two associated files: a metadata and a data file (further described below). Train, validation, test1, test2, and test3 data files are prefixed with the following names: train, valid, clinicalnlp_taskB_test1, clinicalnlp_taskC_test2, and clef_taskC_test3, respectively. Source experiment data files offer subset-specific versions of train/validation/test in which the transcript may be the alternate forms of ASR or ASR-corrected versions. The naming convention prefix of these follows the pattern {split}_{subset}_{transcript-version}. Therefore, for example, train_virtscribe_humantrans.csv will give the training data from the **virtscribe** subset with the original human transcription version, whereas train_virtscribe_asr.csv will give the ASR transcription version.

### Metadata files (*_metadata.csv)

Metadata files include columns for the dataset name (e.g.
**virtassist**, **virtscribe**, **aci**), _id_, _encounter_id_, _doctor_name_, _patient_firstname_, _patient_familyname_, _gender_, _chief complaint (cc)_, and _secondary complaints (2nd_complaints)_. Both _id_ and _encounter_id_ can be used to identify a unique encounter. The _encounter_id_ were the identifiers used for the MEDIQA-CHAT and MEDIQASUM 2023 competitions. The _id_ unique identifier will also denote a specific subset. ### Transcript/Note files (".csv) In the source-target data files, transcript and note text are given along with the dataset name and _id_ or _encounter_id_. This file may be joined with the metadata files using either _id_ or _encounter_id_. _encounter_id_ should be used for challenge data, whereas the _id_ should be used for the source experiment data. ## 4 Technical Validation ### Content validation After dataset creation and cleaning, an additional content validation step was conducted to ensure medical soundness. For each encounter, medical annotators were tasked with reviewing each symptom, test, diagnosis and treatment from the encounter. In cases where the medical annotation specialist was unsure of certain facts (e.g. can drug X be prescribed at the same time as drug Y?), the encounter undergoes two possible additional reviews. Firstly, if the phenomenon in question can be searched identified from a +3M store of propriety clinical notes (which we will refer to as the consult dataset)4, we deemed the information credible. Alternatively, if the information is not something that could be identified by the first approach, the question is escalated \begin{table} \begin{tabular}{l l l l l l} \hline & train & valid & test1 & test2 & test3 \\ \hline number encounters & 67 & 20 & 40 & 40 & 40 \\ \hline dialogue & & & & & \\ avg number turns & 56 & 53 & 52 & 56 & 58 \\ avg length (tok) & 1301 & 1221 & 1231 & 1382 & 1334 \\ \hline note & & & & & \\ avg length (tok) & 483 & 492 & 476 & 500 & 505 \\ avg length (sentences) & 48 & 49 & 47 & 50 & 50 \\ \# subjective & 67 & 20 & 40 & 40 & 40 \\ \# objective\_exam & 64 & 19 & 40 & 39 & 39 \\ \# objective\_results & 53 & 18 & 32 & 29 & 27 \\ \# assessment\_and\_plan & 67 & 20 & 40 & 40 & 40 \\ \hline subset & & & & & \\ \# **virtassist** & 20 & 5 & 10 & 10 & 10 \\ \# **virtscribe** & 12 & 4 & 8 & 8 & 8 \\ \# **aci** & 35 & 11 & 22 & 22 & 22 \\ \hline \end{tabular} \end{table} Table 3: Corpus statistics to a clinical expert annotator. Encounters with unexplainable or severe logical or medical problems identified by a medical annotators were removed (e.g. using a medication for urinary tract infection for upper respiratory infection). ### Comparison with real data To study differences between the aci-bench dataset and a set of real encounters, we conduct statistical comparison with 163 randomly chosen family medicine clinical encounters (including pairs human transcriptions and corresponding clinical notes) with in-depth alignment annotation, from the consult dataset. Tables 4 and 5 show the statistical comparison between the 20 encounters in the validation set (aci-validation) and the consult encounters. In general, the aci-bench dataset had on average shorter notes, at 492 tokens versus 683 tokens for the consult dataset. Except for the objective_results division, every division was longer in the consult data (Table 4). The aci-bench dataset also exhibits shorter dialogue lengths, by approximately 100 tokens and 20 sentences; as well a shorter notes by approximately 100 tokens (Table 5). 
One reason for the shorter note length is our removal of unsupported note text. We additionally annotated for alignments of data between the source and target on the validation set (20 encounters) and consult set, similar to that of previous work17. This annotation marks associations between note sentences and their corresponding source transcript sentences. Unmarked note sentences indicate that a sentence may be purely structural (e.g. section header) or may include unsupported content. Likewise, unmarked transcript sentences may indicate that the content is superfluous. Comparing the portions of annotated alignments in separate corpora gives indications of corpora similarity with respect to relative content transfer. Other useful metrics which provide measures of alignment/generation difficulty include : (a) the fraction of alignment crossings (whether content appear monotonically versus "out-of-order"/"crossing")18), (b) the similarity of corresponding text segments, and (c) percentage of transcript speech modes. The results of these comparisons are shown in Table 5. Footnote 5: DICATION: besides punctuation and formatting word-for-word copy-paste statements from the transcript, QA: question-answer conversation adjacency pairs, STATEMENT: conversation statements, STATEMENT2SCRIBE: directed instructions or content to a external scribe. Labeled alignment annotations show that approximately the same fractions of dialogue and note sentences were labeled (0.34 and 0.49 transcript, 0.84 and 0.95 note for the consult and aci-bench corpus respectively); with a high 0.95 fraction for the aci-bench corpus, as designed by the removal of unsupported text. With shorter transcripts (1203 tokens in aci-bench vs 1505 tokens in the consult set), the aci-bench corpus also had a 15% more aligned transcript sentences. The text similarity (Jaccard unigram) of alignments were similar (0.15 and 0.12) as was the fraction of crossing annotations (0.67 and 0.95) for the consult and aci-bench corpus respectively; though the dialogue-note document similarity was higher in the aci-bench corpus. The percentage of note sentences annotated with different labels6 show across the board lower percentages in the consult data. This is explainable as the transcript length and thus the percentage of note sentences annotated with a certain label will decrease. However, it is interesting to show that the aci-bench corpus had a higher percentage of note sentences coming from question-answer paired transcript sentences and conversation statements rather than dictation/statement2scribe. For example while in the consult dataset, important QA makes up twice as much transcript sentences as in dictation (15% and 8%), in the aci-bench dataset there are ten times more QA labeled sentences than dictation (43% vs 4%). Meanwhile in the consult dataset, transcript sentences identified with an alignment using the "statement" tag was about three times that of dictation, however this was about seven times in the aci-bench corpus. Together, this data suggests that the aci-bench corpus may be slightly less challenging in terms of documents lengths and has a skew towards question-answer and statements information content; though the magnitudes in lengths and similarity are comparable. 
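For reference, the "Jaccard unigram" text similarity reported in Table 5 can be computed in a few lines; the tokenization details (lower-casing, whitespace splitting) are assumptions, since the paper does not spell them out.

```python
def jaccard_unigram(text_a: str, text_b: str) -> float:
    """Jaccard similarity over lower-cased unigram sets (simple whitespace tokenization)."""
    a, b = set(text_a.lower().split()), set(text_b.lower().split())
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# e.g. similarity between an aligned transcript sentence and note sentence
print(jaccard_unigram("my right shoulder has been hurting",
                      "the patient reports right shoulder pain"))
```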
Footnote 6: DICATION: besides punctuation and formatting word-for-word copy-paste statements from the transcript, QA: question-answer conversation adjacency pairs, STATEMENT: conversation statements, STATEMENT2SCRIBE: directed instructions or content to a external scribe. ### Baseline experiments In this section, we present our baseline experiments designed to benchmark the aci-bench Corpus. These experiments encompass various note-generation tasks and incorporate state-of-the-art note-generation techniques. To assess the robustness \begin{table} \begin{tabular}{l|l|l|l|l|l} \hline & subjective & objective\_exam & objective\_results & assessment\_and\_plan & full \\ \hline **consult** & & & & & \\ \hline avg length (tok) & 393 & 149 & 19 & 122 & 683 \\ avg length (sentences) & 35 & 19 & 2 & 11 & 66 \\ \hline **aci-validation** & & & & & \\ \hline avg length (tok) & 229 & 48 & 23 & 192 & 492 \\ avg length (sentences) & 24 & 8 & 4 & 16 & 49 \\ \hline \end{tabular} \end{table} Table 4: Data statistic comparing notes from aci-validation with a sample of real doctor-patient. of note-generation techniques, we also examine the impact of different clinical doctor-patient dialogue transcript generation methods with and without human correction on the quality of automatically generated clinical notes derived from these transcripts. #### 4.3.1 Note generation models The experiments on note-generation models to benchmark the aci-bench Corpus are listed below: **Transcript-copy-and-paste**: Previous research finds taking the longest sentence [19] as dialogue summarization is a good baseline. In the spirit of this approach, we adopt several variations to generate the clinical note: (1) the longest speaker's turn, (2) the longest doctor's turn, (3) the first two and the last ten speaker's turns, (4) the first two and the last ten doctors turns and (5) the entire transcript. **Retrieval-based**: Borrowing from retrieval-based response generation [20], we pose a simple baseline that retrieves a relevant note in the training corpus rather than generating new text. To generate a clinical note for a new transcript, we employ transcript UMLS concept set similarity to retrieve the most similar transcript from the train set. The note that corresponds to this transcript in the training set is selected as the summarization for the new transcript, based on the assumption that the semantic overlap between the UMLS concepts in the two transcripts is a reliable indicator of their content similarity. Following the same manner, we adopt a similar retrieval-based method on the document embedding similarity from the spaCy English natural language process pipeline ([https://spacy.io/](https://spacy.io/)). **BART-based**: We employ the SOTA transformer model, bidirectional autoregressive transformer (BART) [21]. We also include its two variants: (1) a version with continued pre-training on PubMed abstract [22], aimed at learning domain-specific language and knowledge, and (2) a version fine-tuned on the SAMSum corpus [23], designed to enhance the model's performance on conversational summarization tasks. For all BART-based models, we use the BART-Large version. It is important to note that although BART and BioBART have the same model structure, they possess distinct tokenizers and vocabulary sizes. These differences play a significant role in determining their respective performance on the aci-bench corpus. The corresponding fine-tuning parameters can be found in the Appendix. 
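A minimal sketch of the retrieval-based baseline described above follows; the concept extractor is a stand-in (the paper uses UMLS concept sets extracted with QuickUMLS), and the toy training pairs are invented for illustration only.

```python
def concept_set(text: str) -> set:
    """Placeholder for UMLS concept extraction; lower-cased unigrams are used here so
    the sketch runs without a UMLS installation."""
    return set(text.lower().split())

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a or b) else 1.0

def retrieve_note(new_transcript: str, train_pairs: list) -> str:
    """Return the training note whose transcript has the most similar concept set."""
    new_concepts = concept_set(new_transcript)
    best = max(train_pairs, key=lambda p: jaccard(concept_set(p["transcript"]), new_concepts))
    return best["note"]

train_pairs = [
    {"transcript": "doctor: any chest pain? patient: yes, with exertion", "note": "CC: chest pain ..."},
    {"transcript": "doctor: how is the shoulder? patient: still sore",    "note": "CC: shoulder pain ..."},
]
print(retrieve_note("patient reports chest pain when climbing stairs", train_pairs))
```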
**LED-based**: We leverage the Longformer-Encoder-Decoder (LED) architecture [24], which incorporates an attention mechanism that can scale up to longer sentences. LED-based models have the same limit of 16K tokens. Because the transcript is long, LED overcomes the sentence length limit from BART. We also include its variant, which is finetuned on the Pubmed dataset [25], to enhance the model's summarization ability in the biomedical context. The corresponding fine-tuning parameters can be found in the Appendix. \begin{table} \begin{tabular}{l|l|l} \hline & consult & aci-corpus \\ \hline dialogue & & \\ avg length (no speaker tokens) (tok) & 1505 & 1203 \\ avg length (sentences) & 141 & 80 \\ note & & \\ avg length (tok) & 683 & 492 \\ avg length (sentences) & 66 & 49 \\ \hline annotation & & \\ fraction note sentences aligned & 0.84 & 0.95 \\ fraction transcript sentences aligned & 0.34 & 0.49 \\ fraction crossing annotations & 0.67 & 0.75 \\ avg alignment text similarity & 0.15 & 0.12 \\ avg encounter dialogue-note text similarity & 0.26 & 0.31 \\ \hline \(\%\) note sentences with labels & & \\ DICTATION & 8 & 4 \\ QA & 15 & 43 \\ STATEMENT & 23 & 29 \\ STATEMENT2SCRIBE & 17 & 7 \\ \hline \end{tabular} \end{table} Table 5: Alignment statistic comparison of aci-validation with a sample of real doctor-patient. **OpenAI models**: We experimented with the latest OpenAI models and APIs6: (i) Text-davinci-002, (ii) Text-davinci-003, (iii) ChatGPT (gpt-3.5-turbo), and (iv) GPT-4. The first three models have the same limit of 4,097 tokens, shared between the prompt and the output/summary, whereas GPT-4 allows 32k tokens. We used the following prompt: Footnote 6: [https://platform.openai.com/docs/models](https://platform.openai.com/docs/models) * Prompt: "summarize the conversation to generate a clinical note with four sections: HISTORY OF PRESENT ILLNESS, PHYSICAL EXAM, RESULTS, ASSESSMENT AND PLAN. The conversation is:" To allow adequate division detection, we added some light rule-based post-processing, adding endlines before and after for each section header. This post-processing described in Appendix Table 13. #### 4.3.2 Full-note- vs division-based note-generation approaches In the cases of the fine-tuned pre-trained models, we investigated note generation with two overall approaches: full note generation versus division-based generation and concatenation. The first approach generates a complete note from the transcript using a single model or approach. The latter approach is motivated by the long input and output lengths of our data - which may exceed that of those pre-trained models are typically trained for. To this end, full notes were divided into the subjective, objective_exam, objective_results, and assessment_and_plan divisions using a rule-based regular-expression section detection. As the notes were followed a handful of regular patterns, this section detection was highly performant. In cases where certain sections were missing, an _EMPTY_ flag was used as the output. Each division generation model was separately fine-tuned. The final note was created by concatenating the divisions. #### 4.3.3 Automatic Evaluation Metrics We employ a variety of widely-used automatic evaluation metrics to evaluate performances in different perspectives. Specifically, we measure at least one lexical n-gram metric, an embedding-based similarity metric, a learned metric, and finally an information extraction metric. 
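Before detailing the individual metrics, a minimal sketch of the zero-shot OpenAI prompting setup described above is given here; it assumes the legacy openai (<1.0) Python client, and the model name and decoding parameters are illustrative rather than the paper's exact configuration.

```python
import os
import openai  # legacy (<1.0) client interface assumed

openai.api_key = os.environ["OPENAI_API_KEY"]

PROMPT = ("summarize the conversation to generate a clinical note with four sections: "
          "HISTORY OF PRESENT ILLNESS, PHYSICAL EXAM, RESULTS, ASSESSMENT AND PLAN. "
          "The conversation is:")

def generate_note(transcript: str, model: str = "gpt-4") -> str:
    """Send the fixed prompt plus the transcript and return the generated note text."""
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": f"{PROMPT}\n{transcript}"}],
        temperature=0,  # decoding settings are illustrative, not the paper's
    )
    return response["choices"][0]["message"]["content"]
```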
We evaluate the note generation performance both in the full note and in each division. For the ngram-based lexical metric, we compute ROUGE26 (1/2/-L), which computes unigram, bigram, and the longest common subsequence matches between reference and candidate clinical notes. For an embedding-based metric, we applied BERTScore27 which greedily matches contextual token embeddings from pairwise cosine similarity. BERTScore efficiently captures synonym and context information. For a model-based learned metric, we used BLEURT28, which is trained for scoring candidate-reference similarity. Additionally, we incorporate a medical concept- based evaluation metric (**medcon**) to gauge the accuracy and consistency of clinical concepts. This metric calculates the F1-score to determine the similarity between the Unified Medical Language System (UMLS) concept sets in both candidate and reference clinical notes.7 The extraction of UMLS concepts within clinical notes is performed using a string match algorithm applied to the UMLS concept database through the QuickUMLS package30. To ensure clinical relevance, we restrict the **medcon** metric to specific UMLS semantic groups, designated as _Anatomy_, _Chemicals &Drugs_, _Device_, _Disorders_, _Genes & _Molecular Sequences_, _Phenomena_ and _Physiology_. To consolidate the various evaluation metrics, we first take the average of the three ROUGE submetrics as ROUGE, and the average of ROUGE, BERTScore, BLEURT, and **medcon** scores as the final evaluation score. Because BERTScore and BLEURT are limited by their pre-trained embedding length, we only use these evaluations for the division-based evaluation. Footnote 7: This is similar to the CheXpert evaluation for radiology summarization however our concepts are not restricted to 14 predetermined categories, and do not include weightings or assertion status.29 Footnote 8: TaskB in [https://github.com/abacha/MEDIQA-Chat-2023](https://github.com/abacha/MEDIQA-Chat-2023) #### Results We fine-tune the models on the train set and select the best trained model based on evaluation on the validation set. Performances were evaluated on three test sets. Test sets 1 and 2 correspond to the test sets from ACL ClinicalNLP MEDIQA-Chat 2023 TaskB full-note generation and TaskC dialogue generation, respectively. Test 3 corresponds to CLEF MEDIQA-SUM 2023 Subtask C full-note generation. Our test 1 full note evaluation results can be found in Tables 6. Per-division subjective, objective_exam, objective_results, and assessment_and_plan results for test 1 are accounted for in Tables 7, 8, 9, and 10. In the main body of this paper, we discuss the results of test 1 which was used for our first full note generation task challenge8. We will first provide an overview of the model performance in both full-note and division-based evaluations. We will then describe each model type's performance. For reference, we provide the results of test 2 and test 3 in the Appendix. Footnote 8: TaskB in [https://github.com/abacha/MEDIQA-Chat-2023](https://github.com/abacha/MEDIQA-Chat-2023) In the full-note evaluation, the **BART+FT\({}_{\text{SAMSum}}\)** (Division) model achieved the highest ROUGE scores, with 53.46 for ROUGE-1, 25.08 for ROUGE-2 and 48.62 for ROUGE-L. This is because when **BART+FT\({}_{\text{SAMSum}}\)** (Division) model was fine-tuned on our **aci-bench** training set, it learned more specific clinical jargon in the **aci-bench** corpus, such as accurate subsection headers ("CHIEF COMPLAINT", "HISTORY OF PRESENT ILLNESS",...) 
and physical examination results ("-Monitoring of the heart: No murmurs, gallops..." ). On the contrary, GPT-4 demonstrated the highest **medcon** evaluation score of 57.78, while achieving the second to third-best performance in ROUGE scores, with 51.76 for ROUGE-1, 22.58 for ROUGE-2 and 45.97 for ROUGE-L. This strong performance can be attributed to the model's size, extensive pretraining, large context window, and versatility. GPT-4 captured many relevant clinical facts and thus had the highest **medcon**. However, since it was not specifically fine-tuned for the aci-bench corpus clinical note format, it exhibited slightly inferior performance in capturing the structure of aci-bench clinical notes. An example of a note generated by the different models can be found in Appendix Table 14. Interestingly, the retrieval-based baselines showed very competitive ROUGE performance out-of-the-box, with ROUGE-L of 40.47 F1 and 38.20 F1 for the UMLS and sentence versions, respectively. Furthermore, the simple transcript copy-and-paste baselines produced high starting points that out-performed the LED-based models. For example, simply copying the transcript achieved a 30.61 F1 ROUGE-L and 55.65 F1 **medcon** score, whereas the fine-tuned division-based LED model achieved 29.80 F1 and 32.67 F1. In division-based evaluations, we found that different models achieved the highest average score across different note divisions: BART+FTSAMSum (Division) scored 51.08 in the subjective division (Table 7), while Text-davinci-003 reached 55.30, 48.90 and 46.19 in the objective_exam (Table 8), objective_results (Table 9), and assessment_and_plan (Table 10) divisions, respectively. These results indicate that all three models can be good candidates for the note-generation task. However, since BART+FTSAMSum (Division) required fine-tuning and Text-davinci-003 did not, the latter two models demonstrated greater potential. A few additional examples for Text-davinci-003 could potentially enhance their performance, by enabling the models to learn specific clinical jargon in each division. In comparing the full-note and division-based note-generation approaches, our experiments demonstrated that, for our pretrained BART- and LED-based models, division-based note-generation methods resulted in significant improvements over full-note-generation methods. These improvements ranged from 1 to 14 point increases in both ROUGE and **medcon** evaluations for the full-note-based evaluation. This finding implies that breaking down a complex summarization problem into smaller divisions effectively captures more critical information.
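To make the scoring concrete, the sketch below reproduces the aggregation rule from the evaluation-metrics section: a set-based concept F1 standing in for **medcon** (the paper extracts UMLS concepts with QuickUMLS; the extractor here is a stub the reader can replace), followed by the final average of ROUGE, BERTScore, BLEURT and **medcon**. The example numbers fed to the aggregation are the BART+FT_SAMSum (Division) subjective scores from Table 7.

```python
def concept_f1(reference: str, candidate: str,
               extract=lambda t: set(t.lower().split())) -> float:
    """Set-based F1 between extracted concept sets (medcon-style); `extract` is a stub."""
    ref, cand = extract(reference), extract(candidate)
    if not ref or not cand:
        return 0.0
    tp = len(ref & cand)
    precision, recall = tp / len(cand), tp / len(ref)
    return 2 * precision * recall / (precision + recall) if tp else 0.0

def final_score(rouge1, rouge2, rougeL, bertscore, bleurt, medcon):
    """Aggregate as described above: mean of the ROUGE submetrics, then mean of the four metrics."""
    rouge = (rouge1 + rouge2 + rougeL) / 3
    return (rouge + bertscore + bleurt + medcon) / 4

print(final_score(52.44, 30.44, 35.83, 72.41, 44.51, 47.84))  # ~51.08, cf. Table 7
```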
For division-based evaluations, the increase is not obvious \begin{table} \begin{tabular}{l c c c c} \hline **Model** & **ROUGE-1** & **ROUGE-2** & **ROUGE-L** & **medcon** \\ \hline **Transcript-copy-and-paste** & & & & \\ longest speaker turn & 27.84 & 9.32 & 23.44 & 32.37 \\ longest doctor turn & 27.47 & 9.23 & 23.20 & 32.33 \\ 12 speaker turns & 33.16 & 10.60 & 30.01 & 39.68 \\ 12 doctor turns & 35.88 & 12.44 & 32.72 & 47.79 \\ transcript & 32.84 & 12.53 & 30.61 & 55.65 \\ \hline **Retrieval-based** & & & & \\ train\({}_{\text{ UMLS}}\) & 43.87 & 17.55 & 40.47 & 33.30 \\ train\({}_{\text{sent}}\) & 41.59 & 15.50 & 38.20 & 26.17 \\ \hline **BART-based** & & & & \\ BART & 41.76 & 19.20 & 34.70 & 43.38 \\ BART (Division) & 51.56 & 24.06 & 45.92 & 47.23 \\ BART+FTSAMSum & 40.87 & 18.96 & 34.60 & 41.55 \\ BART+FTSAMSum (Division) & **53.46** & **25.08** & **48.62** & 48.23 \\ BioBART & 39.09 & 17.24 & 33.19 & 42.82 \\ BioBART (Division) & 49.53 & 22.47 & 44.92 & 43.06 \\ \hline **LED-based** & & & & \\ LED & 28.37 & 5.52 & 22.78 & 30.44 \\ LED (Division) & 34.15 & 8.01 & 29.80 & 32.67 \\ LED+FTPubMed & 27.19 & 5.30 & 21.80 & 27.44 \\ LED+FTPubMed (Division) & 30.46 & 6.93 & 26.66 & 32.34 \\ \hline **OpenAI (wo FT)** & & & & \\ Text-Davinci-002 & 41.08 & 17.27 & 37.46 & 47.39 \\ Text-Davinci-003 & 47.07 & 22.08 & 43.11 & 57.16 \\ ChatGPT & 47.44 & 19.01 & 42.47 & 55.84 \\ GPT-4 & 51.76 & 22.58 & 45.97 & **57.78** \\ \hline \end{tabular} \end{table} Table 6: Results of the summarization models evaluated at the full note level, test set 1. Simple retrieval-based methods provided strong baselines wih better out-of-the-box performances than LED models and full-note BART models. In general for BART and LED fine-tuned models, division-based generation worked better. OpenAI models with simple prompts were shown to give competitive outputs despite no additional fine-tuning or dynamic prompting. for the subjective divisions, but around 20 percent in the average score for objective_exam, objective_results and assessment_and_plan divisions. This can be attributed to the generation of the latter three divisions at the end of clinical notes, which often exceeds the word length of typical summarization tasks that BART-based and LED-based models are used for. Additionally, since some notes in the training set lack these divisions, the note-generation models struggle to learn the division structure during fine-tuning from the full note. As the division of clinical notes is identified by a rule-based division header extraction method, even when the information from a specific division is generated as a few sentences, the corresponding division information cannot be detected by the evaluation program. 
\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{6}{c}{**Evaluation score on the subjective division**} \\ \cline{2-9} **Model** & **ROUGE-1** & **ROUGE-2** & **ROUGE-L** & **BERTScore** & **BLEURT** & **medCON** & **Average** \\ \hline **Retrieval-based** & & & & & & & \\ train\({}_{\text{UMLS}}\) & 41.70 & 23.45 & 31.64 & 72.10 & 39.01 & 23.04 & 41.60 \\ train\({}_{\text{sent}}\) & 41.12 & 20.62 & 29.20 & 70.78 & 37.94 & 18.86 & 39.47 \\ \hline **BART-based** & & & & & & & \\ BART & 48.19 & 25.81 & 30.13 & 68.93 & 43.83 & 44.41 & 47.97 \\ BART (Division) & 47.25 & 26.05 & 31.21 & 70.05 & 43.55 & 44.20 & 48.16 \\ BART+FT\({}_{\text{SAMSum}}\) & 46.33 & 25.52 & 29.88 & 68.68 & **45.01** & 43.21 & 47.70 \\ BART+FT\({}_{\text{SAMSum}}\) (Division) & **52.44** & **30.44** & **35.83** & **72.41** & 44.51 & **47.84** & **51.08** \\ BioBART & 45.79 & 23.65 & 28.96 & 68.49 & 41.09 & 41.10 & 45.87 \\ BioBART (Division) & 46.29 & 25.99 & 32.43 & 70.30 & 42.99 & 41.14 & 47.33 \\ \hline **LED-based** & & & & & & & \\ LED & 24.81 & 5.29 & 11.00 & 55.60 & 30.68 & 20.19 & 30.04 \\ LED (Division) & 31.27 & 8.31 & 15.99 & 56.94 & 25.40 & 24.03 & 31.22 \\ LED+FT\({}_{\text{PubMed}}\) & 23.48 & 4.72 & 10.49 & 54.46 & 20.32 & 17.91 & 26.40 \\ LED+FT\({}_{\text{PubMed}}\) (Division) & 26.03 & 6.17 & 12.93 & 56.41 & 19.19 & 20.46 & 27.78 \\ \hline **OpenAI (wo FT)** & & & & & & & \\ Text-Davinci-002 & 29.73 & 12.38 & 20.13 & 58.98 & 36.70 & 32.47 & 37.22 \\ Text-Davinci-003 & 33.29 & 15.24 & 23.76 & 60.63 & 38.06 & 36.14 & 39.73 \\ ChatGPT & 32.70 & 14.05 & 22.69 & 65.14 & 39.48 & 38.21 & 41.49 \\ GPT-4 & 41.20 & 19.02 & 26.56 & 63.34 & 43.18 & 44.25 & 44.93 \\ \hline \hline \end{tabular} \end{table} Table 7: Results of the summarization models on the subjective division, test set 1. BART-based models generated at both full note and division levels had similar levels of performances, which were in general better than the other model classes. As in the full note evaluation, retrieval-based methods provided competitive baselines. 
\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{6}{c}{**Evaluation score on the objective\_results division**} \\ \cline{2-7} **Model** & **ROUGE-1** & **ROUGE-2** & **ROUGE-L** & **BERTScore** & **BLEURT** & **medcon** & **Average** \\ \hline **Retrieval-based** & & & & & & & \\ train\({}_{\text{UMLS}}\) & 43.43 & 25.77 & 36.74 & **74.63** & 40.96 & 24.63 & 43.88 \\ train\({}_{\text{sent}}\) & 37.02 & 19.58 & 31.20 & 71.47 & 35.83 & 14.52 & 37.77 \\ \hline **BART-based** & & & & & & & \\ BART & 0.56 & 0.36 & 0.56 & 40.25 & 10.66 & 0.00 & 12.85 \\ BART (Division) & 49.77 & 31.63 & 38.92 & 73.75 & 44.19 & 34.80 & 48.21 \\ BART+FT\({}_{\text{SAMSum}}\) & 6.22 & 3.74 & 5.21 & 44.33 & 14.82 & 4.14 & 17.09 \\ BART+FT\({}_{\text{SAMSum}}\) (Division) & 47.73 & 29.51 & 36.98 & 73.41 & 42.86 & 35.91 & 47.56 \\ BioBART & 2.57 & 1.04 & 1.68 & 42.10 & 12.38 & 1.22 & 14.36 \\ BioBART (Division) & 42.51 & 26.15 & 32.19 & 71.57 & 42.18 & 29.55 & 44.23 \\ \hline **LED-based** & & & & & & & \\ LED & 0.00 & 0.00 & 0.00 & 0.00 & 14.87 & 0.00 & 3.72 \\ LED (Division) & 27.03 & 7.96 & 16.88 & 54.48 & 14.47 & 18.84 & 26.27 \\ LED+FT\({}_{\text{PubMed}}\) & 0.00 & 0.00 & 0.00 & 0.00 & 14.87 & 0.00 & 3.72 \\ LED+FT\({}_{\text{PubMed}}\) (Division) & 20.24 & 6.30 & 12.14 & 54.13 & 12.67 & 18.07 & 24.44 \\ \hline **OpenAI (wo FT)** & & & & & & & \\ Text-Davinci-002 & 43.68 & 22.31 & 35.03 & 68.25 & 45.68 & 35.41 & 45.75 \\ Text-Davinci-003 & **54.17** & **32.42** & **44.54** & 73.40 & **51.29** & **52.79** & **55.30** \\ ChatGPT & 49.44 & 27.29 & 38.60 & 71.39 & 49.39 & 48.95 & 52.04 \\ GPT-4 & 50.11 & 28.20 & 40.43 & 71.79 & 51.11 & 42.59 & 51.27 \\ \hline \hline \end{tabular} \end{table} Table 8: Results of the summarization models on the objective\_exam division, test set 1. BART and LED full note generation models suffered a significant drop at the objective\_exam. This may be attributable to the lower amounts of content required to be generated, the appearance of text later in the sequence, as well as the higher variety of structures. The OpenAI were in general better performant with BART division-based models as next best. 
\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{6}{c}{**Evaluation score on the objective\_results division**} \\ \cline{2-7} **Model** & **ROUGE-1** & **ROUGE-2** & **ROUGE-L** & **BERTScore** & **BLEURT** & **medcon** & **Average** \\ \hline **Retrieval-based** & & & & & & & \\ train\({}_{\text{UMLS}}\) & 30.26 & 14.89 & 29.87 & 66.24 & 37.25 & 8.91 & 34.35 \\ train\({}_{\text{sent}}\) & 40.52 & 18.21 & 38.87 & **73.33** & 45.79 & 12.45 & 41.03 \\ \hline **BART-based** & & & & & & & \\ BART & 0.00 & 0.00 & 0.00 & 0.00 & 5.45 & 0.00 & 1.36 \\ BART (Division) & 30.48 & 19.16 & 27.80 & 66.64 & 43.07 & 21.56 & 39.27 \\ BART+FT\({}_{\text{SAMSum}}\) & 20.79 & 0.46 & 20.67 & 54.54 & 28.32 & 0.77 & 24.40 \\ BART+FT\({}_{\text{SAMSum}}\) (Division) & 29.45 & 18.01 & 26.63 & 66.43 & 40.75 & 20.17 & 38.01 \\ BioBART (Division) & 17.50 & 0.00 & 17.50 & 52.44 & 25.33 & 0.00 & 22.36 \\ BioBART (Division) & 35.38 & 14.33 & 32.79 & 68.40 & 47.63 & 15.69 & 39.81 \\ \hline **LED-based** & & & & & & & \\ LED & 0.00 & 0.00 & 0.00 & 0.00 & 5.45 & 0.00 & 1.36 \\ LED (Division) & 14.04 & 4.97 & 11.08 & 48.86 & 9.61 & 7.86 & 19.09 \\ LED+FT\({}_{\text{PubMed}}\) (Division) & 0.00 & 0.00 & 0.00 & 0.00 & 5.45 & 0.00 & 1.36 \\ LED+FT\({}_{\text{PubMed}}\) (Division) & 10.48 & 3.64 & 8.32 & 42.43 & 7.13 & 8.86 & 16.48 \\ \hline **OpenAI (wo FT)** & & & & & & & \\ Text-Davinci-002 & 41.48 & 20.12 & 39.95 & 70.61 & 50.79 & 24.42 & 44.92 \\ Text-Davinci-003 & **44.92** & **25.21** & **43.84** & 72.35 & **55.87** & **29.37** & **48.90** \\ ChatGPT & 34.50 & 17.75 & 30.84 & 66.68 & 48.51 & 22.28 & 41.29 \\ GPT-4 & 37.65 & 19.94 & 35.73 & 68.33 & 48.50 & 26.73 & 43.67 \\ \hline \hline \end{tabular} \end{table} Table 9: Results of the summarization models on the objective\_results division, test set 1. Similar to objective\_exam, BART and LED full note generation models suffered a significant drop at the objective\_results division. This may be attributable to the higher sparsity of this division, low amounts of content (sometimes only 2-3 sentences), and the appearance of text later in the sequence. The OpenAI were in general better performant with BART division-based models as next best. 
\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{6}{c}{**Evaluation score on the assessment\_and\_plan division**} \\ \cline{2-7} **Model** & **ROUGE-1** & **ROUGE-2** & **ROUGE-L** & **BERTScore** & **BLEURT** & **medcon** & **Average** \\ \hline **Retrieval-based** & & & & & & & \\ train\_UMLS & **44.59** & **21.50** & **29.66** & **70.39** & 44.77 & 24.70 & 42.94 \\ train\_sent & 41.28 & 19.73 & 28.02 & 69.48 & 43.18 & 18.79 & 40.28 \\ \hline **BART-based** & & & & & & & \\ BART & 0.00 & 0.00 & 0.00 & 0.00 & 29.05 & 0.00 & 7.26 \\ BART (Division) & 43.31 & 20.59 & 26.55 & 67.49 & 40.99 & 32.30 & 42.73 \\ BART+FT\_SAMSum & 1.52 & 0.49 & 0.87 & 35.38 & 19.79 & 1.00 & 14.28 \\ BART+FT\_SAMSum (Division) & 43.89 & 21.37 & 27.56 & 68.09 & 41.96 & 31.33 & 43.08 \\ BioBART & 0.00 & 0.00 & 0.00 & 0.00 & 29.05 & 0.00 & 7.26 \\ BioBART (Division) & 42.44 & 19.44 & 26.42 & 67.57 & 43.88 & 31.12 & 43.00 \\ \hline **LED-based** & & & & & & & \\ LED & 0.00 & 0.00 & 0.00 & 0.00 & 29.05 & 0.00 & 7.26 \\ LED (Division) & 28.23 & 6.13 & 12.44 & 55.75 & 27.78 & 21.94 & 30.27 \\ LED+FTpubMed & 0.00 & 0.00 & 0.00 & 0.00 & 29.05 & 0.00 & 7.26 \\ LED+FTpubMed (Division) & 28.00 & 5.99 & 13.07 & 55.68 & 20.95 & 25.01 & 29.33 \\ \hline **OpenAI (wo FT)** & & & & & & & \\ Text-Davinci-002 & 30.90 & 12.27 & 21.44 & 61.01 & 44.98 & 35.04 & 40.64 \\ Text-Davinci-003 & 35.41 & 14.86 & 25.38 & 63.97 & 49.18 & **46.40** & **46.19** \\ ChatGPT & 36.43 & 12.50 & 23.32 & 63.56 & 48.21 & 43.71 & 44.89 \\ GPT-4 & 38.16 & 14.12 & 24.90 & 64.26 & **49.41** & 42.36 & 45.44 \\ \hline \hline \end{tabular} \end{table} Table 10: Results of the summarization models on the assessment_and_plan division, test set 1. Similar to objective_exam and objective_results, BART and LED full note generation models suffered a significant drop at the objective_results division. This may be attributable to the appearance of text later in the sequence. The OpenAI were in general better performant with BART division-based models as next best. Our observations on the performance for each type of model are summarized below: **Transcript-copy-and-paste**: models are only evaluated in the full note. It demonstrated suboptimal performance, which is around 17 points less than the best ROUGE scores. This is primarily because transcripts from doctor-patient dialogues serve to facilitate doctor-patient interactions with questions, answers, and explanations related to various health phenomena. In contrast, clinical notes, which are created by and intended for healthcare professionals, generally follow the SOAP format to convey the information concisely and accurately. Therefore, transcripts and notes can differ significantly in terms of terminology, degree of formality, relevance to clinical issues, and the organization of clinical concepts. On the other hand, the original transcript often achieves the fourth highest score in **medcon** evaluation at 55.65, owing to its ability to capture relevant UMLS concepts explicitly mentioned within the transcript. **Retrieval-based**: models have the best BERTScore in objective_exam, objective_results and assessment_and_plan divisions in test 1, with around 1 to 5 points increase over the best BART-based and OpenAI models. They also have shown sometimes promising results with the second and first average scores in objective_results and assessment_and_plan divisions from test 3. 
This is because clinical notes with similar transcripts tend to have more similar clinical notes, especially when objective_results sections use standard phrasing and templates, or in scenarios where patients share common symptoms and health examinations across different medical problems. However, their performance on the **medcon** evaluation metric is often poor because of the less accurate patient-specific medical conditions. As a result, these models may perform well on non-**medcon** evaluation metrics but may not produce accurate **medcon** evaluations. **BART-based**: models demonstrated superior performance. In the full-note evaluation, BART+FTSAMSum (Division) had the best ROUGE score performance, with **medcon** evaluation scores only secondary to the OpenAI models. In the subjective division, BART+FTSAMSum (Division) had top performance in all scores except BLEURT. These findings suggest that using a model fine-tuned on a similar dataset serves as a solid foundation for summarization tasks. Meanwhile, BioBART exhibits a comparatively weaker performance than BART, which could be attributed to the choice of vocabularies, tokenizers, and consequently, the quality of contextual embeddings. For BART-based models, the division-based note-generation approach improved the performance over the full-note-generation approach, with around a 5 to 40 point increase in all division-based average scores. This implies that dividing the complex note-generation task into simpler subtasks can boost model performance. **LED-based**: models were generally inferior to BART-based models, with around 15 to 40 points lower scores in full-note ROUGE and **medcon** scores. We observed that, compared with the BART-based models, the LED-based models generate notes with worse fluency, less essential clinical information, and poorer division structure. On the other hand, the effect of the division method on LED-based models was similar to that on BART-based models, which led to a 1 to 9 point increase in full-note ROUGE and **medcon** scores and a 2 to 25 point increase in division-based average scores. **OpenAI**: models exhibited good general performance using a generic prompt and without fine-tuning. GPT-4 outperformed the other OpenAI models by around 10 ROUGE-1 F1 points in the full-note evaluation. This is consistent with GPT-4 being known to have been trained with more parameters and having demonstrated impressive performance across a variety of human tasks [31, 32]. While Text-davinci-003 and ChatGPT were within 4 ROUGE-1 points in test 1, there were larger 4-9 point gaps in tests 2 and 3, respectively. This information, combined with the relatively stable ROUGE-1 score for GPT-4 (at around \(\sim\)50 ROUGE-1), suggests that the earlier models had more unstable performances. Assessing the division-based performances, we see that the relative ranking of the OpenAI models was more variable (with the exception of Text-davinci-002 consistently performing below the other models).

_Effect of ASR vs human transcription and correction_

In practice, automatic speech recognition (ASR) is widely deployed because it provides an affordable, real-time text-based transcript. However, the quality of ASR is usually worse than a human transcript, influenced by the model type, hardware, and training corpus. To study the effect of ASR vs human transcription on clinical note generation from dialogue, we evaluate the note-generation model performance on transcripts generated from these two approaches.
We compare the performance between human transcription versus ASR for the **virtscribe** subset of the data; and ASR versus ASR-corrected in the **aci** subet. We conduct this ablation study with one of the best models in the previous section, BART+FT\({}_{\text{SAMSum}}\) (Division), and compare the result on the split of the three test sets. To study the difference with human transcription versus ASR for the **virtscribe** subset, we experiment with feeding the raw ASR transcripts instead of it's original human transcription. We also fine-tune the model further to adapt to the ASR version by additionally learning for an additional 3 epochs with the same parameters using the ASR version of the **virtscribe** train set. To understand effects of the train/decode discrepancies, we evaluate the results of feeding in the original human-transcription source as well as ASR versions to both the original fine-tuned model and the further ASR-fined tuned model. The results of the **virtscribe** source experiments are presented in Table 11. We observed that the model setup with best ROUGE-1/2/L and **medcon** scores are different for each test set. Namely, BART+FTSAMSum (Division) with transcripts generated by ASR from **virtscribe** dialogues do not exhibit outstanding differences with the human transcription (when using ASR source input performance dropped to 41.74 F1 ROUGE-L instead of 43.98 ROUGLE-L with the original model human transcript input for test1). Further fine-tuning BART+FTSAMSum (Division) with ASR notes in the train set also did not greatly improve the performance (fine-tuning improved 2 points in F1 to 43.82 F1 for an ASR transcript source, and a minimal drop to 43.59 when applying the original human transcription source). This indicates that ASR and the human transcript do not have a remarkable impact on the note-generation performance from dialogue with **virtscribe**. To study the effect of ASR versus ASR-corrected, we conduct similar experiments for the **aci** subset by substituting the original ASR transcripts with corrected versions. The results of these experiments are shown in Table 12, we also observed that the model setup with best ROUGE-1/2/L and **medcon** scores are different for each test set. The ASR-corrected did not exhibit more outstanding improvement from the original ASR on the BART+FTSAMSum (Division)'s note generation performance (with approximately 1 F1 point difference amongst all the test sets and evaluation versions and metrics). Further fine-tuning BART+FTSAMSum (Division) with ASRcorr notes in the train set also did not substantially change performance. This indicates that those ASR errors corrected by humans do not have a remarkable impact on the note generation performance. In summary, our investigation of ASR versus human transcription shows that although ASR can generate errors in the transcript, those errors do not have a remarkable impact on the note-generation performance and are thus tolerable by our current model setting. However, this could be due to our automatic evaluation metrics evaluating the n-grams and clinical facts with uniform weights. In clinical practice, some particular medical fact errors from the ASR can have a non-trivial impact. ## 5 Usage Notes We have provided instructions in the README file in the Figshare repository describing how to process the aci-bench dataset. Examples of processing the data for different summarization evaluations can be found in the code located at the GitHub repository provided below. 
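As a usage illustration, a minimal sketch for loading one split and joining it with its metadata is shown below; the file paths and column names are assumptions based on the Data Records description above, so the released README should be consulted for the exact schema.

```python
import pandas as pd

# Paths and column names are assumptions inferred from the Data Records section.
data = pd.read_csv("challenge_data/train.csv")            # transcript/note pairs
meta = pd.read_csv("challenge_data/train_metadata.csv")   # encounter-level metadata

# challenge-data splits are keyed by encounter_id
train = data.merge(meta, on="encounter_id", suffixes=("", "_meta"))
print(train[["encounter_id", "dataset", "cc"]].head())  # "cc" = chief complaint (assumed column name)
```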
### Limitations There are several limitations to this work. The data here is small and produced synthetically by medical annotators or patient actors in a single institution. Therefore, this dataset may not cover in a statistically representative way, all health topics, speech variations, and note format variations present in the real word. The data here is intended to be used for benchmarking methods related to clinician-patient dialogue summarization. It should not be used for training models to make medical diagnosis. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline **Test** & **Bart** & **Test** & & & & \\ **set** & **Fine-tuning** & **Split** & **ROUGE-1** & **ROUGE-2** & **ROUGE-L** & **medcon** \\ \hline \multirow{4}{*}{1} & train & ASR & 48.61 & 18.94 & 41.74 & 42.63 \\ & +train\({}_{\text{ASR}}\) & ASR & **49.70** & 19.96 & 43.82 & 41.96 \\ & train & human & 48.28 & **20.09** & **43.98** & **46.13** \\ & +train\({}_{\text{ASR}}\) & human & 48.50 & 19.52 & 43.59 & 42.85 \\ \hline \multirow{4}{*}{2} & train & ASR & **51.29** & **21.31** & 43.76 & **45.21** \\ & +train\({}_{\text{ASR}}\) & ASR & 50.42 & 21.30 & **44.68** & 43.71 \\ & train & human & 50.11 & 20.80 & 44.44 & 43.35 \\ & +train\({}_{\text{ASR}}\) & human & 48.44 & 20.47 & 43.68 & 44.28 \\ \hline \multirow{4}{*}{3} & train & ASR & 50.41 & **20.01** & 43.79 & **49.91** \\ & +train\({}_{\text{ASR}}\) & ASR & 49.22 & 19.72 & 43.19 & 44.18 \\ \cline{1-1} & train & human & **50.86** & 19.50 & **44.59** & 45.48 \\ \cline{1-1} & +train\({}_{\text{ASR}}\) & human & 47.42 & 18.42 & 42.67 & 44.72 \\ \hline \hline \end{tabular} \end{table} Table 11: Model performance on different test sets splits, comparison between _virtscribe_ dialogues with ASR and human transcript. The model finetuned on the train set is the BART+FTSAMSum (Division) fine-tuned with 10 epochs on the original train set, as in the baseline methods. The train + train\({}_{\text{ASR}}\) model refers to the BART+FTSAMSum (Division) finetuned for 3 more epochs on the _virtscribe_ with ASR split of the train set. No patient data was used or disclosed here. Names of the original actors were changed. The gender balance of the entire dataset is roughly equal. Other demographic information was not modeled in this dataset. ## 6 Code availability All code used to run data statistics, baseline models, and evaluation to analyze the aci-bench corpus is freely available at [LINK TO BE UPDATED].
2308.13483
Highlights on top quark physics with the ATLAS experiment at the LHC
The large top-quark samples collected with the ATLAS experiment at the LHC have yielded measurements of the inclusive \(t\bar{t}\) production cross section of unprecedented precision and differential measurements in new kinematic regimes. They have also enabled new measurements of top-quark properties that were previously inaccessible, the observation of many rare top-quark production processes predicted by the Standard Model, and boosted searches for flavour-changing-neutral-current interactions of the top-quark, which are heavily suppressed in the SM. In this contribution the highlights of the ATLAS top-quark physics program are presented, as well as projections of the expected sensitivity after the High Luminosity phase of the LHC. Talk presented at the International Workshop on Future Linear Colliders (LCWS 2023), 15-19 May 2023. C23-05-15.3.
Benedikt Gocke
2023-08-25T16:46:20Z
http://arxiv.org/abs/2308.13483v1
# Highlights on top-quark physics with the ATLAS experiment at the LHC

###### Abstract

The large top-quark samples collected with the ATLAS experiment at the LHC have yielded measurements of the inclusive \(t\bar{t}\) production cross section of unprecedented precision and differential measurements in new kinematic regimes. They have also enabled new measurements of top-quark properties that were previously inaccessible, the observation of many rare top-quark production processes predicted by the Standard Model, and boosted searches for flavour-changing-neutral-current interactions of the top-quark, which are heavily suppressed in the SM. In this contribution the highlights of the ATLAS top-quark physics program are presented, as well as projections of the expected sensitivity after the High Luminosity phase of the LHC. Talk presented at the International Workshop on Future Linear Colliders (LCWS 2023), 15-19 May 2023. C23-05-15.3. Copyright 2023 CERN for the benefit of the ATLAS Collaboration. Reproduction of this article or parts of it is allowed as specified in the CC-BY-4.0 license.

## 1 Introduction

The top-quark is the heaviest known elementary particle in the Standard Model (SM). Due to its large mass, the top-quark decays before it hadronises and thus allows, in principle, a direct measurement of its properties. Further, top-quark processes are crucial for searches for Beyond the Standard Model (BSM) processes, as BSM contributions could alter the probability of top-quark-involved processes. Also, top-quark processes, especially with associated particles (e.g. bosons), are important background processes not only to BSM searches but also to measurements of Higgs-boson processes. Measuring these processes allows tests of the SM and of BSM theories. Therefore, the top-quark properties, e.g. mass, spin or couplings, and the production modes via the strong and electroweak interactions need to be measured and known with high precision. The top-quark working group of the ATLAS experiment presents some of the recent highlights of top-quark measurements. Using the full Large Hadron Collider (LHC) Run-2 dataset at a centre-of-mass energy of \(\sqrt{s}=13\) TeV, amounting to a measured integrated luminosity of \(\mathcal{L}=140\) fb\({}^{-1}\)[1], allows for the most precise measurements to date. Top-quark property measurements are shown, as well as inclusive and differential cross-section measurements. For all measurements, it is crucial that theory predictions for simulated Monte Carlo (MC) events are well modelled to obtain precise results. Today, the uncertainties in signal and background processes often limit the overall precision of the measurements. Using differential cross-section measurements helps in understanding and comparing different theory predictions. Further, first observations of rare processes are highlighted in the following selected results.

## 2 Highlights on top-quark analyses

### Inclusive and differential \(t\bar{t}\) production cross-section measurement

The measurement of the inclusive and differential \(t\bar{t}\) production cross-section was done using the full Run-2 dataset obtained with the ATLAS detector at \(\sqrt{s}=13\) TeV, corresponding to an integrated luminosity of \(\mathcal{L}=140\) fb\({}^{-1}\)[2]. Events are selected with exactly one electron and exactly one muon and either exactly one or two \(b\)-tagged jets. These selection criteria ensure a minimal level of background.
This is shown in Figure 1, where events with one electron and one muon in the final state are split by the number of \(b\)-tagged jets. For events with one or two \(b\)-tagged jets, the distribution is very pure in signal events. The misidentified-lepton and \(Z\to\tau\tau+\)jets backgrounds are estimated using data-driven methods. Both the inclusive and the differential cross-section measurements use a log-likelihood fit to the number of selected events \(N\). The two equations used in the likelihood formula are shown in Equations (1) and (2):

\[N_{1}^{i} =\mathcal{L}\sigma_{t\bar{t}}^{i}G_{e\mu}^{i}2\epsilon_{b}^{i}\left(1-\epsilon_{b}^{i}C_{b}^{i}\right)+N_{1,\text{bkg}}^{i} \tag{1}\]
\[N_{2}^{i} =\mathcal{L}\sigma_{t\bar{t}}^{i}G_{e\mu}^{i}\left(\epsilon_{b}^{i}\right)^{2}C_{b}^{i}+N_{2,\text{bkg}}^{i} \tag{2}\]

For the differential measurement, the likelihood fit is done in each bin \(i\), whereas for the inclusive measurement, the fit is performed in two inclusive bins. Performing the fit allows a simultaneous determination of the cross-section \(\sigma_{t\bar{t}}^{i}\) and the combined jet selection and \(b\)-tagging efficiency \(\epsilon_{b}^{i}\). The reconstruction efficiency, \(G_{e\mu}^{i}\), is defined as the number of selected lepton pairs in the \(t\bar{t}\) sample, which are reconstructed in bin \(i\), divided by the total number of lepton pairs generated in bin \(i\). The \(b\)-tagging correlation coefficient, \(C_{b}^{i}\), corrects the probability of tagging the second jet after having tagged the first one. It is also determined by simulation and is found to be close to unity. The inclusive cross-section is measured to be

\[\sigma_{t\bar{t}}=829\pm 1(\text{stat})\pm 13(\text{syst})\pm 8(\text{lumi})\pm 2(\text{beam})\,\text{pb} \tag{3}\]

The newest luminosity measurement of ATLAS [1] allows for remarkable precision, as it lowers the relative uncertainty below 2% for this measurement. The overall uncertainty of below 2% shows the strength of the LHC as a precision machine. For the differential cross-section measurement, eight single-lepton kinematic variables (\(p_{\rm T}^{\ell}\), \(|\eta^{\ell}|\), \(m^{e,\mu}\), \(p_{\rm T}^{e,\mu}\), \(|y^{e,\mu}|\), \(E^{e}+E^{\mu}\), \(p_{\rm T}^{\mu}+p_{\rm T}^{e}\), \(|\Delta\phi^{e,\mu}|\)) were chosen, along with four double-differential distributions (\(|y^{e,\mu}|\) in five bins of \(m^{e,\mu}\), \(|\Delta\phi^{e,\mu}|\) in five bins of \(m^{e,\mu}\), \(|\Delta\phi^{e,\mu}|\) in three bins of \(p_{\rm T}^{e,\mu}\), \(|\Delta\phi^{e,\mu}|\) in five bins of \(E^{e}+E^{\mu}\)). Again, this measurement benefits from exploiting the full Run-2 dataset, as a wider range in the distributions is possible, as well as a finer granularity for the chosen bin sizes. Two example distributions are shown in Figure 2. The results show that a more refined theoretical modelling is needed, as discrepancies with the results from several next-to-leading-order (NLO) event generators are visible in the distributions.

### \(t\bar{t}\) and \(Z\)-boson cross sections and their ratio at \(\sqrt{s}=13.6\) TeV

The measurement of the \(t\bar{t}\) and \(Z\)-boson cross-sections and their ratio at \(\sqrt{s}=13.6\) TeV is the first measurement using data from Run 3 [3]. The data used in this analysis amount to an integrated luminosity of 11.3 fb\({}^{-1}\). Due to the early stage of data taking, many detector uncertainties are large, while ATLAS is deriving precise measurements of the luminosity and in-situ calibrations for leptons and jets.
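Before turning to the details of the Run-3 analysis, the counting logic of Eqs. (1) and (2) can be illustrated with a small numerical sketch; every number below (efficiencies, backgrounds, yields) is invented purely for illustration and none is an ATLAS value.

```python
# Toy inputs (invented, not ATLAS values)
L      = 140.0e3   # integrated luminosity in pb^-1 (140 fb^-1)
sigma  = 830.0     # assumed "true" ttbar cross-section in pb
G_emu  = 0.009     # assumed e-mu reconstruction efficiency (incl. branching fractions)
eps_b  = 0.55      # assumed combined jet-selection and b-tagging efficiency
C_b    = 1.00      # b-tagging correlation coefficient (close to unity)
N1_bkg, N2_bkg = 8.0e3, 1.5e3   # invented background yields

# Event counts predicted by Eqs. (1)-(2) for the one- and two-b-tag samples
N1 = L * sigma * G_emu * 2 * eps_b * (1 - eps_b * C_b) + N1_bkg
N2 = L * sigma * G_emu * eps_b**2 * C_b + N2_bkg

# Invert the two equations: background-subtracted yields determine eps_b and sigma together
S1, S2 = N1 - N1_bkg, N2 - N2_bkg
eps_b_hat = 2 * S2 / (C_b * (S1 + 2 * S2))
sigma_hat = S2 / (L * G_emu * eps_b_hat**2 * C_b)
print(f"recovered eps_b = {eps_b_hat:.3f}, sigma_tt = {sigma_hat:.1f} pb")
```

Running the sketch recovers the assumed eps_b and sigma exactly, which is the essence of the simultaneous determination performed by the likelihood fit.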
The measurement is already limited by systematic uncertainties, with luminosity uncertainties and lepton efficiency uncertainties being the largest sources. Events with an opposite-charged lepton pair and either one or two \(b\)-tagged jets are selected to measure the \(t\bar{t}\) cross-section. The \(Z\)-boson production cross-section is measured in a fiducial space in \(ee\) and \(\mu\mu\) final states, where the invariant mass of the leptons is required to be within the \(Z\)-boson mass window. Additionally, a measurement of the ratio of the \(t\bar{t}\) and the \(Z\)-boson production cross-sections is performed, which benefits from cancellation of several systematic uncertainties. Since \(t\bar{t}\) and \(Z\)-boson production dynamics are driven to a large extent by different proton constituents at the LHC, the ratio of these cross-sections has a significant sensitivity to the gluon-to-quark PDF ratio. A profile likelihood fit is used to extract the \(t\bar{t}\) and \(Z\)-boson cross-sections and - similarly to the measurement shown in Section 2.1 - combined jet selection and \(b\)-tagging efficiency. A second profile likelihood fit is performed afterwards in which the parameter of interest is the ratio \(R_{t\bar{t}/Z}\), rather than the \(t\bar{t}\) cross-section. This procedure ensures that all correlations among systematic uncertainties in the \(t\bar{t}\) and \(Z\)-boson cross-section measurement are taken into account. In Figure 3, the leading lepton \(p_{\rm T}\) distribution for both fiducial spaces are shown. Simulated events are in good agreement within the overall uncertainties in the low \(p_{\rm T}\) regions. This is important, as the lepton uncertainties impact the Figure 1: Distribution of the number of \(b\)-tagged jets in selected opposite-sign \(e\mu\) events. The coloured distributions show the breakdown of the predicted background contributions from single top-quarks (\(Wt\) and \(t\)-channel), misidentified leptons, \(Z(\rightarrow\tau\tau)\) + jets and other sources of background (diboson, \(t\bar{t}W\), \(t\bar{t}Z\), and \(t\bar{t}H\)). The bottom panel shows the ratio of the prediction to the data with an uncertainty band covering both the statistical and systematic uncertainties, except for \(t\bar{t}\) generator uncertainties. [2] profile likelihood fit via their acceptance in this region. The results of this measurement are \[\sigma_{t\bar{t}}=859\pm 4\ (\text{stat.})\pm 22\ (\text{syst.})\pm 19 \ (\text{lumi.})\ \text{pb} \tag{4}\] \[\sigma_{Z}^{\text{fid.}}=751\pm 0.3\ (\text{stat.})\pm 15\ (\text{syst.})\pm 17 \ (\text{lumi.})\ \text{pb}\] (5) \[R_{t\bar{t}/Z}=1.114\pm 0.006\ (\text{stat.})\pm 0.022\ (\text{syst.}) \pm 0.003\ (\text{lumi.}) \tag{6}\] As illustrated in Figure 4, the measured \(t\bar{t}\) cross-section is very close but in agreement with the SM prediction using the PDF4LHC21 PDF set [4]. The largest uncertainty for the \(t\bar{t}\) cross-section is the theory uncertainty on the modelling of the signal parton shower. ### Measurement of total and differential \(t\bar{t}W\) cross-section The measurement of the total and differential \(t\bar{t}W\) cross-section is a very important analysis, as this process is a large background for many BSM searches but also other Higgs- and top-quark measurements. Further, both, CMS and ATLAS, observed an excess in \(t\bar{t}W\) events in many analyses. Using the full Run-2 dataset amounting to an integrated luminosity of 140 fb\({}^{-1}\) allows for the first differential measurement of this process [5]. 
Events are required to have exactly two or three leptons, at least two jets and at least one or two \(b\)-tagged jets. Thus, the main background processes are \(t\bar{t}Z/\gamma\), diboson and \(t\bar{t}H\) production. For these, as well as for backgrounds from charge mis-identified leptons, control regions are assigned. A maximum likelihood fit is used to extract the cross-section of \(t\bar{t}W\). The result, illustrated in Figure 5, is within \(1.5\sigma\) with the SM prediction. \[\sigma(t\bar{t}W)=890\pm 50\,(\text{stat.})\pm 70\,(\text{syst.})\ \text{fb} \tag{7}\] The largest uncertainties are signal modelling uncertainties and prompt lepton uncertainties. Nevertheless, this result improves the relative uncertainty by more than a factor of two with respect to the previous Figure 2: (a) Absolute differential cross-sections as a function of \(p_{\text{T}}^{\ell}\) with statistical (orange) and statistical plus systematic uncertainties (yellow) and (b) differential cross-sections as a function of \(|\Delta\phi(e,\mu)|\) in bins of \(p_{\text{T}}^{e\mu}\). The data points are shown as black dots and are placed at the centre of each bin. The results are compared with the predictions from different Monte Carlo generators normalised to the Top++ NNLO+NNLL prediction: the baseline Powheg +Pythia 8.230 \(t\bar{t}\) sample (blue), \(\text{AMC@NLO}\) +Herwig 7.1.3 (red), Powheg +Herwig 7.0.4 (green), Powheg +Herwig 7.1.3 (purple), \(\text{AMC@NLO}\) +Pythia 8.230 (cyan) and Powheg +Pythia 8.230 rew. (dark green), which refers to Powheg +Pythia 8.230 reweighted according to the top-quark \(p_{\text{T}}\). The lower panel shows the ratios of the predictions to data, with the bands indicating the statistical and systematic uncertainties. The last bin in also contains overflow events. [2] Figure 4: (a) Comparison of the measured \(t\bar{t}\) cross-sections at various centre-of-mass energies and the theory predictions using the PDF4LHC21 PDF set. The bottom panel shows the ratio of the measured values and three predictions that either contain only the uncertainties originating from the QCD scale variations (black), only the variations in the PDF uncertainties (red) or the total uncertainty in the prediction (blue). (b) Ratio of the \(t\bar{t}\) to the Z-boson cross-section compared to the prediction for several sets of parton distribution functions. For the PDF4LHC21 PDF set, predictions for different assumptions about the top-quark mass are also displayed. [3] Figure 3: Comparison of observed data and predictions for the \(p_{\mathrm{T}}\) of the leading lepton in (a) the \(\mu\mu\) channel and (b) the \(p_{\mathrm{T}}\) of the leading lepton in the \(e\mu\) channel, in the Run-3 \(t\bar{t}\) and \(Z\)-boson cross-section measurement. The expected yields are calculated by normalising the MC prediction using the cross-section for each process and the estimate of the data integrated luminosity. The ”Mis-ID” label represents fake and non-prompt leptons. The hashed band represents the total uncertainty. The bottom panel shows the ratio of data to prediction. The rightmost bins contain the overflow events. [3] analysis [6]. CMS also has done a precise measurement of this process [7], observing the same discrepancies between data and the MC prediction. \(t\bar{t}W\) also allows to measure a production asymmetry for the signal process. 
In this measurement, the production asymmetry and its ratio, \(R(t\bar{t}W)\), are measured to be \[\sigma_{t\bar{t}W^{+}} =585^{+35}_{-34}\,(\text{stat.})^{+47}_{-44}\,(\text{syst.})\;\text {fb} \tag{8}\] \[\sigma_{t\bar{t}W^{-}} =301^{+28}_{-27}\,(\text{stat.})^{+35}_{-31}\,(\text{syst.})\;\text {fb}\] (9) \[R(t\bar{t}W) =1.95^{+0.21}_{-0.18}\,(\text{stat.})^{+0.16}_{-0.13}\,(\text{syst. })\,, \tag{10}\] where the ratio \(R(t\bar{t}W)\) is in good agreement with the prediction from Sherpa[8][9]. Absolute and normalised differential cross-section measurements are performed. Profile-likelihood unfolding in seven observables was done. The differential measurement is limited by statistical uncertainties. Again, the absolut differential cross-sections are larger than the theoretical predictions, which is in agreement with the inclusive cross-section result. This shows, that future theoretical developments are needed (e.g. predictions at NNLO in QCD) to understand the discrepancy better. Also, the Run 3 dataset of the LHC will provide more data to further probe this final state. ### Observation of \(t\bar{t}t\bar{t}\) The observation of four-top-quark production (\(t\bar{t}t\bar{t}\)) [13] is a very important measurement, as new particles or even new forces could alter the probability of this process. The predicted SM cross-section with \(\sigma_{t\bar{t}t\bar{t}}^{SM}=13.4^{+1.0}_{-1.8}\) fb [14] is very small, compared to the cross-section of \(t\bar{t}\) production. Many improvements were implemented with respect to the previous analysis [15], which found a strong evidence for this process. An improved particle identification allows for lower \(p_{\text{T}}\) requirements, which then allow to select more events. Data-driven estimates for background processes as \(t\bar{t}W\), but also for mis-identified or non-prompt leptons are introduced and also the treatment of the \(t\bar{t}t\) background is improved. For this analysis, events with exactly two same-charge leptons or at least three leptons are selected. Additionally, events are required to have at least six jets, of which two need to be \(b\)-tagged. A Graph neural network (GNN) is used to distinguish the signal from the background. The GNN output distribution, which is used to extract the cross-section, is shown in Figure 6. A binned profile likelihood fit is used to determine the normalisation of the largest backgrounds and the cross-section. The result, \(\sigma_{t\bar{t}t\bar{t}}=22.5^{+6.6}_{-5.5}\) fb, is observed with \(6.1\sigma\) significance. It is \(1.8\sigma\) above the SM prediction [14]. This measurement is further used to set limits on the cross-section of three top-quark production \(t\bar{t}t\), which are illustrated in Figure 7. Three further interpretations were done, setting limits on four heavy flavour fermion EFT operators as well as on the top-quark Yukawa coupling and the Higgs oblique parameter. Figure 5: Comparison of the measured inclusive \(t\bar{t}W\) cross-section to the theoretical predictions from Sherpa, the MC@NLO FxFx prescription including EWK corrections from Ref. [10], the NLO+NNLL prediction from Ref. [11] and the measurement from CMS [12]. [5] Figure 6: Comparison between data and the predictions after a fit to data for the GNN distribution in the signal region. The first bin contains underflow events. The ratio of the data to the total post-fit prediction is shown in the lower panel. 
The dashed blue lines show the pre-fit prediction in the upper panel and the ratio of the data to the total pre-fit prediction in the lower panel. The shaded band represents the total post-fit uncertainty in the prediction. [13] Figure 7: Two-dimensional negative log-likelihood contour for the \(t\bar{t}t\) cross-section (\(\sigma_{t\bar{t}t}\)) versus the \(t\bar{t}t\bar{t}\) cross-section (\(\sigma_{t\bar{t}t\bar{t}}\)) when the normalisation of both processes are treated as free parameters in the fit. The blue cross shows the SM expectation of \(\sigma_{t\bar{t}t\bar{t}}\)=12 fb and \(\sigma_{t\bar{t}t\bar{t}}\)=1.67 fb, both computed at NLO, while the black cross shows the best-fit value. The observed (expected) exclusion contours at 68% (black) and 95% CL (red) are shown in solid (dashed) lines. The gradient-shaded area represents the observed likelihood value as a function of \(\sigma_{t\bar{t}t}\) and \(\sigma_{t\bar{t}t\bar{t}}\). [13] ### Observation of single-top-quark production with a photon Another rare process is observed in single-top-quark production together with a photon [16]. Using the full Run-2 dataset, events were selected with exactly one photon, exactly one lepton, exactly one \(b\)-tagged jet and either zero or one forward jet. This forward jet is characteristic for the signal process. Photon fakes are a crucial background for this analysis and it is estimated using data-driven methods. The main backgrounds in this measurement are \(t\bar{t}\gamma\) and \(W\gamma\). A deep neural network (DNN) is used to separate the signal from the backgrounds. The output distribution is used in the profile likelihood fit, which is done to extract the cross-sections. The post-fit distribution of the DNN output is shown in Figure 8. Two fiducial cross-sections are measured, one on parton level, \(\sigma_{tq\gamma}\times\mathcal{B}\left(t\to\ell\nu b\right)\) and one on particle level, \(\sigma_{tq\gamma}\times\mathcal{B}\left(t\to\ell\nu b\right)\). The first one includes only events where the photon originates from the top-quark itself, while the latter one also includes photons which originate from the top-quark charged decay products. The measured cross-sections are: \[\sigma_{tq\gamma}\times\mathcal{B}\left(t\to\ell\nu b\right)=688 \pm 23\,(\text{stat.})^{+75}_{-71}\,(\text{syst})\,\text{fb} \tag{11}\] \[\sigma_{tq\gamma}\times\mathcal{B}\left(t\to\ell\nu b\right)+ \sigma_{t(\to\ell\nu b\gamma)q}=303\pm 9\,(\text{stat})^{+33}_{-32}\,(\text{syst })\,\text{fb} \tag{12}\] This process is observed with an observed (expected) significance of \(9.3\sigma\) (\(6.8\sigma\)). The dominant uncertainties in this measurement arise from the signal modelling. ### Measurement of top-quark mass using a template method The top-quark mass is measured using the full Run-2 dataset [17]. Events with exactly two leptons and at least two jets are selected. Further, exactly two of the jets need to be \(b\)-tagged. A DNN is used to pair the \(b\)-jet and lepton from one top-quark decay, where only events with a DNN score \(<0.6\) are selected. This improved method extracts the invariant mass \(m_{\ell b}\), which is very sensitive to the top-quark mass, for each event and helps reducing modelling and jet related uncertainties. For the template method, distributions are constructed for a number of discrete values of the top-quark mass. These are interpolated, such that the final template function only depends on one free parameter, which represents the top-quark mass. 
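To make the template idea concrete, the sketch below builds toy templates at a few discrete mass points and interpolates them bin-by-bin into a single one-parameter function of \(m_{\text{top}}\). The shapes, binning and mass points are invented placeholders and are not the \(m_{\ell b}\) templates of the analysis.

```python
import numpy as np

# Schematic template interpolation: placeholder shapes, not the analysis templates.
bins = np.linspace(30., 170., 29)              # toy m_lb binning [GeV]
centres = 0.5 * (bins[:-1] + bins[1:])
mass_points = np.array([170., 171., 172., 173., 174., 175.])  # discrete m_top hypotheses

def toy_template(m_top):
    """Stand-in for a simulated, normalised m_lb template at a given m_top."""
    shape = np.exp(-0.5 * ((centres - 0.6 * m_top) / 22.0) ** 2)
    return shape / shape.sum()

templates = np.array([toy_template(m) for m in mass_points])

def template_pdf(m_top):
    """One-parameter template function: bin-wise linear interpolation in m_top."""
    pdf = np.array([np.interp(m_top, mass_points, templates[:, b])
                    for b in range(len(centres))])
    return pdf / pdf.sum()

# Interpolated template at an arbitrary mass hypothesis, e.g. 172.2 GeV:
print(template_pdf(172.2)[:5])
```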
An unbinned likelihood fit to data is performed with this template function, where the fit range is optimised to minimise the total uncertainty. The post-fit template distribution is shown in Figure 9. The top-quark mass is measured to be \(m_{\text{top}}=172.21\pm 0.20(\text{stat.})\pm 0.67(\text{syst.})\pm 0.39( \text{recoil})\) GeV. Overall, the signal modelling uncertainties are significantly improved, but a new systematic uncertainty needed to be introduced. The recoil uncertainty describes gluon radiation recoiling against the top-quark. New parton shower algorithms such as VINCIA are expected to mitigate this uncertainty in the future. ### Evidence for charge asymmetry in \(pp\to t\bar{t}\) Because the LHC collides protons with protons, the central-forward charge asymmetry is a very small effect (\(\mathcal{O}(1\%)\)). In \(t\bar{t}\) events, this asymmetry was measured using the difference between the absolute value of the top-quark rapidity, \(|y_{t}|\), and the absolute value of the top-antiquark rapidity, \(|y_{\bar{t}}|\), to construct a charge asymmetry \(A_{C}^{t\bar{t}}\)[18]: \[A_{\text{C}}^{t\bar{t}}=\frac{N\left(\Delta|y_{t\bar{t}}|>0\right)-N\left( \Delta|y_{t\bar{t}}|<0\right)}{N\left(\Delta|y_{t\bar{t}}|>0\right)+N\left( \Delta|y_{t\bar{t}}|<0\right)} \tag{13}\] Events with single-lepton or dilepton final states are targeted, as well as events with high-\(p_{\text{T}}\) hadronic top-quark decays. Data-driven methods are used for the fake-lepton background and a BDT is used to match quarks to jets in order to enhance the reconstruction. To compare the data to a fixed-order theory prediction, fully Bayesian unfolding is used. In Figure 10, the measured charge asymmetry in the single-lepton, dilepton and combined channels is shown. The measured charge asymmetry value is \(A_{C}^{t\bar{t}}=0.0068\pm 0.0015\,(\text{stat.}+\text{syst.})\), which differs from zero by \(4.7\sigma\) and thus gives strong evidence for the charge asymmetry. The measurement is limited by the statistical uncertainties. Figure 8: Distribution of the DNN output in the \(\geq 1\) forward jet SR in data and the expected contribution of the signal and background processes after the profile-likelihood fit. The hashed band represents the uncertainties in the SM prediction. [16] Figure 9: The high \(m_{\ell b}\) distribution in data compared to the predicted distribution for the measured top-quark mass value. The data points and the template fit function to this data are shown in black. The blue uncertainty band is constructed by varying the template fit function within the full uncertainty of the measurement. The lower panel shows the ratio of data and the template fit function. The \(\chi^{2}\)/ndf of the best-fit result is 55.3/44 with a probability of \(P(\chi,ndf)=0.24\). [17] ## 3 Conclusion The top-quark working group in ATLAS presented many new and improved measurements using the full Run-2 dataset, amounting to an integrated luminosity of \(\mathcal{L}=140\) fb\({}^{-1}\). The statistical precision of the full Run-2 dataset is exploited and thus the top-quark properties, such as the top-quark mass, are measured with good precision. Furthermore, rare processes (four-top-quark production, \(\mathrm{tq}\gamma\)) are observed for the first time. Different types of machine-learning approaches and models are used in various ways in many analyses, e.g. to reconstruct the signal process or to separate the signal and background processes. 
New and improved data-driven methods to determine background processes that are not modelled well enough in simulation help to obtain more precise results and a better understanding of these processes. As every measurement depends on well-simulated physics processes, differential cross-section measurements are required to help in understanding MC generator predictions; they also show that theoretical progress is needed, e.g. for \(t\bar{t}W\) modelling. For the recoil of gluons radiated directly from the top-quark, a new systematic uncertainty is taken into account. Most analyses are now also limited by systematic uncertainties. This is also expected for measurements using data from the ongoing LHC Run 3 at a centre-of-mass energy of \(\sqrt{s}=13.6\) TeV. The first measurement of the \(t\bar{t}\) and \(Z\)-boson cross-sections using Run 3 data is also shown in this document. The results are already limited by systematic uncertainties at this early stage of data taking, and are in agreement with the SM prediction. In general, the data obtained with the LHC in Run 3 will allow for even higher precision measurements and for improving analyses which are currently limited by data statistics, e.g. asymmetry measurements. In this document, only a very small fraction of recent results is shown. Many more results have already been obtained and can be found here: ATLAS Top Public results. For LHC Run 2, but especially for Run 3, further improved measurements and even more precise results are yet to come.
2301.10481
FewShotTextGCN: K-hop neighborhood regularization for few-shot learning on graphs
We present FewShotTextGCN, a novel method designed to effectively utilize the properties of word-document graphs for improved learning in low-resource settings. We introduce K-hop Neighbourhood Regularization, a regularizer for heterogeneous graphs, and show that it stabilizes and improves learning when only a few training samples are available. We furthermore propose a simplification in the graph-construction method, which results in a graph that is $\sim$7 times less dense and yields better performance in low-resource settings while performing on par with the state of the art in high-resource settings. Finally, we introduce a new variant of Adaptive Pseudo-Labeling tailored for word-document graphs. When using as little as 20 samples for training, we outperform a strong TextGCN baseline by 17% in absolute accuracy on average over eight languages. We demonstrate that our method can be applied to document classification without any language model pretraining on a wide range of typologically diverse languages while performing on par with large pretrained language models.
Niels van der Heijden, Ekaterina Shutova, Helen Yannakoudakis
2023-01-25T09:30:32Z
http://arxiv.org/abs/2301.10481v2
# FewShotTextGCN: K-hop neighborhood regularization for ###### Abstract We present FewShotTextGCN, a novel method designed to effectively utilize the properties of word-document graphs for improved learning in low-resource settings. We introduce K-hop Neighborhood Regularization, a regularizer for heterogeneous graphs, and show that it stabilizes and improves learning when only a few training samples are available. We furthermore propose a simplification in the graph-construction method, which results in a graph that is \(\sim\)7 times less dense and yields better performance in low-resource settings while performing on-par with the state of the art in high-resource settings. Finally, we introduce a new variant of Adaptive Pseudo-Labeling tailored for word-document graphs. When using as little as 20 samples for training, we outperform a strong TextGCN baseline with 17% in absolute accuracy on average over eight languages. We demonstrate that our method can be applied to document classification without any language model pretraining on a wide range of typologically diverse languages while performing on par with large pretrained language models. ## 1 Introduction Text classification, a key task in natural language processing (NLP), has many real-world applications, including toxic comment identification, news categorization, spam detection and opinion mining. One popular approach to this problem relies on large-scale pretraining of Transformer models Devlin et al. (2018); Conneau et al. (2019); Raffel et al. (2020), which have shown to be able to approach or even surpass human performance on many natural language understanding (NLU) benchmarks Rajpurkar et al. (2016); Wang et al. (2019); Liang et al. (2020). While these results are impressive for the languages on which models are pretrained, performance tends to deteriorate on languages where no or little data is available Chau and Smith (2021); van der Heijden et al. (2020). In practice, this means that these models are effective on a set of approximately 100 out of the 7000+ spoken languages in the world. Next to the requirement for vast amounts of data for pretraining, Transformer language models tend to be impractically large in terms of their number of parameters and have a high environmental footprint Strubell et al. (2019). Recently, Graph Neural Networks (GNNs) have shown to be effective for text classification in both transductive Yao et al. (2019); Liu et al. (2020); Lin et al. (2021) and inductive Nikolentzos et al. (2020); Ding et al. (2020) learning settings - with promising results in both high- and low-resource settings. Particularly in the transductive setting, the authors of TextGCN Yao et al. (2019) show that Graph Convolutional Networks (GCNs) Kipf and Welling (2016) can outperform state-of-the-art methods for document classification on English datasets without any language model pretraining. They do so by modeling an entire corpus of documents simultaneously as one heterogeneous word-document graph. The document classification task is formulated as a node-classification task over this graph. Later work shows that (multilingual) Pretrained Language Models (mPLMs) can be used to provide GNNs used in transductive setting with rich representations of both words and documents, improving results further in both monolingual Lin et al. (2021) and cross-lingual settings Wang et al. (2021); Li et al. (2020). These works focus solely on high-resource settings and do not report any results on performance in low-resource settings. 
In this work, we propose a novel GNN-based method for learning document classification tasks on a range of languages without the need for any pretraining data (i.e., without utilizing any pre-trained word embeddings or language models), and from few labeled samples only. To the best of our knowledge, we are the first to investigate few-shot graph-based transductive document classification in a range of languages other than English. We present FewShotTextGCN, an improved version of the original TextGCN model, where we exploit properties of the heterogeneous word-document graph for improved learning from scratch and with only a few labels. More specifically, we: (1) Introduce K-hop Neighborhood Regularization (K-NR), an unsupervised learning technique for heterogeneous graphs, and use it in its \(K=2\) instantiation as a regularizer tailored to word-document graphs, and show that it consistently leads to performance gains in low-resource settings; (2) Propose a simplification of the graph-construction method, which results in improved performance in the low-resource setting while reducing the graph density by a factor of approximately 7 on average, therefore substantially speeding up computations and reducing memory requirements; (3) Present a variant of adaptive pseudo-labeling Zhou et al. (2019) on word-document graphs and show that it leads to consistent gains over the original TextGCN approach Yao et al. (2019), particularly when combined with K-NR. We compare FewShotTextGCN to its predecessor and two strong PLMs on ten topic classification benchmarks comprising eight typologically diverse languages, and experiment with a range of low-resource settings, including using as little as 20 labeled samples to learn from, and without any other form of (pre-trained) knowledge about a language except for what constitutes a word (using word boundaries or a tokenizer). In our lowest-resource setting, our method outperforms TextGCN with 4.6% and 17% points in absolute accuracy on average for Reuters and MLDoc, respectively - while having a substantially smaller computational and memory footprint. FewShotTextGCN performs on par with large PLMs on the great majority of the considered benchmarks, without the need for any large-scale pretraining and at only a fraction of the parameter count of these PLMs - indicating that graph-based methods are an attractive alternative to using large PLMs for topic classification. All our code and models are released to facilitate further research on this topic.1 Footnote 1: [https://github.com/mrvoh/FewShotTextGCN](https://github.com/mrvoh/FewShotTextGCN) ## 2 Graph Neural Networks Graph Neural Networks (GNNs) are a class of neural models designed to facilitate representation learning on geometric data - data that naturally occur in many situations/fields, such as chemistry, social networks, maps, visual meshes, etc. Recently, there has been a great surge in research on GNNs. GNNs create new feature representations of nodes by aggregating the nodes' own feature representation and a message passed on from neighboring nodes. A graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) is defined as a set of nodes \(\mathcal{V}\) with edges \(\mathcal{E}\) between them, typically represented as a square adjacency matrix A, where each entry holds the weight of the edge between node \(j\) and \(i\). Locality in graphs is defined by neighborhoods, where the neighbors of node \(i\) are defined as \(\mathcal{N}_{i}=\{j:(i,j)\in\mathcal{E}\ \forall(j,i)\in\mathcal{E}\}\). 
Let \(\bigoplus\) be some permutation invariant aggregator such as \(sum\), \(average\) or \(max\), and let \(\psi\) and \(\phi\) be two differentiable, learnable functions such as an MLP. Using these ingredients, we can describe GNNs by the way they do message passing. Convolutional GNNs use the weights \(c_{ij}\) of the edge between nodes \(j\) and \(i\) to weigh the incoming messages. These weights are part of the definition of the graph, meaning they are statically defined. The input feature \(x_{i}\) of node \(i\) is transformed to a latent representation \(h_{i}\) by taking \[h_{i}=\phi(x_{i},\bigoplus_{j\in\mathcal{N}_{i}}c_{ij}\psi(x_{j})) \tag{1}\] The first and most well-known convolutional GNN is the Graph Convolutional Network (GCN) Kipf and Welling (2016). ## 3 Related work Our work is based on TextGCN Yao et al. (2019), which also serves as our baseline for all experiments. To the best of our knowledge, we are the first to investigate few-shot graph-based transductive learning from scratch for document classification in a range of languages other than English. Since the scope of our work is few-shot document classification in many languages by learning from scratch, we do not consider CLHG Wang et al. (2021) directly related work. The reasons being it models corpora in multiple languages jointly, whereas we learn each task in isolation, and relies on machine translation and mPLMs. Similarly, MGL Li et al. (2020) relies on mPLMs for encoding similar corpora in different languages into one embedding space, where consecutively a graph is dynamically constructed based on the similarity of the documents in the respective embedding space. Finally, meta-learning is applied to learn to classify documents in one language, given a limited set of documents in at least three other languages. Hence, we do not review these works in-depth. TextGCNTextGCN Yao et al. (2019) is the first application of GNNs to transductive text classification, applied on English datasets without any language model pretraining. The great majority of experiments is performed in high-resource settings, but a small set of results on performance in low-resource settings is also provided - motivating us to further explore and expand upon this subject. The authors construct a heterogeneous graph containing both word and document nodes. Word-word edges are weighed based on the pointwise mutual information (PMI) between the respective words, and word-document edges are created based on the TF-IDF score of the word in the respective document. More specifically, the adjacency matrix A is defined as: \[A_{ij}=\left\{\begin{array}{ll}\text{PMI}(i,j)&i,j\text{ words, PMI}(i,j)>0\\ \text{TF-IDF}_{ij}&i\text{ document, }j\text{ word}\\ 1&i=j\\ 0&\text{otherwise}\end{array}\right. \tag{2}\] Document-document links are not considered. A one-hot encoding is used as input features for the nodes and a two-layer GCN is used to classify the document nodes. While this setup is relatively simple in terms of preprocessing, pretraining and the number of parameters in the model, the authors show that their method performs on par with state-of-the-art methods, even improving the state of the art for the 20News2 dataset. Footnote 2: [http://qwone.com/~jason/20Newsgroups/](http://qwone.com/~jason/20Newsgroups/) BertgcnFollow-up work on TextGCN is that of BERTGCN Lin et al. (2021), where the authors leverage PLMs to initialize document-node features. 
More specifically, a BERT-based model is used to encode the documents, and all other nodes are initialized with a one-hot vector. The BERT model used for encoding documents is optimized both via gradients propagated through the GCN and via an auxiliary classifier that directly uses the BERT embeddings to classify the documents. Using BERTGCN, the authors improve over TextGCN on a variety of text classification tasks - especially on a sentiment analysis task, for which word order information is crucial for good performance Johnson and Zhang (2014). To be able to use BERTGCN in a full-batch gradient descent method, the authors use a memory bank that allows decoupling the dictionary size from the mini-batch size. Although the presented results are promising, a drawback of using large PLMs is the need for vast amounts of pretraining data, making these methods inaccessible for low-resource languages. ## 4 Data In this section, we give an overview of the datasets we use and the respective classification tasks. MLDocSchwenk and Li (2018) published an improved version of the Reuters Corpus Volume 2 Lewis et al. (2004) with balanced class priors for all languages. MLDoc consists of news stories in 8 languages: English, Spanish, French, German, Italian, Russian, Japanese and Chinese. Each news story is manually classified into one of four classes: _Corporate/Industrial (CCAT), Economics (ECAT), Government/Social (GCAT)_ and _Markets (MCAT)_. Per language, the train and test datasets contain 1k and 4k samples respectively. Reuters 21578From the Reuters-21578 dataset, a dataset of English news articles on a wide variety of topics, we use the **R8** and **R52** subsets (all-terms versions3). R8 has 8 categories and consists of 5485 and 2189 samples for training and testing respectively. R52 has 52 categories and 6532 and 2568 samples for training and testing respectively. The distribution of samples over the respective categories is highly skewed. Footnote 3: [https://ama.cachopo.org/datasets-for-single-label-text-categorization](https://ama.cachopo.org/datasets-for-single-label-text-categorization) During preprocessing on both datasets for all GNN-based models, we remove words with a frequency of less than 5, and tokenize the data. For all languages except Japanese and Chinese, we split sentences based on whitespace. For Chinese, we use the Jieba4 tokenizer, and for Japanese, the Fugashi one McCann (2020). For Transformer-based models, solely their respective tokenizers are used. Footnote 4: [https://github.com/fxsjy/jieba](https://github.com/fxsjy/jieba) ## 5 Approach Graph constructionWe follow the graph construction method as described in the original TextGCN (Yao et al., 2019) work except we deviate in two different directions. Stopword removal is omitted, as this assumes knowledge of the language, whereas we aim for an approach that assumes no prior knowledge. Furthermore, word-word edges are omitted too. Omitting such edges results in a much less densely connected graph, making learning substantially less memory intensive. We argue that the added value of word-word edges in a word-document graph is minimal given 1) only global information of word co-occurrence is considered (i.e., co-occurrence over the whole corpus as opposed to within document co-occurrence), and 2) over the course of training, words co-occurring in a document can influence each other's representation through an \(N\)-layer GNN where \(N>1\). 
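As a concrete sketch of this simplified construction (document-word TF-IDF edges only, plus self-loops), the adjacency matrix can be assembled as follows. The toy corpus, the particular TF-IDF convention and the helper names are illustrative assumptions, not the released implementation.

```python
import numpy as np
from collections import Counter

# Sketch: word-document graph without word-word edges (toy corpus, not MLDoc/Reuters).
docs = [["markets", "rise", "on", "earnings"],
        ["government", "passes", "new", "budget"],
        ["earnings", "beat", "markets", "forecast"]]

vocab = sorted({w for d in docs for w in d})
w2i = {w: i for i, w in enumerate(vocab)}
n_docs, n_words = len(docs), len(vocab)

# TF-IDF weights for document-word edges (smoothed idf; one common convention).
df = Counter(w for d in docs for w in set(d))
idf = {w: np.log((1 + n_docs) / (1 + df[w])) + 1.0 for w in vocab}

N = n_docs + n_words                      # documents first, then words
A = np.eye(N)                             # self-loops (A_ii = 1)
for di, d in enumerate(docs):
    tf = Counter(d)
    for w, c in tf.items():
        weight = (c / len(d)) * idf[w]
        A[di, n_docs + w2i[w]] = weight   # document -> word edge
        A[n_docs + w2i[w], di] = weight   # word -> document edge (symmetric)

print(A.shape, int((A > 0).sum()), "non-zero entries")
```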
We illustrate this with an example in Appendix E, while in Section 8, we experimentally demonstrate the limited effect of word-word edges using ablation studies. **K-hop Neighborhood Regularization (K-NR)** We propose a new method that can exploit the properties of a word-document graph, inspired by approaches such as GraphSage (Hamilton et al., 2017) that shows that meaningful node representations can be learned in an unsupervised manner with contrastive learning methods like Node2Vec (Grover and Leskovec, 2016). These methods typically consist of two components: a sampling technique for deciding what nodes are regarded as positive or negative samples, and a loss function. Let \(u\) be the anchor node, \(P_{p}\) the positive sample sampling method, \(P_{n}\) the negative sample sampling method, and \(\mathcal{J}_{\mathcal{G}}(u,P_{p},P_{n})\) the contrastive loss function. In the case of GraphSage, \(P_{p}\) is defined as a random walk starting from the anchor node, and \(P_{n}\) is defined as uniformly sampling from all available nodes. This contrastive learning approach on graphs assumes homogeneity and \(P_{p}\) always samples in the 1-hop neighborhood from the anchor node. Herein, we propose a contrastive learning regularization method tailored on heterogeneous graphs instead, where nodes of the same type are K-hops away from each other on the graph. In what follows, we describe our approach in detail for heterogeneous word-documents graphs for the specific case of K-NR with \(K=2\). Driven by the intuition that documents (within a language) that share large parts of their vocabulary are more likely to be about the same topic, we introduce 2-hop Neighborhood Regularization (2-NR), a novel unsupervised learning method which can be used as a regularization technique. Let \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) be the graph defined by the vertices \(\mathcal{V}\) and edges \(\mathcal{E}\). Let \(\mathcal{V}_{d},\mathcal{V}_{w}\) be the document and word nodes respectively. Given anchor node \(u\in\mathcal{V}_{d}\), we first sample a word node \(v\in\mathcal{V}_{w}\) connected to \(u\) by sampling from a multinomial distribution weighted by the edge attribute values (the TF-IDF scores): \[v\sim Multinomial(1,A_{u,\{w|w\in\mathcal{V}_{w}\wedge w\in \mathcal{N}(v)\}}) \tag{3}\] Then, a positive document node \(u_{p}\) and negative document node \(u_{n}\) are sampled as follows: \[u_{p}\sim U(\mathcal{N}(v)) \tag{4}\] \[u_{n}\sim U(\mathcal{V}_{d}\setminus\mathcal{N}(v)) \tag{5}\] Let \(z_{u}\) be the final hidden representation of node \(u\), the 2-NR loss, \(\mathcal{L}_{2\cdot NR}\), is then defined as: \[\mathcal{L}_{2\cdot NR}(u,u_{p},u_{n})=\\ max\{d(u,u_{p})-d(u,u_{n})+m,0\} \tag{6}\] for some distance function \(d\) and margin \(m\). This represents a triplet margin loss (Balntas et al., 2016), which forces \(u\) to be closer to \(u_{p}\) than \(u_{n}\) by at least a margin \(m\). See Appendix A for an elaboration on the intuition of 2-NR as well as a visualization. In the word-document graph case, \(K=2\) works specifically because we know that document nodes are only connected to word nodes and vice versa (see Section 5 for a description of our graph construction method). Hence, when starting at a document node, all nodes in its neighborhood are word nodes and similarly, all those word nodes do exclusively have edges to document nodes. Hence any walk of two steps starting at some document, will end up at another document via a word (node) they both contain. 
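A minimal sketch of 2-NR along the lines of Eqs. (3)-(6) is given below, using dense PyTorch tensors for readability; `A_dw` stands for the document-by-word block of TF-IDF weights and `z_docs` for the final document representations, and the loop over anchors is written naively rather than vectorized. It is an illustrative reading of the equations, not the reference implementation.

```python
import torch
import torch.nn.functional as F

def two_hop_nr_loss(z_docs, A_dw, margin=1.0):
    """Sketch of the 2-NR triplet loss of Eqs. (3)-(6), using dense tensors.
    z_docs: (n_docs, h) final document representations z_u.
    A_dw:   (n_docs, n_words) TF-IDF document-word weights."""
    n_docs = z_docs.shape[0]
    # Eq. (3): for every anchor document, sample one of its words, TF-IDF weighted.
    words = torch.multinomial(A_dw, num_samples=1).squeeze(1)     # (n_docs,)
    # connected[u, i] is True if document i also contains the word sampled for anchor u.
    connected = (A_dw[:, words].T > 0)
    loss, n_terms = 0.0, 0
    for u in range(n_docs):
        pos_pool = connected[u].nonzero().squeeze(1)
        pos_pool = pos_pool[pos_pool != u]                        # Eq. (4): other docs sharing the word
        neg_pool = (~connected[u]).nonzero().squeeze(1)           # Eq. (5): docs without the word
        if len(pos_pool) == 0 or len(neg_pool) == 0:
            continue
        u_p = pos_pool[torch.randint(len(pos_pool), (1,))]
        u_n = neg_pool[torch.randint(len(neg_pool), (1,))]
        d_pos = F.pairwise_distance(z_docs[u:u+1], z_docs[u_p])
        d_neg = F.pairwise_distance(z_docs[u:u+1], z_docs[u_n])
        loss = loss + torch.clamp(d_pos - d_neg + margin, min=0.0)  # Eq. (6)
        n_terms += 1
    return loss / max(n_terms, 1)
```

In practice, this term is added to the supervised classification objective as a regularizer; as noted in Section 6, a training-signal-annealing schedule is applied whenever 2-NR is used.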
This simplifies the implementation for our specific word-document graph, but one can easily imagine generalizing the method to situations where taking \(K\) hops on the graph does not guarantee ending up at a node of the same type as the start node, by restricting the sampling methods to a subset of the desired nodes. **Adaptive pseudo-labeling** Pseudo-labeling is a well-explored technique for improving performance in semi-supervised learning settings (Lee et al., 2013), which, recently, has also been successfully applied to graphs (Zhou et al., 2019; Chen et al., 2021). We argue this technique can be particularly powerful for heterogeneous word-document graphs based on three premises: (1) Different topics/classes have a different distribution of words in their vocabulary. So it can be assumed that there exist words per class that occur more often in documents corresponding to that respective class - i.e. these words are more distinctive for that given class, which in the word-document graph translates to that word node having relatively more edges to documents of the class the respective word is distinctive for. (2) Document nodes are always at least two hops away from each other in the graph, meaning that only the input features of one document can influence the final feature representation of another document via message passing on the graph. This is assuming a two-layer GNN. (3) The most effective way of encoding label information in the input document embedding is by directly optimizing for that respective class on the node, as opposed to relying on indirect optimization via backpropagating through the message-passing computational graph. Instead of applying adaptive pseudo-labeling to the whole graph, we propose to only apply it to a subset of unlabeled document nodes, \(\mathcal{U}_{d}\), that are not part of our train or test split. By doing this, we can directly optimize an unlabeled document embedding to be a good predictor for a certain class (premise (3)). This class-tailored document embedding can now be propagated over the graph to be used in the final feature representation of other document nodes via message passing on the graph (premise (2)). Finally, we can assume that there exist word nodes in the graph which are characteristic of a topic/class and via which the class-specific features can be propagated to other documents without losing information due to over-smoothing (premise (1)). We implement adaptive pseudo-labeling as described by Zhou et al. (2019), which adds an extra component to the total loss, the pseudo-label loss \(\mathcal{L}_{pse}\): \[\mathcal{L}_{pse}=\sum_{v_{i}\in U^{\prime}}\frac{1}{N_{i}}CE(\tilde{Y}_{i},F_{i}) \tag{7}\] with \(CE\) representing the cross-entropy loss, \(\tilde{Y}_{i}\) the pseudo-label and \(F_{i}\in\mathbb{R}^{C}\) the predicted probability per class. The pseudo-label is generated by taking the argmax over \(F_{i}\), which results in the pseudo-label loss optimizing for high-confidence predictions on the most certain class. The set of unlabeled samples \(U^{\prime}\) used for this loss is defined as: \[U^{\prime}=\left\{u_{i}\in U_{d}\;:\;F_{i,j}\geq\beta,\;j=\underset{j^{\prime}}{\text{argmax}}\,F_{i,j^{\prime}}\right\} \tag{8}\] A minimum confidence threshold \(\beta\) is used to filter out low-confidence predictions, and the pseudo-loss per node is weighted by dividing it by \(N_{i}\), the number of nodes which have the same predicted label as node \(u_{i}\) and are part of \(U^{\prime}\). 
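A corresponding sketch of the pseudo-label term, under the same illustrative assumptions (dense PyTorch tensors, with `logits` being the classifier outputs for the document nodes in \(\mathcal{U}_{d}\)), is shown below; it is a simplified reading of Eqs. (7)-(8), not the reference implementation.

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(logits, beta=0.75):
    """Sketch of the adaptive pseudo-label loss of Eqs. (7)-(8).
    logits: (n_unlabeled, C) classifier outputs for document nodes in U_d."""
    probs = F.softmax(logits, dim=1)                  # F_i
    conf, pseudo = probs.max(dim=1)                   # max confidence and pseudo-label Y~_i
    selected = conf >= beta                           # Eq. (8): confidence threshold
    if selected.sum() == 0:
        return logits.new_zeros(())
    # N_i: number of selected nodes sharing the same pseudo-label as node i.
    counts = torch.bincount(pseudo[selected], minlength=logits.shape[1]).float()
    weights = 1.0 / counts[pseudo[selected]]
    ce = F.cross_entropy(logits[selected], pseudo[selected], reduction="none")
    return (weights * ce).sum()                       # Eq. (7)
```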
## 6 Experimental setting Throughout our experiments, TextGCN is used as a directly comparable baseline. Since our main goal is to develop a method that performs well in low-resource scenarios for many languages without the need of any knowledge of that language - apart from the ability to identify word boundaries in a sentence - our setup deviates from the original TextGCN work. Unlike the original work, we do not perform a grid-search of hyperparameter settings per experiment/language, but rather keep them fixed - which make our results not directly comparable to the original. Similarly to the original TextGCN work, we also consider the R8 and R52 datasets for an analysis of our approach on English (see Section 4). Additionally, we also provide results for two PLMs trained with the same amount of data. These results are not directly comparable, since the PLMs are trained in an inductive setting, but are included to provide better insight into the positioning of our method in the context of broader literature. PLM baselinesWe introduce both multilingual BERT (mBERT) Devlin et al. (2018) and XLM-R Conneau et al. (2019) as strong baselines based on the Transformer Vaswani et al. (2017) architecture. These baselines are fine-tuned in the same data settings, with their architecture settings kept as their defaults as defined in the HuggingFace Transformers library Wolf et al. (2020). For training, a learning rate of 5e-5 and a batch-size of 20 is used. Learning settingsWe investigate the effectiveness of our approach when learning from 1%, 2%, 5%, 10% and 90% of the available training samples. The 1% setting is only considered for the R8 and R52 datasets, due to the already relatively small training set size in the MLDoc datasets. For all settings except the 90% one, the size of the validation set is equal to the size of the training set (see Appendix D for a background experiment on the influence of the division of a limited set of labeled samples over the train and validation sets). The remaining documents are then added to the word-document graph as unlabeled nodes. For the high-resource setting (90%), the remaining 10% of the training set is used for validation (i.e., no unlabeled nodes). We train all GNN models from scratch for each language and do not rely on any form of transfer- or multi-task learning. Training setup and hyperparametersWe use the Ranger optimizer (Liu et al., 2019; Zhang et al., 2019; Yong et al., 2020), an adapted version of Adam (Kingma and Ba, 2014). All experiments run for 1000 epochs and the model with the lowest validation loss is used at test time. A learning rate of 0.01 and dropout of 0.5 are used throughout all experiments except when mentioned otherwise. All hidden dimensions are set to 64 and in line with the original TextGCN work, we use two layers of GCN followed by one linear layer for classification. The log schedule for training signal annealing as per Appendix A.2 in Xie et al. (2020) is used whenever 2-NR is applied. For pseudo-labeling, we set the confidence threshold \(\beta=0.75\) following the original paper. ## 7 Results ### Comparison to TextGCN MLDocTable 1 shows the results of our experiments. In the 2% training data setting, FewShotTextGCN outperforms TextGCN by 17% points on average (\(\Delta\)) on the eight languages of the MLC-Doc dataset, showing that we can effectively utilize the properties of heterogeneous word-document graphs to improve learning in low-resource settings in many languages. 
For MLDoc, which is a dataset with uniform class priors, we see the difference in performance between original TextGCN and TextGCN combined with 2-NR grows larger as the amount of training data decreases, demonstrating that our proposed 2-NR regularizer helps to combat overfitting. Comparing the '+2-NR' results to those of FewShotTextGCN (that uses both 2-NR and adaptive pseudo-labeling), it can be seen that, overall, our regularizer is the primary contributor in outperforming the TextGCN baseline. Our version of adaptive pseudo-labeling also outperforms the TextGCN baseline, with the largest margins in the low-resource settings, indicating the effectiveness of utilizing unlabeled document nodes in the word-document graph. In the high-resource (90%) setting of MLDoc, FewShotTextGCN performs on a par with the original TextGCN. This can be explained by the fact that 2-NR is a regularization method and the training data set is relatively large in the high-resource setting, which makes that adding regularization to the learning process can be redundant. Furthermore, our version of adaptive pseudo-labeling works based on a set of unlabeled documents not belonging to either the train or the test set, which is a relatively small set of documents in this setting, namely only 10% of the documents of the total training set. ReutersInterestingly, FewShotTextGCN outperforms TextGCN consistently in all data settings for the English Reuters datasets, which are highly skewed in their class distribution. This can be seen as supporting evidence for the hypothesis that 2-NR forces the learned feature representations of documents to contain information of all words it contains, which helps to learn distinguishing features for documents of minority classes. ### Comparison to PLMs MLDocAlthough FewShotTextGCN only uses \(\approx 1\%\) of the parameters, has no pretrained knowledge of the considered languages, has no notion of word order in the documents and does not make use of a shared subword vocabulary, it performs on par with large PLMs across all settings for MLDoc. In the lowest resource setting, FewShotTextGCN outperforms all considered PLMs, whereas both PLMs start performing on par as the amount of available data increases. 
We hypothesize that the somewhat larger difference in performance for the \begin{table} \begin{tabular}{l l c c c c c c c c c} \hline \hline \multirow{2}{*}{\% train} & \multirow{2}{*}{**Edge types**} & \multicolumn{8}{c}{**MLDoc**} \\ \cline{3-11} & & **de** & **en** & **es** & **fr** & **it** & **ja** & **ru** & **zh** & \(\Delta\) \\ \hline \multirow{2}{*}{2\%} & \(\mathcal{V}_{d}-\mathcal{V}_{w}\)+\(\mathcal{V}_{w}-\mathcal{V}_{w}\) & 60.7 & 68.2 & 71.3 & 35.6 & 59.3 & 63.8 & 55.2 & 73.4 & 60.9 \\ & \(\mathcal{V}_{d}-\mathcal{V}_{w}\) & **74.6** & **71.7** & 71.3 & **76.7** & **59.9** & **65.1** & **59.9** & **75.4** & **69.3** \\ \hline \multirow{2}{*}{5\%} & \(\mathcal{V}_{d}-\mathcal{V}_{w}\)+\(\mathcal{V}_{w}-\mathcal{V}_{w}\) & 88.4 & 73.7 & 77.2 & **84.9** & 70.5 & **79.6** & 59.0 & **80.4** & 76.7 \\ & \(\mathcal{V}_{d}-\mathcal{V}_{w}\) & **88.5** & **80.7** & **78.1** & 82.7 & **71.5** & 78.4 & **60.2** & 80.0 & **77.5** \\ \hline \multirow{2}{*}{10\%} & \(\mathcal{V}_{d}-\mathcal{V}_{w}\)+\(\mathcal{V}_{w}-\mathcal{V}_{w}\) & **90.7** & 85.5 & **87.2** & 86.4 & 72.4 & 81.0 & 72.7 & 83.7 & 82.5 \\ & \(\mathcal{V}_{d}-\mathcal{V}_{w}\) & 90.3 & **86.8** & 87.0 & 86.4 & **75.8** & **82.3** & **74.7** & **85.1** & **83.6** \\ \hline \multirow{2}{*}{90\%} & \(\mathcal{V}_{d}-\mathcal{V}_{w}\)+\(\mathcal{V}_{w}-\mathcal{V}_{w}\) & **94.5** & 91.9 & 94.2 & **93.4** & 85.8 & **89.1** & 82.8 & **89.5** & 90.2 \\ & \(\mathcal{V}_{d}-\mathcal{V}_{w}\) & 94.1 & 91.9 & **94.4** & 93.0 & **86.6** & 88.7 & **85.0** & 89.4 & **90.4** \\ \hline \multirow{2}{*}{**\#edges**} & \(\mathcal{V}_{d}-\mathcal{V}_{w}\)+\(\mathcal{V}_{w}-\mathcal{V}_{w}\) & 7.4M & 11M & 5.5M & 8.4M & 5.2M & 4.9M & 10.2M & 5.2M & 7.2M \\ & \(\mathcal{V}_{d}-\mathcal{V}_{w}\) & 1M & 1.3M & 900K & 1.1M & 758K & 1.1M & 1.1M & 889K & 1M \\ \hline \hline \end{tabular} \end{table} Table 2: Average accuracy of 5 different seeds on the test set, with a different number of training samples available. Here, the original TextGCN model is used and only the graph-construction method is varied. \(\Delta\) corresponds to the average accuracy across seeds. Highest scoring method per language is marked in **bold**. Russian language is attributable to the fact that Russian is a highly inflective language, resulting in many unique words to learn a representation for. The PLMs have the advantage of using a sub-word vocabulary which serves as a remedy for the formerly described sparsity challenge. ReutersFor R8 holds that similarly to the results on MLDoc, FewShotTextGCN outperforms the PLM baselines in the two lowest-resource settings, whereas the PLMs perform better when more training data is available. The results on R52 are more notable, as the gap in performance between FewShotTextGCN and the PLMs grows relatively larger with more available training data. We hypothesize this could be due to the fact for FewShotTextGCN we use only a 64 dimensional hidden size to encode the 52 classes of the dataset, whereas the PLMs use a hidden size of 768. ## 8 Ablation experiments The original TextGCN implementation proposes to use edges between words based on their respective PMI. Since PMI is calculated using a window size of 20, many extra edges are introduced. For the MLDoc dataset, omitting word-word edges results in a graph that has, on average, only 15% of the amount of edges compared to the original graph (see Table 2 for statistics on the number of edges per graph construction method). 
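For reference, the word-word weights that our construction omits are positive PMI values computed from sliding-window co-occurrence counts. A minimal sketch of that computation (toy tokenised corpus, simplified window handling) is shown below; it is meant only to illustrate where the extra edges come from.

```python
import numpy as np
from collections import Counter
from itertools import combinations

def pmi_edges(docs, window=20):
    """Toy positive-PMI word-word edges from sliding windows over each document."""
    windows = []
    for doc in docs:
        if len(doc) <= window:
            windows.append(doc)
        else:
            windows.extend(doc[k:k + window] for k in range(len(doc) - window + 1))
    n_win = len(windows)
    single, pair = Counter(), Counter()
    for w in windows:
        uniq = set(w)
        single.update(uniq)
        pair.update(frozenset(p) for p in combinations(sorted(uniq), 2))
    edges = {}
    for p, c_ij in pair.items():
        i, j = tuple(p)
        pmi = np.log((c_ij / n_win) / ((single[i] / n_win) * (single[j] / n_win)))
        if pmi > 0:                      # only positive-PMI edges are kept
            edges[(i, j)] = pmi
    return edges

print(pmi_edges([["a", "b", "a", "c"], ["b", "c", "d"]], window=3))
```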
To analyse the effect of word-word edges, we evaluate the original TextGCN method across the different graph construction methods in the same data availability settings as our main set of experiments (Table 2). The results provide empirical evidence that, on average, word-word edges are redundant in topic classification problems. The average performance using graphs without word-word edges is always higher; however, performance difference between the two graph construction methods does get smaller as more data is added. In Appendix E we present a visual walk-through of how words can still influence each other's feature representations in a graph without word-word edges. ## 9 Discussion K-NR for K > 2Here, we argue by example that K-NR can also be applied to other heterogeneous graphs with two or more different kinds of nodes. Consider a network with three kinds of nodes: venue nodes, paper nodes and author nodes (Shi et al., 2016). Venue nodes have a connection to a paper node if the paper is published at that venue and authors have a connection to the paper node when they are a contributor to that respective paper. No other edges exist on this graph and the classification task concerns the author nodes. In this case, we could apply K-NR on the author nodes based on the intuition that authors that publish a paper at the same venue are more similar to each other than authors that do not publish at the same venue. In order to get from the anchor author node to a positive author node, one has to traverse the graph by hopping to a neighboring paper, venue, paper and finally author node in that respective order - resulting in \(K=4\). On this same graph, K-NR can be applied for paper nodes as well for \(K=2\) and traversing via the venue node. In general, considering a graph with \(M\) different node types, K-NR can be applied if in terms of node types a symmetrical path with an odd number of nodes can be traversed. In this case, \(K=2(M-1)\). ## 10 Conclusion We introduced K-hop Neighborhood Regularization (K-NR), a contrastive learning method for heterogeneous graphs, and showed its implementation for word-document graphs (2-NR) is highly effective in improving learning from scratch in low-resource settings for a range of languages on topic classification tasks. We also showed that we can exploit properties of word-document graphs for improved learning in few-shot settings. We demonstrated that by simplifying the graph construction method via omitting word-word edges we can improve performance while reducing memory requirements in terms of total number of edges. Additionally, we showed how pseudo-labeling can be successfully applied to word-document graphs. All approaches combined together form part of our new proposed method, FewShotTextGCN, an improvement over TextGCN for few-shot graph learning. FewShotTextGCN performs on par with large PLMs across the considered benchmarks using only a fraction of the parameters and no pre-training whatsoever, showing that GNNs are an attractive alternative for these Transformer-based models. Finally, using this method, we showed that transductive document classification can be performed successfully on a wide range of typologically diverse languages without any language model pretraining. In future work, we plan to ex plore the effectiveness of 2-NR on a large range of graphs, such as social networks, citation networks and product-user networks as well as adaptations of K-NR for \(K>2\). 
## 11 Limitations Our work focused on a subset of the text-classification field, namely topic classification. In order to generalize our contributions to other subsets such as sentiment classification, our method might benefit from incorporating word order Johnson and Zhang (2014). Secondly, adding 2-NR to the training process does slow down the convergence rate of training. For example, regular TextGCN would often reach its lowest validation loss in the range of 50 to 200 update steps, whereas TextGCN + 2-NR would often reach its lowest validation loss in the range of 700 to 900 update steps. We do not consider this a major limitation, as all experiments can still be performed on a single GPU with 8 GB of RAM.
2307.14569
HDR: Interfaces in crystalline materials
Interfaces such as grain boundaries in polycrystalline as well as heterointerfaces in multiphase solids are ubiquitous in materials science and engineering. Far from being featureless dividing surfaces between neighboring crystals, elucidating features of solid-solid interfaces is challenging and requires theoretical and numerical strategies to describe the physical and mechanical characteristics of these internal interfaces. The first part of this manuscript is concerned with interface-dominated microstructures emerging from polymorphic structural (diffusionless) phase transformations. Under high hydrostatic compression and shock-wave conditions, the pressure-driven phase transitions and the formation of internal diffuse interfaces in iron are captured by a thermodynamically consistent framework for combining nonlinear elastoplasticity and multivariant phase-field approach at large strains. The calculations investigate the crucial role played by the plastic deformation in the morphological and microstructure evolution processes under high hydrostatic compression and shock-wave conditions. The second section is intended to describe such imperfect interfaces at a finer scale, for which the semicoherent interfaces are described by misfit dislocation networks that produce a lattice-invariant deformation which disrupts the uniformity of the lattice correspondence across the interfaces and thereby reduces coherency. For the past ten years, the constant effort has been devoted to combining the closely related Frank-Bilby and O-lattice techniques with the Stroh sextic formalism for the anisotropic elasticity theory of interfacial dislocation patterns. The structures and energetics are quantified and used for rapid computational design of interfaces with tailored misfit dislocation patterns, including the interface sink strength for radiation-induced point defects and semicoherent interfaces.
Aurélien Vattré
2023-07-27T01:06:13Z
http://arxiv.org/abs/2307.14569v2
# Interfaces in crystalline materials ## Abstract Interfaces such as grain boundaries in polycrystalline solids as well as heterointerfaces in multiphase solids are ubiquitous in materials science and engineering, with wide-ranging properties and applications. Therefore, understanding the basics of interfaces is key in the optimization of interface-dominated materials for a wide range of applications, including electrochemical energy conversion and storage, optical, magnetic, and mechanical applications, and thermal applications such as thermal and environmental barrier coatings in the automobile and aeronautical industries. Far from being featureless dividing surfaces between neighboring crystals, elucidating features of solid-solid interfaces is challenging and requires theoretical and numerical strategies to describe the physical and mechanical characteristics of these internal interfaces. The first part of this manuscript is concerned with interface-dominated microstructures emerging from polymorphic structural (diffusionless) phase transformations. Under high hydrostatic compression and shock-wave conditions, the pressure-driven phase transitions and the formation of internal diffuse interfaces in iron are captured by a thermodynamically consistent framework combining nonlinear elastoplasticity and a multivariant phase-field approach at large strains. The calculations investigate the crucial role played by the plastic deformation in the morphological and microstructure evolution processes under high hydrostatic compression and shock-wave conditions. The second section is intended to describe such imperfect interfaces at a finer scale, for which the semicoherent interfaces are described by misfit dislocation networks that produce a lattice-invariant deformation which disrupts the uniformity of the lattice correspondence across the interfaces and thereby reduces coherency. For the past ten years, a constant effort has been devoted to combining the closely related Frank-Bilby and O-lattice techniques with the Stroh sextic formalism for the anisotropic elasticity theory of interfacial dislocation patterns. The structures and energetics are quantified and used for rapid computational design of interfaces with tailored misfit dislocation patterns, including the interface sink strength for radiation-induced point defects and semicoherent interfaces. 
###### Contents

* 1 Introduction
* 2 Crystalline interfaces during solid-solid phase transitions in iron
* 2.1 Motivation
* 2.2 A phase-field model coupled with finite elastoplasticity
* 2.2.1 Kinematics
* 2.2.2 Balance laws
* 2.2.3 The Clausius-Duhem inequality
* 2.2.4 Constitutive equations
* 2.2.5 Multiple reaction pathways and energy landscape
* 2.2.6 Computational framework
* 2.3 Pure hydrostatic compression
* 2.3.1 Material and model inputs
* 2.3.2 Analysis of the pressure-volume responses
* 2.3.3 Microstructure and variant selection
* 2.4 Shock wave propagation
* 2.4.1 The internal structure of shock waves
* 2.4.2 Effect of plasticity in shock-loaded iron
* 2.4.3 Residual stresses in the plastically-deformed microstructure
* 2.4.4 Dynamical instability in structural phase transitions
* 2.5 Limitations
* 3 Dislocation structures and energetics at heterophase interfaces
* 3.1 Motivation
* 3.2 Determining the Burgers vectors of interface dislocation arrays
* 3.2.1 Planar interfaces in linear elastic bicrystals
* 3.2.2 Volterra dislocations in the reference state
* 3.2.3 Crystallographic constraints on interface dislocations
* 3.2.4 Solution strategy
* 3.2.5 Elastic fields of interface dislocation arrays
* 3.2.6 Interface elastic strain energy
* 3.3 Symmetric example applications
* 3.3.1 Pure tilt grain boundary
* 3.3.2 Twist grain boundary
* 3.3.3 Pure misfit interface
* 3.4 Partitioning of elastic distortions at fcc/bcc interfaces
* 3.4.1 Mapping between states in the Nishiyama-Wassermann orientations
* 3.4.2 Far-field strains and rotations
* 3.4.3 Spurious fields from incorrect reference states
* 3.4.4 Orientations differing from the Nishiyama-Wassermann relations
* 3.4.5 Short-range elastic fields
* 3.4.6 Comparison with atomistic simulations
* 3.5 Application to the sink strength of semicoherent interfaces
* 3.5.1 Computational multi-model strategy
* 3.5.2 Kinetic Monte Carlo simulations with elastic interactions
* 3.5.3 Effect of elastic interactions on interface sink strength
* 3.6 Elastic strain relaxation in interfacial dislocation patterns
* 3.6.1 General considerations on hexagonal-shaped dislocation patterns
* 3.6.2 Solution methodology for strain-relaxed rearrangements
* 3.6.3 Parametric energy-based framework
* 3.6.4 Boundary conditions with surface/interface constitutive relations
* 3.6.5 Application to Au/Cu heterosystems
* 3.6.6 Comparison with atomistic simulations
* 3.7 Interaction with extrinsic dislocations in bimaterials
* 3.7.1 Extrinsic dislocation arrays and loops
* 3.7.2 Internal forces on intrinsic and extrinsic dislocations
* 3.7.3 On the piled-up dislocations in the (111)Cu/(011)Nb bimaterial
* 3.7.4 Limitations
* 3.8 Extension to non-singular fields in multilayered magneto-electro-elastic plates
* 3.8.1 Boundary-value problem and singularity-free field solutions
* 3.8.2 A primary case: 2D bilayered composites
* 3.8.3 Energy-based criterion for interlayers in A/B/A trilayers
* 3.8.4 Dislocation-induced response under applied external loading
* 4 Conclusion and future works
* 4.1 Concluding remarks
* 4.2 Perspectives
* 4.2.1 Thermoelasticity of semicoherent interfaces
* 4.2.2 Distributed dislocations for periodic networks of cracks
* 4.2.3 Towards a general treatment for {interfaces, dislocations, cracks}

## Acknowledgments

I would like to thank the members of the jury who agreed to evaluate this manuscript: Brigitte Bacroix, Stephane Berbenni, Renald Brenner, Marc Fivel and Ioan Ionescu.
I especially thank the reviewers for having taken such precious time to report on this work in the smallest details. Thank you for your positive and encouraging feedback! Of course, the content of this work would have shrunk to almost nothing without the constant exchanges with my former colleagues at the Direction des Applications Militaires of the Commissariat a l'Energie Atomique, Christophe Denoual, Jean-Lin Dequiedt, Yves-Patrick Pellegrini and Ronan Madec. Across the Atlantic, I realize how fortunate I have been to make inspiring acquaintances, in particular with Robert Balluffi, David Barnett, Michael Demkowicz, John Hirth, Erinan Pan, and I am forgetting some, Niaz Abdorrahim, Tom Arsenlis, Sylvie Aubry, Nicolas Bertin, Wei Cai, Christian Brandl, Kedar Kolluri, Enrique Martinez, Ryan Sills, and I will forget still more! I also wish to thank my closest colleagues at the Office, Christophe Bovet, Jean-Didier Garaud, Serge Kruch, Johann Rannou, with no claim to exhaustiveness. I thank Anne Tanguy for having regularly supported this habilitation, from the very beginning of the adventure. A particular and friendly thought goes to Vincent Chiavuttini, the only person available at 3 a.m. to discuss, in part, the theoretical and numerical correspondences between a crack and a dislocation... welcome to the world of the latter, and what a pleasure to exchange so late (although, let us continue, but let us mention it neither to Aurelie, nor to Aurelie...). I warmly thank the whole secretarial team of the Departement Materiaux et Structures: your help in solving administrative problems on a daily basis is invaluable. Finally, thank you to a special group A\({}^{3}=\{\) Achille (4 months), Anton (2 years), Aurelie \(\}^{\ast}\) for the unbounded happiness it brings me every day. This habilitation, which contains the << mille-feuilles >> and other << Rubik's cubes >> already contemplated, is also yours!

Figure 1: \({}^{\ast}\)Regularized and anisotropic solutions for the normal stress, the shear stress and the energy density of a simply connected prismatic dislocation loop, embedded in a << mille-feuille >> (layered medium)

## Chapter 1 Introduction

Interfaces in polycrystalline as well as multiphase solids of natural and synthetic origin have found their place in various applications, ranging from semiconductor devices to advanced multifunctional coatings in the automobile and aeronautical industries. Remarkably, the behavior of polycrystalline materials is often reduced to the analysis of their inherent grain boundaries, while the most recent roadmaps on photonics and phononics propose to design on-demand bandgaps by tailoring the topological interface states in metamaterials. As claimed by Wolfgang Pauli, however, because "God made the bulk; the surface was invented by the devil", the interface engineering of solid-state materials inevitably requires specific experimental and numerical contributions to describe the physical and mechanical characteristics of these internal interfaces. Far from being featureless dividing surfaces between neighboring crystals, homo- and hetero-phase interfaces have thus become a central object of study in the broader field of materials science and engineering. The manuscript is divided into two chapters, considering first, in chapter 2, the thermodynamics of diffuse interfaces, a concept developed more than a hundred years ago by Gibbs.
The description of the structures and energetics of imperfect interfaces, namely semicoherent interfaces, is then treated in chapter 3. These semicoherent interfaces are described by misfit dislocation networks that produce a lattice-invariant deformation which disrupts the uniformity of the lattice correspondence across the interfaces and thereby reduces coherency. This topic has received considerable attention due to the development of high-resolution experimental techniques and increased computational resources in recent decades. The introductory chapter 2 is thus concerned with the internal interfaces emerging from polymorphic structural (diffusionless) phase transformations. The formation of these solid-solid interfaces during the pressure-driven phase transitions in iron is captured by a thermodynamically consistent framework combining nonlinear elastoplasticity and a multivariant phase-field approach at large strains. Treatments of thermodynamics and kinetic relations of the phase transitions are formulated through a free energy landscape that involves the concept of reaction pathways with respect to the point group symmetry properties of both low- (cubic) and high- (hexagonal) pressure crystal lattices of iron. The phase-field formalism coupled with finite elastoplastic deformations is implemented into a three-dimensional finite element scheme and is applied to the body-centered cubic to hexagonal close-packed phase transitions under high hydrostatic compression and shock-wave conditions. The calculations highlight the crucial role played by the plastic deformation in the morphological and microstructural evolution processes. However, the coexistence over a wide range of pressure of both cubic and hexagonal lattice structures in the interface-dominated microstructure leads, in general, to the loss of lattice coherence at the interfaces, for which the lattice correspondence across the grain boundaries and heterophase interfaces requires a fine dislocation-based description of internal interfaces. It is this last objective that is covered by the main chapter 3. Chapter 3 is therefore dedicated to the structures and energetics of heterophase interfaces. Although the simplest interface is a single isolated planar interface separating two adjacent crystals, also viewed as a planar interface in bimaterials, such an idealized interface between two dissimilar crystals provides the essential basis for understanding the properties of interface-dominated materials. For the past ten years, a constant effort has been devoted to combining the closely related Frank-Bilby and O-lattice techniques with the Stroh sextic formalism for the anisotropic elasticity theory of interfacial dislocation patterns. The key formalism is used by means of a Fourier series-based analysis to determine the reference states of semi-coherent interfaces that give rise to dislocation arrays whose far-field elastic fields meet the condition of vanishing far-field strains and prescribed misorientations. In accordance with the quantized Frank-Bilby equation, these interface dislocation structures, which are also viewed as Volterra dislocations that have been inserted into the reference state, generate persistent short-range elastic stresses near the interfaces. The corresponding energetics have been quantified and used for rapid computational design of interfaces with tailored misfit dislocation patterns.
In particular, a coupled approach with an object kinetic Monte Carlo code has revealed that elastic interactions between radiation-induced point defects and semicoherent interfaces lead to significant increases in interface sink strength, compared to the case with no defect-interface interactions. The original work has also been extended to bilayers of finite thickness terminated with free surfaces, layered superlattices with differing layer thicknesses, as well as multilayered magneto-electro-elastic plates, for semicoherent interfaces with relaxed dislocation patterns including core-spreading effects. Overall, the elastic full-field solutions have been compared with atomistic calculations for many specific lattice structures, which provide an opportunity for rigorous validation of the anisotropic elasticity theory of interfacial dislocations as well as for collaborations with individuals outside the home laboratory. Although the reader may be disappointed (I understand it...) not to find the content of the two chapters combined in a unified formalism, chapter 4 provides concluding remarks and further directions for near-future developments.

## Chapter 2 Crystalline interfaces during solid-solid phase transitions in iron

### 2.1 Motivation

The high-pressure and high-deformation states of iron (Fe) are of vital importance in many technological and societal applications [33] as well as in geophysics due to the role of Fe properties in the Earth and telluric exoplanet internal structure [235]. Fundamental understanding of the physical and mechanical properties of Fe under extreme conditions, where the deformation state is caused by various irreversible processes (e.g. plasticity and polymorphic structural (diffusionless) solid-solid phase transformations), is therefore crucial in both materials science and condensed matter physics. The first indirect evidence of polymorphic phase transitions in iron was reported in Ref. [17] under shock compression. The authors reported a series of three discontinuous jumps in the velocity of the free surface and postulated that the three-wave shock structure is produced by a compressive elastic precursor (Ep wave) followed by a plastic wave (P wave), and, a third wave attributed to a phase transformation (PT wave). Wave profile measurements indicate that the onset of the phase transition occurred at a pressure of \(\sim 13\) GPa and room temperature on the Hugoniot. Since these pioneering experiments, subsequent efforts succeeded in acquiring static high-pressure X-ray diffraction measurements, in which the stable ferromagnetic body-centered cubic ground state (bcc \(\alpha\)-Fe) exhibits a magnetic and structural transition to the nonmagnetic hexagonal close-packed phase (hcp \(\epsilon\)-Fe) at about \(13\) GPa, revealing the same transition as in shock experiments. Therefore, both bcc and hcp phases have been observed to coexist over a wide range of pressure, which captures the signature of a diffusionless solid-to-solid martensitic transition in iron. While the phase diagram of iron under hydrostatic pressure is well established [220], detailed in situ observations via dynamic X-ray diffraction techniques during shock-loading have supported unambiguously that the high pressure phase has hcp crystal structure [143, 289].
However, due to the considerable experimental difficulties of quantifying plasticity with respect to the polymorphic phase transformations during shock wave propagation in solids, the complete irreversible deformation mechanism still remains poorly investigated. The high pressure-induced transition in iron has been intensively described using ab-initio electronic structure calculations, where some simulation results remain debated. Although the broad outline of the transition has been settled by crystallographic considerations [48, 182, 23], a major problem deals with the accuracy in determining the energy landscape for the bcc-to-hcp transition [83, 171]. Furthermore, ab-initio computational resources are limited to small system sizes, for which plasticity-induced effects in iron cannot be captured by first-principles calculations. Alternative approaches are based on large-scale molecular dynamics simulations that give insight into the motion of multi-million-atoms. Shock waves have also been simulated by employing embedded atom method potentials and varying initial shock strength [140, 141, 142]. For low particle velocities, an elastic shock wave of uniaxially compressed bcc was observed. With increasing shock strength, a two-wave shock structure was identified with an elastic precursor followed by a slower phase-transition wave. No direct evidence of plastic wave profile was observed, certainly due to the small time scale compared to experiments that exhibit a three-wave structure at the nanosecond scale [17, 19]. While further work is needed to understand the detailed mechanisms of plasticity under shock conditions, phase-field models provide a companion approach to shock response of crystalline materials at higher time and length scales. Various continuum mechanics approaches to simulate martensitic phase transitions in the context of plasticity theory have been developed and can be categorized by the nature of the scale description of the constitutive relations. A first micromechanical class of models aims to deliver predictions of macroscopic observables, e.g. stress-strain curves, by including microstructural aspects via homogenization and averaging techniques. In a multiscale strategy, relevant approaches track the volume fraction of martensite phase in the small [133, 218] and large [151, 180] strain formulations. However, these models are generally unable to predict detailed microstructural changes and spatial arrangements of parent-product interfaces during phase transformations at the nanometer scale. A second class of models for displacive transformations has pushed toward smaller scales in an effort to capture transformational processes by tracking the kinetics of interface orientations and variants with respect to the associated configurational forces. Thus, structural phase-field approaches have been successfully applied to model microstructure evolution by formulating thermodynamic driving forces for martensitic transitions between stable states [160, 9, 145, 74, 293]. Treatments of thermodynamics and kinetic relations in phase-field approaches are related to the pioneering works by [49] and [7], within which a material system tends to evolve towards a minimum state of free energy. Chapter 2 introduces a thermodynamically consistent framework for combining nonlinear elastoplasticity and multivariant phase-field approach at large strains [257]. 
In accordance with the Clausius-Duhem inequality in section 2.2, the Helmholtz free energy and time-dependent constitutive relations give rise to displacive driving forces for pressure-induced martensitic phase transitions in materials. Inelastic forces are obtained by using a representation of the energy landscape that involves the concept of reaction pathways with respect to the point group symmetry operations of crystal lattices [75]. Using the element-free Galerkin method with high-performance computing resources, the finite deformation framework is used to analyze the polymorphic \(\alpha\)- to \(\epsilon\)-Fe phase transitions in iron under high hydrostatic compression [257] and shock-wave [264] loadings, as detailed in sections 2.3 and 2.4, respectively, while a recent application to twinning and retwinning in tantalum can be found in Ref. [44]. The three-dimensional nonlinear simulations accurately reproduce observable characteristics reported in the experimental literature, for which the crucial role played by the plastic deformation is analyzed with respect to the peculiar formation of interface-dominated microstructures with a specific selection of high-pressure variants.

### 2.2 A phase-field model coupled with finite elastoplasticity

This section is concerned with a thermodynamically consistent phase-field formalism for solid-state transitions. The model is formulated in a Lagrangian framework for finite strains, motivated by obtaining isothermal driving forces and constitutive relations at a material point.

#### 2.2.1 Kinematics

An arbitrary material point \(\mathbf{X}\) is defined in a homogeneous reference configuration \(\Omega_{0}\subset\mathbb{R}^{3}\), for which the motion of \(\Omega_{0}\) is given by the mapping \(\mathbf{x}=\mathbf{\chi}\left(\mathbf{X},t\right):\Omega_{0}\to\Omega\subset\mathbb{R}^{3}\) with respect to time \(t\). The total deformation gradient \(\mathbf{F}\) is related to the following multiplicative decomposition [161, 162, 151, 165], i.e., \[\mathbf{F}=\left.\frac{\partial\mathbf{\chi}}{\partial\mathbf{X}}\right|_{t}=\mathbf{\nabla}\mathbf{\chi}=\mathbf{F}\mathbf{e}\cdot\mathbf{F}\mathbf{t}\cdot\mathbf{F}\mathbf{p}\,, \tag{2.1}\] with \(\mathbf{\nabla}\) the material gradient with respect to \(\mathbf{X}\). Here, the reference configuration is associated with the initial single-crystal bcc iron, and, the total deformation gradient is decomposed into elastic \(\mathbf{F}\mathbf{e}\), plastic \(\mathbf{F}\mathbf{p}\), and, transformational \(\mathbf{F}\mathbf{t}\) distortions, the latter leading to the pressure-induced phase transformation from the bcc to hcp phases. Similarly to classical crystal elastoplasticity theories [154, 159], the decomposition eq. (2.1) is not uniquely defined and different ordering relations have been taken into account in the literature [244]. Because the local irreversible plastic deformation \(\mathbf{F}\mathbf{p}\) of the neighborhood of \(\mathbf{X}\), e.g. caused by dislocation glides, does not alter the crystal orientation and structure of the lattice vectors, the transformational component \(\mathbf{F}\mathbf{t}\) occurs between \(\mathbf{F}\mathbf{p}\) and \(\mathbf{F}\mathbf{e}\), where the elastic contribution accounts for the lattice stretching \(\mathbf{U}\mathbf{e}\) and rotation \(\mathbf{Re}\).
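To fix ideas, the multiplicative split of eq. (2.1) and the polar decomposition of its elastic part can be reproduced numerically in a few lines. The following is a minimal sketch assuming NumPy/SciPy, with purely illustrative placeholder matrices that are not fitted to iron and are not part of the original implementation.

```python
import numpy as np
from scipy.linalg import polar

# Illustrative (placeholder) elastic, transformational and plastic distortions
Fe = np.array([[1.02, 0.01, 0.0],
               [0.00, 0.99, 0.0],
               [0.00, 0.00, 1.0]])
Ft = np.diag([1.05, 0.95, 0.97])   # a volume-changing lattice distortion
Fp = np.array([[1.0, 0.03, 0.0],
               [0.0, 1.00, 0.0],
               [0.0, 0.00, 1.0]])  # isochoric plastic shear, det(Fp) = 1

# Total deformation gradient, eq. (2.1): F = Fe . Ft . Fp
F = Fe @ Ft @ Fp

# Polar decomposition of the elastic part: Fe = Re . Ue
Re, Ue = polar(Fe, side='right')

print("det F  =", np.linalg.det(F))
print("det Fe = det Ue =", np.linalg.det(Fe), np.linalg.det(Ue))
print("Re orthogonal:", np.allclose(Re.T @ Re, np.eye(3)))
```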
The polar decomposition of \(\mathbf{Fe}\) reads: \(\mathbf{Fe}=\mathbf{Re}\cdot\mathbf{U}\mathbf{e}\), with \(\mathbf{U}\mathbf{e}^{2}=\mathbf{Fe}^{\dagger}\cdot\mathbf{Fe}\), and, \(\det\mathbf{Fe}=\det\mathbf{U}\mathbf{e}=j_{\mathbf{e}}\). The superscript \({}^{\dagger}\) denotes the transpose operation. Although the controversy regarding the decomposition is beyond the scope of this paper, both tensors \(\mathbf{F}\mathbf{t}\) and \(\mathbf{F}\mathbf{p}\) are retained as independent kinematic variables associated with the dissipative processes introduced below.

#### 2.2.3 The Clausius-Duhem inequality

The martensitic phase-field approach coupled with large elastoplastic deformations is derived within a thermodynamic framework in which the second law of thermodynamics plays a crucial role. When the thermal effects are ignored, the fundamental Clausius-Duhem inequality is expressed in terms of stress power per unit reference volume [64] as \[\int_{\Omega_{0}}\left(\mathbf{P}\colon\dot{\mathbf{F}}-\rho_{0}\,\dot{\psi}\right)\,\mathrm{d}\Omega_{0}\geq 0\,, \tag{2.7}\] where \(:\) denotes the double inner tensor product, and, \(\psi\) the specific Helmholtz free energy. Equation (2.7) shows that the first Piola-Kirchhoff stress tensor \(\mathbf{P}\) and the deformation gradient \(\mathbf{F}\) are work-conjugate variables, while \(\mathbf{P}\colon\dot{\mathbf{F}}\) defines the mechanical stress power per unit volume in the Lagrangian formulation. Within the model of the multiplicative decomposition in finite strains, it is conveniently postulated that the Helmholtz free energy can be written in the following form: \[\psi\doteq\psi\left(\mathbf{F}\mathbf{e},\mathbf{F}\mathbf{t},\mathbf{\nabla}\mathbf{F}\mathbf{t}\right)\,, \tag{2.8}\] where \(\mathbf{\nabla}\mathbf{F}\mathbf{t}\) is a phenomenological third-order gradient term that acts as a penalty for spatial nonuniformity to produce diffuse interfaces. Because the elastic response is not affected by the plastic activities, the elastic part of the Helmholtz free energy is supposed to depend on the elastic and transformational distortions only. Moreover, it is assumed that both transformational and plastic works are not dependent on each other, so that the free energy may be additively decomposed into elastic \(\psi_{\mathrm{e}}\), transformational \(\psi_{\mathrm{t}}\), and, purely empirical gradient penalty \(\psi_{\mathrm{v}}\) contributions.
With the aforementioned considerations, the Helmholtz free energy can thus be written as \[\dot{\psi}\doteq\psi_{\mathrm{e}}\left(\mathbf{F}\mathbf{F},\mathbf{F}\right) +\psi_{\mathrm{t}}\left(\mathbf{F}\right)+\psi_{\mathrm{v}}\left(\boldsymbol{ \nabla}\mathbf{F}\right)\,, \tag{2.9}\] which, in contrast with ab-initio electronic structure calculations, is not uniquely defined. However, such elastic/inelastic splitting, comparable to the classical phase-field models with elastic and chemical potentials [272, 9], is fundamental for applications that exhibit a strong coupling between acoustic waves and phase transformations, e.g. wave propagation influencing the early stages of the phase transitions induced by shock loadings. Thus, eqs. (2.1) and (2.9) yield to the rates of the total deformation and free energy, i.e., \[\dot{\psi} =\frac{\partial\psi_{\mathrm{e}}}{\partial\mathbf{F}}\Big{|}_{ \mathbf{R}}\dot{\cdot}\mathbf{F}\mathbf{e}+\frac{\partial\psi_{\mathrm{e}}}{ \partial\mathbf{F}}\Big{|}_{\mathbf{F}\dot{\mathbf{r}}}\dot{\mathbf{r}}\dot{ \mathbf{r}}+\frac{\partial\psi_{\mathrm{t}}}{\partial\mathbf{F}}\dot{\mathbf{ r}}\dot{\mathbf{r}}+\frac{\partial\psi_{\mathrm{v}}}{\partial\mathbf{F} \mathbf{r}}\,\,\cdot\,\boldsymbol{\nabla}\dot{\mathbf{R}}\,, \tag{2.10}\] where \(\therefore\) denotes the triple inner tensor product. Inserting eqs. (2.10) into the global form of the Clausius-Duhem inequality (2.7) and applying the chain rule, the non-negative requirement leads therefore to \[\int_{\Omega_{0}}\left\{\left(\mathbf{P}\colon\mathbf{F}\mathbf{ p}^{\dagger}\colon\mathbf{F}\mathbf{t}^{\dagger}-\rho_{0}\,\frac{\partial\psi_{ \mathrm{e}}}{\partial\mathbf{F}\mathbf{e}}\Big{|}_{\mathbf{R}}\right)\colon \dot{\mathbf{F}}\dot{\mathbf{e}}+\left(\mathbf{F}\mathbf{e}^{\dagger}\colon \mathbf{P}\cdot\mathbf{F}\mathbf{p}^{\dagger}-\rho_{0}\,\frac{\partial\psi_{ \mathrm{e}}}{\partial\mathbf{F}}\Big{|}_{\mathbf{F}\mathbf{e}}-\rho_{0}\frac{ \partial\psi_{\mathrm{t}}}{\partial\mathbf{F}\mathbf{t}}\right)\colon\dot{ \mathbf{R}}+\boldsymbol{\Sigma}_{*}\colon\mathbf{D}\mathbf{p}\right.\] \[\left.-\rho_{0}\frac{\partial\psi_{\mathrm{v}}}{\partial\boldsymbol {\nabla}\mathbf{F}\mathbf{t}}\,\,\cdot\,\boldsymbol{\nabla}\dot{\mathbf{R}} \right\}\mathrm{d}\Omega_{0}\geq 0\,, \tag{2.11}\] where \(\boldsymbol{\Sigma}_{*}\) is a work-conjugate stress measure related to the first Piola-Kirchhoff \(\mathbf{P}\), as follows \[\boldsymbol{\Sigma}_{*}=\mathbf{F}\mathbf{t}^{\dagger}\colon\mathbf{P}\cdot \mathbf{F}\mathbf{p}^{\dagger}\,. \tag{2.12}\] Using the permutability of time and space differentiation in the reference configuration and the Gauss theorem, the last right-hand side term in eq. (2.11) can be rewritten, i.e., \[\int_{\Omega_{0}}\left(\frac{\partial\psi_{\mathrm{v}}}{\partial\boldsymbol{ \nabla}\mathbf{F}\mathbf{t}}\,\cdot\,\boldsymbol{\nabla}\dot{\mathbf{R}} \right)\,\mathrm{d}\Omega_{0}=-\int_{\Omega_{0}}\left(\boldsymbol{\nabla}\cdot \frac{\partial\psi_{\mathrm{v}}}{\partial\boldsymbol{\nabla}\mathbf{F} \mathbf{t}}\,\cdot\,\dot{\mathbf{R}}\right)\,\mathrm{d}\Omega_{0}+\int_{\Omega_ {0}}\left(\dot{\mathbf{F}}\colon\frac{\partial\psi_{\mathrm{v}}}{\partial \boldsymbol{\nabla}\mathbf{F}\mathbf{t}}\,\cdot\,\boldsymbol{n}\right)\,\mathrm{ d}\Sigma_{0}\,, \tag{2.13}\] where \(\Sigma_{0}\) is a boundary of \(\Omega_{0}\) with unit outward normal \(\boldsymbol{n}\). 
Assuming that the surface dissipation is absent during the transformational process, additional boundary conditions as set of nine equations for phase transitions may also be derived by \[\frac{\partial\psi_{\mathrm{v}}}{\partial\boldsymbol{\nabla}\mathbf{F} \mathbf{t}}\,\cdot\,\boldsymbol{n}=\boldsymbol{0}\,,\,\,\text{with}\,\,\,\dot{ \mathbf{R}}\neq\boldsymbol{0}\,\,\text{at}\,\Sigma_{0}\,, \tag{2.14}\] corresponding to the orthogonality relations between \(\boldsymbol{\nabla}\mathbf{F}\) and the external surfaces \(\Sigma_{0}\). Thus, eqs. (2.11\(-\)2.14) yield to a local formulation of the free energy imbalance in terms of dissipation per unit reference volume of mechanical energy \(\mathcal{D}\), as follows \[\mathcal{D}=\left(\mathbf{P}\colon\mathbf{F}\mathbf{p}^{\dagger}\colon\mathbf{ F}\mathbf{t}^{\dagger}-\rho_{0}\,\frac{\partial\psi_{\mathrm{e}}}{\partial \mathbf{F}\mathbf{e}}\Big{|}_{\mathbf{R}}\right)\colon\dot{\mathbf{F}}\mathbf{ e}+\boldsymbol{\Sigma}_{*}\colon\dot{\mathbf{R}}\mathbf{+}\boldsymbol{\Sigma}_{*} \colon\mathbf{D}\mathbf{p}\geq 0\,, \tag{2.15}\] where the dissipative forces \(\mathbf{\chi}\mathbf{t}\), conjugated to dissipative rate \(\dot{\mathbf{F}}\mathbf{t}\), are given by \[\mathbf{\chi}\mathbf{t}=\mathbf{F}^{\mathrm{t}}\cdot\mathbf{P}\cdot\mathbf{F} \mathbf{p}^{\mathrm{t}}-\rho_{0}\left.\frac{\partial\psi_{\mathrm{F}}}{ \partial\mathbf{F}}\right|_{\mathbf{F}\mathbf{t}}-\rho_{0}\frac{\partial\psi_{ \mathrm{F}}}{\partial\mathbf{F}\mathbf{t}}+\rho_{0}\,\boldsymbol{\nabla}\cdot \frac{\partial\psi_{\mathrm{F}}}{\partial\boldsymbol{\nabla}\mathbf{F}\mathbf{t }}\,. \tag{2.16}\] The relation (2.16) defines the thermodynamic displacive driving forces for change in \(\mathbf{F}\mathbf{t}\), acting on a material point \(\boldsymbol{X}\) under isothermal conditions. Although the plastic deformation is not integrated as an internal state variable, e.g. via a defect-energy term as in Refs. [110, 2], but rather as a kinematic variable, the plastic contribution may significantly alter the state of residual stress and also play an important role in dictating the morphology of the microstructural changes and in modeling the irreversibility of phase transitions. #### 2.2.4 Constitutive equations Constitutive equations for reversible elastic deformations and irreversible processes of deformable material bodies undergoing phase and plastic deformations are required to be consistent with the Clausius-Duhem inequality. #### Hyperelasticity The standard assumption that the rate of dissipation is independent of \(\mathbf{F}\mathbf{e}\) in eq. (2.15), i.e., elasticity is a non-dissipative process, results in the hyperelasticity constitutive relation in terms of the first Piola-Kirchhoff stress field, as follows \[\mathbf{P}=\rho_{0}\left.\frac{\partial\psi_{\mathrm{F}}}{\partial\mathbf{F} }\right|_{\mathbf{R}}\cdot\mathbf{F}^{-\mathrm{t}}\cdot\mathbf{F}\mathbf{p}^{ -\mathrm{t}}\,. 
\tag{2.17}\] A quadratic form for the strain energy density per unit reference volume is assumed, for which a dependence of \(\psi_{\mathrm{F}}\) on \(\mathbf{F}\mathbf{e}\) and \(\mathbf{F}\mathbf{t}\) manifests explicitly via the anisotropic elastic components: \[\rho_{0}\psi_{\mathrm{F}}=\tfrac{1}{2}\mathbf{E}\mathbf{e}\colon\mathrm{D}\left( \mathbf{C}\mathbf{t}\right):\mathbf{E}\mathbf{e}\,, \tag{2.18}\] where \(\mathbf{E}\mathbf{e}\) is the elastic Green-Lagrange strain tensor, defined by \[\mathbf{E}\mathbf{e}=\tfrac{1}{2}\left(\mathbf{C}\mathbf{e}-\mathbf{I}\right)\,, \tag{2.19}\] with \(\mathbf{C}\mathbf{e}=\mathbf{F}\mathbf{e}^{\mathrm{t}}\cdot\mathbf{F}\mathbf{e}\) the right elastic Cauchy-Green deformation tensor, so that \(\mathbf{C}\mathbf{e}=\mathbf{F}\mathbf{t}^{\mathrm{t}}\cdot\mathbf{C}\mathbf{ e}\cdot\mathbf{F}\mathbf{t}\). Inserting eq. (2.18) into the hyperelasticity condition (2.17), the nonlinear stress-elastic strain constitutive relation is rewritten as follows \[\mathbf{P}=\mathbf{F}\mathbf{e}\cdot\mathbf{S}\mathbf{e}\cdot\mathbf{F}^{- \mathrm{t}}\cdot\mathbf{F}\mathbf{p}^{-\mathrm{t}}+\mathbf{F}\mathbf{e}\cdot \left(\mathbf{E}\mathbf{e}\colon\frac{\partial\mathrm{D}\left(\mathbf{C} \mathbf{t}\right)}{\partial\mathbf{C}\mathbf{t}}\colon\mathbf{E}\mathbf{e} \right)\cdot\mathbf{F}\mathbf{p}^{-\mathrm{t}}\,, \tag{2.20}\] where \(\mathbf{S}\mathbf{e}=\mathrm{D}\left(\mathbf{C}\mathbf{t}\right):\mathbf{E}\) is an elastic stress measure associated with \(\mathbf{E}\mathbf{e}\), and, \(\partial_{\mathbf{C}\mathbf{t}}\mathrm{D}\) is a sixth-order tensor, i.e., the derivative of \(\mathrm{D}\) with respect of \(\mathbf{C}\mathbf{t}\). It is worth pointing out that the anisotropic pressure-dependent elastic stiffness tensors of both bcc and hcp phases are explicitly taken into account in the present formalism. With use of the non-dissipative properties of hyperelasticity, the local dissipation considered in the Clausius-Duhem inequality (2.15) can also be conceptually divided into transformational \(\mathcal{D}_{\mathrm{t}}\) and plastic \(\mathcal{D}_{\mathrm{p}}\) dissipative rates per unit reference volume, i.e., \[\mathcal{D}\doteq\mathcal{D}_{\mathrm{t}}+\mathcal{D}_{\mathrm{p}}\geq 0\,, \tag{2.21}\] due to the onset of the phase transitions or the movements of interface during phase transitions, and, to the plastic deformation in materials, respectively. For simplicity, it is assumed that both transformational and plastic dissipative processes are thermodynamically uncoupled such that the inequality (2.21) splits into two stronger non-negative inequalities, as follows \[\mathcal{D}_{\mathrm{t}}=\mathbf{\chi}\mathbf{t}\colon\mathbf{F}\mathbf{t} \geq 0\,\,\,\text{and,}\,\,\,\mathcal{D}_{\mathrm{p}}=\mathbf{\Sigma}_{*}\colon \mathbf{D}\mathbf{p}\geq 0\,. \tag{2.22}\] Kinetic constitutive relations that relate the rates \(\dot{\mathbf{F}}\mathbf{t}\) and \(\mathbf{D}\mathbf{p}\) to the associated driving forces for both dissipative processes in hyperelastic materials must also be defined such that the inequalities in eqs. (2.22) are satisfied. These steps are carried out in the two subsequent sections. #### Kinetics of phase transitions For solid-state structural transformations, a linear kinetic equation that relates the rate of the transformational distortion \(\hat{\mathbf{R}}\) to the displacive driving forces \(\mathbf{X}\mathbf{t}\) is suggested, i.e., \[v\,\hat{\mathbf{R}}=\mathbf{X}\mathbf{t}\,, \tag{23}\] where \(v>0\) is a viscosity-like parameter. 
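Before turning to the kinetics in more detail, the hyperelastic relations (2.18)-(2.19) above can be illustrated with a short numerical sketch. It assumes an isotropic stiffness tensor built from placeholder Lamé constants instead of the pressure-dependent anisotropic tensors \(\mathrm{D}\left(\mathbf{C}\mathbf{t}\right)\) interpolated along the reaction pathways, and it only shows how the strain measure, the stress measure and the stored energy are evaluated at a single material point.

```python
import numpy as np

# Placeholder Lamé constants [Pa]; the model actually uses pressure-dependent
# anisotropic stiffness tensors interpolated along the reaction pathways.
lam, mu = 1.0e11, 8.0e10

# Isotropic fourth-order stiffness D_ijkl = lam d_ij d_kl + mu (d_ik d_jl + d_il d_jk)
I = np.eye(3)
D = (lam * np.einsum('ij,kl->ijkl', I, I)
     + mu * (np.einsum('ik,jl->ijkl', I, I) + np.einsum('il,jk->ijkl', I, I)))

Fe = np.array([[1.02, 0.01, 0.0],
               [0.00, 0.99, 0.0],
               [0.00, 0.00, 1.0]])          # illustrative elastic distortion

Ce = Fe.T @ Fe                              # right elastic Cauchy-Green tensor
Ee = 0.5 * (Ce - I)                         # elastic Green-Lagrange strain, eq. (2.19)
Se = np.einsum('ijkl,kl->ij', D, Ee)        # elastic stress measure Se = D : Ee
psi_e = 0.5 * np.einsum('ij,ij->', Ee, Se)  # strain energy density, eq. (2.18)

print("psi_e [J/m^3] =", psi_e)
```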
For example, the case with \(v\to 0\) represents an instantaneous relaxation. The evaluation of the kinetic equations for martensitic phase transitions is still a subject of intense debates, within which the average transformational kinetics may be influenced by the nucleation processes, interface mobilities, collective dislocation behaviors, as well as inertial effects. In the context of the time-dependent Ginzburg-Landau formalism, a detailed modeling of the kinetics of phase transitions in iron is not the purpose of the present analysis. However, the linear form of the driving forces \(\mathbf{X}\mathbf{t}\) gives rise to thermodynamic consistency conditions for phase transformations, so that the dissipation inequality in eq. (22) is unequivocally satisfied, as follows \[\mathcal{D}_{\mathrm{t}}=v\,|\mathbf{X}\mathbf{t}|^{2}\geq 0\,, \tag{24}\] with \(|\mathbf{X}\mathbf{t}|\) the Frobenius norm of \(\mathbf{X}\). A nonequilibrium thermodynamic system is also characterized when \(\mathcal{D}_{\mathrm{t}}>0\), e.g. corresponding to mobile solid-solid interfaces when \(\mathbf{X}\mathbf{t}>\mathbf{0}\). Using eqs. (16) and (20), eq. (23) yields \[v\,\hat{\mathbf{R}}=\mathbf{X}\mathbf{t}=\underbrace{\mathbf{C}\mathbf{e}\cdot \left(\mathrm{D}\left(\mathrm{C}\mathbf{t}\right):\mathbf{E}\mathbf{e}\right) \cdot\mathbf{R}^{-\mathrm{t}}}_{\text{forces due to elastic energy}}-\underbrace{\rho_{0}\frac{\partial\Psi_{\mathrm{t}}}{ \partial\mathbf{F}}+\rho_{0}\,\boldsymbol{\nabla}\cdot\frac{\partial\Psi_{ \mathrm{\nabla}}}{\partial\mathbf{V}\mathbf{F}}}_{\text{ transformational forces}}\,, \tag{25}\] including mechanical elastically and transformational inelastically induced driving forces, with a gradient-related term for interface energy. Equation (25) shows competition between driving forces due to elastic energy and the inelastic transformational forces related to microstructure evolution processes in materials. In particular, the (meta)stable equilibrium configurations are achieved when \(\mathbf{X}\mathbf{t}=\mathbf{0}\), exhibiting a force balance between the elastic and inelastic contributions. A general quadratic form for the gradient energy penalty that is localized at the diffuse interfaces between two phases may be defined by \[\rho_{0}\,\Psi_{\mathrm{\nabla}}=\tfrac{1}{2}\boldsymbol{\nabla}\mathbf{F}\, \therefore\,\mathbf{{}^{6}}\boldsymbol{\Lambda}\,\therefore\boldsymbol{\nabla} \mathbf{F}\,, \tag{26}\] where \({}^{6}\boldsymbol{\Lambda}\) is a positive definite symmetric (major symmetry) sixth-order tensor that takes into account the gradient-energy interaction between different phases. Assuming an isotropic description of the interface energy and neglecting the interactions between all phases [164] such that \({}^{6}\boldsymbol{\Lambda}=\lambda\,\,^{6}\boldsymbol{\mathrm{I}}\), with \({}^{6}\boldsymbol{\mathrm{I}}\) the sixth-rank identity tensor, eq. (26) reduces to \[\rho_{0}\,\Psi_{\mathrm{\nabla}}=\tfrac{1}{2}\lambda\,\boldsymbol{\nabla} \mathbf{F}\,\therefore\,{}^{6}\boldsymbol{\mathrm{I}}\,\therefore\boldsymbol{\nabla} \mathbf{F}=\tfrac{1}{2}\lambda\,\boldsymbol{\nabla}\mathbf{F}\,\therefore\boldsymbol {\nabla}\mathbf{F}\,, \tag{27}\] where the positive scalar \(\lambda\) controls phenomenologically the finite width of interfaces. 
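The linear kinetic law (23) together with the isotropic gradient penalty (27) has the structure of an overdamped, time-dependent Ginzburg-Landau relaxation. The sketch below is a deliberately reduced one-dimensional scalar analogue, with a generic double-well standing in for the inelastic landscape \(\psi_{\mathrm{t}}\); it is only meant to show how the viscosity-like parameter \(v\) and the gradient coefficient \(\lambda\) control the relaxation towards a diffuse interface of finite width, and none of the numerical values correspond to iron.

```python
import numpy as np

# 1D scalar analogue of the kinetic law:  nu * dphi/dt = -dpsi_t/dphi + lam * d2phi/dx2
nx, dx, dt = 200, 0.5e-9, 1.0e-9      # grid points, spacing [m], time step [s]
nu, lam = 1.0e2, 1.0e-9               # viscosity-like and gradient parameters (placeholders)
barrier = 1.0e8                       # double-well barrier height [J/m^3] (placeholder)

phi = np.zeros(nx)
phi[nx // 2:] = 1.0                   # sharp initial interface between the two "phases"

def dpsi_dphi(p):
    # derivative of a generic double-well 16*barrier*p^2*(1-p)^2 standing in for psi_t
    return 32.0 * barrier * p * (1.0 - p) * (1.0 - 2.0 * p)

for _ in range(5000):
    lap = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) / dx**2
    xt = -dpsi_dphi(phi) + lam * lap  # scalar analogue of the driving force Xt
    phi += dt * xt / nu               # explicit overdamped (Ginzburg-Landau) update

width = dx * np.count_nonzero((phi > 0.05) & (phi < 0.95))
print("diffuse-interface thickness (both interfaces of the periodic cell):", width, "m")
```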
The latter distance may be correlated to the short-range elastic fields produced by discrete intrinsic dislocation arrays between bcc/hcp semicoherent heterophase interfaces and also computed by using a recent formalism linking the Frank-Bilby equation and anisotropic elasticity theory, as investigated in chapter 3. Finally, the driving forces expressed in the Ginzburg-Landau formalism are given by \[v\,\hat{\mathbf{R}}=\mathbf{X}\mathbf{t}=\mathbf{C}\mathbf{e}\cdot\left( \mathrm{D}\left(\mathbf{C}\mathbf{t}\right):\mathbf{E}\mathbf{e}\right)\cdot \mathbf{F}^{-\mathrm{t}}-\rho_{0}\frac{\partial\Psi_{\mathrm{t}}}{\partial \mathbf{F}}+\lambda\,\boldsymbol{\nabla}^{2}\mathbf{R}\,, \tag{28}\] with \(\boldsymbol{\nabla}^{2}\) the Laplacian operator. #### Plastic flow rule Macroscopic quasi-perfectly plastic regimes have been observed in polycrystalline bcc iron samples under high-strain rate compressions [136]. To go beyond the elastic limit, the large strain perfectly plastic \(I_{2}\) flow theory has also been incorporated in the present model. Accordingly, the evolution of the plastic distortion \(\mathbf{F}\mathbf{p}\), given in terms of the plastic deformation rate \(\mathbf{D}\mathbf{p}\), is determined by considering the postulate of maximum dissipation [119]. The space of admissible stresses \(\mathcal{E}_{\sigma}\) is written as \[\mathcal{E}_{\sigma}=\left\{\sigma\mid\phi\left(\sigma\right)<0\right\}, \tag{29}\] where the yield function \(\phi\) is expressed in terms of the Cauchy stress \(\sigma\), defined by \[\sigma=j^{-1}\,\mathbf{P}\cdot\mathbf{F}^{\mathrm{t}}=j^{-1}\,\mathbf{F} \mathbf{e}\cdot\mathbf{S}\mathbf{e}\cdot\mathbf{F}^{\mathrm{t}}+j^{-1}\, \mathbf{F}\mathbf{e}\cdot\left(\mathbf{E}\mathbf{e}\vdots\frac{\partial \mathrm{D}\left(\mathbf{C}\mathbf{t}\right)}{\partial\mathbf{C}\mathbf{C}}: \mathbf{E}\mathbf{e}\right)\cdot\mathbf{F}\mathbf{e}^{\mathrm{t}}\,, \tag{30}\] according to eq. (2.20). The work-conjugate stress \(\mathbf{\Sigma}_{*}\) in eq. (2.12) may also be related to the Cauchy stress tensor \(\sigma\) by \[\mathbf{\Sigma}_{*}=j\,\text{Fe}^{\text{t}}\cdot\mathbf{\sigma}\cdot\text{Fe}^{-\text{ t}}=\text{R}^{\text{t}}\cdot\mathbf{\Sigma}\cdot\text{R}^{-\text{t}}\,, \tag{2.31}\] where \(\mathbf{\Sigma}=j\,\text{Fe}^{\text{t}}\cdot\mathbf{\sigma}\cdot\text{Fe}^{-\text{t}}\). Thus, the rate of plastic deformation \(\mathbf{D}\mathbf{p}\) is given by the associated flow rule, as follows \[\mathbf{D}\mathbf{p}=\dot{\eta}\,\,\text{Fe}^{-\text{t}}\cdot\frac{\partial \phi}{\partial\sigma}\cdot\text{Fe}=\dot{\eta}\,\,\mathbf{H}\,, \tag{2.32}\] with \(\dot{\eta}\geq 0\) a non-negative scalar-valued factor, so-called the plastic multiplier, that is required to satisfy the consistency relation: \(\dot{\eta}\phi=0\). The outward normal to the yield surface is given by \(\mathbf{H}\) in the stress space, for which the yield function \(\phi\) in eqs. (2.29) and (2.32) is described with the von Mises yield criterion, i.e., \[\phi\left(\sigma\right)=\sqrt{3\,j_{2}\left(\sigma\right)}-\sigma_{0}\,\,\, \text{with},\,\,\,\,j_{2}=\tfrac{1}{2}\,\text{dev}\,\sigma\colon\text{dev} \,\sigma\,, \tag{2.33}\] where \(\sigma_{0}>0\) is the yield stress measure, and, \(\text{dev}\,\sigma\) denotes the deviatoric part of \(\sigma\). Finally, including the direction of the plastic flow into the rate \(\mathbf{D}\mathbf{p}\), eq. 
(2.32) yields \[\mathbf{D}\mathbf{p}=\tfrac{3}{2}\,\dot{\eta}\,\,\text{Fe}^{-\text{t}}\cdot \frac{\text{dev}\,\sigma}{\sigma_{0}}\cdot\text{Fe}\,, \tag{2.34}\] for which the dissipation inequality for plastic flow in eq. (2.22) with (2.31) is satisfied, i.e., \[\mathcal{D}_{\text{p}}=\tfrac{3}{2}\,j\,\dot{\eta}\,\,\frac{|\text{dev}\sigma |^{2}}{\sigma_{0}}\geq 0\,. \tag{2.35}\] According to eqs. (2.24) and (2.35), the present formalism is also thermodynamically consistent since the Clausius-Duhem inequality (2.21) is fulfilled. #### Multiple reaction pathways and energy landscape In what follows in section 2.2.5, focus is on the \(\alpha\leftrightarrow\epsilon\) phase transitions in iron, where the energy landscape is defined by reaction pathways for multivariants with respect to the point group symmetry properties of the bcc and hcp lattices. The bcc-to-hcp transition mechanismAs illustrated in Fig. (2.2a), the considered crystallographic relations in the bcc-to-hcp martensitic phase transition are given by the Mao-Bassett-Takahashi mechanism [182], as follows \[[001]_{\text{bcc}}\parallel[2\bar{1}\bar{1}0]_{\text{hcp}}\,\,\,\text{and}, \,\,\,\,(110)_{\text{bcc}}\parallel(0001)_{\text{hcp}}\,, \tag{2.36}\] which differs from the transformation path proposed in Ref. [48] by a rotation of \(\sim\pm 5.2^{\circ}\) around the \([0001]_{\text{hcp}}\) axis [273]. The structural relations in eq. (2.36) are achieved by considering two transformation operations, as shown in Fig. (2.2b). The hcp phase may be obtained by applying a shear to a \((110)_{\text{bcc}}\) plane, which consists of an elongation and a compression along the \([1\bar{1}0]_{\text{bcc}}\) and the \([001]_{\text{bcc}}\) directions, respectively. This transformation is required to form a regular hexagon (in red in Fig. 2.2b) and may be related to a homogeneous linear mapping \(\mathbf{U}\), i.e., \[\mathbf{U}=\begin{bmatrix}\dfrac{3}{4\sqrt{2}}+\dfrac{1}{4}\sqrt{\dfrac{3}{2}} \,c_{/a}&-\dfrac{3}{4\sqrt{2}}+\dfrac{1}{4}\sqrt{\dfrac{3}{2}}\,c_{/a}&0\\ -\dfrac{3}{4\sqrt{2}}+\dfrac{1}{4}\sqrt{\dfrac{3}{2}}\,c_{/a}&\dfrac{3}{4 \sqrt{2}}+\dfrac{1}{4}\sqrt{\dfrac{3}{2}}\,c_{/a}&0\\ 0&0&\dfrac{\sqrt{3}}{2}\end{bmatrix}\,, \tag{2.37}\] where \(c_{/a}=c\,/a\) is the lattice ratio for the pure \(\epsilon\)-Fe phase [54], while the volume change accompanying the phase transition is determined by \(\det\mathbf{U}=9c_{/a}\,/16\). Then, the mechanism involves a shuffle \(\mathbf{t}\), which corresponds to atomic displacements of every other deformed \((110)_{\text{bcc}}\) plane in one of the two possible opposite \([1\bar{1}0]_{\text{bcc}}\) directions. The close-packed structure of hcp is also obtained, where a ratio \(c_{/a}\) of \(1.603\pm 0.001\) has been experimentally determined along this bcc-to-hcp path in iron [182, 76], reflecting a \(\sim 10\%\) volume reduction. In the described case, the transformations \(\mathbf{U}\) and \(\mathbf{t}\) are illustrated separately but can occur simultaneously, as discussed by using first-principles simulations [79]. For both scenarios, the shuffle does not induce any lattice-distortion transformations and has therefore no direct coupling with the overall stress in the deforming materials. Although not visible for a given deformation state at the macroscopic scale, the shuffling modes, however, may have important implications on the free energy along the reaction pathways as well as the kinetics of phase transitions, which are not taken into account in the present formalism. 
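As a quick numerical check of eq. (2.37), the following sketch builds the mapping \(\mathbf{U}\) for the experimental ratio \(c_{/a}=1.603\) and verifies the volume change \(\det\mathbf{U}=9c_{/a}/16\), i.e. the roughly \(10\%\) volume reduction quoted above. NumPy is assumed; this is a minimal illustration, not part of the original implementation.

```python
import numpy as np

c_a = 1.603                                   # experimental c/a ratio for the epsilon-Fe phase
a = 3.0 / (4.0 * np.sqrt(2.0))
b = 0.25 * np.sqrt(1.5) * c_a

# Homogeneous mapping U of eq. (2.37) for the bcc-to-hcp (Mao-Bassett-Takahashi) path
U = np.array([[ a + b, -a + b, 0.0],
              [-a + b,  a + b, 0.0],
              [ 0.0,    0.0,   np.sqrt(3.0) / 2.0]])

print("det U      =", np.linalg.det(U))       # ~0.90, i.e. ~10% volume reduction
print("9 c_a / 16 =", 9.0 * c_a / 16.0)       # closed-form value quoted in the text
```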
Assuming to take place at a smaller time scale compared to the lattice-distortion transformations, additional variables of state (also, additional associated kinetic equations) should therefore be introduced to characterize such atomistic displacements. With the aforementioned considerations, and because of the required number of finite element meshes for three-dimensional calculations, the example applications to high-pressure compression in sections 2.3 and 2.4 focus on the first cycle of forward and reverse martensitic transitions only, for which the shuffle does not modify the point group symmetries. For higher-order cycles, this mechanism may be responsible for the generation of an unbounded set of variants. The notion of transformation cycles has been addressed in Ref. [75], where a two-dimensional simulation has shown that variants could hierarchically nucleate into previously created ones over up to five levels of transformations for the square to hexagonal martensitic phase transitions. When \(c_{/a}\) is experimentally chosen to determine eq. (2.37), the corresponding homogeneous mapping \(\mathbf{U}\) contains obviously and inseparably both elastic and irreversible part of the deformation in samples. A homogeneous distortion \(\mathbf{U}\) is therefore introduced to identify the pure transformational component of the total deformation provided by experimental data under high hydrostatic pressure, i.e., \[\mathbf{U}=\kappa\mathbf{U}\,, \tag{2.38}\] where \(\kappa\) is a elastic correction factor, as discussed in Ref. [257]. ### Multiple symmetry-related variants During the forward \(\alpha\to\epsilon\) and the reverse \(\epsilon\to\alpha^{\prime}\) martensitic transformations, significant differences in orientation from the initial \(\alpha\)-Fe phase may exist. To make the clear distinction in phase orientation between variant formation and selection, \(\alpha^{\prime}\) denotes here the reversed \(\alpha\) phase, as depicted by the two-dimensional schematic network in Fig. (2.3a). A rigorous link between the standard crystallographic concepts of holoherty with group-subgroup relationships, crystal system and Bravais lattice type (cubic and hexagonal), is explicitly included into the phase-field formalism. For the forward \(\alpha\to\epsilon\) transition, the generation of all hcp variants \({}^{en}_{\ a}\mathbf{U}\) from the linear mapping \(\mathbf{U}\) is described by \[{}^{en}_{\ a}\mathbf{U}=\mathbf{R}^{t}_{\mathrm{bcc}}\cdot\mathbf{U}\cdot \mathbf{R}_{\mathrm{bcc}}\,, \tag{2.39}\] Figure 2.2: Crystallographic relations in the bcc-to-hcp martensitic phase transition established in Refs. [182, 23]. (a) Red atoms in a bcc atomic-side unit cell are located at a \((110)_{\mathrm{bcc}}\) layer and the blue atoms at the adjacent layers. (b) The transition path consists of two transformations. First, a shear deformation \(\mathbf{U}\) leads to an elongation and a compression along the \([1\bar{1}0]_{\mathrm{bcc}}\) and the \([001]_{\mathrm{bcc}}\) directions, respectively. The deformation transforms a polygon in blue into a regular hexagon in red, corresponding to the \((0001)_{\mathrm{hcp}}\) hcp basal plane. Then, a shuffle \(t\) is applied to the entire plane that contains the blue atoms, e.g. by shifting all these atoms in the \([1\bar{1}0]_{\mathrm{bcc}}\) direction. where \(\mathbf{R}_{\mathrm{bcc}}\) is a rotation matrix in the point group of cubic lattice "\(\mathcal{H}_{\mathrm{bcc}}\) and \(n\) the number of hcp variants [209]. 
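The variant construction of eq. (2.39) also lends itself to a short, self-contained sketch: applying the 24 proper rotations of the cubic point group to \(\mathbf{U}\) and discarding duplicates leaves the distinct hcp variants discussed just below. The rotation group is generated here by closure from two 90-degree generators, an illustrative construction rather than the tabulated operators of Ref. [257]; the expected number of distinct variants is 6.

```python
import numpy as np

# Homogeneous bcc-to-hcp mapping U of eq. (2.37), as in the previous sketch
c_a = 1.603
a = 3.0 / (4.0 * np.sqrt(2.0))
b = 0.25 * np.sqrt(1.5) * c_a
U = np.array([[ a + b, -a + b, 0.0],
              [-a + b,  a + b, 0.0],
              [ 0.0,    0.0,   np.sqrt(3.0) / 2.0]])

# Generate the 24 proper rotations of the cubic point group by closure from two generators
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)  # 90 deg about [001]
Rx = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)  # 90 deg about [100]
group = [np.eye(3)]
added = True
while added:
    added = False
    for R in list(group):
        for G in (Rz, Rx):
            cand = G @ R
            if not any(np.allclose(cand, Q) for Q in group):
                group.append(cand)
                added = True
assert len(group) == 24

# hcp variants of eq. (2.39): U_n = R^T . U . R, keeping only the distinct results
variants = []
for R in group:
    V = R.T @ U @ R
    if not any(np.allclose(V, W) for W in variants):
        variants.append(V)

print("distinct hcp variants:", len(variants))  # expected: 6 (18 of the 24 rotations are redundant)
```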
Because of the high symmetry of the considered phase, a total number of 6 hcp variants are generated, i.e., \(n=1,\ldots,6\), within which 18 operations in the basic group of 24 rotations for cubic lattices are redundant. To complete the phase transformations with the reverse \(\epsilon\to a^{\prime}\) transitions, the bcc variants \(\kappa^{m}_{\mathrm{cen}}\mathbf{U}\) are deduced by performing the following operation: \[\kappa^{\alpha m}_{\mathrm{cen}}\mathbf{U}=\mathbf{R}^{\mathrm{t}}_{\mathrm{ bcc}}\cdot\mathbf{R}^{\mathrm{t}}_{\mathrm{hcp}}\cdot\mathbf{U}\mathbf{t}^{-1} \cdot\mathbf{R}_{\mathrm{hcp}}\cdot\mathbf{U}\cdot\mathbf{R}_{\mathrm{bcc}}\,, \tag{2.40}\] where \(\mathbf{R}_{\mathrm{hcp}}\) is a rotation matrix in the point group of hexagonal lattice "\(\mathcal{H}_{\mathrm{hcp}}\) and \(m\) the number of bcc variants [209]. Equation (2.40) consists in generating 12 bcc variants, i.e., \(m=1,\ldots,12\), so that a total of 19 variants (including the identity as the 19th variant) are identified to describe the complete bcc-hcp-bcc transition in terms of multiple symmetry-related variant structures. Figure (2.3a) depicts the forward transition of the initial bcc phase, leading to six equivalent hcp phases, and, the reverse transition from each hcp phase that leads to three bcc phases. All tabulated hcp and bcc variants with the corresponding holohedral subgroups "\(\mathcal{H}_{\mathrm{bcc}}\) and "\(\mathcal{H}_{\mathrm{hcp}}\) are given in Tab. 1 from Ref. [257], where the rotation axes are expressed in the hcp and bcc lattice basis, respectively. For clarity, the matrices defined by eqs. (2.39) and (2.40) are written in the following as \({}^{k}\mathbf{U}\)t with \(k=1,\ldots,18\), i.e., \[{}^{k}\mathbf{U}=\begin{cases}\epsilon^{\alpha_{1}}_{\mathrm{a}}\mathbf{U}& 1\leq k\leq 6\\ \epsilon^{\alpha_{2}}_{\mathrm{cen}}\mathbf{U}&7\leq k\leq 18\,,\end{cases} \tag{2.41}\] which are associated with the variant of interest \(V_{k}\) for the forward (\(1\leq k\leq 6\)) and the reverse (\(7\leq k\leq 18\)) transformations. #### Reaction pathways in strain spaces Instead of introducing the Landau thermodynamic potential [163], where the classical Landau-type approach with polynomials is not convenient to apply for reconstructive transitions due to the large numbers of potential energy wells [28], the concept of reaction pathways [74, 75] is used to describe the phase transitions in iron. In particular, the minimum inelastic energy density profile between two different pure phases is represented by a single reaction pathway, along which the associated function \(\psi_{\mathrm{Lk}}\) is assumed to possess Figure 2.3: (a) Schematic illustration of the multiple symmetry-related variants for the forward \(\alpha\to\epsilon\) (in red) and the reverse \(\epsilon\to a^{\prime}\) (blue) phase transitions in iron. (b) The corresponding reaction pathway network in a specific \(\{\mathbf{C}_{1},\mathbf{C}_{2},\mathbf{C}_{3}\}\) strain space, within which the transformational Cauchy-Green tensor \(\mathbf{C}=\mathbf{F}^{\mathrm{t}}\cdot\mathbf{F}\) as well as some quantities described in the text, are defined. 
the same symmetries as all symmetry-related variants \(V_{k}\), and, to satisfy the principle of material objectivity [27], e.g., \[\hat{\psi}_{\mathrm{k}}\doteq\hat{\psi}_{\mathrm{k}}(^{k}\hat{\mathbf{C}})\,, \tag{2.42}\] where \({}^{k}\hat{\mathbf{C}}\) are the transformational Cauchy-Green strain measures for all pure phases, given by \[{}^{k}\hat{\mathbf{C}}={}^{k}\mathbf{U}^{\mathrm{t}}\cdot{}^{k}\mathbf{U}\,, \tag{2.43}\] as listed in Appendix A from Ref. [257], with the aid of eqs. (2.39\(-\)2.41). Here and in the following, the superimposed caret will be used to indicate quantities strictly defined along the pathways. To model continuous forward and the reverse transformations, each transition pathway \(k\) is represented by linear interpolation between starting \({}^{k}\hat{\mathbf{C}}_{\mathrm{start}}\) and ending \({}^{k}\mathbf{C}_{\mathrm{end}}\) strain states, as follows \[{}^{k}\hat{\mathbf{C}}\left(s_{k}\right)=\left(1-s_{k}\right){}^{k}\hat{ \mathbf{C}}_{\mathrm{start}}+s_{k}{}^{k}\hat{\mathbf{C}}_{\mathrm{end}}\,, \tag{2.44}\] with \(s_{k}\in[0,1]\) the curvilinear coordinates along \(k\). For instance, \(\mathrm{hcp}\) variants \(V_{k}\) are parameterized by: \({}^{k}\hat{\mathbf{O}}_{\mathrm{start}}=\mathbf{I}\) and \({}^{k}\hat{\mathbf{C}}_{\mathrm{end}}={}^{k}\mathbf{U}^{2}\), with \(1\leq k\leq 6\). Generating the reaction pathways with eqs. (2.37\(-\)2.44) and using projection matrices \(\mathbf{C}_{1}\), \(\mathbf{C}_{2}\) and \(\mathbf{C}_{3}\), an example of three-dimensional representation of the network is shown in Fig. (2.3b), within which each pathway connects continuously and linearly with two pure bcc/hcp variants \(V_{k}\) in the \(\{\mathbf{C}_{1},\mathbf{C}_{2},\mathbf{C}_{3}\}\) strain space. The projection is not unique and the specific strain space in Fig. (2.3b) is characterized by using the following matrices: \[\mathbf{C}_{1}=\begin{bmatrix}1&1&0\\ 1&-3&0\\ 0&0&0\end{bmatrix}\,,\,\,\mathbf{C}_{2}=\begin{bmatrix}0&0&0\\ 0&1&1\\ 0&1&-3\end{bmatrix}\,,\,\,\mathbf{C}_{3}\,\,\,=\begin{bmatrix}-3&0&1\\ 0&0&0\\ 1&0&1\end{bmatrix}\,. \tag{2.45}\] The reaction pathway network describes also a six-dimensional energy landscape, for which each straight segment represents a minimum-energy reaction pathway that connects two stable/(meta)stable states with possible (if any) saddle points [28]. #### Inelastic energy landscape In order to define the total inelastic energy landscape \(\psi_{\mathrm{t}}\) in a whole strain space, e.g. not only restricted along the pathways as \(\hat{\psi}_{\mathrm{k}}\), the partition of unity approach is used as a weighted sum of the contribution \(\psi_{\mathrm{k}}\) of each individual pathway \(k\). Thus, the overall inelastic energy density \(\psi_{\mathrm{t}}\) is formally defined by introducing the weighting functions \(\omega_{k}\left(\mathbf{C}\right)\), i.e., \[\psi_{\mathrm{t}}\left(\mathbf{C}\right)=\sum_{k=1}^{18}\omega_{k}\left( \mathbf{C}\right)\,\psi_{\mathrm{k}}\left(\mathbf{C}\right)\,, \tag{2.46}\] for any transformational Cauchy-Green tensor \(\mathbf{C}=\mathbf{R}^{\mathrm{t}}\cdot\mathbf{R}\). Without loss of generality, for any given tensor \(\mathbf{A}\), e.g. 
\(\mathbf{C}\) and \(\mathbf{C}\)et, these functions \(\omega_{k}\left(\mathbf{A}\right)\) satisfy the partition of unity condition, namely: \[\sum_{k=1}^{18}\omega_{k}\left(\mathbf{A}\right)=1\,\,\,\text{with},\,\,\, \,\omega_{k}\left(\mathbf{A}\right)=\frac{d_{k}^{-h}\left(\mathbf{A}\right)} {\sum_{i=1}^{18}d_{i}^{-h}\left(\mathbf{A}\right)}\,, \tag{2.47}\] where \(h\) is a positive parameter that controls the weighted average of all pathways. The quantities \(d_{k}\left(\mathbf{A}\right)\) correspond to the minimum Euclidean distances in the strain space between \(\mathbf{A}\) and the pathways \(k\), defined by \[d_{k}\left(\mathbf{A}\right)=|{}^{k}\mathbf{\Pi}\left(\mathbf{A}\right)|{=} \min_{\zeta_{k}\in[0,1]}\,|\mathbf{A}-{}^{k}\hat{\mathbf{A}}\left(\zeta_{k} \right)|\,, \tag{2.48}\] where \({}^{k}\hat{\mathbf{A}}\left(\zeta_{k}\right)\) are also mapped onto the reaction pathways with \(\zeta_{k}\left(\mathbf{A}\right)\) the corresponding reaction coordinates. For example, when \(\mathbf{A}=\mathbf{C}\)t: Fig. (2.3b) shows the projected tensor \({}^{1}\hat{\mathbf{C}}\left(\zeta_{1}\right)\) onto the forward pathway \(1\), between the initial single-crystal bcc phase and the hcp variant \(V_{1}\). Introducing the convenient curvilinear coordinates \(\zeta_{k}^{\infty}\left(\mathbf{A}\right)\) for fictitious unbounded pathways, as follows \[\zeta_{k}^{\infty}\left(\mathbf{A}\right)={}^{k}\hat{\mathbf{D}}:\left( \mathbf{A}-{}^{k}\hat{\mathbf{C}}_{\mathrm{start}}\right)=\frac{{}^{k}\hat{ \mathbf{C}}_{\mathrm{end}}-{}^{k}\hat{\mathbf{C}}_{\mathrm{start}}}{|{}^{k} \hat{\mathbf{C}}_{\mathrm{end}}-{}^{k}\hat{\mathbf{C}}_{\mathrm{start}}|}: \left(\mathbf{A}-{}^{k}\hat{\mathbf{C}}_{\mathrm{start}}\right)\,, \tag{2.49}\] where \({}^{k}\hat{\mathbf{D}}\) defines the normalized direction of the pathway \(k\), the argmin \(\zeta_{k}\) in eq. (2.48) is also determined by solving \(\partial_{\zeta_{k}}d_{k}\left(\mathbf{A}\right)=0\) for a given \(\mathbf{C}\)t, leading to \[\zeta_{k}\left(\mathbf{A}\right)=\begin{cases}\zeta_{k}^{\infty}\left(\mathbf{A }\right)&\text{if}:\,\,\,\,\zeta_{k}^{\infty}\left(\mathbf{A}\right)\in[0,1] \\ 0&\text{if}:\,\,\,\zeta_{k}^{\infty}<0\\ 1&\text{if}:\,\,\,\zeta_{k}^{\infty}>1\,,\end{cases} \tag{2.50}\] so that the distance measure \(d_{k}\left(\mathbf{A}\right)\) in eq. (2.48) with (2.50) represents the minimum distance from \(\mathbf{A}\) to a given segment in \(\mathds{R}^{6}\). On the other hand, it is assumed that each potential \(\psi_{\mathrm{t}_{k}}\) in eq. (2.46) is related to the minimum energy density \(\hat{\psi}_{\mathrm{t}_{k}}\) combining with an additional out-of-path component, i.e., \[\psi_{\mathrm{t}_{k}}\left(\mathbf{C}\right)=\hat{\psi}_{\mathrm{t}_{k}}\left( \zeta_{k}\left(\mathbf{C}\right)\right)+\underbrace{\sigma\,d_{k}(\mathbf{C} )+\pi\,|\,\mathrm{tr}\,^{k}\Pi\left(\mathbf{C}\right)|}_{\text{out-of-path component}}\,, \tag{2.51}\] such that \(\partial_{\mathbf{C}}\,\mathrm{tr}\,^{k}\Pi\left(\mathbf{C}\right)\) and \({}^{k}\hat{\mathbf{D}}\) are orthogonal to each other, i.e., \(\partial_{\mathbf{C}}\,\mathrm{tr}\,^{k}\Pi\left(\mathbf{C}\right)\). Here, \(\mathrm{tr}\,\mathbf{A}\) denotes the trace of \(\mathbf{A}\). The parameters \(\sigma\) and \(\pi\) in eq. 
(2.51) scale two different out-of-path energy barriers: the first component is linearly proportional to the Euclidean distance from the pathways, scaled by \(\sigma\), while the second coefficient \(\pi\) is used to distinguish different force magnitudes for isochoric and volumetric transformational deformations, when \(\pi\neq 0\). Figure (2.4) illustrates the construction of the overall inelastic energy landscape \(\psi_{\mathrm{t}}\) defined by eq. (2.46) with eq. (2.51), for all \(\mathbf{C}\) in the neighborhood of the associated reaction pathway network in Fig. (2.3b). In accordance with the model parameters discussed in section 2.3.1, Fig. (2.4a) shows the given (invariant) minimum energy density \(\hat{\psi}_{\mathrm{t}_{k}}\) along all reaction coordinates \(\zeta_{k}\left(\mathbf{C}\right)\) of the individual pathways \(k\). Then, the weighting functions \(\omega_{k}\left(\mathbf{C}\right)\) are used to extrapolate each contribution into the whole space: Fig. (2.4b) depicts a \(5\times 10^{8}\) J.m\({}^{-3}\) iso-surface of the extended inelastic energy part \(\omega_{k}\left(\mathbf{C}\right)\hat{\psi}_{\mathrm{t}_{k}}\) in the \(\left\{\mathbf{C}_{1},\mathbf{C}_{2},\mathbf{C}_{3}\right\}\) strain space. As illustrated by arrows, the iso-surface is perpendicular to the reaction pathways and the energy profile is "sombrero-shaped" along the axis \(\mathbf{C}_{1}+\mathbf{C}_{2}+\mathbf{C}_{3}\). Figure (2.4c) shows a \(10^{9}\) J.m\({}^{-3}\) iso-volume related to the out-of-path contribution \(\sigma d_{k}\left(\mathbf{C}\right)\) only, i.e., with \(\pi=0\) in eq. (2.51). For the sake of clarity, this additional energy potential is depicted in Fig. (2.4d) on two planes passing through variants \(V_{1}\) and \(V_{3}\) (upper plane) and variants \(V_{5}\) and \(V_{6}\) (lower plane). It is also shown that the energy profile is exclusively controlled by the iso-distances around the paths, as illustrated by the cylinders around the paths and by the half-spheres at their ends. Finally, Fig. (2.4e) gives the same \(10^{9}\) J.m\({}^{-3}\) iso-volume of the total inelastic energy \(\psi_{\mathrm{t}}\) landscape, within which the volume in (c) is plotted with transparency as well, for comparison. In contrast with Figs. (2.4c) and (d), it is shown that the total energy has a "cone-shaped" profile, exhibiting the directional character of the transformations toward the pure hcp phases, as distinctly depicted on both planes in Fig. (2.4f).

#### Transformational inelastic forces

The calculation of the inelastic driving forces for phase transformations in eq. (2.28) is deduced by computing the derivative of \(\psi_{\mathrm{t}}\) with respect to \(\mathbf{R}\), which can be expressed as follows \[\frac{\partial\psi_{\mathrm{t}}}{\partial\mathbf{R}}=2\,\mathbf{R}\cdot\frac{\partial\psi_{\mathrm{t}}\left(\mathbf{C}\right)}{\partial\mathbf{C}}\,. \tag{2.52}\] According to eq. (2.46), the derivative of the energy function on the right-hand side of eq.
(2.52) yields \[\frac{\partial\psi_{\mathrm{t}}\left(\mathbf{C}\right)}{\partial\mathbf{C}}=\sum_{k=1}^{18}\left[\psi_{\mathrm{t}_{k}}\left(\mathbf{C}\right)\frac{\partial\omega_{k}\left(\mathbf{C}\right)}{\partial\mathbf{C}}+\omega_{k}\left(\mathbf{C}\right)\frac{\partial\psi_{\mathrm{t}_{k}}\left(\mathbf{C}\right)}{\partial\mathbf{C}}\right]\,, \tag{2.53}\] where the derivative of the weighting functions \(\omega_{k}\left(\mathbf{A}\right)\) with respect to \(\mathbf{A}\) is given, without loss of generality, by \[\frac{\partial\omega_{k}\left(\mathbf{A}\right)}{\partial\mathbf{A}}=h\,\sum_{i=1}^{18}\frac{\omega_{i}\left(\mathbf{A}\right)}{d_{i}\left(\mathbf{A}\right)}\left(\omega_{k}\left(\mathbf{A}\right)-\delta_{ik}\right)\,^{i}\mathbf{N}\left(\mathbf{A}\right)\,, \tag{2.54}\] with \(\delta_{ik}\) the Kronecker delta, i.e., \(\delta_{ik}=1\) if \(i=k\) and \(\delta_{ik}=0\) otherwise, and \({}^{i}\mathbf{N}\left(\mathbf{A}\right)\) the normal tensor to the pathway \(i\) in the direction of \(\mathbf{A}\), obtained in the following form: \[{}^{i}\mathbf{N}\left(\mathbf{A}\right)=\frac{\partial d_{i}\left(\mathbf{A}\right)}{\partial\mathbf{A}}=\frac{{}^{i}\mathbf{\Pi}\left(\mathbf{A}\right)}{d_{i}\left(\mathbf{A}\right)}\,, \tag{2.55}\] such that \(|{}^{i}\mathbf{N}\left(\mathbf{A}\right)|=1\), and \({}^{i}\mathbf{N}\left(\mathbf{A}\right):{}^{i}\hat{\mathbf{D}}=0\) when \(\zeta_{i}^{\infty}\left(\mathbf{A}\right)\in\left[0,1\right]\). Moreover, the derivative of \(\psi_{\mathrm{t}_{k}}\) with respect to \(\mathbf{C}\) in eq. (2.53) leads to \[\frac{\partial\psi_{\mathrm{t}_{k}}\left(\mathbf{C}\right)}{\partial\mathbf{C}}=\frac{\partial\hat{\psi}_{\mathrm{t}_{k}}\left(\zeta_{k}\left(\mathbf{C}\right)\right)}{\partial\zeta_{k}}\,{}^{k}\hat{\mathbf{D}}+\sigma\,^{k}\mathbf{N}\left(\mathbf{C}\right)+\pi\,\mathrm{sgn}\left(\mathrm{tr}\,^{k}\mathbf{\Pi}\left(\mathbf{C}\right)\right)\left(\mathbf{I}-{}^{k}\hat{\mathbf{D}}\,\mathrm{tr}\,^{k}\hat{\mathbf{D}}\right)\,. \tag{2.56}\] Substituting eqs. (2.54) and (2.56) into eq. (2.53), and then into eq. (2.52), it is also shown that two directions are included in the transformational inelastic forces: one component is related to the longitudinal directions \({}^{k}\hat{\mathbf{D}}\) along the reaction pathways, while the second component is associated with the normal directions \({}^{k}\mathbf{N}\left(\mathbf{C}\right)\) towards \(\mathbf{C}\).

#### Mechanical elastic forces

Since the phase-field model aims at modeling high-pressure phase transitions in iron, particular attention is paid to the configuration within which the nonlinear elastic stiffness tensor is expressed. The out-of-path elasticity tensor \(\mathbf{D}\left(\mathbf{C}\right)\) in eq. (2.28), which depends on the elastic and transformational deformation states, is given in the whole strain space by \[\mathbf{D}\left(\mathbf{C}\right)=\sum_{k=1}^{18}\omega_{k}\left(\mathbf{C}\right)\,^{k}\mathbf{D}\left(\zeta_{k}\left(\mathbf{C}\right)\right)\,, \tag{2.57}\] where \({}^{k}\mathbf{D}\) are the elasticity tensors associated with the reaction pathways \(k\), and \(\zeta_{k}\left(\mathbf{C}\right)\) are the reaction coordinates that minimize the Euclidean distance \(d_{k}\left(\mathbf{C}\right)\) between \(\mathbf{C}\) and the individual paths \(k\). The weighting functions \(\omega_{k}\left(\mathbf{C}\right)\) are also defined by eq.
(2.47), where the partition of unity is written as a function of \(\mathbf{C}\). The projected tensors \({}^{k}\mathbf{C}_{\mathrm{e}}\) are also mapped onto the reaction pathways (as well as \({}^{k}\mathbf{C}\)) and the corresponding reaction coordinates are consistently determined by solving \(\partial_{\zeta_{k}}\,d_{k}\left(\mathbf{C}\right)=0\). Imposing \(\partial_{\mathbf{C}}\mathbf{D}\left(\mathbf{C}\right)=\mathbf{0}\) for all pure (meta)stable phases (i.e., at the ends of all reaction pathways \(k\), when \(\zeta_{k}=0\) and \(\zeta_{k}=1\)), the elasticity tensors \({}^{k}\mathbf{D}\) in eq. (2.57) may be represented by a cubic interpolation function to ensure numerical stability, i.e., \[{}^{k}\mathbf{D}\left(\zeta_{k}\left(\mathbf{C}\right)\right)=\left(1-3\zeta_{k}^{2}+2\zeta_{k}^{3}\right)\,\mathbf{D}^{\alpha}+\left(3\zeta_{k}^{2}-2\zeta_{k}^{3}\right)\,\mathbf{D}^{\epsilon}\,, \tag{2.58}\] with \(\mathbf{D}^{\alpha}\) and \(\mathbf{D}^{\epsilon}\) the elastic stiffness tensors of the pure bcc and hcp iron phases, respectively. In particular, if \(\zeta_{k}^{\infty}\left(\mathbf{C}\right)<0\) (\(>1\)), then \({}^{k}\mathbf{D}\left(\zeta_{k}\left(\mathbf{C}\right)\right)=\mathbf{D}^{\alpha}\) (\(=\mathbf{D}^{\epsilon}\)).

Figure 2.4: Construction of the total inelastic energy landscape \(\psi_{\mathrm{t}}\) associated with the multiple reaction pathways \(V_{k}\) in iron. (a) Invariant and minimum energy profile along the individual reaction pathways \(k\) from \(0\) (in dark red, for the pure bcc phases) to \(\sim 8\times 10^{8}\,\mathrm{J.m^{-3}}\) (in white, for hcp phases). (b) Extrapolation of the minimum energy potential in the whole \(\left\{\mathbf{C}_{1},\mathbf{C}_{2},\mathbf{C}_{3}\right\}\) strain space, e.g. a \(5\times 10^{8}\,\mathrm{J.m^{-3}}\) iso-surface. (c) shows a \(10^{9}\,\mathrm{J.m^{-3}}\) iso-volume of the out-of-path contribution \(\sigma d_{k}\) with \(\pi=0\), whereas (d) illustrates the energy profile on two planes passing through variants \(V_{1}\) and \(V_{3}\) (upper plane) and variants \(V_{5}\) and \(V_{6}\) (lower plane). (e) and (f) are similar to (c) and (d) for the total inelastic energy \(\psi_{\mathrm{t}}\) landscape, respectively.

For instance, for such pure hcp \(\epsilon\)-Fe phases, the finite hyperelasticity condition from eq. (2.18) defines \(\mathbf{D}^{\epsilon}\) through \(\rho_{0}\) times the second derivative of the elastic energy density with respect to the elastic strain, leading to the pressure-dependent expressions in eqs. (2.59\(-\)2.62) for the hcp stiffness tensor in the current configuration.
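As an illustration of eqs. (2.49), (2.50) and (2.58), the following minimal Python sketch computes a clamped, fractional reaction coordinate along a straight pathway segment and interpolates a stiffness matrix between \(\mathbf{D}^{\alpha}\) and \(\mathbf{D}^{\epsilon}\) with the cubic (smoothstep) weights of eq. (2.58). The pathway endpoints and the \(6\times 6\) Voigt matrices used here are illustrative placeholders, not the calibrated values of Ref. [257].

```python
import numpy as np

def zeta_clamped(A, C_start, C_end):
    """Fractional reaction coordinate along a straight pathway segment:
    orthogonal projection of A onto [C_start, C_end], clamped to [0, 1]
    in the spirit of eqs. (2.49)-(2.50)."""
    seg = C_end - C_start
    zeta_inf = np.tensordot(seg, A - C_start) / np.tensordot(seg, seg)
    return float(min(max(zeta_inf, 0.0), 1.0))

def stiffness_along_path(zeta, D_alpha, D_eps):
    """Cubic interpolation of eq. (2.58) between the bcc and hcp stiffness
    tensors; the slopes vanish at zeta = 0 and zeta = 1, so the pure phases
    keep constant elastic properties."""
    w = 3.0 * zeta**2 - 2.0 * zeta**3
    return (1.0 - w) * D_alpha + w * D_eps

if __name__ == "__main__":
    # Illustrative (not calibrated) Voigt stiffness matrices, in GPa.
    D_alpha = 170.0 * np.eye(6)
    D_eps = 230.0 * np.eye(6)
    # Illustrative pathway endpoints in a reduced 3x3 strain representation.
    C_start = np.eye(3)
    C_end = np.diag([1.10, 1.10, 0.81])
    A = np.diag([1.05, 1.05, 0.90])   # strain state part-way along the path
    z = zeta_clamped(A, C_start, C_end)
    Dk = stiffness_along_path(z, D_alpha, D_eps)
    print(f"zeta = {z:.3f}, D_11 = {Dk[0, 0]:.1f} GPa")
```

Because the interpolation has zero slope at both ends, blending the pathway stiffnesses with the weights of eq. (2.57) leaves the pure bcc and hcp states with well-defined, stationary elastic moduli, which is the numerical-stability argument made above.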
#### 2.3.1 Material and model inputs

Tables (2) and (3) in Ref. [257] list the values of the material and model parameters for iron under high-pressure compression, respectively, which have been collected from a variety of sources. In the present phase-field model, the elastic pressure-dependent properties of iron are defined by four pressures: \(\{p^{\alpha},p^{\varepsilon}\}\), for which the crystalline phases are fully bcc and fully converted to hcp, respectively; and \(\{p^{\alpha-\varepsilon},p^{\varepsilon-\alpha}\}\), which characterize the transition states where the forward and reverse transformations start, respectively. Here, the equilibrium pressures \(\{p^{\alpha}=0,p^{\varepsilon}=-20\}\) GPa, with the corresponding atomic volumes \(\{v^{\alpha}=11.75,v^{\varepsilon}=10.20\}\) Å\({}^{3}\)/at, are selected from Ref. [76]. In accordance with these experimental measures, the associated elastic components \(\mathrm{b}^{\alpha}\) and \(\mathrm{b}^{\varepsilon}\) for the pure bcc and hcp phases are given in Ref. [166], while the stiffness tensor \(\mathbf{D}^{\varepsilon}\) is expressed in the current configuration by using eq. (2.61), and \(\mathbf{D}^{\alpha}=\mathrm{b}^{\alpha}\) at zero pressure. The ratio \(c/a=1.603\) of the hcp close-packed structure has been experimentally determined in Ref. [182], so that \(\mathbf{U}^{\mathrm{exp}}=9(c/a)/16=0.902\). However, this measured value corresponds to the complete phase transformation of the hcp iron sample at \(p^{\varepsilon}=-20\) GPa, for which the experimental measurements contain indistinctly elastic and transformational distortions. According to eq. (2.38) and following the procedure in Appendix C from Ref. [257], the transformational part \(\mathbf{U}\) is related to the experimentally measured stretch \(\mathbf{U}^{\mathrm{exp}}\) as follows \[\mathbf{U}=\kappa\,\mathbf{U}^{\mathrm{exp}}=\sqrt{2}\left(1+\sqrt{1+\frac{8}{3}\frac{j_{\rm exp}\;p^{\varepsilon}}{D^{\varepsilon}}}\right)^{-1/2}\mathbf{U}^{\mathrm{exp}}\,, \tag{2.63}\] where \(D^{\varepsilon}\) is the hcp bulk modulus, and \(j_{\rm exp}=v^{\varepsilon}/v^{\alpha}\) is the experimental volume change from the initial pure bcc sample, at \(p^{\alpha}=0\) GPa, to the final pure polycrystalline hcp iron, at \(p^{\varepsilon}=-20\) GPa. In the present perfect plasticity theory, a constant yield stress is chosen to analyze the crucial role of plasticity on nucleation and selection of variants during the phase transformations, i.e., \(\sigma_{0}=0.25\) GPa, which is of the same order of magnitude as the Hugoniot elastic limits in Ref. [217]. The positive parameter \(h\) of the weighting functions controls the energetic part of the phase transition during a possible jump from one reaction pathway to the neighboring branches. The energy variation for such a transition may be determined using molecular dynamics simulations [74], for which the exponent can be tuned to reproduce the atomistic results. However, without relevant information about the bcc-bcc and hcp-hcp phase transitions in iron, it is assumed that all reaction pathways are mainly controlled by their immediate surroundings. This consideration may be achieved by imposing large magnitudes for \(h\), e.g. \(h=10\), as well as large values for the energy barrier parameters \(\sigma\) and \(\pi\).
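A short numerical sketch, under illustrative assumptions, shows how the exponent \(h\) of eq. (2.47) localizes the weights around the closest pathway; the distances below are placeholder values, not distances computed from the actual 18-pathway network.

```python
import numpy as np

def pathway_weights(distances, h):
    """Partition-of-unity weights of eq. (2.47): omega_k = d_k^-h / sum_i d_i^-h.
    (Exactly on a pathway, d_k -> 0 and the corresponding weight tends to 1.)"""
    d = np.asarray(distances, dtype=float)
    inv = d**(-float(h))
    return inv / inv.sum()

# Illustrative distances (in strain units) from one strain state to four pathways.
d_k = [0.05, 0.10, 0.20, 0.40]
for h in (2, 10):
    w = pathway_weights(d_k, h)
    print(f"h = {h:2d}: weights = {np.round(w, 3)}")
```

With \(h=2\) the farther pathways still carry a noticeable fraction of the weight, whereas with \(h=10\) the closest pathway carries essentially all of it, consistent with the assumption that each pathway is controlled by its immediate surroundings.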
The relation \(\pi=10\,\sigma\) (in GPa) is used in the energy penalty part of eq. (2.51) to consider higher pull-back forces onto the pathways for the volumetric than for the isochoric phase transformations, which are conveniently applied to non-zero strain states that lie out of the transition pathways, i.e., for any \(\mathbf{C}\) with \({}^{k}\mathbf{\Pi}\left(\mathbf{C}\right)\neq\mathbf{0}\). The onset of a new crystalline phase can be viewed as the product of a morphological instability involving elastic energy, interfacial energy, inelastic energy, transformational dissipation, plastic dissipation, additional energies due to the long-range elastic interactions between variants, etc. Because of the complexity in modeling such a phase instability, a phenomenological form is adopted to define the minimum energy density \(\hat{\psi}_{\mathrm{t}_{k}}\) as a function of the reaction coordinate \(\zeta_{k}\) along each individual pathway \(k\), i.e., \[\rho_{0}\hat{\psi}_{\mathrm{t}_{k}}\left(\zeta_{k}\left(\mathbf{C}\right)\right)=\tfrac{1}{2}c_{1}\,\zeta_{k}^{2}+c_{2}\,\zeta_{k}\,, \tag{2.64}\] with \(c_{1}\) and \(c_{2}\) (in J.m\({}^{-3}\)) two parameters that may be calibrated to experimental data. As described in Appendix D from Ref. [257], these parameters are given by \[c_{2}=\tfrac{1}{2}j^{\alpha-\varepsilon}p^{\alpha-\varepsilon}\,\mathrm{tr}\,^{k}\hat{\mathbf{D}}\,,\ \ \mathrm{and}\ \ c_{1}=\tfrac{1}{2}j^{\varepsilon-\alpha}p^{\varepsilon-\alpha}\,({}^{k}\mathbf{U}^{-2}:{}^{k}\hat{\mathbf{D}})-c_{2}\,, \tag{2.65}\] with \(j^{\alpha-\varepsilon}=v^{\alpha-\varepsilon}/v^{\alpha}\) and \(j^{\varepsilon-\alpha}=v^{\varepsilon-\alpha}/v^{\alpha}\) the experimental volume changes from the initial pure bcc sample to the Hugoniot states where the forward and reverse transitions occur, at \(p^{\alpha-\varepsilon}\) and \(p^{\varepsilon-\alpha}\), respectively. According to the recent experimental results from Ref. [76], the forward transition starts at \(p^{\alpha-\varepsilon}=-14.9\) GPa, with the corresponding volume \(v^{\alpha-\varepsilon}=11.0\) Å\({}^{3}\)/at, and the reverse at \(p^{\varepsilon-\alpha}=-12.0\) GPa, with \(v^{\varepsilon-\alpha}=10.6\) Å\({}^{3}\)/at. The minimum energy density profile along the individual reaction pathways from eq. (2.64) with eq. (2.65), for which the values of \(c_{1}\) and \(c_{2}\) are provided in Tab. 3 from Ref. [257], is depicted in Fig. (2.4a). The parameter \(\nu\) in the relaxation eq. (2.23) is akin to a viscosity in classical viscoplastic approaches. For the face-centered cubic (fcc) to bcc phase transitions in Fe\({}_{3}\)Ni, an attempt to fit the magnitude \(\nu=14\) mPa.s, comparable to the viscosity of liquid metals, has been investigated by using molecular dynamics simulations [74]. Such quantitative data analysis is not available for the bcc-hcp transformations in iron, but it is assumed that the stress due to the viscosity is lower than the yield stress, i.e., \(\nu\,\dot{\varepsilon}_{\mathrm{t}}<\sigma_{0}\), where \(\dot{\varepsilon}_{\mathrm{t}}\) is a measure of the transformational strain rate. This measure can be estimated by \(\dot{\varepsilon}_{\mathrm{t}}=\varepsilon_{\mathrm{t}}/\Delta t=\tfrac{1}{2}|\mathbf{C}-\mathbf{I}|/\Delta t\) during a time interval \(\Delta t\) awaited for the transformation, with \(\varepsilon_{\mathrm{t}}=\tfrac{1}{2}|\mathbf{C}-\mathbf{I}|\) the norm of the transformational Green-Lagrange deformation tensor.
Thus, it follows that \(\nu<\sigma_{0}\,t_{f}/\varepsilon_{\mathrm{t}}\), with \(t_{f}\) the final simulation time. According to the material inputs mentioned above and the time characteristics discussed in the following section, it is also considered that \(\nu\approx\sigma_{0}\,t_{f}/\varepsilon_{\mathrm{t}}\approx 2.6\) kPa.s. Finally, the Laplacian operator in eq. (2.28) can be approximated using the mesh discretization in the finite element framework, such that \(\lambda=\lambda^{*}/\ell^{2}\), where \(\lambda^{*}=0.5\) GPa is a mesh-size parameter and \(\ell\) is an average element size of the simulation grid.

#### 2.3.2 Analysis of the pressure-volume responses

The simulated material is a cube containing 1 million finite elements with full periodic boundary conditions, within which each element volume is \(V_{\epsilon}=\ell^{3}=1\)\(\mu\)m\({}^{3}\). In the present dynamic continuum mechanics framework, the final simulation time \(t_{f}\) is related to the physical time \(t_{c}\) needed for acoustic waves to travel through the samples. Assuming that \(t_{f}=100\,t_{c}\), the acoustic waves traverse each sample 100 times during the entire simulation, which ensures quasi-static loading conditions. Thus, \(t_{c}=L/c_{L}\), with \(L=100\ell=0.1\) mm the initial box length, and \(c_{L}\) the longitudinal wave celerity in iron, i.e., \(c_{L}=\sqrt{\mathrm{b}_{11}^{\alpha}/\rho_{0}}\). It therefore follows that \(c_{L}=5850\) m.s\({}^{-1}\) and \(t_{f}\approx 1.7\)\(\mu\)s, corresponding to the duration of all performed calculations. Here and in the following, the subscript \(f\) denotes the final state. The initial single-crystal bcc iron is subjected to a three-step loading, as follows. First, all edges are continuously and proportionally decreased to a global volume reduction imposed by \(j=V/V_{0}=0.86\), for which the volume change is achieved within the time interval from \(t_{0}\) to \(t=0.4\,t_{f}\). Then, a constant volume is maintained from \(t=0.4\,t_{f}\) to \(0.6\,t_{f}\), and finally the volume is released back to the initial volume, so that \(j_{f}=V_{f}/V_{0}=1\) at \(t=t_{f}\). Figure (2.5) illustrates the volume change \(j\) as a function of the pressure \(p\) in GPa. Although different in shape and magnitude, both hysteresis loops characterize martensitic transitions over a wide range of pressure, involving an important stored elastic strain energy caused by the coexistence of numerous solid-state phases. The difference between the two transformation hysteresis loops is due to plastic deformation in the samples, the loop being wider for the case with plasticity than without. Upon increasing pressure, the appearance of the high-pressure hcp phase is reached at \(-25.6\) GPa, followed by a sudden drop to \(-23.1\) GPa (without) and \(-19.7\) GPa (with plasticity), due to dissipative effects during the forward \(\alpha\to\epsilon\) transitions. However, the reverse \(\epsilon\to\alpha\) transition without plasticity is characterized by a slow martensitic transformation, compared to an almost instantaneous volume change that occurs between \(-7.4\) and \(-2.1\) GPa with plasticity. Significantly, the forward transformation pressures predicted by the present model are higher in magnitude than the experimental values for bcc samples that have been fully converted to hcp phases, within the range of \(-18.4\) GPa [76] and \(-23.0\) GPa [242] at room temperature. The experimental measurements from Refs. [101, 76] have been plotted in Fig.
(2.5) with symbols, where the more recent data in Ref. [76] for high-purity Fe single crystals in a helium pressure medium (shown by the oriented blue arrows) can be compared to the simulated hysteresis widths. Within the pressure range of coexistence of both phases, the experimental bcc (open symbols) and hcp (solid symbols) atomic volumes are separately deduced from X-ray diffraction measurements of lattice parameters at each applied pressure step. On the other hand, the computed results (solid lines) are obtained using the average pressure and volume states over the simulation samples. In addition, the pressure discrepancies are possibly due to the approximations/presumptions in the present coupled formalism and, more precisely, to the absence of free boundaries in the prescribed simulation setups. For instance, simulations in a helium pressure medium, which is a fluid with a very low viscosity, together with a dislocation density-based crystal plasticity model, should give rise to a better description of the nonhydrostatic effects and anisotropic stresses in the transition pressures, and also of the hysteresis widths of iron. In accordance with the present calculations with periodic boundary conditions, classical molecular dynamics simulations using an embedded atom method potential have shown that the simulated transition pressure of the hcp and face-centered cubic (fcc) phases is significantly higher for uniform (\(31-33\) GPa) than for uniaxial (\(14\) GPa) compression [275]. Although the simulated coexistence domain is larger than the experimental domain under quasi-hydrostatic conditions, the present P-V equation-of-state curves are in good agreement with the experimental responses upon increasing (from \(0\) to \(-18\) GPa) and decreasing (from \(-23\) to \(-7\) GPa) pressures [101, 76]. Figure (2.6) illustrates the partitioning of the total energy \(\psi\) in terms of the elastic \(\psi_{\mathrm{e}}/\psi\) (in blue) and inelastic \((\psi_{\mathrm{t}}+\psi_{\mathrm{\varphi}})/\psi\) (green) energy ratios as a function of the dimensionless simulation time \(t^{*}=t/t_{f}\), for calculations without and with plasticity. It is also shown that the total energy is mainly composed of the elastic strain energy until the nucleation of the first hcp phases in iron occurs at \(t^{*}\approx 0.28\), as depicted by the two vertical arrows. When the volume is maintained constant, Fig. (2.6a) shows that the dissipative transformational process leads to a \(38\%\) decrease in the amount of elastic energy, which then represents \(54\%\) of the total energy. During the early stages of the pressure release (as shown by a double-headed arrow), the stress state decreases, but the pressure remains sufficiently high to maintain the newly formed phases, as depicted by \(*\) in Fig. (2.6a), where the internally stored elastic energy increases until \(t^{*}=0.90\), before completely releasing back to zero. However, plastic deformation allows for a considerably higher stress relaxation between variants when phase transformations occur at large volume change states, as shown in Fig. (2.6b), where the upper thin curve for the elastic energy ratio without plasticity has been included for comparison. It also emphasizes the reduction of the stored elastic energy due to the plastic dissipation, for which the elastic strain energy falls to 42% (compared to 54% without plasticity) of the total energy and remains constant during the second loading step.
When the volume increases back to the initial volume, the elastic energy is then dramatically reduced to zero, significantly dissipated by plastic deformation.

Figure 2.5: Volume change \(j\) as a function of the pressure \(p\) in GPa, for calculations without (black dotted line) and with (red full line) plasticity, with \(\sigma_{0}=0.25\) GPa. The experimental bcc (open symbols) and hcp (solid symbols) atomic volumes are separately deduced from X-ray diffraction measurements of lattice parameters at each applied pressure step, while the computed results (solid lines) are obtained using the average pressure and volume states over the simulation samples.

Figure 2.6: Partitioning of the total energy into the elastic and inelastic components as a function of the dimensionless simulation time \(t^{*}\), for calculations (a) without and (b) with plasticity.

#### Microstructure and variant selection

Figure (2.7a) illustrates the microstructure texture evolution of the transition-induced volume change \(j\) versus the dimensionless time \(t^{*}\) in the form of histograms. These histograms are obtained by splitting the simulation volume change (ranging from \(j=0.80\) to 1) into 100 bins of constant width, within which the phase fraction of material is computed for all time steps. Coexistence of \(\alpha\)-Fe and \(\epsilon\)-Fe phases with different equilibrium volumes therefore leads to a multimodal histogram over the large range of pressure, where the grayscale represents the volume fractions of phases. For both simulations, the single-crystal volume is homogeneously decreased with respect to the prescribed hydrostatic conditions, as depicted by the points A. Without plasticity, Fig. (2.7a) shows a single-mode histogram: the volume change is slightly spread out over a large time interval, starting from the first forward phase transitions at \(t^{*}=0.28\) (point B). This spreading regime is spatially correlated to the strong elastic interactions between numerous variants that have only partially been transformed into hcp phases, from point B to D. However, continued pressure release results in a decrease in the proportion of the hcp phase compensated by an increase of the bcc phase between C and D. When the simulated iron is transformed back to the initial single-crystal material (point D), the volume exhibits no spatial variation, corresponding to a sharp single-mode histogram. With plasticity, the volume spreading is dramatically reduced after a brief fluctuation (point B) and remains single mode until the first reverse phase transitions occur. Between \(t^{*}\approx 0.75\) and \(0.90\), a mixed-mode regime can be pointed out, which exhibits the formation of a heterogeneous microstructure texture. The higher-volume mode (point C\({}^{\prime}\)) exceeds the average prescribed volume, until all reversions are achieved (point D\({}^{\prime}\)). The second mode (point C) corresponds to a volume that remains nearly constant and slightly increases during the reversions (point D). According to these different modes, a particular microstructure texture evolution in iron associated with preferential variant selection during the phase transitions is also expected. Figure (2.7b) shows the volume fractions of each variant \(V_{k}\) as a function of the simulation time \(t^{*}\). Without plasticity, Fig.
(2.7b) illustrates that the initial phase is partially transformed into the 6 possible hcp variants with comparable phase fractions, within which a residual amount of bcc phase persists in the microstructure, even for a large pressure range up to \(-25\) GPa. When the compression is released to the original volume, all hcp variants are transformed back to the initial single-crystal bcc iron, behaving partially as a shape-memory alloy. For this case, most of the transformations to \(\epsilon\)-Fe phases are only partial. These pseudo-hcp structures break the symmetries of the fully formed hcp lattice and cannot lead to the formation of reversed \(\alpha^{\prime}\)-Fe phases. Because the mismatch between bcc and hcp phases is not taken into account in the present formalism, the elastic strain state due to the interaction between variants is mainly responsible for the incomplete polymorphic phase transformations without plasticity. Therefore, when numerous hcp nuclei are considered, the long-range elastic interactions between variants dramatically increase the overall elastic energy, which in turn hinders the forward \(\alpha\to\epsilon\) phase transitions. Because plasticity considerably dissipates the stored elastic strain energy, the onset of plasticity screens the elastic interactions between variants and thus decreases the energy cost to form the hcp variants. It also appears as an essential mechanism to enhance phase transformations by relaxing stresses due to elastic interactions, so that the complete formation of a polycrystalline iron formed by the 6 hcp variants is energetically favorable, as shown in Fig. (2.7b). In addition, a sudden burst of nucleation of reversed \(\alpha^{\prime}\)-Fe variants occurs at \(t^{*}\approx 0.90\), with \(\sim\)2% volume fraction for each of \(\{V_{12},V_{13},V_{15}\}\), \(\sim\)1% for each of \(\{V_{11},V_{14},V_{16}\}\), and \(\sim\)0.5% for each of the 6 other bcc variants. Thus, both initial \(\alpha\)-Fe and reversed \(\alpha^{\prime}\)-Fe phases coexist at \(t^{*}=1.0\), without any retained hcp phases. However, the initial \(\alpha\)-Fe phase orientation largely dominates the forward and reverse transitions, while the volume fraction of \(\alpha^{\prime}\) inclusions is \(\sim 12.3\%\) in the final microstructure.

Figure 2.7: Evolution of (a) the volume change \(j\) and (b) the phase fractions of variants \(V_{k}\) as a function of the dimensionless simulation time \(t^{*}\), for calculations without and with plasticity.

To summarize, Fig. (2.8) illustrates the microstructure evolution under hydrostatic pressure at \(t^{*}=0.6\) and \(t^{*}=1.0\), defined in both strain and current mesh spaces. As shown in Fig. (2.8a), the non-flat sample surfaces capture the signature of the local unconstrained deviatoric stress component of the externally applied hydrostatic conditions. For the simulation without plasticity, the initial bcc \(\alpha\)-Fe phase (in gray) is not completely converted into hcp \(\epsilon\)-Fe phases, with a retained \(\sim\)26.6% volume fraction of bcc phase at \(t^{*}=0.6\). However, the calculation with plasticity exhibits a polycrystalline iron that has been entirely transformed into 6 hcp \(\epsilon\)-Fe grain variants (red gradient). Such close-packed grains have been observed by performing large-scale molecular dynamics simulations under shock loading [140]. It is worth mentioning that various morphologies of hcp phases have been observed for structural phase transformations in iron, e.g.
needle-like \(\epsilon\)-Fe phases [276], lath-like \(\epsilon\)-Fe regions [54], and ellipsoidal \(\epsilon\)-Fe particles [204], for which the \(\alpha\leftrightarrow\omega\) martensitic transitions exhibit similar discrepancies in zirconium [18]. On the release of hydrostatic pressure, the calculation without plasticity transforms back to the initial single-crystal bcc iron at \(t^{*}=1.0\), while the calculation with plasticity leads to 12 reversed bcc \(\alpha^{\prime}\)-Fe variants, heterogeneously nucleated in pairs (e.g. \(\{V_{11},V_{12}\}\), in light and dark green) from one single \(\epsilon\)-Fe variant.

Figure 2.8: Transformational states defined in both strain and current mesh spaces at (a) \(t^{*}=0.6\) and (b) \(t^{*}=1.0\), for calculations without and with plasticity. Each black dot in the strain space represents the current transformational strain \(\mathbf{C}\) for all mesh elements, while the colors along the pathways are associated with the corresponding phases and variants in the 3D simulated microstructures. Without plasticity, the initial bcc \(\alpha\)-Fe phase remains in a large fraction (\(\sim\)26.6%, in gray) at \(t^{*}=0.6\), whereas the calculation with plasticity exhibits a polycrystalline iron formed by the 6 hcp \(\epsilon\)-Fe variants only (red gradient). On the release of hydrostatic pressure, the former is transformed back to the initial single-crystal bcc iron at \(t^{*}=1.0\), while the latter shows the presence of 12 reversed bcc \(\alpha^{\prime}\)-Fe variants with \(\sim\)12.3% volume fraction.

### 2.4 Shock wave propagation

The numerical shock wave calculations accurately describe some important features reported by the experimental literature, and strongly complement our understanding of the phase-change dynamics in iron at larger time and length scales than hitherto explored by molecular dynamics simulations in the last two decades. The numerical model is able to reproduce unstable shock waves (which break up into elastic, plastic and phase-transition waves), providing new stress-informed insights into the coupling between the high strain-rate plasticity and microstructure evolution during the displacive phase transitions.

#### The internal structure of shock waves

In the following dynamical analyses, the three-dimensional iron samples are oriented along the \([100]\) directions, and the shock waves are generated along the \(\mathbf{z}\parallel[001]_{\rm bcc}\) direction, using \(80\times 80\times 1280\) element-free Galerkin nodes (\(\sim 8.2\) million), with periodic boundary conditions transverse to the direction of shock front propagation, i.e., along \(\mathbf{x}\parallel[100]_{\rm bcc}\) and \(\mathbf{y}\parallel[010]_{\rm bcc}\). The initial shock compression is induced by imposing a velocity of 850 m.s\({}^{-1}\) on the rear face along \(\mathbf{z}\parallel[001]_{\rm bcc}\), while the free surface is located at the extremity of the rectangular parallelepiped-shaped samples, as depicted in Fig. (2.9a). The unshocked material is at rest at \(t=t_{0}=0\), while the final simulation time \(t_{f}\) is related to the physical time \(t_{c}\) for acoustic waves to travel through the sample. The dynamical loading conditions are controlled by assuming that \(t_{f}=2.5\,t_{c}\), such that the acoustic waves traverse the samples 2.5 times during the entire simulations.
Thus, \(t_{c}=L_{z}/c_{L}\), where \(L_{z}\) is the initial box length in the \([001]_{\rm bcc}\) shock direction, with \(L_{z}=16\,L_{x}=16\,L_{y}=1.28\) mm, and \(c_{L}\) is the longitudinal wave celerity in iron, defined by \(c_{L}=\sqrt{\mathrm{b}_{11}^{\alpha}/\rho_{0}}\), with \(\mathrm{b}_{11}^{\alpha}=271\) GPa the corresponding low-pressure elastic component of the pure bcc iron [166]. It therefore follows that \(c_{L}=5850\) m.s\({}^{-1}\), so that \(t_{f}\approx 0.55\)\(\mu\)s, which corresponds to the duration of all calculations. For convenience, a dimensionless time \(t^{*}\) is defined as \(t^{*}=t/t_{f}\), while the dimensionless length \(L^{*}\) along \(z\) is given by \(L^{*}=z/L_{z}\), so that both quantities \(t^{*}\) and \(L^{*}\) range between 0 and 1. Moreover, the classical sign convention in continuum mechanics is used, so that compressive (extensive) volumetric stresses have negative (positive) signs. The capability of the continuum element-free Galerkin model to reproduce the experimental multiple split-wave structure is illustrated in Fig. (2.9) by displaying the spatially heterogeneous distribution of the pressure behind the incident compressive wave. Figures (2.9b) and (2.9c) show the corresponding two- and three-wave structures for representative simulations without and with plasticity at \(t^{*}=0.35\), respectively. Different regions, namely the initial unshocked iron, the elastically compressed bcc iron, and the transformed regions with high-pressure hcp Fe multivariants, are also depicted. Furthermore, the plastically deformed bcc iron can be seen for the calculation with plasticity in Fig. (2.9c). A sharp PT wave front is exhibited without plasticity, while a more complex rough PT front (see inset in Fig. (2.9c)) is shown to generate multiple planar pulses (as depicted by the vertical double-headed arrows) that propagate toward the leading plastic front. These localized traveling-wave fronts are suddenly produced by the dynamical phase transitions with high velocity in the compressed bcc region with high-pressure elastic properties. The consequences of the complex three-wave structure and competing wave interactions in the evolving deformation microstructure are elucidated in the following sections.

Figure 2.9: (a) Schematics of the finite deformation framework that combines nonlinear elasto-viscoplasticity and multivariant phase-field theory to model the shock-induced response of single-crystal iron along the \([001]_{\rm bcc}\) direction. (b) Distribution of the pressure resulting from the three-dimensional simulation without plasticity. The unstable shock wave breaks up into the elastic precursor and the phase-transition wave, which leads to different internal deformation states at material points. (c) Similar calculation with plasticity, within which an intermediate plastic wave front propagates between the elastic and phase-transition wave fronts. The inset shows a rough phase-transition front, leaving behind a complex high-pressure microstructure with preferred selection and evolution of hcp variants.

The shock-induced microstructure during the martensitic phase transitions (i.e., behind the PT front) is analyzed in the six-dimensional Cauchy-Green strain space, as illustrated in Fig. (2.10). Thus, the deformation states that are mapped and visualized by colored points correspond to the local transformational distortions experienced by the iron samples. Each color is associated with the index of the nearest first-rank variant \(V_{k}\).
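A minimal Python sketch of the kind of post-processing behind this coloring: each local transformational strain state is assigned to the pathway, and hence variant, with the smallest point-to-segment distance in strain space, following eqs. (2.48)\(-\)(2.50). The two pathway endpoints below are illustrative placeholders rather than the actual stretch tensors \({}^{k}\mathbf{U}^{2}\) of eq. (2.43).

```python
import numpy as np

def point_to_segment_distance(A, C_start, C_end):
    """Minimum Euclidean distance from a strain state A to the straight
    pathway segment [C_start, C_end], cf. eqs. (2.48)-(2.50)."""
    seg = C_end - C_start
    zeta = np.tensordot(seg, A - C_start) / np.tensordot(seg, seg)
    zeta = min(max(zeta, 0.0), 1.0)            # clamp onto the segment
    closest = C_start + zeta * seg
    return np.linalg.norm(A - closest)

def nearest_variant(A, pathways):
    """Index of the pathway/variant closest to the strain state A."""
    dists = [point_to_segment_distance(A, Cs, Ce) for (Cs, Ce) in pathways]
    return int(np.argmin(dists)), min(dists)

# Two illustrative hcp pathways sharing the bcc starting state C = I.
I3 = np.eye(3)
pathways = [
    (I3, np.diag([1.10, 1.10, 0.81])),   # placeholder endpoint for variant V1
    (I3, np.diag([0.81, 1.10, 1.10])),   # placeholder endpoint for variant V2
]
A = np.diag([1.06, 1.08, 0.90])          # a partially transformed strain state
k, d = nearest_variant(A, pathways)
print(f"nearest variant index = {k + 1}, distance = {d:.3f}")
```

Applying such a classification to every mesh element at a given time step would reproduce the variant coloring used in the strain-space plots discussed next.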
Figures (2.10a) and (2.10b) depict the corresponding states, captured when the elastic fronts reach the free surfaces, for the calculations without plasticity and with plasticity, respectively. The former shows that two hcp variants are nucleated without plasticity, denoted by \(V_{1}\) and \(V_{2}\). These two preferential \(\epsilon\)-Fe variants are formed with different volume fractions, i.e., \(62\%\) for \(V_{1}\) and \(35\%\) for \(V_{2}\), and are strongly promoted by the \([001]_{\rm bcc}\) direction of the shock. On the other hand, although the calculation with plasticity initiates the early formation of the same two variants, the four companion hcp variants are rapidly nucleated behind the PT wave front with comparable volume fractions. This microstructural fingerprint exhibits a crucial role played by the plastic deformation in nucleating and selecting all six energetically equivalent hcp variants in Fig. (2.10b). According to the previous simulations under high-pressure hydrostatic compression, the single-crystal iron has been transformed at high pressure into a polycrystalline microstructure that consists of the same six hcp variants, without any retained initial bcc phase. It is therefore suggested that the present high strain-rate plastic deformation can locally achieve a similarly relaxed, nearly hydrostatic three-dimensional state from the uniaxial strain state produced by the shock-wave compression. The nucleation of all six high-pressure hcp variants has never been described by atomistic calculations of shock-loaded iron, certainly because of the small dimensions that hinder the plastic relaxation needed to nucleate the four companion hcp variants. For instance, two twinned hcp variants, separated by noncoherent grain boundaries (GBs), are observed in Refs. [140, 141].

#### Effect of plasticity in shock-loaded iron

Because the deformation processes act as distinctive signatures in shock-compressed samples, reflecting the history the solid experienced (in terms of velocity, shock pressure, etc.), three averaged quantities over the computational samples are plotted in Fig. (2.11). Slice-averaged quantities within spatial planar bins (of one element width) are also used to quantify the role of plasticity in tailoring the complex microstructure from the uniaxial strain deformation, namely the free-surface velocity \(v_{z}\) in Fig. (2.11a), the pressure \(p=-(\sigma_{xx}+\sigma_{yy}+\sigma_{zz})/3\) in Fig. (2.11b), and the von Mises stress \(\sigma_{\rm vM}\) in Fig. (2.11c) with respect to \(t^{*}\), obtained without (gray curves) and with (black curves) plasticity. Both averaged quantities \(p\) and \(\sigma_{\rm vM}\) are displayed with respect to \(L^{*}\) along the \(\mathbf{z}\parallel[001]_{\rm bcc}\) loading direction of the samples. Figure 2.10: (a) The network of reaction pathways is projected in a \(\{\mathbf{C}_{1},\mathbf{C}_{2},\mathbf{C}_{3}\}\) strain space, within which the local transformational Cauchy-Green \(\mathbf{C}\) strain states at all material points are displayed with different colors (each color is associated with a specific hcp variant from the first-rank group symmetry operation). The results are related to the simulation without plasticity, captured at the instant when the elastic front reaches the free surface, revealing the nucleation of two (from amongst six possible variants) preferred hcp variants.
(b) Similar simulation with plasticity at the same time instant as in (a), where the other four energetically equivalent hcp variants are activated in the transformed polymorphic microstructure. Such structural features indicate that the high strain-rate plastic deformation is locally capable of producing a nearly relaxed hydrostatic state from the uniaxial strain state produced by the shock-wave compression. Figure (2.11a) shows the presence of two distinct plateaus for the free-surface velocity profile without plasticity (gray curve), supported by the splitting into a two-wave structure consisting of the noticeably faster elastic wave and the phase-transition (denoted by PT, see arrow) wave. The elastic wave is characterized by the elastic precursor \(\mathrm{Ep}\) with \(v_{z}=255\) m.s\({}^{-1}\), while the phase-transition front produces a considerable increase of the velocity at the free surface, up to \(v_{z}=1660\) m.s\({}^{-1}\). On the other hand, the simulation with plasticity shows a much more complex velocity profile, where the multiple-wave structure consists inter alia of the elastic precursor \(\mathrm{Ep}\) with the same velocity as the case without plasticity, the plastic (P wave) front, and the elastic wave reverberation with the on-going PT wave, i.e., the \(\mathrm{rEp}\) wave. This wave profile is comparable to those reported in experimental works with distinct three-wave structures [19, 134]. The instants when both \(\mathrm{P}\) and \(\mathrm{rEp}\) waves reach the free surface are displayed by the double-headed arrows in Fig. (2.11a), corresponding to \(v_{z}=880\) m.s\({}^{-1}\) and \(v_{z}=1170\) m.s\({}^{-1}\), respectively. It is worth noting that both reflected \(\mathrm{Ep}\) and P waves that propagate back in the elastically compressed and plastically deformed bcc iron (thus, along the \([00\bar{1}]_{\mathrm{bcc}}\) direction) produce a residual stress state that does not favor the mandatory forward \(\alpha\to\epsilon\) phase transitions. The stress-releasing interaction of the reflected \(\mathrm{Ep}\) and P waves with the PT wave therefore encourages the reverse \(\epsilon\to\alpha\) phase transitions, without retaining any hcp \(\epsilon\)-Fe phase nor forming any \(\alpha^{\prime}\)-Fe bcc variants. Interestingly, this feature differs from the pure hydrostatic compression loading, for which a significant residual volume fraction (\(\sim 12\%\)) of \(\alpha^{\prime}\) bcc inclusions has been obtained in the microstructure after the reverse phase transformations. Consequently, the incident PT wave cannot reach the free surface for calculations with plasticity, in contrast to the simulation case without plasticity. Additionally, it is worth mentioning that the amplitude of the steady-state free-surface velocity with plasticity is close to the one without plasticity, i.e., \(v_{z}=1707\) m.s\({}^{-1}\), which is roughly twice the particle velocity of \(850\) m.s\({}^{-1}\) imposed on the rear face behind the incident shock as a loading condition, consistent with the traction-free conditions at free surfaces. Figure 2.11: (a) Free surface velocity histories from shock-loaded iron samples without (gray curve) and with (black curve) plasticity. The former is caused by the arrival of the elastic precursor (denoted by \(\mathrm{Ep}\)) and of the phase-transition (PT) wave.
The latter is decomposed into \(\mathrm{Ep}\), the plastic (P) wave front, and \(\mathrm{rEp}\), which results from the interaction between the reflected \(\mathrm{Ep}\) front at the free surface and the on-coming PT wave. (b) The representative profiles of pressure in \(\mathrm{GPa}\) along the \([001]_{\mathrm{bcc}}\) direction for both calculations without and with plasticity. The slice-averaged values within spatial planar bins of one finite element width correspond to the three-dimensional microstructures in Figs. (2.9b) and (2.9c). (c) The von Mises stress in \(\mathrm{GPa}\) for both calculations without and with plasticity. Both calculations without and with plasticity in Fig. (2.11b) exhibit a similar elastic state where compression remains uniaxial in the \([001]_{\rm bcc}\) direction, characterized by a pressure \(p_{\rm E}=3.9\) GPa in the elastically compressed bcc phase. Considering this threshold pressure as the Hugoniot elastic limit for the plasticity-free case, the value of 3.9 GPa lies between two reference experimental values for polycrystalline iron samples, i.e., \(\sim 2.1\) GPa [294] and \(\sim 5.5\) GPa [233]. It is worth mentioning that the similar computed values for both uniaxial elastic limits without and with plasticity are fortuitous, since the former corresponds to the transformational front (accompanied by both hydrostatic and deviatoric stresses), while the latter is related to the plastic front (mainly controlled by deviatoric stresses). In practice, once the phase transformation operated by one specific variant is initiated, the excess free energy between the bcc and hcp iron phases promotes a partially-to-complete shock-induced transition that behaves differently than under pure pressure, as quantified by eq. (2.37). The corresponding released stress state after this phase transformation is much more complex than the stress state behind the deviatoric stress-driven plastic front. The changes from the uniaxial shock compression to a complex stress state after phase transitions in the plastically deformed iron cannot therefore be captured by a usual pressure-shock velocity relation (e.g. represented by a Rayleigh line), yielding an important distinction between the shock physics described at the macroscopic scale and that described at the grain scale. Behind the traveling Ep wave front, the pressure profile depicts the presence of one (two) plateaus for calculations without (with) plasticity. The former exhibits the presence of the PT wave front as the pressure dramatically increases up to \(p_{\rm PT}=37.7\) GPa. The latter profile shows an intermediate pressure plateau that characterizes the plastically deformed bcc region, within which the forward \(\alpha\to\epsilon\) phase transitions start roughly at an onset pressure of \(18.2\) GPa, as indicated by the dotted line in Fig. (2.11b). This value is in the range of experimental values for single-crystal iron under hydrostatic pressure [76], and in excellent agreement with large-scale molecular dynamics simulations in single-crystal iron as well, i.e., 18 GPa along the same \([001]_{\rm bcc}\) shock direction [277]. Here, the value deviates from the conventional macroscopic threshold from experiments on polycrystalline Fe samples (occurring at 13 GPa [17, 19]), for which the GBs with pre-existing intrinsic defects reduce the amplitude of the forward transition pressure [106, 299, 279].
Achieved after the complete phase transformation of the bcc into hcp variants, the upper plateau is governed by the load intensity and is reached at \(p=44.1\) GPa, slightly higher than the pressure without plasticity. This value is in very good agreement with recent results from molecular dynamics simulations in shocked iron [4], where a maximum mean pressure of \(\sim 40\) GPa has been measured by applying a comparable piston velocity of 800 m.s\({}^{-1}\). Figure (2.11c) shows the corresponding values for the von Mises stress, with \(\sigma_{\rm vM}=2.7\) GPa for both simulations in the elastically compressed bcc iron. Then, the von Mises stress profile increases inhomogeneously in the sample without plasticity, which is due to a heterogeneous distribution of both hcp variants \(V_{1}\) and \(V_{2}\) in the microstructure with lamellar arrangements along the shock direction (not shown here). The maximum value is \(\sigma_{\rm vM}=18.1\) GPa. With plasticity, however, the volume-preserving plastic deformation significantly relaxes the internal von Mises stress to reach an averaged value of \(\sigma_{\rm vM}=1.1\) GPa (\(<3.9\) GPa, at the peak Hugoniot elastic state) in the shock-induced hcp multivariant region. This difference confirms the role played by plasticity in releasing the shear stress state produced by the uniaxial strain compression and in obtaining a roughly hydrostatic state with 6 high-pressure hcp variants (instead of 2 variants without plasticity) in the transformed heterogeneous microstructure. Figures (2.12a) and (2.12b) capture the evolution of the longitudinal stress component in the shock direction \(\sigma_{zz}\) in the Lagrangian adimensional position-time \((L^{*},t^{*})\) diagrams, without and with plasticity, respectively. The non-steady-state regimes of the elastic precursor (Ep, solid lines), plastic (P, dashed), and phase-transition (PT, dotted) waves, moving with different average speeds so that the net distances between the respective fronts increase with time, exhibit a more complicated picture for the three-wave structure with the high strain-rate plasticity than the corresponding diagram without plasticity. The reflections of the incident fronts from the free surfaces are depicted as well. The leading elastic wave front, traveling at 5412 m.s\({}^{-1}\) (5541 m.s\({}^{-1}\)) for the calculation with (without) plasticity, leaves the iron system in an elastically compressed state with high-pressure properties. The former value is in excellent agreement with the computed shock velocity of 5409 m.s\({}^{-1}\) from atomistic simulations in single-crystal iron without pre-existing defects [140], consistent with the present calculations. Without plasticity, the trailing PT front travels homogeneously in the sample at 4655 m.s\({}^{-1}\). For the three-wave structure, the nearly over-driven P front (but not over-run, i.e., characterized by a finite separation between the elastic and P waves) propagates at 5059 m.s\({}^{-1}\), while the slower heterogeneous PT front travels with intermittent regimes at \(3002\pm 99\) m.s\({}^{-1}\), which is much lower than that of the homogeneous PT front without plasticity. In contrast to the case without plasticity, the intermittent propagation of the PT front with plasticity reveals the presence of i) sudden nucleation events of hcp variants (as depicted by the arrows in Fig. (2.12b)), and consequently of ii) a so-called traveling release-stress envelope.
This envelope propagates by reflection between the rear surface on the left-hand side of the sample and the PT wave, before interacting with the (unloading) reflected Ep wave at the free surface, as displayed by the asterisk \(*\) in Fig. (2.12b). It always precedes the slower PT wave, but travels faster than the elastic wave, at 8312 m.s\({}^{-1}\), in the transformed high-pressure regions of iron (i.e., with high pressure-induced stiffness and density). These distinct nucleation sites of hcp variants are not experienced for calculations without plasticity, exhibiting again the specific role played by the plastic deformation in governing such microstructural features. Analogous distinct nucleation events in position-time diagrams have been observed in shocked crystalline 1,3,5-triamino-2,4,6-trinitrobenzene using large values for the input parameter \(\sigma\) in molecular dynamics simulations [155].

#### Residual stresses in the plastically-deformed microstructure

Figure (2.13a) displays three shock-induced microstructures M\({}_{1}\), M\({}_{2}\), and M\({}_{3}\) from Fig. (2.12b) that are associated with \(t^{*}=0.21\), \(t^{*}=0.27\), and \(t^{*}=0.44\), respectively, for the calculation with plasticity only. For these microstructures, various stress-related quantities, i.e., the longitudinal Cauchy stress tensor component in the shock direction \(\sigma_{zz}\), the shear stress \(\tau=\left(\sigma_{zz}-\left(\sigma_{xx}+\sigma_{yy}\right)/2\right)/2\), \(s_{n}\), and \(s_{s}\), as well as the corresponding hcp variant selection, are displayed. Both stress quantities \(s_{n}\) and \(s_{s}\) are related to the second invariant of the stress deviator \(J_{2}\) and the von Mises stress \(\sigma_{\text{vM}}\) by \[3J_{2}=\sigma_{\text{vM}}^{2}=\tfrac{3}{2}\operatorname{dev}\boldsymbol{\sigma}\colon\operatorname{dev}\boldsymbol{\sigma}=\tfrac{1}{2}\left(s_{n}+6\,s_{s}\right)\,, \tag{2.66}\] where \(\operatorname{dev}\boldsymbol{\sigma}\) is the deviatoric part of \(\boldsymbol{\sigma}\), so that \(s_{n}\) and \(s_{s}\) are defined by \[s_{n} =(\sigma_{xx}-\sigma_{yy})^{2}+(\sigma_{yy}-\sigma_{zz})^{2}+(\sigma_{xx}-\sigma_{zz})^{2}\] \[s_{s} =\sigma_{xy}^{2}+\sigma_{yz}^{2}+\sigma_{xz}^{2}\,, \tag{2.67}\] with \(\sigma_{xy}\), \(\sigma_{xz}\), and \(\sigma_{yz}\) being the orthogonal shear stresses. As a signed quantity, the shear stress \(\tau\), which is proportional to the von Mises stress when the off-diagonal terms are neglected, can also have positive (in red) or negative (green) values depending on the magnitude of \(\sigma_{zz}\) with respect to \(\left(\sigma_{xx}+\sigma_{yy}\right)/2\). All color legends for the stress-related quantities are displayed in Fig. (2.13b). At instant \(t^{*}=0.21\), the splitting of the three-wave structure into the Ep, P, and PT wave fronts is clearly distinguishable by the change in magnitude of \(\sigma_{zz}\) in Fig. (2.13a). Close to the phase-transition front, the transformed region with 6 high-pressure hcp variants is characterized by positive values of the shear stress \(\tau\) (values in red). Between the PT and P wave fronts, the shear stress \(\tau\) is negative (green), the stress field \(s_{s}\) is zero, while the quantity \(s_{n}\) exhibits the presence of planar surfaces as pulses generated by the PT front that dynamically nucleates the hcp variants. These six variants are pictured with the same colors as in Fig. (2.10b).
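The decomposition in eqs. (2.66) and (2.67) above can be checked numerically. The short Python sketch below computes the pressure, the shear stress \(\tau\), \(s_{n}\), \(s_{s}\), and the von Mises stress for an arbitrary symmetric Cauchy stress tensor, and verifies the identity \(\sigma_{\text{vM}}^{2}=\tfrac{1}{2}(s_{n}+6\,s_{s})\); the stress values used are arbitrary illustrative numbers, not simulation output.

```python
import numpy as np

def stress_measures(sig):
    """Pressure, shear stress tau, and the J2-related quantities of
    eqs. (2.66)-(2.67) for a symmetric Cauchy stress tensor (in GPa)."""
    sxx, syy, szz = sig[0, 0], sig[1, 1], sig[2, 2]
    sxy, syz, sxz = sig[0, 1], sig[1, 2], sig[0, 2]
    p = -(sxx + syy + szz) / 3.0                       # pressure, p = -tr(sigma)/3
    s_n = (sxx - syy)**2 + (syy - szz)**2 + (sxx - szz)**2
    s_s = sxy**2 + syz**2 + sxz**2
    tau = (szz - (sxx + syy) / 2.0) / 2.0              # signed shear stress
    dev = sig - np.trace(sig) / 3.0 * np.eye(3)        # deviatoric part
    svm = np.sqrt(1.5 * np.tensordot(dev, dev))        # von Mises stress
    return p, tau, s_n, s_s, svm

# Arbitrary illustrative stress state (GPa), symmetric.
sig = np.array([[-30.0,   1.5,  0.5],
                [  1.5, -28.0, -1.0],
                [  0.5,  -1.0, -40.0]])
p, tau, s_n, s_s, svm = stress_measures(sig)
print(f"p = {p:.2f} GPa, tau = {tau:.2f} GPa, svm = {svm:.2f} GPa")
print("identity check:", np.isclose(svm**2, 0.5 * (s_n + 6.0 * s_s)))
```

Because \(s_{n}\) collects the differences of the normal stresses and \(s_{s}\) collects the orthogonal shear stresses, the two quantities separate the contributions to \(J_{2}\) in the way exploited by the slice-averaged maps discussed here.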
Behind the complex rough PT front, some hcp grains grow preferentially into a flaky morphology with \((110)_{\text{bcc}}\) and \((1\bar{1}0)_{\text{bcc}}\) habit planes of the bcc iron, which are transformed into the \((0001)_{\text{hcp}}\) close-packed planes after the phase transition. At \(t^{*}=0.27\), the presence of a dynamical instability in the compressed and plastically deformed microstructure is shown. This occurs under a complex stress state that is responsible for an extremely rapid nucleation of a large single-crystal hcp variant \(V_{1}\) (in orange, as depicted by \(*\) in M\({}_{2}\) in Fig. (2.13a)) with columnar growth in the direction of the shock loading. This spontaneous nucleation is characterized by a notable change in sign of the shear stress \(\tau\) from negative (green) to positive (red) values. The ideal volume-reducing transition path of the strain-free mono-variant \(V_{1}\) requires a compression of \(\sim 12.5\%\) along the \(\mathbf{z}\parallel[001]_{\mathrm{bcc}}\) direction, as defined by the \(zz\) component in eq. (2.37). This sudden nucleation event gives rise to the aforementioned traveling release-stress envelope in Fig. (2.12b), which is also characterized by a finite domain with positive shear stress values, as depicted by white double-sided arrows in M\({}_{2}\) and M\({}_{3}\). Surrounded by the initial bcc phase, the variant \(V_{1}\) is able to grow in the shock direction, whereas the confined region between the PT front and \(V_{1}\) in M\({}_{2}\) becomes an unstable zone for nucleation of high-pressure variants. Similar shock-driven regions of instabilities, within which local nucleation of hcp embryos occurs, have been observed by Wang and co-workers using atomistic simulations [280]. Although \(V_{1}\) is still visible at instant \(t^{*}=0.44\), the phase-transition wave front continues to propagate in the shock direction, exhibiting the coalescence of the hcp variants and also a specific morphological fingerprint of shock-induced hcp variants with large transformed bands (due to the periodic boundary conditions) at high pressure. A thickness of \(\sim 77\)\(\mu\)m for \(V_{1}\) is found in the \(\mathbf{z}\parallel[001]_{\mathrm{bcc}}\) direction, which also depends on the shock velocity (results not shown here). Overall, \(s_{n}\) exhibits large values in the elastically compressed zones, which significantly decrease as soon as the nucleation and growth of hcp variants takes place during the polymorphic phase transitions.

Figure 2.12: (a) Slice-averaged maps of the longitudinal stress component \(\sigma_{zz}\) in the Lagrangian adimensional position-time \((L^{*},t^{*})\) diagram from the simulation without plasticity. The two-wave structure (composed of the elastic Ep and phase-transition PT waves), with the reflection of both waves at the free surface, is shown using different line types. (b) The three-wave structure with the presence of the intermediate plastic wave (P wave) front illustrates a considerably more complicated scenario of nonlinear wave interaction. As depicted by the arrows, this calculation with plasticity reveals two nucleation events at \(t^{*}=0.10\) and \(t^{*}=0.27\), which result in the inhomogeneous propagation of the trailing PT wave and in the presence of a stress-release envelope. The latter travels faster than the leading shock and is characterized by a lower longitudinal stress in magnitude.
In turn, because the quantities \(s_{n}\) and \(s_{s}\) play complementary roles in the present \(J_{2}\) plasticity theory, \(s_{s}\) exhibits large values in the phase-transformed hcp regions. The aforementioned transformed bands are therefore considered here as an important mechanism of stress relaxation under shock compression at high strain rate, thus providing novel guidelines for future experimental diagnostics of shock wave propagation in iron.

Figure 2.13: (a) Three-dimensional time snapshots of shock-induced microstructures, designated M\({}_{1}\) at \(t^{*}=0.21\), M\({}_{2}\) at \(t^{*}=0.27\), and M\({}_{3}\) at \(t^{*}=0.44\), from the position-time diagram in Fig. (2.12b) for the simulation with plasticity. From top to bottom, each panel captures the heterogeneous distribution of various stress quantities, namely, the longitudinal Cauchy stress \(\sigma_{zz}\), the shear stress \(\tau\), the stress-related quantities \(s_{n}\) and \(s_{s}\) of the second invariant \(J_{2}\) using eq. (2.66), as well as the polycrystalline high-pressure domains composed of six hcp variants. These variants are colored using the same code as in Fig. (2.10b), while the transparent zones are associated with the initial unshocked bcc iron. As displayed by \(*\), a dynamic instability in the polymorphic phase transitions is observed in M\({}_{2}\), leading to the nucleation of a large monovariant with columnar growth in the microstructure that is still visible (\(**\)) after the propagation of the incident phase-transition wave front. (b) The color legends associated with the stress-related quantities.

#### Dynamical instability in structural phase transitions

Figure (2.14a) illustrates the shock-induced instability in the structural phase transitions by means of the magnitude of the plastic Green-Lagrange deformation \(\mathbf{Ep}\), defined by \(|\mathbf{Ep}|=|\mathbf{Fp}^{\mathrm{t}}\cdot\mathbf{Fp}-\mathbf{I}|/2\). This quantity is plotted in the Lagrangian adimensional position-time \((L^{*},t^{*})\) diagram, where \(t^{*}\) is restricted between 0 and 0.5 for clarity, so that the multiple reflections of incident waves from the free surface are conveniently omitted in the following discussion. It is shown that the propagation of the PT front gives rise to a spatially (not temporally) heterogeneous distribution of \(|\mathbf{Ep}|\) with local values up to 0.25. This localization of plastic deformation is therefore strongly correlated with the specific selection of shock-induced hcp variants, which can be separated into two pertinent groups, so-called \(\mathrm{G}_{1}=\{V_{1}+V_{2}\}\) and \(\mathrm{G}_{2}=\{V_{3}+V_{4}+V_{5}+V_{6}\}\), each set involving different features of the microstructural fingerprints in shock-loaded iron. Thus, Fig. (2.14b) displays the variant selection during the shock wave propagation using a linear interpolation of color to distinguish the presence of both groups \(\mathrm{G}_{1}\) (blue) and \(\mathrm{G}_{2}\) (red) in the microstructure. As already mentioned, both \(V_{1}\) and \(V_{2}\) variants (from amongst the six available variants) of \(\mathrm{G}_{1}\) are promoted by the shock direction in the first instants of the shock wave propagation. Since the two-phase mixture induces a large contraction along the loading \(\mathbf{z}\parallel[001]_{\mathrm{bcc}}\) direction, the corresponding group \(\mathrm{G}_{1}\) is composed of variants designated as "release variants". 
However, the second group \(\mathrm{G}_{2}\), which consists of a mixture of the complementary four variants with identical volume fractions, experiences an expansion in the shock direction. In contrast to \(\mathrm{G}_{1}\), these newly-formed variants of \(\mathrm{G}_{2}\) are expected to generate an expansion (or reloading) wave, and are therefore not promoted by the initial compressive (or loading) wave. In the following, the four variants of \(\mathrm{G}_{2}\) are denoted as "reload variants", for which the nucleation is accompanied by severe plastic deformation with large values of \(|\mathbf{Ep}|\), as indicated by Fig. (2.14a).

### 2.5 Limitations

While the present phase-field approach is capable of considering the elastic mismatch between low- and high-pressure variants during the pure pressure- and shock-induced phase transformations in iron, the coexistence of both solid-state phases with different crystal structures (e.g. lattice parameters) leads to the loss of lattice coherence at the interfaces. This also means that the perfect lattice correspondence across the bcc/hcp interfaces as well as the misoriented hcp/hcp grain boundaries obtained in Fig. (2.8) becomes an implausible model assumption, and that the current description of the crystalline interfaces during solid-solid phase transitions remains incomplete. In fact, experimental observations of such interfaces show that coherent interfaces break down through the formation of misfit dislocation structures, as sketched in Fig. (2.15) with internal hexagonal dislocation patterns. The resulting "semicoherent interfaces" consist of coherent regions separated by these interfacial dislocation structures. Since the earliest observations of dislocation arrangements into periodic patterns along solid-state interfaces in a variety of conditions [5, 53, 6, 90], the advantages and drawbacks introduced by the presence of such crystal defects in high-technology applications have been addressed in interdisciplinary materials science and engineering [240, 96], involving chemistry, physics, electronics, metallurgy, mechanics, etc. Extensive investigations have indicated that the interfacial dislocation patterns at grain and interphase boundaries may, however, be designed to achieve unprecedented levels of strength [122], ductility [301], and radiation-induced damage tolerance [26] in nanocrystalline polycrystals, nanolayered laminated composites, precipitation-strengthened alloys, and epitaxial free-standing thin films. In part, the fundamental problem of characterizing the dislocation structures and energetics at heterophase interfaces is treated in the following chapter 3.

Figure 2.14: (a) Slice-averaged magnitude of the plastic Green-Lagrange strain tensor \(\mathbf{Ep}\) in the Lagrangian adimensional position-time \((L^{*},t^{*})\) diagram from the simulation with plasticity. The elastic and plastic wave fronts with constant velocities are shown using different line types, as well as two (primary and secondary) phase-transition zones that are associated with specific nucleation of release and reload variants (see text for details). (b) The corresponding selection of hcp variants, categorized into two groups, so-called \(\mathrm{G}_{1}=\{V_{1}+V_{2}\}\) (in blue) and \(\mathrm{G}_{2}=\{V_{3}+V_{4}+V_{5}+V_{6}\}\) (red).

Figure 2.15: Schematics of the presence of internal dislocation structures at solid-solid interfaces. 
## Chapter 3 Dislocation structures and energetics at heterophase interfaces

### 3 Selected peer-reviewed articles

* [P5] **A. Vattre**, E. Pan. _Dislocation singularities in layered magneto-electro-elastic plates._ International Journal of Engineering Science, 181, 103765, 2022.
* [P6] **A. Vattre**, E. Pan. _Semicoherent heterophase interfaces with core-spreading dislocation structures in magneto-electro-elastic multilayers under external surface loads._ Journal of the Mechanics and Physics of Solids, 124, 929-956, 2019.
* [P7] **A. Vattre**, N. Abdolrahim, S. Navale, M. Demkowicz. _The relaxed structure of intrinsic dislocation networks in semicoherent interfaces: predictions from anisotropic elasticity theory and comparison with atomistic simulations._ Extreme Mechanics Letters, 28, 50-57, 2019.
* [P8] T. Jourdan, **A. Vattre**. _A continuous model including elastodiffusion for sink strength calculation of interfaces._ Computational Materials Science, 153, 473-478, 2018.
* [P9] **A. Vattre**, E. Pan. _Three-dimensional interaction and movements of various dislocations in anisotropic bicrystals with semicoherent interfaces._ Journal of the Mechanics and Physics of Solids, 116, 185-216, 2018.
* [P10] **A. Vattre**, E. Pan. _Interaction between semicoherent interfaces and Volterra-type dislocations in dissimilar anisotropic materials._ Journal of Materials Research, 32, 3947-3957, 2017.
* [P11] **A. Vattre**. _Elastic strain relaxation in interfacial dislocation patterns: II. From long- and short-range interactions to local reactions._ Journal of the Mechanics and Physics of Solids, 105, 283-305, 2017.
* [P12] **A. Vattre**. _Elastic strain relaxation in interfacial dislocation patterns: I. A parametric energy-based framework._ Journal of the Mechanics and Physics of Solids, 105, 254-282, 2017.
* [P13] **A. Vattre**, T. Jourdan, H. Ding, C. Marinica, M. Demkowicz. _Non-random walk diffusion enhances the sink strength of semicoherent interfaces._ Nature Communications, 7, 1-10, 2016.
* [P14] **A. Vattre**. _Elastic interactions between interface dislocations and internal stresses in finite-thickness nanolayered materials._ Acta Materialia, 114, 184-197, 2016.
* [P15] **A. Vattre**. _Mechanical interactions between semicoherent heterophase interfaces and free surfaces in crystalline bilayers._ Acta Materialia, 93, 46-59, 2015.
* [P16] **A. Vattre**, M. Demkowicz. _Partitioning of elastic distortions at a semicoherent heterophase interface between anisotropic crystals._ Acta Materialia, 82, 234-243, 2015.
* [P17] **A. Vattre**, N. Abdolrahim, K. Kolluri, M. Demkowicz. _Computational design of patterned interfaces using reduced order models._ Scientific Reports, 4, 2014.
* [P18] **A. Vattre**, M. Demkowicz. _Effect of interface dislocation Burgers vectors on elastic fields in anisotropic bicrystals._ Computational Materials Science, 88, 110-115, 2014.
* [P19] **A. Vattre**, M. Demkowicz. _Determining the Burgers vectors and elastic strain energies of interface dislocation arrays using anisotropic elasticity theory._ Acta Materialia, 61, 5172-5187, 2013.

### Motivation

Far from being featureless dividing surfaces between neighboring crystals, interfaces in homo- and hetero-phase solids have internal structures of their own. 
These structures depend on interface crystallographic character (misorientation and interface plane orientation) and affect the physical properties of interfaces, such as interface energy [70], resistivity [42], permeability [126], mechanical properties [132], morphology and variant selection of precipitates [212], point defect sink efficiencies [228], and mobilities [150]. To better understand and control the properties of interfaces, it is desirable to be able to predict their internal structures. The first part of this chapter 3 presents a method for predicting a specific interface structural feature: the Burgers vectors of intrinsic dislocations in semicoherent homophase and heterophase interfaces. This information is then used to compute the interface elastic strain energies in standard tilt and twist GBs as well as the partition of elastic distortions at complex heterophase interfaces. An application to the sink strength of semicoherent interfaces is described in section 3.5, for which the non-random walk diffusion of radiation-induced defects is highly sensitive to the detailed character of interfacial stresses. The follow-up extensions to the elastic strain relaxation in interfacial dislocation patterns and to the elastic interaction with extrinsic dislocation arrays and loops are investigated in sections 3.6 and 3.7, respectively. One way of studying interface structure is through atomistic simulations, which explicitly account for all the atoms that make up an interface. However, this approach is not always practical or efficient: it can be very resource-intensive because it requires a separate simulation for each individual interface. Thus, it does not lend itself to rapidly scanning over many different interfaces, for example if one were searching for trends in interface structures or for tailored interfaces with a specific structure. Low-cost, analytical techniques for predicting interface structure would be preferable in such situations. One widely used analytical approach applies to semicoherent interfaces and describes interface structures in terms of intrinsic dislocations using the closely related Frank-Bilby [94, 29] and O-lattice [240, 296, 241] techniques. Both procedures require the selection of a reference state, within which the Burgers vectors of individual interface dislocations are defined. Because this choice does not affect the calculated spacing and line directions of interface dislocations, it has sometimes been viewed as if it were arbitrary. In practice, one of the adjacent crystals [111, 290, 148] or a "median lattice" [91] have often been used as the reference state. However, the choice of reference state does influence the values of far-field stresses, strains, and rotations associated with interface dislocations. These, in turn, are usually subject to constraints, namely that the far-field stresses be zero and that the far-field rotations be consistent with a prescribed misorientation. Thus, the choice of reference state is in fact not arbitrary. As discussed by Hirth and co-workers [123, 124, 125], the importance of selecting proper reference states has often been overlooked in part because the best-known applications of interface dislocation models are to interfaces of relatively high symmetry, such as symmetric tilt or twist GBs, for which correct reference states are easy to guess. 
Furthermore, many analyses assume uniform isotropic elasticity, which leads to equal partitioning of interface dislocation elastic fields between the neighboring crystals. In general, however, interfaces need not have high symmetry and the neighboring crystals may have unlike, anisotropic elastic constants. When heterogeneous and anisotropic elasticity theory is used, the correct selection of reference states in such general cases is far more challenging. Elasticity theory for analyzing semicoherent interfaces and determining the field solutions produced by interface dislocations was initiated by van der Merwe [245]. The concept of misfit dislocations, which act as stress annihilators that remove the total stress fields far from the interfaces, was introduced using the Peierls-Nabarro model to formulate a misfit dislocation theory for critical thicknesses of strained layer systems during epitaxial growth of structures with two isotropic semi-infinite solids [246, 247]. The problem of single straight screw and edge dislocations and of dislocation arrays situated at the interface between two anisotropic elastic half-spaces has received special attention in the literature [284, 249, 25, 21, 35, 285, 274, 149], for which the dislocation-based calculations, and also the associated mechanisms, may be significantly altered when the isotropic elastic approximation is adopted. By means of the Stroh sextic formalism [237, 238] with a Fourier series-based technique, the geometry of interface dislocation patterns as well as the corresponding Burgers vectors have been determined using anisotropic elasticity theory in bicrystals with two sets of dislocations [249, 250, 253]. This computational method for structural and energetic properties of individual heterophase interfaces has been extended by taking into account the presence of free surfaces in bi- and tri-layered materials [254, 255] and the local reactions between planar and crossing dislocation arrays to form new dislocation arrangements [258, 259]. Application examples have revealed the significant influence of elastic anisotropy on the interactions between the semicoherent interfaces and radiation-induced point defects [256] as well as extrinsic dislocation loops [261].

### Determining the Burgers vectors of interface dislocation arrays

The notion of introducing Volterra dislocations into a reference state is defined consistently with the Frank-Bilby equation for constrained interfaces that are free of far-field stresses.

#### Planar interfaces in linear elastic bicrystals

In the present analysis, planar interfaces are considered to be formed by joining two semi-infinite linear elastic crystals, for which the crystallography of the interfaces has been specified completely. For a GB, this requires five parameters: three to describe the relative misorientation between neighboring crystals and two to describe the orientation of the GB plane [240]. For a heterophase interface, the number of crystallographic DoFs may be higher. For example, an interface between two fcc crystals such as Al and Ni would require the lattice parameters of the two neighboring metals to be given in addition to the five parameters needed for a GB. Interfaces between materials with differing crystal structures may require further parameters. To describe completely the crystallography of a heterophase interface between elements A and B, the notion of a "reference" state for the interface is adopted: in the reference state, the interface is coherent, i.e. 
the two separate crystals that meet at the interface are rotated and strained [135, 240] such that they are in perfect registry with each other across the interface plane after bonding. Thus, the reference state has the interface structure of a single perfect crystal. Starting from the reference state, materials A and B are mapped separately into new configurations that yield an interface with the required crystallographic character and zero far-field stresses, as shown in Fig. (3.1). Following Hirth, Pond, and co-workers [125], the state of the interface after this mapping is referred to as the "natural" state. For a GB, the maps applied to materials A and B are proper rotations, while for a pure misfit interface they are pure strains. To account for both cases as well as for heterophase interfaces between misoriented crystals, the maps are described as uniform displacement gradients \({}_{\mathrm{A}}\mathbf{F}\) and \({}_{\mathrm{B}}\mathbf{F}\). In the reference state, the neighboring crystals might not be stress free, but the interface is coherent. In the natural state, the interface is not coherent, but the neighboring crystals are both free of far-field stresses. This framework is sufficiently general to describe the crystallography of many commonly studied heterophase interfaces, e.g. ones formed by fcc and bcc metals [70, 73], but not all. For example, mapping from a common reference state to an interface between a cubic and hcp crystal cannot directly be accomplished by a displacement gradient alone and requires an internal shuffle rearrangement, as mentioned in section 2.2.5. The present chapter 3 is therefore focused on materials that may be mapped to a common reference state using displacement gradients alone. The crystallographic considerations described above do not require a single, unique reference state. On the contrary, an infinite number of new reference states may be generated from an original one by applying to it any uniform displacement gradient \(\mathbf{F}\). If the original reference state may be mapped to the natural state with \({}_{\mathrm{A}}\mathbf{F}\) and \({}_{\mathrm{B}}\mathbf{F}\), then the new reference state may be mapped to the same natural state using \({}_{\mathrm{A}}\mathbf{F}\,\mathbf{F}^{-1}\) and \({}_{\mathrm{B}}\mathbf{F}\,\mathbf{F}^{-1}\). However, a consistent description of the elastic fields of a discrete dislocation network in an interface of specified crystallography and free of far-field stresses does require a single specific reference state.

Figure 3.1: Mapping from a coherent reference state to the natural state using displacement gradients \({}_{\mathrm{A}}\mathbf{F}\) and \({}_{\mathrm{B}}\mathbf{F}\). Volterra dislocations introduced into the reference state remove coherency stresses and may change the misorientation of the neighboring crystals.

#### 3.2.2 Volterra dislocations in the reference state

The atomic structures of real interfaces are not like those generated by the linear mappings from a reference state. Instead, for any given interface crystallography, the atomic structure may undergo a variety of local relaxations or reconstructions that lower its energy. In many low-misorientation GBs and low-misfit heterophase interfaces, these changes lead to the formation of regions of coherency (which generally have low energies) separated by networks of intrinsic dislocations. Many such interface dislocation networks have been imaged using transmission electron microscopy [6]. There are two common ways of describing interface dislocations. 
In one, they are viewed not as conventional Volterra dislocations, but rather as special kinds of interface defects with short-range elastic fields that are formed when the interface atomic structure in the natural state relaxes [120, 34]. The superimposed elastic fields of all such defects residing within an interface decay away to zero at long range and therefore do not alter the far-field stress state or the crystallography of the natural interface state. Another description, the one adopted here, views interface dislocations as genuine Volterra dislocations with resultant elastic stress fields that need not decay to zero at long range. For example, the structure of some pure misfit heterophase interfaces may be described as an array of equally spaced edge dislocations residing on the same glide plane [185]. Such an array of Volterra dislocations has a non-zero far-field stress [122]. Certain symmetric tilt GBs may be described as arrays of edge dislocations lying directly one above the other on separate glide planes. These Volterra dislocation arrays have zero far-field strains (hence, also zero stresses [122]), but possess non-zero rotations at long range [215, 167]. In general, arrays of Volterra dislocations may have non-zero far-field strains, rotations, or both. In the work described here, interface dislocations are viewed as Volterra dislocations that have been introduced into the reference state, as shown in Fig. (3.1). Therefore, the far-field stresses due to these dislocations \({}_{\mathrm{A}}\boldsymbol{\sigma}^{\infty}_{\text{dis}}\) and \({}_{\mathrm{B}}\boldsymbol{\sigma}^{\infty}_{\text{dis}}\) are equal and opposite to the coherency stresses \({}_{\mathrm{A}}\boldsymbol{\sigma}_{\text{c}}\) and \({}_{\mathrm{B}}\boldsymbol{\sigma}_{\text{c}}\) in the reference state, respectively, leading to the removal of all far-field stresses in the natural state: \[{}_{\mathrm{A}}\boldsymbol{\sigma}_{\text{c}}+{}_{\mathrm{A}}\boldsymbol{\sigma}^{\infty}_{\text{dis}}=\mathbf{0}\,,\quad\text{and}\quad{}_{\mathrm{B}}\boldsymbol{\sigma}_{\text{c}}+{}_{\mathrm{B}}\boldsymbol{\sigma}^{\infty}_{\text{dis}}=\mathbf{0}\,. \tag{3.1}\] Although free of long-range stresses, interface dislocation networks in the natural state have non-zero short-range elastic fields as a result of the superposition of the non-uniform stress fields of the Volterra dislocation networks and the uniform coherency stresses in the reference state. Additionally, the far-field rotations due to the Volterra dislocations are required to conform to the given interface crystallographic character. These requirements restrict the choice of reference states to a single specific one. The notion of introducing Volterra dislocations into the reference state is primarily treated as a hypothetical operation. However, this operation may be a physically meaningful analog of processes occurring at some real interfaces. For example, the transformation of certain coherent heterophase interfaces into ones that are not coherent, but free of far-field stresses, occurs by the deposition on the interface of Volterra dislocations that glide through the neighboring crystalline layers [185, 186]. Similarly, subgrain boundaries are thought to assemble from glide dislocations formed during plastic deformation of polycrystals [8]. 
#### 3.2.3 Crystallographic constraints on interface dislocations

A variety of shapes of interface dislocation networks have been observed [6], such as the ones that may be represented by \(j\leq 2\) arrays of parallel dislocations with Burgers vectors \(\boldsymbol{b}_{j}\), line directions \(\boldsymbol{\xi}_{j}\), and inter-dislocation spacings \(d_{j}\). Following previous investigators [94, 29, 240], these quantities are related to the density of admissible Volterra dislocations in the reference state and interface crystallography as \[\mathbf{B}=\sum_{i=1}^{j}\left(\frac{\boldsymbol{n}\times\boldsymbol{\xi}_{i}}{d_{i}}\cdot\boldsymbol{p}\right)\boldsymbol{b}_{i}=\left({}_{\mathrm{A}}\mathbf{F}^{-1}-{}_{\mathrm{B}}\mathbf{F}^{-1}\right)\boldsymbol{p}=\mathbf{T}\,\boldsymbol{p}\,, \tag{3.2}\] where \(\boldsymbol{n}\) is a unit vector normal to the interface and the so-called probe vector \(\boldsymbol{p}\) is any vector contained within the interface plane. Equation (3.2) is known as the quantized Frank-Bilby equation [290, 240], where \(\mathbf{T}\) corresponds to an average operation that maps \(\boldsymbol{p}\) to the resultant Burgers vector \(\mathbf{B}\) of interface dislocations intersected by \(\boldsymbol{p}\). The individual Burgers vectors \(\boldsymbol{b}_{i}\) of interface dislocations are assumed to be related to the crystal structure of the reference state. For example, if the reference state is an fcc crystal of lattice parameter \(a\), values of \(\boldsymbol{b}_{i}\) may be drawn from a set of \(\frac{a}{2}\langle 110\rangle\)-type glide or \(\frac{a}{6}\langle 112\rangle\)-type Shockley partial dislocation Burgers vectors. Once the set of admissible Burgers vectors is known, well-studied methods stemming from Bollmann's O-lattice theory [31] may be used to compute \(\boldsymbol{n}\), \(\boldsymbol{\xi}_{i}\), and \(d_{i}\) [148, 290] from the O-lattice vectors \(\boldsymbol{p}_{i}^{\text{o}}\), defined by \[\boldsymbol{b}_{i}=\mathbf{T}\,\boldsymbol{p}_{i}^{\text{o}}\,. \tag{3.3}\] The O-lattice vectors \(\boldsymbol{p}_{i}^{\text{o}}\), and therefore both \(\boldsymbol{\xi}_{i}\) and \(d_{i}\), do not depend on the choice of reference state. If an original reference state is mapped to a new one using the displacement gradient \(\mathbf{F}\), then \(\boldsymbol{b}_{i}\) is mapped to \(\check{\boldsymbol{b}}_{i}=\mathbf{F}\,\boldsymbol{b}_{i}\). Here and in the following, the superimposed inverted caret will be used to indicate trial values of variables. The new reference state may also be mapped to the natural state using \({}_{\mathrm{A}}\check{\mathbf{F}}={}_{\mathrm{A}}\mathbf{F}\,\mathbf{F}^{-1}\) and \({}_{\mathrm{B}}\check{\mathbf{F}}={}_{\mathrm{B}}\mathbf{F}\,\mathbf{F}^{-1}\), as discussed in section 3.2.1. Assuming that rank \(\mathbf{T}=3\), the O-lattice vectors computed from the original and new reference states are identical: \[\boldsymbol{p}_{i}^{\text{o}}=\mathbf{T}^{-1}\boldsymbol{b}_{i}=\left({}_{\mathrm{A}}\check{\mathbf{F}}^{-1}-{}_{\mathrm{B}}\check{\mathbf{F}}^{-1}\right)^{-1}\check{\boldsymbol{b}}_{i}=\check{\boldsymbol{p}}_{i}^{\text{o}}\,. \tag{3.4}\] This conclusion may also be shown for matrix \(\mathbf{T}\) of rank 2 or 1. Thus, for a given set of Burgers vectors \(\boldsymbol{b}_{i}\), interface crystallography uniquely determines the interface dislocation line directions \(\boldsymbol{\xi}_{i}\) and spacings \(d_{i}\), but not the reference state. Based on this result, some authors have argued that the choice of reference state is truly arbitrary [31]. 
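The reference-state independence expressed by eq. (3.4) can be checked numerically. The following minimal Python sketch uses assumed, purely illustrative mappings \({}_{\mathrm{A}}\mathbf{F}\), \({}_{\mathrm{B}}\mathbf{F}\) and Burgers vector \(\boldsymbol{b}\) (not taken from any interface treated later in this chapter): it builds \(\mathbf{T}\), solves eq. (3.3) for the O-lattice vector, repeats the calculation in a new reference state obtained by an arbitrary displacement gradient \(\mathbf{F}\), and verifies that the O-lattice vector is unchanged.

```python
import numpy as np

# Assumed maps from a coherent reference state to the natural state of
# crystals A and B: a small dilation and a small rotation about z,
# chosen only so that T has full rank (illustrative values).
theta = np.deg2rad(2.0)
FA = 1.01 * np.eye(3)
FB = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])

b = np.array([0.3615, 0.0, 0.0])                 # assumed Burgers vector (nm)

T = np.linalg.inv(FA) - np.linalg.inv(FB)        # Frank-Bilby operator, eq. (3.2)
p0 = np.linalg.solve(T, b)                       # O-lattice vector, eq. (3.3)

# Change of reference state by an arbitrary displacement gradient F:
# the maps become FA F^-1 and FB F^-1, and b is mapped to F b.
F = np.eye(3) + 0.02 * np.random.default_rng(0).standard_normal((3, 3))
FA_new, FB_new, b_new = FA @ np.linalg.inv(F), FB @ np.linalg.inv(F), F @ b
T_new = np.linalg.inv(FA_new) - np.linalg.inv(FB_new)
p0_new = np.linalg.solve(T_new, b_new)

print(np.allclose(p0, p0_new))   # True: O-lattice vectors are reference-independent, eq. (3.4)
```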
However, in different reference states, \(\mathbf{b}_{i}\) will clearly have different magnitudes and directions, both of which influence the magnitudes of the elastic fields generated by interface dislocations (the latter by altering their characters). #### 3.2.4 Solution strategy Determining the elastic fields of semicoherent interfaces requires finding the correct interface dislocation Burgers vectors, which are defined in the coherent reference state. The following five-step strategy is applied to determine the specific reference state that meets the constraints of interface crystallographic character and zero far-field stresses. **Step 1: Solving for geometry of dislocation networks** As shown in section 3.2.3, the geometry of interface dislocations (their line directions and spacings) is independent of the choice of reference state. Thus, a reference state is chosen identical to one of the crystals adjacent to the interface in its natural state. This choice provides an initial guess of the interface dislocation Burgers vectors. Then, the interface dislocation geometry is determined by using standard methods [32, 148, 111]. Multiple dislocation geometries are possible in some interfaces, but attention is restricted in this section to interfaces with unique geometries. **Step 2: Solving for interface dislocation elastic fields** The complete elastic fields, produced by the arrays of dislocations found in step 1, are determined using anisotropic linear elasticity theory in bicrystals. The elastic fields are assumed to follow the periodicity of the two-dimensional dislocation structures predicted in step 1 and must also satisfy specific boundary conditions at the interfaces. **Step 3: Solving for far-field distortions** The far-field distortions associated with each set of parallel dislocations are computed separately and then superimposed to obtain the resultant far-field distortions of the interface dislocation network as a whole. These elastic distortions are key for determining the correct reference state for the interfaces of interest. Far-field strains, stresses, and rotations may also be deduced. **Step 4: Solving for the reference state** The correct reference state is the one in which the superposition of the strains produced by interface dislocation arrays eliminate the coherency strains, giving a bicrystal that is free of far-field stresses and has far-field rotations that agree with the given interface crystallographic character. This condition is met by continuously adjusting the reference state along a specified transformation pathway, starting with the initial guess selected in step 1. **Step 5: Solving for the interface elastic strain energy** Incomplete cancellation of the coherency and Volterra fields near the interface gives rise to short-range stresses and strains. These stresses and strains are used to compute the elastic energies of semicoherent interfaces. #### 3.2.5 Elastic fields of interface dislocation arrays This section is focused on interfaces containing up to two arrays of infinitely long straight, and uniformly spaced parallel dislocations at equilibrium, as illustrated in Fig. (3.2a). The Stroh formalism of anisotropic linear elasticity [237, 238, 58] and a Fourier series-based solution technique are used to compute the elastic fields outside the cores of interface dislocations [22, 65, 35]. For clarity in this section, the pre-subscripts A and B in the field expressions will be omitted if no distinction between materials is required. 
#### Problem formulation

The geometry of a dislocation network consisting of two arrays of straight parallel dislocations may be described by two O-lattice vectors \(\boldsymbol{p}_{1}^{0}\neq\boldsymbol{p}_{2}^{0}\) in the interface of interest using a Cartesian coordinate system with basis vectors \(\left(\boldsymbol{x}_{1},\,\boldsymbol{x}_{2},\,\boldsymbol{x}_{3}\right)\), as shown in Fig. (3.2b). An interface containing only one array of straight parallel dislocations is a special case of this more general geometrical description. The unit vector normal to the interface is \(\boldsymbol{n}\parallel\boldsymbol{x}_{2}\), with the interface located at \(x_{2}=0\): \(x_{2}>0\) for material A, and \(x_{2}<0\) for material B. The dislocation line direction \(\boldsymbol{\xi}_{1}\) is parallel to \(\boldsymbol{p}_{2}^{0}\) and \(\boldsymbol{\xi}_{2}\parallel\boldsymbol{p}_{1}^{0}\), as illustrated in previous studies [111, 240, 290]. A representative interface unit cell of the dislocation pattern is illustrated in Fig. (3.2b). Translations of the unit cell by the basis vectors \(\boldsymbol{p}_{1}^{0}\) and \(\boldsymbol{p}_{2}^{0}\) tessellate the interface plane. It is also convenient to identify a non-orthogonal (oblique) frame with basis vectors \(\left(x_{1}^{\prime},\,x_{2},\,x_{3}^{\prime}\right)\), where \(x_{1}^{\prime}\parallel\boldsymbol{p}_{1}^{0}\parallel\boldsymbol{\xi}_{2}\) and \(x_{3}^{\prime}\parallel\boldsymbol{x}_{3}\parallel\boldsymbol{p}_{2}^{0}\parallel\boldsymbol{\xi}_{1}\). The oriented angle between \(\boldsymbol{\xi}_{2}\) and \(\boldsymbol{\xi}_{1}\) is denoted by \(\phi\), so that \(x_{1}^{\prime}=x_{1}\csc\phi\) and \(x_{3}^{\prime}=x_{3}-x_{1}\cot\phi\). Thus, any position vector in this non-orthogonal frame may be expressed as \(\boldsymbol{r}=x_{1}^{\prime}\,\boldsymbol{p}_{1}^{0}+x_{3}^{\prime}\,\boldsymbol{p}_{2}^{0}\). Due to the periodicity of the interface dislocation structure, it is useful to seek a complete set of wavevectors \(\boldsymbol{k}\) such that the elastic fields in the interface may be analyzed using plane waves \(\mathrm{e}^{i2\pi\boldsymbol{k}\cdot\boldsymbol{r}}\). The set of all \(\boldsymbol{k}\) is conveniently written as \(\boldsymbol{k}=n\,\boldsymbol{p}_{1}^{\times}+m\,\boldsymbol{p}_{2}^{\times}\) with respect to the reciprocal vectors \(\boldsymbol{p}_{1}^{\times}\) and \(\boldsymbol{p}_{2}^{\times}\), defined by the orthogonality conditions \(\boldsymbol{p}_{\alpha}^{\times}\cdot\boldsymbol{p}_{\beta}^{0}=\delta_{\alpha\beta}\), where \(n\) and \(m\) are integers. The complete elastic distortion field \(\mathbf{D}\) is the superposition of the uniform coherency and the Volterra dislocation distortions, \(\mathbf{D}_{\mathrm{c}}\) and \(\mathbf{D}_{\mathrm{dis}}\), as discussed in section 3.2.2. Following the seminal work of Bonnet [35, 36, 37], outside of dislocation cores, \(\mathbf{D}\) may be expressed as the biperiodic Fourier series, i.e. \[\mathbf{D}\left(\boldsymbol{x}\right)=\mathbf{D}_{\mathrm{c}}+\mathbf{D}_{\mathrm{dis}}\left(\boldsymbol{x}\right)=\mathbf{D}_{\mathrm{c}}+\mathrm{Re}\sum_{\boldsymbol{k}\neq\boldsymbol{0}}\mathrm{e}^{i2\pi\boldsymbol{k}\cdot\boldsymbol{r}}\,\mathbf{D}^{\boldsymbol{k}}\left(x_{2}\right)\,, \tag{3.5}\] with \(i=\sqrt{-1}\), while \(\mathrm{Re}\) stands for the real part of a complex quantity and the sum spans all non-zero wavevectors \(\boldsymbol{k}\). The Fourier amplitudes of the complete distortion waves \(\mathbf{D}^{\boldsymbol{k}}\left(x_{2}\right)\) are required to converge (not necessarily to zero) in the far-field, i.e. \(x_{2}\to\pm\infty\). 
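A short numerical sketch of this construction is given below, assuming purely illustrative O-lattice vectors (the values are not taken from any of the interfaces studied in this chapter): it computes the in-plane reciprocal vectors from the orthogonality conditions \(\boldsymbol{p}_{\alpha}^{\times}\cdot\boldsymbol{p}_{\beta}^{0}=\delta_{\alpha\beta}\) and enumerates a few wavevectors \(\boldsymbol{k}=n\,\boldsymbol{p}_{1}^{\times}+m\,\boldsymbol{p}_{2}^{\times}\).

```python
import numpy as np

# Interface normal along x2 and two assumed (non-orthogonal) O-lattice
# vectors lying in the interface plane, in nm; illustrative values only.
n_hat = np.array([0.0, 1.0, 0.0])
p1 = np.array([7.32, 0.0, 0.0])
p2 = np.array([3.66, 0.0, 7.32])

# In-plane reciprocal vectors satisfying p_alpha^x . p_beta^0 = delta_ab.
p1x = np.cross(p2, n_hat) / np.dot(p1, np.cross(p2, n_hat))
p2x = np.cross(n_hat, p1) / np.dot(p2, np.cross(n_hat, p1))

# Orthogonality check: should print the 2x2 identity matrix.
print(np.round([[p1x @ p1, p1x @ p2], [p2x @ p1, p2x @ p2]], 12))

# A few wavevectors k = n*p1x + m*p2x with (n, m) integers and k != 0.
waves = [(n, m, n * p1x + m * p2x)
         for n in range(-2, 3) for m in range(-2, 3) if (n, m) != (0, 0)]
n0, m0, k0 = waves[0]
print(len(waves), "wavevectors; e.g. (n, m) =", (n0, m0), "gives k =", k0)
```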
The components \(k_{1}\) and \(k_{3}\) of the wavevector \(\boldsymbol{k}\) satisfy \[\boldsymbol{k}\cdot\boldsymbol{r}=k_{1}\,x_{1}+k_{3}\,x_{3}=\left(\frac{n\csc\phi}{|\boldsymbol{p}_{1}^{0}|}-\frac{m\cot\phi}{|\boldsymbol{p}_{2}^{0}|}\right)x_{1}+\frac{m}{|\boldsymbol{p}_{2}^{0}|}\,x_{3}\,. \tag{3.6}\] The complete displacement field \(\boldsymbol{u}\) may be found by integrating eq. (3.5) as \[\boldsymbol{u}\left(\boldsymbol{x}\right)=\underbrace{\boldsymbol{u}_{0}+\mathbf{D}_{\mathrm{c}}\,\boldsymbol{x}}_{\text{affine part}}+\mathrm{Re}\sum_{\boldsymbol{k}\neq\boldsymbol{0}}\mathrm{e}^{i2\pi\boldsymbol{k}\cdot\boldsymbol{r}}\,\boldsymbol{u}^{\boldsymbol{k}}\left(x_{2}\right)=\boldsymbol{u}_{\text{aff}}\left(\boldsymbol{x}\right)+\boldsymbol{u}_{\text{dis}}\left(\boldsymbol{x}\right)\,, \tag{3.7}\] where \(\boldsymbol{u}_{0}\) is an arbitrary constant displacement. The complete displacement field \(\boldsymbol{u}\) may be decomposed into an affine part \(\boldsymbol{u}_{\text{aff}}\) corresponding to \(\mathbf{D}_{\mathrm{c}}\) and a biperiodic Fourier series representation of the displacement fields \(\boldsymbol{u}_{\text{dis}}\) generated by the Volterra dislocations.

Figure 3.2: (a) Schematic illustration of a planar interface dislocation network formed by bonding materials A and B. (b) The geometry of an interface containing two sets of dislocations described by O-lattice vectors \(\boldsymbol{p}_{1}^{0}\) and \(\boldsymbol{p}_{2}^{0}\). Open circles represent O-lattice points and filled circles illustrate atoms with nearly matching positions in materials A and B.

The Fourier amplitudes in eqs. (3.5) and (3.7) are determined from linear elasticity in the absence of body forces and subject to boundary conditions associated with interface dislocations. The complete displacement gradients \(\mathbf{D}\left(\boldsymbol{x}\right)=\text{grad }\boldsymbol{u}\left(\boldsymbol{x}\right)\) in crystals A and B must fulfill the partial differential equations of mechanical equilibrium \[\text{div}\left(\mathbf{C}:\text{grad }\boldsymbol{u}\left(\boldsymbol{x}\right)\right)=\mathbf{0}\,, \tag{3.8}\] where \(:\) denotes the double inner product and \(\mathbf{C}\) is a fourth-order anisotropic elasticity tensor.

#### Complete field solutions

Substituting the displacement field eq. (3.7) into eq. (3.8), the second-order differential equation applied to both half-spaces is obtained as follows \[w_{1}\mathbf{W}_{1}\boldsymbol{u}^{\boldsymbol{k}}\left(x_{2}\right)+w_{2}\left(\mathbf{W}_{2}+\mathbf{W}_{2}^{\text{t}}\right)\frac{\partial\boldsymbol{u}^{\boldsymbol{k}}\left(x_{2}\right)}{\partial x_{2}}+\mathbf{W}_{3}\frac{\partial^{2}\boldsymbol{u}^{\boldsymbol{k}}\left(x_{2}\right)}{\partial x_{2}^{2}}=\mathbf{0}\,, \tag{3.9}\] with \(w_{1}=-4\pi^{2}\) and \(w_{2}=i2\pi\). Here, \({}^{\text{t}}\) denotes the matrix transpose and \(\mathbf{W}_{1}\), \(\mathbf{W}_{2}\), and \(\mathbf{W}_{3}\) are \(3\times 3\) real matrices related to the wavevectors (i.e. interface geometry) and the stiffness constants (i.e. 
elasticity) indexed in Voigt notation. In particular, \(\mathbf{W}_{1}\) is symmetric and reads \[\mathbf{W}_{1}=\mathbf{W}_{1}^{\text{t}}=\begin{bmatrix}k_{1}^{2}c_{11}+2k_{1}k_{3}c_{15}+k_{3}^{2}c_{55}&k_{1}^{2}c_{16}+k_{1}k_{3}\left(c_{14}+c_{56}\right)+k_{3}^{2}c_{45}&k_{1}^{2}c_{15}+k_{1}k_{3}\left(c_{13}+c_{55}\right)+k_{3}^{2}c_{35}\\ &k_{1}^{2}c_{66}+2k_{1}k_{3}c_{46}+k_{3}^{2}c_{44}&k_{1}^{2}c_{56}+k_{1}k_{3}\left(c_{36}+c_{45}\right)+k_{3}^{2}c_{34}\\ \text{sym}&&k_{1}^{2}c_{55}+2k_{1}k_{3}c_{35}+k_{3}^{2}c_{33}\end{bmatrix}\,. \tag{3.10}\] As demonstrated in Appendix A from Ref. [249], the complete displacement field (3.7) is written as follows \[\boldsymbol{u}\left(\boldsymbol{x}\right)=\boldsymbol{u}_{0}+\mathbf{D}_{\text{c}}\,\boldsymbol{x}+\mathrm{Re}\,\frac{1}{i2\pi}\sum_{\boldsymbol{k}\neq\boldsymbol{0}}\mathrm{e}^{i2\pi\boldsymbol{k}\cdot\boldsymbol{r}}\sum_{\alpha=1}^{3}\lambda^{\alpha}\,\mathrm{e}^{i2\pi p^{\alpha}x_{2}}\,\boldsymbol{a}^{\alpha}+\zeta^{\alpha}\,\mathrm{e}^{i2\pi p_{*}^{\alpha}x_{2}}\,\boldsymbol{a}_{*}^{\alpha}\,, \tag{3.11}\] where the eigenvalues \(p^{\alpha}\) and eigenvectors \(\boldsymbol{a}^{\alpha}\) are calculated by solving the sextic algebraic equation of the Stroh formalism [237, 238] for each material A and B. The asterisk indicates complex conjugates of the solutions with positive imaginary parts, i.e. \(p^{\alpha+3}=p_{*}^{\alpha}\) and \(\boldsymbol{a}^{\alpha+3}=\boldsymbol{a}_{*}^{\alpha}\), indexed by \(\alpha=1,\,2,\,3\). The complete elastic strains and stresses are also deduced from eq. (3.11) by \[\begin{split}\mathbf{E}\left(\boldsymbol{x}\right)&=\left\{\mathbf{D}\left(\boldsymbol{x}\right)\right\}=\tfrac{1}{2}\left(\text{grad }\boldsymbol{u}\left(\boldsymbol{x}\right)+\text{grad }\boldsymbol{u}^{\text{t}}\left(\boldsymbol{x}\right)\right)\\ \boldsymbol{\sigma}\left(\boldsymbol{x}\right)&=\mathbf{C}:\mathbf{E}\left(\boldsymbol{x}\right)\,,\end{split} \tag{3.12}\] respectively. Equation (3.12a) gives the strain-displacement relationship, where \(\left\{\mathbf{D}\left(\boldsymbol{x}\right)\right\}\) denotes the symmetric component of the distortion field, while eq. (3.12b) is Hooke's law for small strains that determines the stress field. The general solutions of the elastic fields in eqs. (3.11\(-\)3.12) are expressed as linear combinations of the eigenfunctions given by eq. (3.76), and include \(\lambda^{\alpha}\) and \(\zeta^{\alpha}\) as complex unknown quantities that are to be determined by the boundary conditions, as follows.

#### Boundary condition 1: Convergence of far-field solutions

In accordance with Saint Venant's principle, the convergence of the Fourier amplitudes \(\boldsymbol{u}^{\boldsymbol{k}}\left(x_{2}\right)\) when \(x_{2}\to\pm\infty\) leads to the requirement that \({}_{\mathrm{A}}\zeta^{\alpha}=0\) and \({}_{\mathrm{B}}\lambda^{\alpha}=0\). This condition applies to infinite bicrystals and would not be appropriate for bicrystals terminated with free surfaces.

#### Boundary condition 2: Absence of far-field strains

The elimination of the coherency strains \(\mathbf{E}_{\text{c}}\) by the far-field strains of the interface Volterra dislocations \(\mathbf{E}_{\text{dis}}^{\infty}\) is taken into account by requiring the total elastic strain field \(\mathbf{E}\) to decay to zero when \(x_{2}\to\pm\infty\), i.e. 
\[\lim_{x_{2}\to\pm\infty}\mathbf{E}\left(\boldsymbol{x}\right)=\mathbf{E}^{\infty}=\mathbf{E}_{\text{c}}+\mathbf{E}_{\text{dis}}^{\infty}=\mathbf{0}\,, \tag{3.13}\] where \(\mathbf{E}_{\text{c}}=\left\{\mathbf{D}_{\text{c}}\right\}\) and \(\mathbf{E}_{\text{dis}}^{\infty}=\left\{\mathbf{D}_{\text{dis}}^{\infty}\right\}\) are the coherency strain and the far-field strain produced by the interface dislocations, respectively. Equation (3.13) is equivalent to eqs. (3.1) expressed using strains rather than stresses. As detailed in Appendix B from Ref. [249], the far-field distortions, calculated individually for each set of dislocations, \(i=1\) and \(2\), and then superposed, are given as follows \[\mathbf{D}_{\text{dis}}^{\infty}=-\text{sgn}\left(x_{2}\right)\,\text{Re}\sum_{i=1}^{2}d_{i}^{-1}\sum_{\alpha=1}^{3}\bar{\lambda}_{i}^{\alpha}\,\mathbf{G}_{i}^{\alpha}+\bar{\zeta}_{i}^{\alpha}\,\mathbf{G}_{i\,*}^{\alpha}\,. \tag{3.14}\] Here, \({}_{\mathrm{A}}\bar{\zeta}_{i}^{\alpha}=0\) and \({}_{\mathrm{B}}\bar{\lambda}_{i}^{\alpha}=0\) for the reasons described in boundary condition 1. Superimposed bars are used to indicate quantities related to the far-field boundary conditions, while the complex constants \({}_{\mathrm{A}}\bar{\lambda}_{i}^{\alpha}\) and \({}_{\mathrm{B}}\bar{\zeta}_{i}^{\alpha}\) are determined by solving a specific system of equations, as described in Ref. [249].

#### Boundary condition 3: Disregistry due to interface Volterra dislocations

Disregistry is the discontinuity of displacements across an interface [122], expressed in terms of the relative displacements between neighboring atomic planes. Each dislocation produces a stepwise change in disregistry at its core, with a magnitude equal to its Burgers vector. The disregistry at \(x_{2}=0\) of a network of two sets of dislocations may be represented by the staircase functions \[\Delta\,\boldsymbol{u}\left(x_{1},\,x_{3}\right)={}_{\mathrm{A}}\boldsymbol{u}\left(x_{1},\,x_{3}\right)-{}_{\mathrm{B}}\boldsymbol{u}\left(x_{1},\,x_{3}\right)=-\boldsymbol{b}_{1}\left\lfloor\frac{\csc\phi\;x_{1}}{|\boldsymbol{p}_{1}^{0}|}\right\rfloor-\boldsymbol{b}_{2}\left\lfloor\frac{x_{3}-\cot\phi\;x_{1}}{|\boldsymbol{p}_{2}^{0}|}\right\rfloor\,, \tag{3.15}\] as illustrated in Fig. (3.3), where only one set has been displayed for clarity. The complete displacement discontinuity at the interface can therefore be expressed as \[\Delta\,\boldsymbol{u}\left(x_{1},\,x_{3}\right)=\Delta\,\boldsymbol{u}_{\text{aff}}\left(x_{1},\,x_{3}\right)+\Delta\,\boldsymbol{u}_{\text{dis}}\left(x_{1},\,x_{3}\right)\,. \tag{3.16}\] The first term on the right-hand side of eq. (3.16) is the relative displacement field \(\Delta\,\boldsymbol{u}_{\text{aff}}\) at the interface generated by the uniform macroscopic distortions \({}_{\mathrm{A}}\mathbf{D}_{\text{c}}\) and \({}_{\mathrm{B}}\mathbf{D}_{\text{c}}\), written in the affine form \[\Delta\,\boldsymbol{u}_{\text{aff}}\left(x_{1},\,x_{3}\right)=\Delta\,\boldsymbol{u}_{0}+\left[\left({}_{\mathrm{A}}\mathbf{D}_{\text{c}}-{}_{\mathrm{B}}\mathbf{D}_{\text{c}}\right)\boldsymbol{x}\right]_{x_{2}=0}\,, \tag{3.17}\] where \(\Delta\,\boldsymbol{u}_{0}=-\frac{1}{2}\left(\boldsymbol{b}_{1}+\boldsymbol{b}_{2}\right)\) is chosen, without loss of generality. As shown in Fig. (3.3), eq. (3.17) may be interpreted as a continuous distribution of (fictitious) Volterra dislocations with infinitesimal Burgers vectors and spacings [29, 197]. The second term on the right-hand side of eq. 
(3.16) is the displacement discontinuity \(\Delta\,\boldsymbol{u}_{\text{dis}}\) produced by the equilibrium interface dislocations in the natural state, shown as \(\Delta\) in Fig. (3.1). According to eqs. (3.7) and (3.11), the quantity \(\Delta\,\boldsymbol{u}_{\text{dis}}\) is given in Ref. [249] by \[\Delta\,\boldsymbol{u}_{\text{dis}}\left(x_{1},\,x_{3}\right)=\mathrm{Re}\,\frac{1}{i2\pi}\sum_{\boldsymbol{k}\neq\boldsymbol{0}}\mathrm{e}^{i2\pi\boldsymbol{k}\cdot\boldsymbol{r}}\sum_{\alpha=1}^{3}{}_{\mathrm{A}}\lambda^{\alpha}\,{}_{\mathrm{A}}\boldsymbol{a}^{\alpha}-{}_{\mathrm{B}}\zeta^{\alpha}\,{}_{\mathrm{B}}\boldsymbol{a}_{*}^{\alpha}\,, \tag{3.18}\] which may be represented by sawtooth functions [86, 35, 89], as illustrated in Fig. (3.3). Using the Fourier sine series analysis and superposing the sawtooth-shaped functions associated with the two sets of dislocations, eq. (3.18) can be expressed as \[\Delta\,\boldsymbol{u}_{\text{dis}}\left(x_{1},\,x_{3}\right)=\underbrace{\sum_{n=1}^{\infty}-\frac{\boldsymbol{b}_{1}}{n\pi}\sin 2\pi n\frac{\csc\phi\;x_{1}}{|\boldsymbol{p}_{1}^{0}|}}_{\text{set}\,1}+\underbrace{\sum_{m=1}^{\infty}-\frac{\boldsymbol{b}_{2}}{m\pi}\sin 2\pi m\frac{x_{3}-\cot\phi\;x_{1}}{|\boldsymbol{p}_{2}^{0}|}}_{\text{set}\,2}\,. \tag{3.19}\] Thus, the boundary condition in eq. (3.19) for equilibrium interface dislocations, combined with eq. (3.18), leads to a set of 6 linear equations: \[\Sigma_{1}:\;\;\begin{cases}\operatorname{Re}\sum_{\alpha=1}^{3}{}_{\mathrm{A}}\lambda^{\alpha}\,{}_{\mathrm{A}}\boldsymbol{a}^{\alpha}-{}_{\mathrm{B}}\zeta^{\alpha}\,{}_{\mathrm{B}}\boldsymbol{a}_{*}^{\alpha}=\boldsymbol{\phi}\\ \operatorname{Im}\sum_{\alpha=1}^{3}{}_{\mathrm{A}}\lambda^{\alpha}\,{}_{\mathrm{A}}\boldsymbol{a}^{\alpha}-{}_{\mathrm{B}}\zeta^{\alpha}\,{}_{\mathrm{B}}\boldsymbol{a}_{*}^{\alpha}=\boldsymbol{0}\,,\end{cases} \tag{3.20}\] where \(\operatorname{Im}\) stands for the imaginary part of a complex quantity and \(\boldsymbol{\phi}\) is given by \[\boldsymbol{\phi}=\begin{cases}-\dfrac{\boldsymbol{b}_{1}}{n}&\text{if}\ m=0\qquad\qquad(n\geq 1)\\ -\dfrac{\boldsymbol{b}_{2}}{m}&\text{if}\ n=0\qquad\qquad(m\geq 1)\\ \boldsymbol{0}&\text{if}\ nm\neq 0\qquad(n,m\geq 1)\,.\end{cases} \tag{3.21}\]

#### Boundary condition 4: No net tractions along the interfaces

The solution must satisfy the condition of no net tractions along the interface: \[{}_{\mathrm{A}}\boldsymbol{\sigma}\left(x_{1},\,0,\,x_{3}\right)\boldsymbol{n}={}_{\mathrm{B}}\boldsymbol{\sigma}\left(x_{1},\,0,\,x_{3}\right)\boldsymbol{n}\,, \tag{3.22}\] where \(\boldsymbol{\sigma}\left(x_{1},\,0,\,x_{3}\right)\) reduces to the short-range stress field produced by the interface equilibrium dislocations when eqs. (3.1) are satisfied. In that case, the tractions at the interface read \[\boldsymbol{\sigma}\left(x_{1},\,0,\,x_{3}\right)\boldsymbol{n}=\operatorname{sgn}\left(x_{2}\right)\,\operatorname{Re}\sum_{\boldsymbol{k}\neq\boldsymbol{0}}\mathrm{e}^{i2\pi\boldsymbol{k}\cdot\boldsymbol{r}}\sum_{\alpha=1}^{3}\lambda^{\alpha}\,\boldsymbol{h}^{\alpha}+\zeta^{\alpha}\,\boldsymbol{h}_{*}^{\alpha}\,, \tag{3.23}\] where the subsidiary complex vectors \(\boldsymbol{h}^{\alpha}\) are related to the vectors \(\boldsymbol{a}^{\alpha}\) by \[\boldsymbol{h}^{\alpha}=\left(\mathbf{W}_{2}^{\mathrm{t}}+p^{\alpha}\,\mathbf{W}_{3}\right)\boldsymbol{a}^{\alpha}=-{p^{\alpha}}^{-1}\left(\mathbf{W}_{1}+p^{\alpha}\,\mathbf{W}_{2}\right)\boldsymbol{a}^{\alpha}\,, \tag{3.24}\] with \(\boldsymbol{h}^{\alpha+3}=\boldsymbol{h}_{*}^{\alpha}\). The boundary condition in eq. (3.22) together with eq. 
(3.23) leads to the additional system of 6 linear equations: \[\Sigma_{2}:\;\;\begin{cases}\operatorname{Re}\sum_{\alpha=1}^{3}{}_{\mathrm{A}}\lambda^{\alpha}\,{}_{\mathrm{A}}\boldsymbol{h}^{\alpha}-{}_{\mathrm{B}}\zeta^{\alpha}\,{}_{\mathrm{B}}\boldsymbol{h}_{*}^{\alpha}=\boldsymbol{0}\\ \operatorname{Im}\sum_{\alpha=1}^{3}{}_{\mathrm{A}}\lambda^{\alpha}\,{}_{\mathrm{A}}\boldsymbol{h}^{\alpha}-{}_{\mathrm{B}}\zeta^{\alpha}\,{}_{\mathrm{B}}\boldsymbol{h}_{*}^{\alpha}=\boldsymbol{0}\,.\end{cases} \tag{3.25}\] The two latter conditions 3 and 4 may be recast as an eigenvalue problem for equilibrium interface dislocation arrays. Indeed, the elastic fields of these dislocations in an anisotropic bicrystal free of far-field strains are given in terms of the 12 scalar unknowns \(\operatorname{Eval}\) and the corresponding vectors \(\operatorname{Evec}\), with \(\alpha=1,\,2,\,3\), i.e. \[\begin{split}\operatorname{Eval}&=\left\{\operatorname{Re}\,{}_{\mathrm{A}}\lambda^{\alpha},\;\operatorname{Im}\,{}_{\mathrm{A}}\lambda^{\alpha},\;\operatorname{Re}\,{}_{\mathrm{B}}\zeta^{\alpha},\;\operatorname{Im}\,{}_{\mathrm{B}}\zeta^{\alpha}\right\}\\ \operatorname{Evec}&=\left\{{}_{\mathrm{A}}\boldsymbol{a}^{\alpha},\;{}_{\mathrm{A}}\boldsymbol{h}^{\alpha},\;{}_{\mathrm{B}}\boldsymbol{a}_{*}^{\alpha},\;{}_{\mathrm{B}}\boldsymbol{h}_{*}^{\alpha}\right\}\,,\end{split} \tag{3.26}\] which enter the linear systems \(\Sigma_{1}\) and \(\Sigma_{2}\). 
### Symmetric example applications

The model described in the foregoing sections is applied to simple example interfaces: symmetric tilt and twist GBs as well as a pure misfit heterophase interface. The materials properties used in these examples are listed in Table 3.1.

#### 3.3.1 Pure tilt grain boundary

Tilt boundaries that contain one set of interfacial dislocations have been discussed extensively [240]. To illustrate and validate the present method, a symmetrical tilt boundary with \([001]\) tilt axis and tilt angle \(\theta=2^{\circ}\) is analyzed in detail. The calculations are carried out for Cu, which has a moderately high anisotropy ratio, \(A_{\text{Cu}}=2c_{44}/\left(c_{11}-c_{12}\right)=3.21\). The boundary consists of one set of straight parallel dislocations with Burgers vector content \(\mathbf{B}\), expressed as \[\mathbf{B}=\left(\frac{\boldsymbol{n}\times\boldsymbol{\xi}}{d}\cdot\boldsymbol{p}\right)\boldsymbol{b}=\underbrace{\left(\mathbf{R}_{+}^{-1}-\mathbf{R}_{-}^{-1}\right)}_{\mathbf{\Gamma}}\boldsymbol{p}=2\sin\left(\theta/2\right)\,\boldsymbol{p}\times\boldsymbol{\omega}\,. \tag{3.31}\] Here, the "median lattice" is used as the obvious reference state: the mapping matrices \(\mathbf{F}\) have been replaced by rotation matrices \(\mathbf{R}\), with \(\mathbf{R}_{+}\) representing a rotation of the upper crystal by angle \(\theta_{+}=\theta/2\) about the tilt axis and \(\mathbf{R}_{-}\) the rotation \(\theta_{-}=-\theta/2\) of the adjacent lower crystal. Equation (3.31) is known as Frank's formula [91, 216], which gives the density of interface dislocations needed to create the tilt boundary. Selecting \(\boldsymbol{b}=a_{\text{Cu}}\left[010\right]\parallel\boldsymbol{n}\), eq. (3.31) shows that \(\boldsymbol{\xi}=[001]\) and \(d=10.3567\) nm. As expected, the far-field stresses vanish for this choice of reference lattice, and the only non-zero stresses are short-ranged. Figure (3.4) plots the interface stresses as a function of \(x_{1}\) and \(x_{2}\) (the stresses are invariant along the dislocation line direction, \(x_{3}\)). The red contours illustrate where the stresses fall to zero, which occurs when \(|x_{2}|\geq 7-10\) nm (depending on the stress component), showing that their range is comparable to the dislocation spacing. The far-field rotations may be calculated from the antisymmetric part of the far-field distortions, i.e. \(\boldsymbol{\Omega}^{\infty}=\}\mathbf{D}_{\text{dis}}^{\infty}\{\). 
They satisfy \(\boldsymbol{\Omega}_{+}^{\infty}-\boldsymbol{\Omega}_{-}^{\infty}=\mathbf{\Gamma}\) and yield a net non-vanishing rotation about the tilt axis, as reported in Refs. [183, 121]: \[\boldsymbol{\omega}=\boldsymbol{\omega}_{+}^{\infty}-\boldsymbol{\omega}_{-}^{\infty}=-\begin{pmatrix}0\\ 0\\ 0.03490\end{pmatrix}=-\frac{\boldsymbol{x}_{1}\times\boldsymbol{b}}{d}\,. \tag{3.32}\]

\begin{table} \begin{tabular}{|r r||c c c c c|} \hline \multicolumn{2}{|c||}{Properties} & \multicolumn{5}{c|}{Materials} \\ Symbol & Unit & Cu & Nb & Fe & Al & Ni \\ \hline \(a\) & Å & 3.615 & 3.301 & 2.866 & 4.050 & 3.524 \\ \(c_{11}\) & GPa & 168.4 & 246.0 & 242.0 & 108.2 & 246.5 \\ \(c_{12}\) & GPa & 121.4 & 134.0 & 146.5 & 61.3 & 147.3 \\ \(c_{44}\) & GPa & 75.4 & 28.7 & 112.0 & 28.5 & 124.7 \\ \hline \end{tabular} \end{table} Table 3.1: Material properties for copper, niobium, iron, aluminium, and nickel. The values of lattice parameters \(a\) for all materials are those listed by Gray [105] and elastic components \(c_{11}\), \(c_{12}\), and \(c_{44}\) by Hirth and Lothe [122].

Figure 3.3: The disregistry \(\Delta\,\boldsymbol{u}\) due to interface Volterra dislocations is a staircase function. It may be decomposed into an affine part \(\Delta\,\boldsymbol{u}_{\text{aff}}\) generated by a uniform distortion (represented by a continuous distribution of fictitious infinitesimal dislocations) and a sawtooth function \(\Delta\,\boldsymbol{u}_{\text{dis}}\) associated with the equilibrium interface dislocations in the natural state.

The disregistry \(\Delta\,u_{2}\) and the displacement discontinuity \(\Delta\,u_{2\,\mathrm{dis}}\) associated with the Volterra and equilibrium tilt boundary dislocations are plotted in Fig. (3.5a). They are in good quantitative agreement with the applied boundary conditions, represented by the staircase and sawtooth curves. The average elastic energy per unit interface area \(\gamma_{\mathrm{e}}\) is determined for several values of the core cutoff parameter \(r_{0}\). Following eq. (3.30), \(\gamma_{\mathrm{e}}\) may be written as \[\gamma_{\mathrm{e}}\left(r_{0}\right)=\frac{1}{2d}\int_{r_{0}}^{d-r_{0}}\underbrace{\sigma_{22}\left(x_{1},\,0,\,0\right)\,\Delta\,u_{2\,\mathrm{dis}}\left(x_{1},\,0\right)}_{W}\,\mathrm{d}x_{1}\,. \tag{3.33}\] The variation of stress component \(\sigma_{22}\) at \(x_{2}=0\) with \(x_{1}\) is plotted as a black line in Fig. (3.5b). The core region is shaded in grey. Local contributions to the interface elastic energy \(W\) (values of the integrand in eq. (3.33)) are plotted in red. The average elastic energy per unit interface area will depend on the choice of \(r_{0}\). For example, \(\gamma_{\mathrm{e}}=142.8\) mJ.m\({}^{-2}\) with \(r_{0}=b/2\) and \(\gamma_{\mathrm{e}}=167.8\) mJ.m\({}^{-2}\) with \(r_{0}=b/3\), where \(b\) is the magnitude of \(\boldsymbol{b}\). An appropriate \(r_{0}\) value is selected by comparing the interface elastic energies computed with the present dislocation-based method to experimentally measured energies of small-angle [001] tilt boundaries [103], plotted as solid triangles in Fig. (3.6). The calculations using \(r_{0}=b/2\) are in good agreement with the experiments up to \(\sim 5^{\circ}\), while \(r_{0}=b/3\) fits better in the range of \(\sim 5-12^{\circ}\). The classical energy per unit area given by Read and Shockley [215], \(E_{\mathrm{RS}}\left(\theta\right)=1450\,\theta\left(-3-\ln\theta\right)\) mJ.m\({}^{-2}\), is also shown in Fig. (3.6). It compares well with the calculations for \(r_{0}=b/3\). 
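As a quick consistency check of Frank's formula, the following minimal Python sketch evaluates the dislocation spacing \(d=\left|\boldsymbol{b}\right|/\left(2\sin\theta/2\right)\) for the \(2^{\circ}\) boundaries in Cu discussed in this chapter. The tilt case uses \(\boldsymbol{b}=a_{\text{Cu}}[010]\) as above; the twist case assumes screw dislocations with \(\boldsymbol{b}=\frac{a_{\text{Cu}}}{2}\langle 110\rangle\), i.e. \(|\boldsymbol{b}|=a_{\text{Cu}}/\sqrt{2}\), which is an assumption made here for illustration.

```python
import numpy as np

# Frank's formula d = |b| / (2 sin(theta/2)) for 2 degree boundaries in Cu.
a_Cu = 0.3615                  # lattice parameter (nm)
theta = np.deg2rad(2.0)

d_tilt = a_Cu / (2.0 * np.sin(theta / 2.0))                     # b = a_Cu[010] edge wall
d_twist = (a_Cu / np.sqrt(2.0)) / (2.0 * np.sin(theta / 2.0))   # assumed a/2<110> screw grid

print(f"tilt:  d = {d_tilt:.4f} nm")    # ~10.357 nm, cf. the value quoted above
print(f"twist: d = {d_twist:.4f} nm")   # ~7.323 nm, cf. the twist example below
```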
#### Twist grain boundary

As shown in Fig. (3.7a), small-angle (010) twist GBs contain two sets of dislocations, so their dislocation content \(\mathbf{B}\) is expressed as \[\mathbf{B}=\left(\frac{\mathbf{n}\times\mathbf{\xi}_{1}}{d_{1}}\cdot\mathbf{p}\right)\mathbf{b}_{1}+\left(\frac{\mathbf{n}\times\mathbf{\xi}_{2}}{d_{2}}\cdot\mathbf{p}\right)\mathbf{b}_{2}=\left(\mathbf{R}_{+}^{-1}-\mathbf{R}_{-}^{-1}\right)\mathbf{p}\,. \tag{3.34}\] A twist boundary of angle \(\theta=2^{\circ}\) is considered in Cu, where the rotation axis is perpendicular to the boundary, \(\mathbf{\omega}=\mathbf{x}_{2}=[010]\). As in the case of the tilt boundary, the obvious reference state for twist boundaries is the "median lattice" suggested by Frank [93]. In this state, the total rotation across the boundary is equally partitioned between the two grains. However, to illustrate the importance of selecting the correct reference state, other possible reference states are considered. A common choice is to use one of the adjacent crystal grains as the reference state. There is a continuum of other possible reference states between these two extremes, and the angle \(\theta_{\mathrm{c}}=-\kappa\,\theta\) is introduced to define the rotation of the reference state from the case where the upper crystal above the boundary has been chosen as the reference lattice. Here, \(\kappa\) is a dimensionless parameter that varies from \(0\) to \(1\). Equipartitioning of rotations between the adjacent crystals (i.e. the "median lattice") occurs when \(\kappa=1/2\). Section 3.2.3 demonstrated that the interface dislocation geometry is independent of the reference state. In this example, the twist boundary contains an orthogonal grid of dislocations with line directions \(\xi_{1}=1/\sqrt{2}\left[\bar{1}01\right]\) and \(\xi_{2}=1/\sqrt{2}\left[101\right]\).

Figure 3.4: Contour plots of stress components (a) \(\sigma_{11}\) and (b) \(\sigma_{22}\), for the \(2^{\circ}\) symmetric tilt boundary described in the text. The negative values (compression) are plotted in light grey, and the positive values (extension) in dark grey. The stresses decay away over distances comparable to the interface dislocation spacing. In red, the stress field values are equal to zero.

The spacings between successive parallel dislocations are \(d_{1}=d_{2}=d=7.3233\) nm. Because of the pure twist misorientation, the coherency stress fields are zero for all possible reference states. Figure (3.7b) plots the dependence of the non-vanishing far-field stress components on \(\kappa\). If a reference state with \(\kappa=0\) is chosen, then the interface dislocations deviate by \(1^{\circ}\) from pure screw character and possess non-zero far-field stress components \(\sigma_{11}^{\infty}=\sigma_{33}^{\infty}\). This demonstrates that \(\kappa=0\) does not represent the correct reference state since eqs. (3.1) (and eqs. (3.13)) are not satisfied. Furthermore, the far-field rotation with \(\kappa=0\) does not equal \(2^{\circ}\); discrepancies on the order of \(0.001^{\circ}\) between the rotation vector component and the prescribed misorientation are found. As \(\kappa\) increases, the far-field stresses decrease and eventually reach zero at \(\kappa=1/2\), as expected. The interface dislocations have perfect screw character for this reference state, and non-zero far-field stresses are again obtained when \(\kappa\) is increased beyond \(\kappa=1/2\).
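The same one-line estimate used for the tilt boundary applies to each screw set of this twist boundary. The short sketch below, which assumes the median-lattice reference (\(\kappa=1/2\)) and the Cu lattice parameter of Table 3.1, recovers the quoted spacing.

```python
import numpy as np

# Spacing of each screw set in the 2 degree (010) twist boundary in Cu,
# median-lattice reference (kappa = 1/2); b = a_Cu/2 <101>, so |b| = a_Cu/sqrt(2).
a_Cu = 0.3615                       # nm (Table 3.1)
theta = np.radians(2.0)
b = a_Cu / np.sqrt(2.0)             # ~0.2556 nm
d = b / (2.0 * np.sin(theta / 2.0))
print(f"d1 = d2 = {d:.4f} nm")      # ~7.3234 nm, cf. d = 7.3233 nm above
```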
Taking \(\kappa=1/2\), the elastic strain energy per unit area \(\gamma_{\rm e}\) is calculated for the twist GB using the expression: (3.35) Figure 3.6: Interface elastic energies \(\gamma_{\rm e}\) computed using two different core cutoff parameters \(r_{0}\) for a \([001]\) tilt GB in Cu as a function of the tilt angle \(\theta\). The gray line shows the Read-Shockley solution. Experimental values are shown with solid triangles [103]. Figure 3.5: (a) Disregistries \(\Delta\,u_{2}\) (staircase function) and \(\Delta\,u_{2\,{\rm dis}}\) (sawtooth function) computed using 100 harmonics for the \(2^{\circ}\) symmetric tilt boundary described in the text. (b) Stress distribution \(\sigma_{22}\) and local elastic energy density \(\gamma_{\rm e}\) at the GB. with \(A=\left|\mathbf{p}_{1}^{0}\times\mathbf{p}_{2}^{0}\right|\) the area of the interface unit cell. Equation (3.35) is decomposed into self-energy densities \(W_{(1)}\) and \(W_{(2)}\) for each set of parallel dislocations and the interaction energy density \(W_{(1-2)}\) between the two sets. These energies are obtained from the separate elasticity solutions for each set of dislocations: \[\begin{split} W_{(1)}+W_{(2)}&=\sigma_{23\,(1)}(x _{1},\,0,\,0)\,\,\Delta\,u_{3\,\mathrm{dis}\,(1)}(x_{1},\,0)+\sigma_{12\,(2)}( 0,\,0,\,x_{3})\,\,\Delta\,u_{1\,\mathrm{dis}\,(2)}(0,\,x_{3})\\ W_{(1-2)}&=\sigma_{23\,(1)}(x_{1},\,0,\,0)\,\, \Delta\,u_{1\,\mathrm{dis}\,(2)}(0,\,x_{3})+\sigma_{12\,(2)}(0,\,0,\,x_{3})\, \,\Delta\,u_{3\,\mathrm{dis}\,(1)}(x_{1},\,0)\,.\end{split} \tag{3.36}\] The local self- and interaction energies are shown in Figs. (3.8a) and (b), respectively. The integral of the interaction energy \(W_{(1-2)}\) over area \(A\) is zero for any value \(r_{0}\), in agreement with the classical dislocation theory result that orthogonal screw dislocations do not exert any forces on each other [122]. The total elastic energy is plotted in Fig. (3.9) as a function of the twist angle up to \(12^{\circ}\) for three core cutoff parameters: \(r_{0}=b_{1}/2\), \(r_{0}=b_{1}/3\), and \(r_{0}=b_{1}/4\). #### Pure misfit interface Lastly, the model is illustrated on an Al/Ni heterophase interface. The terminal planes of both adjacent crystals are fcc \((010)\) planes. The \([100]\) and \([001]\) directions of both crystals are parallel in the interface plane. Thus, the interface is in the cube-on-cube orientation and contains two sets of parallel dislocations. Following eq. (3.2), the Burgers vector content \(\mathbf{B}\) is written as \[\mathbf{B}=\left(\frac{\mathbf{n}\times\xi_{1}}{d_{1}}\cdot\mathbf{p}\right)\mathbf{b}_{1}+ \left(\frac{\mathbf{n}\times\xi_{2}}{d_{2}}\cdot\mathbf{p}\right)\mathbf{b}_{2}=\underbrace {\left({}_{\mathrm{A}}\mathbf{S}^{-1}(r_{\mathrm{A}})\right)-{}_{\mathrm{N}}\mathbf{S} ^{-1}(r_{\mathrm{N}})}_{\mathrm{T}}\mathbf{p}\,. \tag{3.37}\] The reference state for this interface is a crystal oriented identically to the Al and Ni in their natural state, but strained such that its lattice constant in the interface plane is \(a_{\mathrm{c}}\), with \(a_{\mathrm{N}}\leq a_{\mathrm{c}}\leq a_{\mathrm{A}}\). Only strains within the Figure 3.7: (a) Small-angle twist GB on a \((010)\) plane containing two sets of orthogonal dislocations. (b) Dependence of far-field stresses on \(\kappa\) for the \(2^{\circ}\) twist boundary described in the text. 
Figure 3.8: Local (a) self- \(\{W_{(1)}+W_{(2)}\}\) and (b) interaction \(W_{(1-2)}\) elastic energies arising from two sets of orthogonal screw dislocations in a \(2^{\circ}\) twist boundary on a \((010)\) plane in Cu. interface are necessary to ensure coherency: normal strains are not required. Thus, the matrix \(\mathbf{T}\) in eq. (3.37) is composed of two equibiaxial stretch matrices (no rotations), \({}_{\mathrm{\tiny{A}}}\mathbf{S}^{-1}={}_{\mathrm{\tiny{A}}}\mathbf{E}_{\mathrm{c}}+ \mathbf{I}\) and \({}_{\mathrm{\tiny{N}}}\mathbf{S}^{-1}={}_{\mathrm{\tiny{N}}}\mathbf{E}_{\mathrm{ c}}+\mathbf{I}\), where \(\mathbf{I}\) represents the identity matrix. These mapping matrices depend on the ratios of lattice parameters between Al and Ni in their natural and reference states, \(r_{\mathrm{\tiny{A}}\mathrm{l}}=a_{\mathrm{\tiny{A}}\mathrm{l}}\,/a_{\mathrm{ c}}\geq 1\) and \(r_{\mathrm{\tiny{Ni}}}=a_{\mathrm{\tiny{Ni}}}\,/a_{\mathrm{c}}\leq 1\). The matrix \(\mathbf{T}\) in eq. (3.37) may also be rewritten as the difference between the coherency strains prescribed in Al and Ni: \[{}_{\mathrm{\tiny{A}}}\mathbf{E}_{\mathrm{c}}-{}_{\mathrm{\tiny{N}}}\mathbf{E }_{\mathrm{c}}=\mathbf{T}\,. \tag{3.38}\] Following the procedure described in section 3.2.4, Ni is initially chosen as the reference lattice, so that \(r_{\mathrm{\tiny{A}}\mathrm{l}}=a_{\mathrm{\tiny{A}}\mathrm{l}}\,/a_{\mathrm{ \tiny{Ni}}}\) and \(r_{\mathrm{\tiny{Ni}}}=1\), and identify \(\tilde{\mathbf{b}}_{1}=a_{\mathrm{\tiny{Ni}}}/\sqrt{2}\,[101]\) and \(\tilde{\mathbf{b}}_{2}=a_{\mathrm{\tiny{Ni}}}/\sqrt{2}\,[10\bar{1}]\). Then, using eq. (3.3), an interface that consists of an orthogonal grid of edge dislocations with \(\xi_{1}=1/\sqrt{2}\,[101]\) and \(\xi_{2}=1/\sqrt{2}\,[101]\) is found, and the corresponding dislocation spacings \(d_{1}=d_{2}=1.902\) nm. Using this choice of reference state, the far-field strains produced by the interface dislocations are: \[{}_{\mathrm{\tiny{A}}}\mathbf{E}_{\mathrm{dis}}^{\infty}=\left[\begin{array}[] {ccc}0.10133&0&0\\ 0&0&0\\ 0&0&0.10133\end{array}\right]\,,\ \ \mathrm{and},\ \ {}_{\mathrm{\tiny{N}}}\mathbf{E}_{\mathrm{dis}}^{\infty}= \left[\begin{array}{ccc}-0.03243&0&0\\ 0&0&0\\ 0&0&-0.03243\end{array}\right]\,, \tag{3.39}\] such that the matrices in eqs. (3.39) satisfy \[-\left({}_{\mathrm{\tiny{A}}}\mathbf{E}_{\mathrm{dis}}^{\infty}-{}_{\mathrm{ \tiny{N}}}\mathbf{E}_{\mathrm{dis}}^{\infty}\right)=\mathbf{T}\,. \tag{3.40}\] Combining eqs. (3.38) and (3.40), it follows \[{}_{\mathrm{\tiny{A}}}\mathbf{E}_{\mathrm{c}}+{}_{\mathrm{\tiny{A}}}\mathbf{E }_{\mathrm{dis}}^{\infty}=\underbrace{{}_{\mathrm{\tiny{N}}}\mathbf{E}_{ \mathrm{c}}}_{\mathrm{\tiny{N}}}+{}_{\mathrm{\tiny{N}}}\mathbf{E}_{\mathrm{dis }}^{\infty}=\left[\begin{array}{ccc}-0.03243&0&0\\ 0&0&0\\ 0&0&-0.03243\end{array}\right]\neq\mathbf{0}\ \ \left(\Leftrightarrow\ {}_{\mathrm{\tiny{A}}} \mathbf{E}^{\infty}={}_{\mathrm{\tiny{N}}}\mathbf{E}^{\infty}\ \right)\,, \tag{3.41}\] with \({}_{\mathrm{\tiny{N}}}\mathbf{E}_{\mathrm{c}}=\mathbf{0}\) here, because Ni has been chosen as the reference lattice. However, according to eq. (3.41b), condition 2. given by eq. (3.13) is not satisfied since the total far-field strains in each individual material do not decay to zero when \(x_{2}\to\pm\infty\). This demonstrates that the initial choice of reference state is not correct. 
To find the correct reference state, a variable \(\delta\), with \(0\leq\delta\leq 1\), that interpolates \(a_{\mathrm{c}}\) between \(a_{\mathrm{Al}}\) and \(a_{\mathrm{Ni}}\) is introduced as follows \[a_{\mathrm{c}}=\delta a_{\mathrm{Al}}+\left(1-\delta\right)a_{\mathrm{Ni}}\,. \tag{3.42}\] It is shown that the far-field strains in Al and Ni are equal for all \(\delta\), so that eq. (3.41a) is always satisfied, i.e. \({}_{\mathrm{Al}}\mathbf{E}^{\infty}={}_{\mathrm{Ni}}\mathbf{E}^{\infty}\), with \({}_{\mathrm{Ni}}\mathbf{E}_{\mathrm{c}}=\mathbf{0}\) if \(\delta=0\) and \({}_{\mathrm{Al}}\mathbf{E}_{\mathrm{c}}=\mathbf{0}\) if \(\delta=1\). However, only one unique reference state (corresponding to a unique value of \(\delta\)) gives vanishing far-field strains in the bicrystal in its natural state by satisfying eq. (3.13) as well. The pure misfit interface example serves to show that eq. (3.41a) is a necessary, but not sufficient, condition for determining the reference state. The total far-field strain component \({}_{\mathrm{Al}}\mathbf{E}_{11}^{\infty}\) in Al is plotted in Fig. (3.10) as a function of \(\delta\) and is identical to the component \({}_{\mathrm{Al}}\mathbf{E}_{33}^{\infty}\), according to the interface symmetry (all other strain components are zero). Because eq. (3.41a) is verified for all \(\delta\), the same components in Ni give the same plot as in Fig. (3.10). The far-field strains vary linearly with \(\delta\) and become zero when \(\delta=0.21787\), so that \(a_{\mathrm{c}}=0.36386\) nm. This value of \(a_{\mathrm{c}}\) defines the unique coherent reference state for which the pure misfit Al/Ni interface of interest is consistent with the Frank-Bilby equation. It is closer to \(a_{\mathrm{Ni}}\) than to \(a_{\mathrm{Al}}\) because Ni is the stiffer of these two materials and so carries a lower coherency strain in the reference state. The far-field rotations are zero for all values of \(\delta\), as expected.

Figure 3.9: Elastic energies per unit area \(\gamma_{\mathrm{e}}\) as a function of the rotation angle \(\theta\) of twist GBs along \((010)\) planes in Cu for three core cutoff parameters \(r_{0}\).

To demonstrate the errors that come about from ignoring the unequal partitioning of elastic fields and to validate the current calculation, \(a_{\mathrm{c}}\) is recomputed under the assumption that both sides of the interface have the same stiffness (equal to that of Al or Ni), but different natural lattice parameters (\(a_{\mathrm{Al}}\) and \(a_{\mathrm{Ni}}\), as in the original calculation). For this case, the calculated value for \(a_{\mathrm{c}}\) is in very good agreement with the well-known approximate result \(\tilde{a}=2a_{\mathrm{Al}}\,a_{\mathrm{Ni}}\,/\,(a_{\mathrm{Al}}+a_{\mathrm{Ni}})=0.37687\) nm [94, 135], corresponding to \(\delta=0.46521\). This value, however, is far from the correct lattice parameter of the reference state when the differing stiffnesses of Al and Ni are taken into account, as illustrated by the cross symbols in Fig. (3.10). It is also shown that \(\tilde{a}\) deviates from the prediction and is not consistent with the Frank-Bilby equation when the heterogeneous distortions of bicrystals are explicitly described at equilibrium.
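The equal-stiffness estimate quoted above is simple enough to verify directly. The sketch below reproduces \(\tilde{a}\) and its corresponding \(\delta\) from eq. (3.42) using the Al and Ni lattice parameters of Table 3.1. The heterogeneous, anisotropic value \(\delta=0.21787\) cannot be obtained this way: it requires driving the computed total far-field strain to zero with the full elasticity solution of section 3.2, which is only indicated schematically in the comments.

```python
a_Al, a_Ni = 0.4050, 0.3524                      # nm (Table 3.1)

# Equal-stiffness (homogeneous) estimate of the coherent lattice parameter [94, 135]
a_tilde = 2.0 * a_Al * a_Ni / (a_Al + a_Ni)
delta_tilde = (a_tilde - a_Ni) / (a_Al - a_Ni)   # invert eq. (3.42)
print(f"a_tilde = {a_tilde:.5f} nm, delta = {delta_tilde:.5f}")
# -> a_tilde ~ 0.37687 nm and delta ~ 0.4653, close to the 0.46521 quoted above
#    (the small difference comes from rounding of the lattice parameters).

# With unequal stiffnesses, the reference state is instead located by finding
# the root of the total far-field strain as a function of delta, e.g. with
# scipy.optimize.brentq(farfield_strain, 0.0, 1.0), where `farfield_strain`
# would wrap the anisotropic elasticity solution (not reproduced here).  That
# root lands at delta = 0.21787, i.e. a_c = 0.36386 nm, as reported above.
```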
### Partitioning of elastic distortions at fcc/bcc interfaces In this section, the study is focused on semicoherent heterophase interfaces comprised of two sets of dislocations and formed along closest-packed planes in fcc/bcc bimetals, especially for fcc\(\{111\}\)/bcc\(\{110\}\) (Cu/Nb, Ag/V, and Cu/Mo) interfaces in the Nishiyama-Wassermann (NW) orientation relations (OR) [281, 195] as well as in ORs that differ from the NW by an in-plane twist rotation. It is showed that elastic distortions, i.e. strains as well as tilt and twist rotations, are in general unequally partitioned at such interfaces. The correct partitioning of these fields determines the coherent reference state for which the bicrystal of interest is free of far-field strains. Using these results, the stress fields generated by misfit dislocation patterns are computed and analyzed for the Cu/Nb system in the NW and Kurdjumov-Sachs (KS) [157] ORs. The dislocation structure (i.e. the Burgers vectors, spacings, and line directions) is also determined in lowest strain energy solutions of the Frank-Bilby equation along a specific transformation pathway between the NW and KS ORs. Similarly to Fig. (3.1), the concept of reference and natural states of an interface is depicted in Fig. (3.11). The natural state contains an interface formed by joining two crystals with prescribed misorientation and interface planes as well as vanishing far-field strains. This state is also related to a single crystal, coherent reference state by uniform displacement gradients \({}_{\mathrm{A}}\mathrm{F}=\omega_{\mathrm{c}}\mathrm{F}\) and \({}_{\mathrm{B}}\mathrm{F}=\omega_{\mathrm{c}}\mathrm{F}\), which map the reference state to the natural state, as shown in Fig. (3.11a). In the reference state, the two adjacent materials that meet at the interface are rotated and strained such that they are in perfect registry with each other across the \(\hat{\mathbf{X}}-\hat{\mathbf{Z}}\) interface plane after bonding. In general, these displacement gradients entail interface misorientations that have both tilt and twist components [122, 240, 125]. Again, the interface along the \(\hat{\mathbf{x}}-\hat{\mathbf{z}}\) plane is not coherent in the natural state, but rather semiconetent due to the presence of misfit dislocations. The atomically sharp fcc\(\{111\}\)/bcc\(\{110\}\) interfaces in NW and in-plane twisted-NW ORs contain two periodic arrays of infinitely long, straight, and uniformly spaced dislocations. In the NW OR, one of the \(\langle 110\rangle\) directions in a fcc \(\{111\}\) plane lies parallel to the \(\langle 100\rangle\) direction in a bcc \(\{110\}\) plane [281, 195]. The in-plane twisted-NW ORs considered here differ from the NW OR only by a twist rotation of one crystal (here, the bcc material) with respect to the adjacent (fcc) crystal about the axis normal to the interface. The procedure described in section 3.2.4 is adopted to determine the unique reference states that meet the condition of vanishing far-field strains and prescribed misorientation for such interfaces. Thus, the dislocation content \(\mathbf{B}\) of an interface, intersected by a probe vector \(\mathbf{p}\) contained within the interface plane Figure 3.10: Dependence of the total far-field strain component \({}_{\mathrm{Al}}\mathbf{E}_{11}^{\mathrm{o}}\) in Al on \(\delta\) for a Al/Ni heterophase interface. The red dotted line gives the unique reference state, for which the far-field decay to zero and the coherent parameter \(a_{\mathrm{c}}\) is defined. 
The lattice parameter \(\tilde{a}=2a_{\mathrm{Al}}\,a_{\mathrm{Ni}}\,/\,(a_{\mathrm{Al}}+a_{\mathrm{Ni }})\), which is a good approximation for an interface between crystals of different lattice parameters but identical elastic constants [94, 135], is marked by a grey cross symbol. as illustrated in Fig. (3.11b), is described by the Frank-Bilby equation in eq. (3.2). For interfaces in the NW OR, a transformation pathway is defined by continuously adjusting the reference state from the strain-free state of the fcc crystal present at the interface to that of the adjacent bcc crystal. For all reference states along this path, the method described in section 3.2 is used to compute the superposition of the uniform coherency strains, \(\mathbf{E}_{\mathrm{c}}\), needed to maintain perfect registry and the far-field strain fields produced by the Volterra dislocation arrays, \(\mathbf{E}_{\mathrm{dis}}^{\mathrm{eq}}\). In the correct reference state, these quantities cancel and the total far-field strain field \(\mathbf{E}\) vanishes in both upper fcc (\(y>0\)) and lower bcc (\(y<0\)) materials, as defined by eqs. (3.13), as \[\lim_{y\to\pm\infty}\mathbf{E}\left(\hat{x},\hat{y},\hat{z}\right)=\mathbf{0} \;\;\Leftrightarrow\;\;\begin{cases}{}_{\mathrm{sc}}\mathbf{E}^{\infty}={}_{ \mathrm{sc}}\mathbf{E}_{\mathrm{c}}+{}_{\mathrm{sc}}\mathbf{E}_{\mathrm{dis} }^{\infty}=\mathbf{0}\\ {}_{\mathrm{sc}}\mathbf{E}^{\infty}={}_{\mathrm{sc}}\mathbf{E}_{\mathrm{c}}+{} _{\mathrm{sc}}\mathbf{E}_{\mathrm{dis}}^{\infty}=\mathbf{0}\,,\end{cases} \tag{3.43}\] for which the far-field rotation state in the NW OR is consistent with the given crystallographic character (interface plane and misorientation). To find the reference state for interfaces differing from those in the NW ORs by an in-plane twist angle \(\theta\), a second pathway is defined by rotating the previously determined reference state in the NW OR from \(0\) to \(\theta\). Along this second path, the rotated reference state, for which eqs. (3.43) are satisfied, also yields far-field rotations that must be consistent with the in-plane prescribed twist misorientations. Using the correct reference states for all ORs, the short-range interface strains and stresses that arise from the incomplete cancellation of the coherency and Volterra dislocation fields near the interfaces are also computed as well as the interface elastic energy \(\gamma_{\mathrm{e}}\) from eq. (3.30) as a surface integral over a unit cell. The domain of integration is related to a pre-determined cutoff distance \(r_{0}\) of the dislocation cores to determine the likeliest interface misfit dislocation configurations whenever the Frank-Bilby equation (eq. (3.2)) admits multiple solutions. In this section, a detailed discussion of partitioning of distortions at Cu/Nb interfaces is presented, while analogous results for Ag/V and Cu/Mo interfaces are shown, albeit without going into detail. The material properties (elastic constants and lattice parameters) used in all calculations for these three interface types are listed in Table 3.2. Figure 3.11: (a) The reference and natural states of an interface are related by transformation matrices \({}_{\mathrm{sc}}\mathbf{F}\) and \({}_{\mathrm{sc}}\mathbf{F}\). (b) The correspondence between a closed right-handed circuit enclosing the probe vector \(\mathbf{p}\) in the natural state and its corresponding path with closure failure \(\mathbf{B}\) in the reference state. 
\begin{table}
\begin{tabular}{|c|c c c|c|}
\hline
Systems & \(c_{11}\) (GPa) & \(c_{12}\) (GPa) & \(c_{44}\) (GPa) & \(a\) (Å) \\
\hline \hline
Cu & 178.8 & 122.6 & 81.03 & 3.615 \\
Nb & 245.6 & 133.7 & 28.8 & 3.3008 \\
\hline
Ag & 124.2 & 93.9 & 46.1 & 4.090 \\
V & 220.15 & 130.7 & 42.8 & 3.039 \\
\hline
Cu & 187.8 & 125.7 & 70.6 & 3.615 \\
Mo & 545.9 & 219.3 & 108.8 & 3.147 \\
\hline
\end{tabular}
\end{table}
Table 3.2: Material properties for copper (Cu), niobium (Nb), silver (Ag), vanadium (V), and molybdenum (Mo). The values of the stiffness constants \(c_{11}\), \(c_{12}\), \(c_{44}\), and lattice parameters \(a\) for all materials are those listed in Ref. [250].

#### Mapping between states in the Nishiyama-Wassermann orientations

Without loss of generality, the following specific relation is used among the 12 possible equivalent variants of the NW OR [107] to construct the mapping from the fcc to the bcc crystal: \[\text{NW}:\ \left\{\begin{array}{l}\hat{\mathbf{x}}\ \parallel\ \mathbf{x}_{\text{fcc}}=\left[1\bar{1}2\right]_{\text{fcc}}\ \parallel\ \mathbf{x}_{\text{bcc}}=\left[01\bar{1}\right]_{\text{bcc}}\\ \mathbf{n}\ \parallel\ \hat{\mathbf{y}}\ \parallel\ \mathbf{y}_{\text{fcc}}=\left[111\right]_{\text{fcc}}\ \parallel\ \mathbf{y}_{\text{bcc}}=\left[011\right]_{\text{bcc}}\\ \hat{\mathbf{z}}\ \parallel\ \mathbf{z}_{\text{fcc}}=\left[1\bar{1}0\right]_{\text{fcc}}\ \parallel\ \mathbf{z}_{\text{bcc}}=\left[100\right]_{\text{bcc}}\end{array}\right.\,. \tag{3.44}\] Here and in the following, the superimposed hat indicates quantities expressed in a frame with basis vectors \(\hat{\mathbf{x}}=\left[100\right]\), \(\hat{\mathbf{y}}=\left[010\right]\), and \(\hat{\mathbf{z}}=\left[001\right]\). A schematic representation of a Cu/Nb interface in the NW OR is shown in Fig. (3.12a). Labeling of Burgers vectors for the other fcc/bcc systems of interest here follows the same pattern as shown for NW Cu/Nb in Figs. (3.12a) and (b). If the fcc Cu material is used as the reference state, then three trial Burgers vectors may be selected in the interface plane: \[\mathbf{b}_{1}^{\text{fcc}}=\frac{a_{\text{Cu}}}{2}\left[\bar{1}01\right]\,,\ \ \mathbf{b}_{2}^{\text{fcc}}=\frac{a_{\text{Cu}}}{2}\left[0\bar{1}1\right]\,,\ \ \text{and}\ \ \mathbf{b}_{3}^{\text{fcc}}=\frac{a_{\text{Cu}}}{2}\left[\bar{1}10\right]\,. \tag{3.45}\] The transformation matrix \(\mathbf{T}_{\text{Nb}\to\text{Cu}}\) that represents the transformation of the bcc Nb material to the fcc Cu material may be written as \[\mathbf{T}_{\text{Nb}\to\text{Cu}}=\mathbf{I}-\mathbf{F}_{\text{Cu}\to\text{Nb}}^{-1}\,, \tag{3.46}\] where \(\mathbf{I}\) is the identity matrix and \(\mathbf{F}_{\text{Cu}\to\text{Nb}}\), the mapping that transforms the fcc Cu to the bcc Nb crystal, is written in the fcc reference system (\(\mathbf{x}_{\text{fcc}}\), \(\mathbf{y}_{\text{fcc}}\), \(\mathbf{z}_{\text{fcc}}\)) as: \[\mathbf{F}_{\text{Cu}\to\text{Nb}}=\left[\begin{array}{rrr}1.281998&-0.009298&0.109180\\ -0.009298&1.281998&0.109180\\ -0.154404&-0.154404&0.899935\end{array}\right]\,. \tag{3.47}\] For this interface, the Frank-Bilby equation has three different solutions, namely \(c1\), which uses the pair \(\{\mathbf{b}_{1}^{\text{fcc}},\mathbf{b}_{2}^{\text{fcc}}\}\), \(c2\) with \(\{\mathbf{b}_{1}^{\text{fcc}},\mathbf{b}_{3}^{\text{fcc}}\}\), and \(c3\) with \(\{\mathbf{b}_{2}^{\text{fcc}},\mathbf{b}_{3}^{\text{fcc}}\}\).
Due to the crystal symmetry along \(\hat{\mathbf{z}}\) in the NW OR, which exhibits the \(p2/m111\) layer space group, two of the three solutions (\(c2\) and \(c3\)) are mirror images. Analyses of the dislocation structures for all three cases are given in Table 3.3, with \(\phi\) the angle between the two sets of dislocations and \(\phi_{i}\) their individual characters. The dislocation line directions and spacings are schematically depicted in Fig. (3.12c), where the filled circles represent the O-lattice points [31, 240]. If the bcc Nb lattice is used as the reference state, then corresponding expressions for \(\mathbf{F}_{\text{Nb}\to\text{Cu}}\) and \(\mathbf{T}_{\text{Cu}\to\text{Nb}}\) may also be obtained. In this case, the Burgers vectors are equivalently expressed in the bcc crystal structure and the same dislocation geometries are found. Neither the fcc nor the bcc reference state satisfies the condition of vanishing far-field strains and stresses [125, 249] because neither accounts for the required partitioning of strains and rotations between the adjacent crystals [123]. There is a continuum of other possible reference states between these two extreme cases. To find the correct reference state, a dimensionless variable \(\delta\) that interpolates linearly between the pure Cu and Nb materials is introduced as follows \[\left\{\begin{array}{l}{}_{\text{Cu}}\mathbf{F}=\left(1-\delta\right)\mathbf{I}+\delta\ \mathbf{F}_{\text{Nb}\to\text{Cu}}\\ {}_{\text{Nb}}\mathbf{F}=\delta\ \mathbf{I}+\left(1-\delta\right)\mathbf{F}_{\text{Cu}\to\text{Nb}}\,.\end{array}\right. \tag{3.48}\] For \(\delta=0\), \(\mathbf{T}=\mathbf{T}_{\text{Nb}\to\text{Cu}}\) and for \(\delta=1\), \(\mathbf{T}=\mathbf{T}_{\text{Cu}\to\text{Nb}}\). Along the transformation pathway characterized by \(\delta\), the elastic distortions (strain and rotation fields) in the NW ORs can also be computed.
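A minimal numerical sketch of the transformation pathway in eq. (3.48) is given below. It uses the numerical \(\mathbf{F}_{\text{Cu}\to\text{Nb}}\) of eq. (3.47) and, as an assumption for illustration, takes \(\mathbf{F}_{\text{Nb}\to\text{Cu}}\) to be its matrix inverse; the value \(\delta=0.429103\) reused below is the NW Cu/Nb result quoted later in the text, obtained from the full far-field-strain calculation rather than derived here.

```python
import numpy as np

# delta-interpolated mappings of eq. (3.48), built from the numerical F_{Cu->Nb}
# of eq. (3.47).  F_{Nb->Cu} is taken here as the matrix inverse of F_{Cu->Nb}.
F_Cu_to_Nb = np.array([[ 1.281998, -0.009298,  0.109180],
                       [-0.009298,  1.281998,  0.109180],
                       [-0.154404, -0.154404,  0.899935]])
F_Nb_to_Cu = np.linalg.inv(F_Cu_to_Nb)
I = np.eye(3)

def mappings(delta):
    """Return ({}_{Cu}F, {}_{Nb}F) at position delta along the pathway, eq. (3.48)."""
    F_Cu = (1.0 - delta) * I + delta * F_Nb_to_Cu
    F_Nb = delta * I + (1.0 - delta) * F_Cu_to_Nb
    return F_Cu, F_Nb

# delta = 0 leaves Cu undistorted (fcc Cu is the reference state),
# delta = 1 leaves Nb undistorted (bcc Nb is the reference state).
for delta in (0.0, 0.429103, 1.0):
    F_Cu, F_Nb = mappings(delta)
    print(f"delta = {delta:.6f}")
    print("  Cu F =\n", np.round(F_Cu, 6))
    print("  Nb F =\n", np.round(F_Nb, 6))
```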
\begin{table}
\begin{tabular}{|l|r|r|r|r|r|}
\hline
\multicolumn{6}{|c|}{Dislocation structures in NW Cu/Nb} \\
\hline
\multicolumn{6}{|c|}{solutions by selecting the fcc Burgers vectors} \\
Cases & \(d_{1}\) (nm) & \(d_{2}\) (nm) & \(\phi\) (\({}^{\circ}\)) & \(\phi_{1}\) (\({}^{\circ}\)) & \(\phi_{2}\) (\({}^{\circ}\)) \\
\(c1:\{\mathbf{b}_{1}^{\text{fcc}},\mathbf{b}_{2}^{\text{fcc}}\}\) & 1.1234 & 1.1234 & 15.03 & 37.51 & 37.51 \\
\(c2:\{\mathbf{b}_{1}^{\text{fcc}},\mathbf{b}_{3}^{\text{fcc}}\}\) & 4.2953 & 1.1234 & 82.49 & 60.00 & 82.49 \\
\(c3:\{\mathbf{b}_{2}^{\text{fcc}},\mathbf{b}_{3}^{\text{fcc}}\}\) & 4.2953 & 1.1234 & 82.49 & 60.00 & 82.49 \\
\hline \hline
\multicolumn{6}{|c|}{solutions by selecting the proper reference Burgers vectors} \\
Cases & \(d_{1}\) (nm) & \(d_{2}\) (nm) & \(\phi\) (\({}^{\circ}\)) & \(\phi_{1}\) (\({}^{\circ}\)) & \(\phi_{2}\) (\({}^{\circ}\)) \\
\(c1:\{\mathbf{b}_{1}^{\text{ref}},\mathbf{b}_{2}^{\text{ref}}\}\) & 1.1234 & 1.1234 & 15.03 & 39.62 & 39.62 \\
\(c2:\{\mathbf{b}_{1}^{\text{ref}},\mathbf{b}_{3}^{\text{ref}}\}\) & 4.2953 & 1.1234 & 82.49 & 57.89 & 82.49 \\
\(c3:\{\mathbf{b}_{2}^{\text{ref}},\mathbf{b}_{3}^{\text{ref}}\}\) & 4.2953 & 1.1234 & 82.49 & 57.89 & 82.49 \\
\hline
\end{tabular}
\end{table}
Table 3.3: Dislocation spacings \(d_{i}\), angle \(\phi\) between the two sets of dislocations, and characters \(\phi_{i}\) for the three solutions \(c1\), \(c2\), and \(c3\) of the NW Cu/Nb interface, obtained by selecting either the fcc lattice (here, Cu) or the proper coherent state as the reference.

#### Far-field strains and rotations

As shown in Refs. [129, 125, 249], and illustrated in Fig. (3.11b), the natural state of semi-infinite bicrystals is homogeneously transformed into a reference state by biaxial distortions parallel to the plane with normal \(\mathbf{n}\parallel\hat{\mathbf{y}}\), so that the total distortion components associated with the interface normal, \(\hat{e}_{2j,\mathrm{tot}}\) with \(j=1,2,3\), are left unconstrained and denoted by \(*\) [129, 125]. Thus, only six components (three for strains and three for rotations) of the distortion matrices are needed to meet the condition of vanishing total far-field strains and prescribed misorientations. In the linear-elastic approximation, the distortion matrices \(\hat{\mathbf{D}}\) may also be separated into symmetric \(\hat{\mathbf{E}}\) and antisymmetric \(\hat{\mathbf{\Omega}}\) parts: \[\hat{\mathbf{D}}=\underbrace{\left[\begin{array}{ccc}\hat{\varepsilon}_{11}&*&\hat{\varepsilon}_{13}\\ *&*&*\\ \hat{\varepsilon}_{13}&*&\hat{\varepsilon}_{33}\end{array}\right]}_{\hat{\mathbf{E}}}+\underbrace{\left[\begin{array}{ccc}0&-\hat{\omega}_{12}&\hat{\omega}_{13}\\ \hat{\omega}_{12}&0&-\hat{\omega}_{23}\\ -\hat{\omega}_{13}&\hat{\omega}_{23}&0\end{array}\right]}_{\hat{\mathbf{\Omega}}}\,. \tag{3.49}\] The coherency strain fields \(\hat{\mathbf{E}}_{\mathrm{c}}\) on both sides of the interface are given by \[{}_{\text{Cu}}\hat{\mathbf{E}}_{\mathrm{c}}=\mathrm{sym}\ {}_{\text{Cu}}\hat{\mathbf{F}}^{-1}-\mathbf{I}\,,\ \ \text{and}\ \ {}_{\text{Nb}}\hat{\mathbf{E}}_{\mathrm{c}}=\mathrm{sym}\ {}_{\text{Nb}}\hat{\mathbf{F}}^{-1}-\mathbf{I}\,, \tag{3.50}\] where \({}_{\text{Cu}}\hat{\mathbf{F}}^{-1}\) and \({}_{\text{Nb}}\hat{\mathbf{F}}^{-1}\) are obtained from eqs. (3.48). Superposing the elastic strains produced by the interface dislocations in Cu and Nb, i.e. \({}_{\text{Cu}}\hat{\mathbf{E}}_{\mathrm{dis}}^{\infty}\) and \({}_{\text{Nb}}\hat{\mathbf{E}}_{\mathrm{dis}}^{\infty}\), the total far-field strain state in the entire bicrystal may be calculated [249].
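As a small illustration of eqs. (3.49) and (3.50), the sketch below splits a distortion into its symmetric and antisymmetric parts and evaluates the coherency strain \(\mathrm{sym}\,\mathbf{F}^{-1}-\mathbf{I}\). For concreteness, the full mapping \(\mathbf{F}_{\text{Cu}\to\text{Nb}}\) of eq. (3.47) is used as input; in the model itself, eq. (3.50) is evaluated with the \(\delta\)-interpolated mappings of eq. (3.48).

```python
import numpy as np

def sym(A):
    """Symmetric part of a matrix (strain-like part in eq. 3.49)."""
    return 0.5 * (A + A.T)

def asym(A):
    """Antisymmetric part of a matrix (rotation-like part in eq. 3.49)."""
    return 0.5 * (A - A.T)

# Full fcc->bcc mapping of eq. (3.47), used here only as an illustrative input.
F = np.array([[ 1.281998, -0.009298,  0.109180],
              [-0.009298,  1.281998,  0.109180],
              [-0.154404, -0.154404,  0.899935]])

D = np.linalg.inv(F) - np.eye(3)     # distortion associated with F^{-1}
E_c = sym(D)                         # coherency strain, cf. eq. (3.50)
Omega = asym(D)                      # rotation part, cf. eq. (3.49)

print("coherency strain E_c =\n", np.round(E_c, 6))
print("rotation part Omega =\n", np.round(Omega, 6))
```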
Figure (3.13) shows the total strain component \(\varsigma_{\mathrm{c}}\hat{\mathbf{E}}_{33}^{\infty}\) in Cu as a function of \(\delta\) (black line). This strain vanishes, i.e. \(\varsigma_{\mathrm{c}}\hat{\mathbf{E}}_{\mathrm{dis}}^{\infty}=0\), for \(\delta_{\mathrm{Cu/Nb}}=0.429103\). All other elastic components are consistent with the absence of strains in the far-field and the total far-field strain in Nb vanishes at the same \(\delta\) as in Cu. Thus, the reference state is closer to Cu than to Nb, i.e. \(\delta_{\mathrm{Cu/Nb}}<0.5\). This result cannot be easily predicted from inspection of the stiffness constants alone (see Table 3.2). Figure (3.13) also shows that \(\delta_{\mathrm{Ag/V}}=0.623359\) and \(\delta_{\mathrm{Cu/Mo}}=0.701109\), i.e. the reference state is closer to the bcc material (V and Mo) in both cases. Knowing the \(\delta\) value at which far-field stresses vanish, the crystal structure of the reference state is given by the uniform displacement gradients, obtained using eqs. (3.48) and (3.50): \[\varsigma_{\mathrm{c}}\hat{\mathbf{E}}_{\mathrm{c}}=-\varsigma_{ \mathrm{c}}\hat{\mathbf{E}}_{\mathrm{dis}}^{\infty}=\left[\begin{array}{ccc} 0.02615&0.072664&0\\ 0.072664&0.047550&0\\ 0&0&0.107173\end{array}\right]\] \[-\varsigma_{\mathrm{n}}\hat{\mathbf{E}}_{\mathrm{c}}=\varsigma_{ \mathrm{n}}\hat{\mathbf{E}}_{\mathrm{dis}}^{\infty}=\left[\begin{array}{ccc} 0.030089&0.096675&0\\ 0.096675&0.063262&0\\ 0&0.154414&0.142588\end{array}\right]\,. \tag{3.51}\] Figure 3.12: (a) Representation of the NW OR between fcc \(\{111\}\) (blue atoms) and bcc \(\{110\}\) (red atoms) close packed planes in Cu/Nb interfaces. (b) The reference state is depicted by the dashed black polyhedron, within which the Burgers vectors (corresponding to the sides of each polyhedron) are defined. The difference between the positions of the fcc and bcc atoms have been exaggerated for clarity. (c) Schematic illustrations of two admissible dislocation structures (solutions \(c1\) and \(c2\)) with O-lattice points (black circles) and the local elastic energy densities stored in a representative unit cell of the dislocation patterns. The colors of the dislocations are associated with the Burgers vectors that are colored in (b). Contour values (from the center of the patterns to the dislocation lines): \(\{0,0.2,0.6,1.2,2.0,3.2,5.2\}\) J.m\({}^{-2}\). The Burgers vectors of the interfacial misfit dislocations are to be drawn from this reference state. The correct reference state of the NW OR is depicted by the dashed polyhedron in Fig. (3.12b), within which the Burgers vectors are defined by: \[\Sigma_{\text{NW}}\ \left\{\begin{array}{l}\mathbf{\hat{b}}_{1}^{\text{ref}}=-0.22637 9\;\hat{\mathbf{x}}-0.141507\;\hat{\mathbf{z}}\;\;(\text{nm})\\ \mathbf{\hat{b}}_{2}^{\text{ref}}=-0.226379\;\hat{\mathbf{x}}+0.141507\;\hat{\mathbf{z}}\; \;(\text{mm})\\ \mathbf{\hat{b}}_{3}^{\text{ref}}=\mathbf{\hat{b}}_{1}^{\text{ref}}-\mathbf{\hat{b}}_{2}^{ \text{ref}}=-0.283015\;\hat{\mathbf{z}}\;\;(\text{nm})\,.\end{array}\right. 
\tag{3.52}\] In addition to completely accommodating the coherency strains, interface dislocations also give rise to unequally partitioned rotation fields, given in the case of Cu/Nb in the NW OR by \[{}_{\text{Cu}}\mathbf{\hat{\mathbf{\alpha}}}_{\text{dis}}^{\text{so}} =-0.072664\;(-\hat{\mathbf{x}}\otimes\hat{\mathbf{y}}+\hat{\mathbf{y}}\otimes \hat{\mathbf{x}})\] \[{}_{\text{Nb}}\mathbf{\hat{\mathbf{\alpha}}}_{\text{dis}}^{\text{so}} =-0.096675\;(\hat{\mathbf{x}}\otimes\hat{\mathbf{y}}-\hat{\mathbf{y}}\otimes \hat{\mathbf{x}})\, \tag{3.53}\] yielding a net non-vanishing rotation vector, i.e. \[\hat{\mathbf{\omega}}={}_{\text{c}},\hat{\mathbf{\omega}}^{\text{so}}-{}_{\text{Nb}} \hat{\mathbf{\omega}}^{\text{so}}=\left(-0.072664-0.096675\right)\;\hat{\mathbf{z}}=-0. 169339\;\hat{\mathbf{z}}\,, \tag{3.54}\] about the \(\hat{\mathbf{z}}\) tilt axis. The unequal partition of far-field rotations given by eqs. (3.53) shows that, to achieve the NW OR, the upper material in the reference state must be rotated by a rigid-body rotation through a tilt angle \(\vartheta_{\text{co}}\sim-4.17^{\circ}\) about the tilt axis \(\hat{\mathbf{z}}\parallel\mathbf{z}_{\text{cc}}=[1\overline{1}0]_{\text{sc}}\) to the Cu material in the natural state. In addition, the lower material must be rotated through a tilt angle \(\theta_{\text{Nb}}\sim 5.55^{\circ}\) about the tilt axis \(\hat{\mathbf{z}}\parallel\mathbf{z}_{\text{loc}}=[100]_{\text{sc}}\) to form the Nb material. Thus, the net rotation angle is \(\sim 9.72^{\circ}\) about \(\hat{\mathbf{z}}\), as discussed in Ref. [107]. This result can be shown by computing the polar decomposition of eq. (3.47) such that \(\mathbf{F}_{\text{Co}\to\text{Nb}}=\mathbf{R}(\sim 9.72^{\circ},[1\overline{1}0]_{\text{sc}}) \cdot\mathbf{B}\), i.e. \[\mathbf{R}=\left[\begin{array}{ccc}0.992799&-0.007201&0.119573\\ -0.007201&0.992799&0.119573\\ -0.119573&-0.119573&0.985599\end{array}\right]\,\ \ \text{and},\ \ \mathbf{B}=\left[\begin{array}{ccc}1.291296&0&0\\ 0&1.291296&0\\ 0&0&0.913084\end{array}\right]\, \tag{3.55}\] with \(B_{11}=B_{22}=\sqrt{2}/\lambda\), \(B_{33}=1/\lambda\) and the lattice parameter ratio \(\lambda=a_{\text{Co}}/a_{\text{Nb}}\). In eqs. (3.55), the matrix \(\mathbf{R}\) corresponds to a rigid-body rotation matrix of angle \(\sim 9.72^{\circ}\) about \([1\overline{1}0]_{\text{sc}}\) and \(\mathbf{B}\) is the Bain strain matrix [16, 297]. The compression axis for the Bain strain is \([1\overline{1}0]_{\text{sc}}\parallel\mathbf{z}\), because \(B_{33}<1\). Table 3.4 summarizes the main results of unequal partitioning of elastic strains and tilt rotations between the adjacent materials of Cu/Nb, Ag/V and Cu/Mo systems in the NW OR. #### 3.4.3 Spurious fields from incorrect reference states As indicated in Table 3.3, the correct dislocation Burgers vectors for the Cu/Nb interface in the NW OR differ from what they would have been had the fcc crystal (Cu) been selected as the coherent reference state. Their directions differ by \(\sim 2.11^{\circ}\), which affect the character of the interface dislocations. The magnitudes of the Burgers vectors in the fcc crystal and the correct reference state also differ, with \(|b_{j}^{\text{sc}}|:|b_{j}^{\text{ref}}|=0.90\) Figure 3.13: Dependence of the total far-field strain component \({}_{\text{co}}\mathbf{\hat{\mathbf{\alpha}}}_{33}^{\text{so}}\) on \(\delta\) in the fcc material for the Cu/Nb, Ag/V and Cu/Mo heterophase interfaces. 
The vertical dotted line shows the \(\delta\) under the assumption that both materials at the interface have the same stiffness. The consequences of these deviations in character and magnitude may be seen in Fig. (3.14): a residual stress state in Cu persists with \(\varsigma_{\omega}\phi_{33}^{\infty}=-20.01\) GPa, corresponding to a residual strain state \(\varsigma_{\omega}\phi_{33}^{\infty}=-0.10\), as shown in Fig. (3.13). A residual stress field exists in Nb as well, with \(\varsigma_{\omega}\phi_{33}^{\infty}=16.67\) GPa. Figure (3.14) illustrates the variations of the spurious stress field component \(\phi_{33}^{\infty}\) in the neighboring materials as a function of \(\delta\). This elastic field arises when an incorrect reference state is selected. To emphasize the need for accounting for the unequal partitioning of elastic distortions, the coherency strain matrices is recomputed under the assumption that both sides of the interface have the same stiffness (i.e. homogeneous elasticity problem), equal to that of Cu, but with their natural (unequal) lattice parameters, as in the original calculation for the Cu/Nb interface. The results are in agreement with the well-known approximate calculation for equally partitioned strains due to simple geometrical considerations [125], i.e. \[\varsigma_{\omega}\tilde{\epsilon}_{\rm c}^{\rm iso}=-\varsigma_{\rm N}\tilde {\epsilon}_{\rm c}^{\rm iso}=\left[\begin{array}{ccc}0.026451&0.085660&0\\ 0.085660&0.058545&0\\ 0&0&0.127132\end{array}\right]\,, \tag{3.56}\] with \(\varsigma_{\omega}\tilde{\epsilon}_{33}^{\rm iso}=\varsigma_{\omega}\tilde{ \epsilon}_{33}^{\rm iso}=(a_{\rm Nb}-a_{\rm Cu}/\sqrt{2})/(a_{\rm Nb}+a_{\rm Cu }/\sqrt{2})\) and a net rotation vector \(\tilde{\mathbf{\omega}}^{\rm iso}=-2\times 0.085660\)\(\mathbf{\hat{z}}\), corresponding to equipartitioning of rotations with tilt angles \(-\vartheta_{\rm Cu}=\vartheta_{\rm Nb}\sim 4.91^{\circ}\). In the nomenclature given by eqs. (3.48), the homogeneous anisotropic (or isotropic) case is associated with \(\delta=0.5\), as depicted by the vertical dotted lines in Figs. (3.13) and (3.14). The vertical dotted line in Fig. (3.14) shows a (non-zero) excess far-field stress state with \(\varsigma_{\omega}\phi_{33}^{\infty}=3.69\) GPa in Cu and \(\varsigma_{\omega}\phi_{33}^{\infty}=-3.09\) GPa in Nb in the NW Cu/Nb interface or \(\varsigma_{\omega}\hat{\sigma}_{33}^{\infty}=-7.36\) GPa and \(\varsigma_{\omega}\hat{\sigma}_{33}^{\infty}=19.08\) GPa in the NW Cu/Mo interface. Thus, even if the choice of equipartitioning of strains and (tilt) rotations is better than selecting the fcc material as the reference state, a spurious far-field stress field still remains. As a consequence, the associated dislocation structures for the homogeneous anisotropic (or isotropic) elasticity case of the Cu/Nb bicrystal are designated as non-equilibrium structures. #### Orientations differing from the Nishiyama-Wassermann relations Another commonly studied misorientation of interfaces between close-packed planes of neighboring \(\{111\}\) fcc and \(\{110\}\) bcc solids is the KS OR [157]. In the KS OR, one of the \(\langle 110\rangle\) directions in a fcc \(\{111\}\) plane lies parallel to one of the \(\langle 111\rangle\) directions in a bcc \(\{110\}\) plane. A schematic representation of a Cu/Nb interface in the KS OR is shown in Fig. (3.15a), where the bcc atoms have been rotated by 5.26\({}^{\circ}\) from their positions in the NW OR. 
The geometrical characteristics (line directions and spacings) of dislocation structures in the KS OR for the three cases are given in Table 3.5 and depicted in Fig. (3.15c). \begin{table} \begin{tabular}{|l|c c c|c c|} \hline Systems & & strains & tilt rotations \({}^{\circ}\) & & \\ \(\delta\) & & \(\varsigma_{\omega}\phi_{33}^{\infty}\) & \(\varsigma_{\omega}\phi_{33}^{\infty}\) & \(\phi_{\rm fcc}\) & \(\phi_{\rm bcc}\) \\ Cu/Nb & 0.429103 & 0.107173 & \(-\)0.142588 & \(-\)4.17 & 5.55 \\ Ag/V & 0.623359 & 0.031076 & \(-\)0.018777 & \(-\)6.03 & 3.68 \\ Cu/Mo & 0.701109 & 0.152295 & \(-\)0.064925 & \(-\)6.91 & 2.88 \\ \hline \end{tabular} \end{table} Table 3.4: Partitioning of strains and rotations for various fcc/bcc bicrystals. Figure 3.14: Dependence of the total far-field stress component \(\hat{\sigma}_{33}^{\infty}\) on \(\delta\) in the fcc and bcc materials for the Cu/Nb, Ag/V and Cu/Mo interfaces. To treat the KS OR and other ORs related to the NW by an in-plane twist, the rigid-body rotation matrix \(\mathbf{R}\left(\theta\right)\) that rotates all bcc atoms in the natural state is introduced with respect to the fixed fcc atoms by angle \(\theta\) about the interface normal \(\mathbf{n}\). The NW OR corresponds to \(\theta=0^{\circ}\). The KS OR differs from the original NW OR by a twist rotation of angle \(\theta\sim 5.26^{\circ}\) about the interface normal axis \(\mathbf{n}\). To describe the relation between the natural and reference states for fcc/bcc in the in-plane twisted ORs, \(\approx_{\mathrm{Cu}}\mathbf{F}^{-1}\) and \(\approx_{\mathrm{bc}}\mathbf{F}^{-1}\) in eq. (3.2) are replaced by \(\approx_{\mathrm{Cu}}\mathbf{R}\left(\kappa\right)\approx_{\mathrm{FN}}^{-1}\) and \(\approx_{\mathrm{bc}}\mathbf{R}\left(\kappa\right)\approx_{\mathrm{FN}}^{-1}\), where \(\kappa\) is a dimensionless parameter that varies from \(0\) to \(1\), such that \(\mathbf{R}\left(\kappa\right)\) is the rotation matrix that continuously adjusts the reference state in the KS OR from the one determined in the NW OR. This rotation matrix is expressed in the fcc (\(\pi_{\mathrm{tc}}\), \(\mathbf{y}_{\mathrm{tscr}}\), \(\mathbf{z}_{\mathrm{tc}}\)) and bcc (\(\mathbf{z}_{\mathrm{tscr}}\), \(\mathbf{y}_{\mathrm{tscr}}\), \(\mathbf{z}_{\mathrm{bc}}\)) systems by \(\propto_{\mathrm{Cu}}\mathbf{R}\left(\kappa\right)\) and \(\approx_{\mathrm{N}}\mathbf{R}\left(\kappa\right)\) in the Cu/Nb bicrystal, respectively. Equipartitioning of twist between the adjacent crystals occurs when \(\kappa=0.5\)[22, 125]. The condition that determines \(\kappa\) is that the far-field rotations produced by the interface dislocations must be in accordance with the prescribed twist misorientation. The \(\kappa\) value that satisfies this condition for Cu/Nb in the KS OR is \(\kappa=0.570897\), yielding unequal partitioning of the twist rotations \(\theta_{\mathrm{Cu}}\sim 3.20^{\circ}\) and \(\theta_{\mathrm{Nb}}\sim-2.06^{\circ}\). The correct Burgers vectors associated with this reference state are illustrated in Fig. (3.15b). If the approximation of equipartitioning of distortions is considered, i.e. \(\kappa=0.5\), the partitioning of rotations gives rise to \(\theta_{\mathrm{Cu}}=\theta_{\mathrm{Nb}}=2.63^{\circ}\), such that the dislocation characters differ by \(\sim 0.57^{\circ}\) from the results obtained with the unequally partitioned distortions. 
This difference is not large because \(\theta\sim 5.26^{\circ}\) is small, but the elastic (short- and long-range) fields may be significantly affected by deviations associated with larger twist rotations [125]. Figure 3.15: Similar illustration as in Fig. (3.12), but for a Cu/Nb interface in the KS OR. Contour values (from the center of the patterns to the dislocation lines): \(\{0,0.2,0.4,0.6,1.0,1.4,2.8,4.8\}\) J.m\({}^{-2}\). \begin{table} \begin{tabular}{|l|c c|c c|c c|} \hline \multicolumn{6}{|c|}{Dislocation structures in KS Cu/Nb} \\ \multicolumn{6}{|c|}{* solutions by selecting the proper reference Burgers vectors} \\ Cases & \(d_{1}\) (nm) & \(d_{2}\) (nm) & \(\phi^{\circ}\) & \(\phi_{1}\) & \(\phi_{2}\) & \(\phi_{2}\) \\ \(c1:\{\{\mathbf{b}_{1}^{\mathrm{st}},\mathbf{b}_{2}^{\mathrm{st}}\}\}\) & 0.9073 & 1.2394 & 22.04 & 21.06 & 65.00 \\ \(c2:\{\{\mathbf{b}_{1}^{\mathrm{st}},\mathbf{b}_{3}^{\mathrm{st}}\}\}\) & 2.1457 & 1.2394 & 62.54 & 61.57 & 57.02 \\ \(c3:\{\{\mathbf{b}_{2}^{\mathrm{st}},\mathbf{b}_{3}^{\mathrm{st}}\}\}\) & 2.1457 & 0.9073 & 40.51 & 2.45 & 79.05 \\ \hline \end{tabular} \end{table} Table 3.5: Dislocation structures associated with Cu/Nb in the KS OR. See the caption of Table 3.3 for definitions of notation. #### Short-range elastic fields Although the far-field strains vanish when the correct reference state for ORs differing from the NW by an in-plane twist is used, the dislocation structures depicted in Figs. (3.12b) and (3.15b) nevertheless generate non-zero short-range strains and stresses. For instance, Fig. (3.16) plots stress components \(\sigma_{21}\) and \(\sigma_{22}\) for set 1 only and for both sets of dislocations of \(c1\) for the Cu/Nb interface in the NW OR, as a function of \(x^{\prime}\) (\(x^{\prime}\perp\xi_{1}\)) and \(y\) (\(\hat{y}\parallel\mathbf{n}\)), with \(z=0\). Negative values (compression) are plotted in light grey and the positive values (extension) in dark grey. The thick black lines show the locations where the stresses are equal to zero. The fields are asymmetric due to the material elastic anisotropy and the characters of the dislocation arrays. Using these short-range fields at the interface, i.e. \(y=0\), the local self- and interaction energy densities are computed as a function of \(x\) and \(z\), as shown in Figs. (3.12c) and (3.15c) for all potential solutions predicted by the Frank-Bilby equation in the Cu/Nb NW and KS ORs, respectively. The unique solution of the Frank-Bilby equation is predicted by integrating the strain energy densities over each candidate solution and choosing the dislocation pattern with lowest elastic energy [250]. It is illustrated in the next section 3.4.6 that the present formalism predicts that \(c3\) is in near perfect quantitative agreement with atomistic simulations for \(\theta>1^{\circ}\). For instance, both approaches predict that Cu/Nb interface energy is minimized at \(\theta=2^{\circ}\). The insets of Fig. (3.17) illustrates a qualitative comparison between the elasticity and atomistic calculations. Using the minimum strain energy criterion for finding the likeliest dislocation structures, Fig. (3.17) plots the geometrical characteristics in terms of dislocation spacings, \(d_{i}\) (in black), and characters, \(\phi_{i}\) (light grey), for both sets of dislocations as a function of \(\theta\) (between the NW and KS ORs). The geometry (i.e. dislocation spacing and character) of set 2 does not vary significantly as a function of \(\theta\). 
In particular, the low spacing between misfit dislocations of set 2 is \(d_{2}\sim 1\) nm and is almost perfectly edge for \(\theta=2^{\circ}\). On the other hand, the dislocation spacing and character of set 1 change markedly with \(\theta\), e.g. from mixed dislocation character to almost perfectly screw character, and the dislocation spacing decreases almost by a factor 2. Set 1 is almost perfectly screw for \(\theta=4.75^{\circ}\). The vertical line in Fig. (3.17) shows the lowest interface energy reported in Ref. [250] with the corresponding geometrical characteristics, i.e. dislocation spacings and characters. Surprisingly, this interface does not correspond to the interface with the largest dislocation spacings or nearly perfectly screw dislocation characters, contrary to what may be expected based on the theory of dislocations in uniform isotropic solids [122]. However, the approach predicts a dislocation structure with \(d_{1}=3.5856\) nm, \(\phi_{1}=24.37^{\circ}\), \(d_{2}=1.0426\) nm, \(\phi_{2}=89.61^{\circ}\), which is in agreement with the atomistic calculations [250]. Figure 3.16: Contour plots of short-range stress component \(\sigma_{21}\) and \(\sigma_{22}\) for the Cu/Nb interface in the NW OR of \(c1\), related to (a) the set 1, \(\perp\), only and (b) both sets, \(\perp\) and \(\perp\), of interface dislocations. Contours with negative values (compression) are plotted in light gray while positive values (extension) are shown in dark gray. The thick black lines show the locations where stresses are zero. #### Comparison with atomistic simulations The present approach to interface design is to construct a mesoscale (as opposed to atomic-level) model that predicts misfit dislocation patterns with accuracy comparable to atomistic simulations, but at a fraction of the cost. The model is a reduced order model because it replaces the millions of variables associated with atomic positions with \(\leq 15\) variables needed to describe misfit dislocations. The misfit dislocations are viewed as Volterra dislocations that have been inserted into the coherent reference state, suggesting that the total interface energies \(\gamma\) be expressed as \[\gamma=\gamma_{\rm e}\left(r_{0}\right)+\gamma_{\rm core}+\gamma_{\rm relax}+\dots\,. \tag{3.57}\] with \(\gamma_{\rm e}\) the elastic strain energy due to misfit dislocations from eq. (3.30), \(\gamma_{\rm core}\) the core energy, \(\gamma_{\rm relax}\) the energy part due to relaxations of the misfit dislocation network, and perhaps additional terms that have not yet been recognized. For the present purposes, it is not necessary to calculate the absolute value of \(\gamma\), but rather only differences in \(\gamma\) between the candidate solutions of the Frank-Bilby equation. The outputs of the elasticity-based model are compared with atomistic calculations, which provide an opportunity for rigorous validation of the elasticity theory of dislocations. They are also convenient for atomistic simulations because embedded atom method potentials are available for several fcc/bcc binaries. The elasticity-based model is validated against the interface compositions: Cu/Nb [71], Ag/V [283], Cu/Fe [177], and Cu/Mo [104]. These choices fix the elastic constants, crystal structures, and lattice parameters of the adjoining constituents. 
Because attention is restricted to interfaces along fcc \(\langle 111\rangle\) and bcc \(\langle 110\rangle\) planes, only one crystallographic DoF remains to be specified: the twist angle \(\theta\) describing the relative rotation of the crystals parallel to the interface plane. The \(\theta\) is measured with respect to the NW OR, where a bcc \(\langle 100\rangle\) direction is parallel to a fcc \(\langle 110\rangle\) direction, such that \(\theta=\pi/3-\cos^{-1}(1/\sqrt{3})\sim 5.26^{\circ}\) yields the KS OR. Due to the symmetry of the interface planes, all crystallographically distinct interfaces fall within \(0^{\circ}\leq\theta\leq 15^{\circ}\). However, the analysis limited to \(0^{\circ}\leq\theta\leq 10^{\circ}\) because for greater twists, misfit dislocations are too closely spaced to characterize reliably in atomic models. For any composition and \(\theta\), the Frank-Bilby equation has three distinct candidate solutions, as illustrated in Fig. (3.15b), which corresponds to one of three combinations of interfacial Burgers vectors, as described in the previous sections. The first candidate, termed "case 1" (\(\equiv c1\)), uses Burgers vectors \(\mathbf{b}_{1}\) and \(\mathbf{b}_{2}\). "Case Figure 3.17: Dislocation spacings and characters predicted by the Frank-Bilby equation for both sets of dislocations in the Cu/Nb interface as a function of \(\theta\) (from the NW, i.e. \(\theta=0^{\circ}\) to the KS, i.e. \(\theta\sim 5.26^{\circ}\), ORs). The red line corresponds to the lowest energy interface for \(\theta=2^{\circ}\), reported in Ref. [250]. In insets: comparison of the dislocation geometries in the minimum energy state computed by the elasticity and atomistic approaches. 2" (\(\equiv c2\)) and "case 3" (\(\equiv c3\)) use Burgers vectors \(\mathbf{b}_{1}\), \(\mathbf{b}_{3}\), and \(\mathbf{b}_{2}\), \(\mathbf{b}_{3}\), respectively. Using the elasticity-based model, \(\gamma_{\rm e}\) of all three cases is computed for each composition and \(\theta\) of interest. For all interfaces, the atomic-scale models are also constructed by joining cylindrical fcc and bcc blocks following the required interface crystallography. The models are large enough to contain a representative area of the misfit dislocation pattern and to avoid elastic images from free surfaces. Figure (3.18a) compares \(\gamma_{\rm e}\) from the elasticity-based model with \(\gamma\) from atomistic simulations for Cu/Nb interfaces. Because the relative energies of the three cases are the key quantities for comparison, both the elasticity-based model and atomistic data are shifted so that their energy minima occur at 0 J/m\({}^{2}\). The elasticity-based model predicts that case 3 has lowest \(\gamma_{\rm e}\) for all \(\theta\). Furthermore, \(\gamma_{\rm e}\) for case 3 is in near perfect quantitative agreement with \(\gamma\) for \(\theta>1^{\circ}\). Figure (3.18b) shows a similar comparison for Ag/V interfaces. Here, the elasticity-based model predicts that case 1 has lowest \(\gamma_{\rm e}\) for all \(\theta\) outside \(4.25^{\circ}<\theta<5.25^{\circ}\), where \(\gamma_{\rm e}\) is lowest for case 2. \(\gamma_{\rm e}\) and \(\gamma\) are in qualitative agreement over the entire twist angle range and in quantitative agreement for \(\theta>5^{\circ}\). As described in the Supplementary Note from Ref. [250], it is found comparable agreement between the elasticity-based model and atomistic interface energies for the remaining two compositions. 
Agreement between \(\gamma_{\rm e}\) and \(\gamma\) is not sufficient to validate the present formalism. For that, it must be determined whether the lowest energy cases predicted by the elasticity-based model match the misfit dislocation patterns in atomistic simulations. Each of the three Frank-Bilby solutions predicts a different misfit dislocation pattern and therefore also a different disregistry. The present goal is to compare the disregistries of all three cases with that found in atomistic simulations. The model is validated if the case with lowest \(\gamma_{\rm e}\) has the best match with the atomistic disregistry. As shown in Figs. (3.18a) and (b), and detailed in Ref. [250], the disregistry analysis is in agreement with the elastic predictions for all Cu/Nb and Ag/V interfaces (circle filled with light grey) except Cu/Nb at \(\theta=0^{\circ}\). The disagreement is attributed to the reconstruction of the misfit dislocation network that is known to occur at that interface [278], which can be treated by further extensions from section 3.6. One further case of disagreement where dislocation network reconstruction occurs is found for Cu/Mo at \(\theta=0^{\circ}\) (see Supplementary Note). However, the agreement between the elasticity-based model and the atomistic models is excellent, overall. The general approach may be compared with several ad hoc parameters proposed previously to deter Figure 3.18: Interface energies computed as a function of \(\theta\) using the elasticity-based model (designated by ROMM as “Reduced Order Mesoscale Model”) and atomistic modeling for (a) Cu/Nb and (b) Ag/V. Filled circles indicate atomic models whose disregistry was analyzed. The ringed numbers next to them state the case that best matches the atomic disregistry. Ad hoc parameters \(P\), \(Q\), and \(R\) for (c) Cu/Nb and (d) Ag/V. mine which of the cases predicted by the Frank-Bilby equation is likeliest. Bollmann suggested that the likeliest case minimizes [31] \[P=\sum_{i}\frac{b_{i}^{2}}{d_{i}^{2}}\,, \tag{3.58}\] which is analogous to the Frank rule for predicting dislocation reactions [122]. Similarly, Ecob and Ralph propose two parameters [80] to distinguish between cases, defined by \(Q\) and \(R\), as follows \[Q=\sum_{i}\sum_{j}\frac{b_{i}b_{j}}{d_{i}d_{j}}\,,\,\,\,\,\text{and},\,\,\,\,R= \sum_{i}\sum_{j}\sqrt{\frac{b_{i}b_{j}}{d_{i}d_{j}}}\,, \tag{3.59}\] using geometrical arguments for the energy of semicoherent interfaces. Figures (3.18c) and (d) plot these parameters for Cu/Nb and Ag/V interfaces. Comparing with Figs. (3.18a) and (b), none of them predicts the misfit dislocation patterns seen in atomistic models. For example, for Cu/Nb, all three parameters favor case 2, while the true interface structure is case 3. The elasticity-based model is therefore viewed as superior to these parameters and as validated for the purpose of computational design of patterned interfaces. ### Application to the sink strength of semicoherent interfaces Clean, safe, and economical nuclear energy requires new materials capable of withstanding severe radiation damage. One way of removing radiation-induced defects is to provide a high density of sinks, such as GBs or heterophase interfaces [230] that continually absorb defects as they are created. This motivation underlies ongoing exploration of the radiation response of nanocomposite materials [72, 56], due to the large total interface area per unit volume they contain. 
These investigations have demonstrated wide variations in the sink behavior of different interfaces. Some easily absorb defects, preventing damage in the neighbouring material, but become damaged themselves [112]. Others are poor sinks for isolated defects, but excellent sinks for defect clusters [73]. The sink behavior of yet others changes with radiation dose [14, 15]. This wide variety of radiation responses prompts two questions:

* Are some specific interfaces best suited to mitigate radiation damage?
* Is it possible to identify them without resorting to resource-intensive irradiation experiments?

Here it is demonstrated that elastic interactions between point defects and semicoherent interfaces lead to a marked enhancement in interface sink strength. The conclusions stem from simulations that integrate first principles, object kinetic Monte Carlo, and anisotropic elasticity calculations. Surprisingly, the enhancement in sink strength is not due primarily to increased thermodynamic driving forces [147, 137], but rather to reduced defect migration barriers, which induce a preferential drift of defects towards interfaces. The sink strength enhancement is highly sensitive to the detailed character of interfacial stresses, suggesting that "super-sink" interfaces may be designed by optimizing interface stress fields. These findings motivate a computational search for "super-sink" interfaces: ones that optimally attract, absorb, and annihilate radiation-induced defects.

#### Computational multi-model strategy

To answer the aforementioned questions, an improved computational method for rapidly assessing the vacancy and interstitial sink strength of semicoherent interfaces is proposed. This method builds on the interfacial dislocation-based model for elastic fields of heterophase bicrystals, previously described. Such interfaces are of particular interest because many of them contain a high density of defect trapping sites [70, 223]. Moreover, semicoherent interfaces generate elastic fields that interact directly with radiation-induced defects [256]. These elastic fields have an unexpectedly large influence on interface sink strength, as quantified by the following computational multi-model approach.

**Elastic dipole tensor calculation**

Defect **P**-tensors are calculated using VASP [152], a plane wave-based, first principles density functional theory code. An fcc supercell containing \(256\pm 1\) atoms (\(+1\) and \(-1\) for interstitial and vacancy, respectively) is used. Calculations are also performed with LAMMPS [208] using classical embedded atom method potentials for Ag [87] and Cu [189] to study the convergence of the elastic dipole tensors up to supercell sizes of 2048 atoms. The discrepancy in the elastic **P**-tensor components between the 256-atom and the 2048-atom supercells is found to be lower than 4%. This supercell size ensures the convergence of defect formation energies to within a few meV, as detailed in the Supplementary Note from Ref. [256]. The 256-atom density functional theory simulations are therefore viewed as well converged with respect to model size. A \(3\times 3\times 3\) shifted Monkhorst-Pack \(K\)-point grid mesh, a Hermite-Gaussian broadening of 0.25 eV [187], and a plane wave cutoff energy of 400 eV are used. The change in the elastic dipole tensors is less than 0.5% compared to tighter settings. The Perdew-Burke-Ernzerhof [207] exchange-correlation functional is used within the projector-augmented-wave approach [153].
The structures are internally relaxed with a force convergence criterion of \(10^{-3}\) eV/Å. The climbing image nudged elastic band method [118] is employed to find the saddle points for defect migration.

#### Object kinetic Monte Carlo algorithm

The defect diffusion is investigated using an object kinetic Monte Carlo code with a residence time algorithm to advance the simulation clock [39, 102]. At time \(t\), the time step is chosen according to \(\Delta t=-(\ln r_{1})/w_{\text{tot}}\), where \(r_{1}\) is a random number with \(r_{1}\in]0,1]\) and \(w_{\text{tot}}\) is the sum of the frequencies of all events that may occur at \(t\), i.e. \(w_{\text{tot}}=\sum_{i}^{N}w_{i}\). The chosen event \(j\) is such that \(\sum_{i}^{j-1}w_{i}<r_{2}w_{\text{tot}}\leq\sum_{i}^{j}w_{i}\), where \(r_{2}\) is another random number with \(r_{2}\in]0,1]\). Three kinds of events are considered in the simulations: the jump of a point defect from one stable point to a neighbouring one, the absorption of a defect by an interface, and the creation of a new point defect through irradiation. Jump frequencies are given by \(w_{i}=\nu\exp\left(-\Delta E_{i}/\left(kT\right)\right)\), where \(\nu\) is an attempt frequency and \(\Delta E_{i}=E_{i}^{\text{sad}}-E_{i}^{\text{sta}}\) is the energy difference between the saddle position and the initial stable position of the jump considered. The stable point energy is \[E_{i}^{\text{sta}}=-\sum_{k,l}P_{kl,i}^{\text{sta}}\,e_{kl}^{\text{int}}(r_{i}^{\text{sta}})\,, \tag{3.60}\] while the saddle point energy is \[E_{i}^{\text{sad}}=E^{\text{m}}-\sum_{k,l}P_{kl,i}^{\text{sad}}\,e_{kl}^{\text{int}}(r_{i}^{\text{sad}})\,, \tag{3.61}\] with \(E^{\text{m}}\) the migration energy in the absence of elastic interactions. Here, \(\mathbf{P}^{\text{sta}}\) and \(\mathbf{P}^{\text{sad}}\) are the defect \(\mathbf{P}\)-tensors in the ground state and saddle point configurations, respectively. For simplicity, the position of the saddle point \(r_{i}^{\text{sad}}\) is taken mid-way between the two stable points explored by the jump [239]. The defect is considered to have been absorbed by an interface if it reaches the nearest atomic row to the interface. It is then simply removed from the simulation. This absorption condition is used to obtain a first estimate of sink strength, without taking into account the diffusion of point defects along interfaces or their possible reemission. The irradiation rate is fixed at the beginning of each simulation to keep the average number of point defects equal to 200 in the material where the measurements are performed, if no elastic interactions are considered. The actual number of point defects in the system, averaged over the simulation time when steady state is reached, constitutes the basis for the sink strength calculation. The concentration of defects is recorded every \(10^{4}\) iterations, after the concentration has become stationary. At the end of the simulation, an estimate of the average defect concentration \(\overline{C}\) is computed by averaging over the values \(C_{j}\), with \(j=1,\dots,n\), as follows \[\overline{C}_{n}=\frac{1}{n}\sum_{j=1}^{n}C_{j}\,. \tag{3.62}\] The final time is adjusted to obtain sufficient accuracy on \(\overline{C}\) and thus on the associated sink strength \(k^{2}\), in accordance with the mean field rate theory formalism [303].
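The event selection and clock update described above can be summarized in a short sketch. The snippet below is a minimal illustration of the residence-time step and of the elastically modified jump frequencies of eqs. (3.60)-(3.61); the function names, the attempt frequency value, and the array layout are assumptions made for illustration only, not the original code.

```python
import numpy as np

rng = np.random.default_rng()

def elastic_jump_barrier(P_sta, eps_sta, P_sad, eps_sad, E_m):
    """Barrier of one jump, eqs. (3.60)-(3.61):
    E_sta = -P_sta : eps_int(r_sta), E_sad = E_m - P_sad : eps_int(r_sad)."""
    E_sta = -np.tensordot(P_sta, eps_sta)      # double contraction P_kl e_kl
    E_sad = E_m - np.tensordot(P_sad, eps_sad)
    return E_sad - E_sta

def jump_frequency(dE, kT, nu=1.0e13):
    """w_i = nu * exp(-dE / kT); the attempt frequency value is illustrative."""
    return nu * np.exp(-dE / kT)

def residence_time_step(rates):
    """One step of the residence-time algorithm: pick event j with probability
    w_j / w_tot and advance the clock by dt = -ln(r1) / w_tot."""
    w = np.asarray(rates, dtype=float)
    w_tot = w.sum()
    dt = -np.log(1.0 - rng.random()) / w_tot   # r1 uniform in (0, 1]
    j = int(np.searchsorted(np.cumsum(w), rng.random() * w_tot))
    return min(j, len(w) - 1), dt
```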
For this purpose, the estimation of the error on the concentration is given by the standard error of the mean value, i.e. \[\delta\overline{C}_{n}=\frac{\sigma_{n}}{\sqrt{n}}\,, \tag{3.63}\] where \[\sigma_{n}^{2}=\frac{1}{n-1}\sum_{j=1}^{n}\left(C_{j}-\overline{C}_{n}\right)^{2}\,. \tag{3.64}\] The final time for each system is chosen so that the relative error on \(\overline{C}\) and \(k^{2}\) is less than 0.5%.

#### Kinetic Monte Carlo simulations with elastic interactions

Modelling the removal of radiation-induced point defects at sinks is a challenging task: on one hand, the variety and complexity of defect behaviors call for the flexibility of atomistic modelling. On the other, the relatively slow, thermally activated mechanisms of defect motion require longer simulation times than may be reached using conventional atomistic techniques, such as molecular dynamics. The object kinetic Monte Carlo (OKMC) method [39, 102, 55, 139] is employed, which is well suited to modeling long-time, thermally activated processes yet is also able to account for the nuances of defect behavior uncovered through atomistic modeling. Figure (3.19) illustrates the setup of the simulations, containing two crystalline layers, A and B, separated by semicoherent interfaces. Periodic boundary conditions are applied in all directions, so each model contains two A-B interfaces. Due to their inherent internal structure, the interfaces create characteristic stress fields in the neighbouring crystalline layers. These stress fields interact with radiation-induced point defects, modifying their diffusion. The interface stress fields are computed using the approach discussed in section 3.2.

Figure 3.19: Schematic illustration of the diffusion of radiation-induced point defects (illustrated by ovals) to interfaces under the influence of interface elastic fields. In general, materials A and B may be any two crystalline solids. In the present work, they are chosen to be either Cu or Ag.

For illustration, two specific interfaces are treated in the present work: a low-angle twist GB on a (001) plane in Ag and a pure misfit (zero misorientation) heterophase interface between (001) planes of Ag and Cu. Figure (3.20a) shows a plan view of the Ag twist GB, where the adjacent GB planes have been rotated by \(\pm\theta/2\) (\(\theta\): twist angle). The boundary plane contains two sets of parallel, pure screw dislocations: one aligned with the \(\mathbf{x}=[110]\) direction and the other with the \(\mathbf{y}=[\bar{1}10]\) direction. For a relative twist angle of \(\theta=7.5^{\circ}\), the spacing between dislocations within each set is \(\sim 2.2\) nm. Figure (3.20b) shows the interface plane of the Ag/Cu pure misfit interface. Similar to the twist boundary in Fig. (3.20a), this interface also contains two sets of parallel dislocations aligned with the \(\mathbf{x}=[110]\) and \(\mathbf{y}=[\bar{1}10]\) directions. Furthermore, the spacing between dislocations in the Ag/Cu interface is the same as in the twist boundary of Fig. (3.20a): \(\sim 2.2\) nm. However, unlike in the twist boundary, both sets of dislocations in the misfit interface are of pure edge type.

Figure 3.20: Planar semicoherent interfaces with identical misfit dislocation arrangements in (a) an Ag twist GB with pure screw dislocations and (b) an Ag/Cu misfit interface with pure edge dislocations.

The two interfaces in Fig. (3.20) have identical dislocation arrangements, but different dislocation characters. Thus, they contain identical dislocation densities, but have differing stress fields.
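As a consistency check on the quoted spacing, note that for a low-angle twist boundary composed of two sets of screw dislocations the spacing follows Frank's formula; assuming an Ag lattice parameter of \(a\approx 0.409\) nm, so that \(b=a/\sqrt{2}\approx 0.289\) nm for a \(\frac{1}{2}\langle 110\rangle\) Burgers vector, \[d=\frac{b}{2\sin\left(\theta/2\right)}\simeq\frac{0.289\ \text{nm}}{2\sin\left(3.75^{\circ}\right)}\approx 2.2\ \text{nm}\,,\] in agreement with the value stated above.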
For instance, all normal stress components for the twist GB are zero throughout the entire bicrystal. This stress field is therefore purely deviatoric. By contrast, due to symmetry, the shear stress \(\sigma_{12}\) is everywhere zero for the Ag/Cu interface, but all of its other stress components are in general non-zero. In particular, this interface generates significant hydrostatic stresses. These differences have important implications for interface-defect interactions and defect migration pathways. The force dipole moment approximation is used to compute elastic interaction energies between point defects and interfaces [144, 229, 69]: \[E^{\text{PD/int}}=-P_{ij}\,\varepsilon_{ij}^{\text{int}}\left(x,y,z\right)\,. \tag{3.65}\] Here, \(\varepsilon_{ij}^{\text{int}}\left(x,y,z\right)=E_{ij}\left(x,y,z\right)\) are the short-range components of the previously calculated interface strain field, given by eq. (3.12a). On the other hand, \(P_{ij}\) are the components of the elastic dipole tensor (the "\(\mathbf{P}\)-tensor"), which describes the elastic fields generated by a point defect. \(E^{\text{PD/int}}\) values are used to compute stress-dependent energy barriers for defect migration at each location in the simulation cell. A similar approach has been adopted in previous OKMC studies to describe point defect interactions with dislocations [231, 239]. Density functional theory is used to calculate \(\mathbf{P}\)-tensors for two types of point defects in Ag and Cu: vacancies and the self-interstitials of lowest formation energy, namely \(\left\langle 100\right\rangle\)-split dumbbells [82]. The \(\mathbf{P}\)-tensor values for these defects are obtained in their ground states as well as at their saddle point configurations during migration (found using the climbing image nudged elastic band method [118]). Starting from a simulation cell containing a perfect, stress-free crystal, the point defect of interest is inserted at the desired location and the atom positions are relaxed while keeping the simulation cell shape fixed. The point defect induces stresses, \(\sigma_{ij}\), in the simulation cell. They are related to the defect \(\mathbf{P}\)-tensor through \[P_{ij}=V\,\sigma_{ij}=P_{ij}^{\text{d}}+p^{\text{h}}\,\delta_{ij}\,, \tag{3.66}\] where \(V\) is the simulation cell volume. \(P_{ij}^{\text{d}}\) and \(p^{\text{h}}\) are the deviatoric and hydrostatic (isotropic) \(\mathbf{P}\)-tensor components, respectively. The former is associated with a pure shear (no volume change), while the latter is related to isotropic tension (interstitials) or compression (vacancies), which leads to a volume change. Table 3.6 lists the \(\mathbf{P}\)-tensors used in the present study. All of them are expressed in the Nye frame, where the \(\mathbf{X}\)-, \(\mathbf{Y}\)-, and \(\mathbf{Z}\)-axes are aligned with the \([100]\), \([010]\), and \([001]\) Miller index directions, respectively. The form of the \(\mathbf{P}\)-tensor reflects the symmetry of the corresponding defect. Thus, the \(\mathbf{P}\)-tensor for a vacancy in its ground state is isotropic, while that of an interstitial is tetragonal. \(\mathbf{P}\)-tensors for defect orientations other than those given in Table 3.6 may be calculated using coordinate system rotations. The \(\mathbf{P}\)-tensors for \(\left\langle 100\right\rangle\)-split dumbbell self-interstitials and vacancies in Cu agree with experimental data [115, 82, 287].
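As an illustration of eqs. (3.65)-(3.66), the post-processing of a relaxed, fixed-shape supercell reduces to a few lines. The snippet below is a schematic sketch; the function names and calling convention are assumptions for illustration, not the actual workflow of Ref. [256].

```python
import numpy as np

def dipole_tensor(cell_stress, cell_volume):
    """P_ij = V * sigma_ij (eq. 3.66) from the residual stress of a relaxed,
    fixed-shape supercell containing one defect, split into deviatoric and
    hydrostatic parts."""
    P = cell_volume * np.asarray(cell_stress, dtype=float)
    p_h = np.trace(P) / 3.0            # hydrostatic component
    P_dev = P - p_h * np.eye(3)        # deviatoric component
    return P, P_dev, p_h

def defect_interface_energy(P, eps_int):
    """Dipole-approximation interaction energy, E = -P_ij eps_ij^int (eq. 3.65)."""
    return -np.tensordot(P, eps_int)
```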
Furthermore, the present calculations of relaxation volumes of a vacancy in Ag and Cu are in very good agreement with recent ab-initio predictions [194]. Figure (3.21) shows the distribution of ground state interstitial and vacancy interaction energies with the Ag twist GB and the Ag/Cu misfit interface. A \(\left\langle 100\right\rangle\)-split dumbbell interstitial may take on three different orientations. Figure (3.21) uses the orientation with lowest \(E^{\text{PD/int}}\). For the Ag twist GB, interstitial interaction energies are negative at all locations, as shown in Fig. (3.21a). Thus, all interstitials in the vicinity of this GB experience a thermodynamic driving force to migrate towards the boundary. The interstitials, however, have nearly isotropic \(\mathbf{P}\)-tensors (see Table 3.6), so their interaction energies with the Ag twist GB are very small. The interaction energy of vacancies with the Ag twist GB is everywhere zero due to the absence of hydrostatic stresses near this interface. However, the anisotropy of the vacancy saddle point configuration leads to non-zero interaction energies of migrating vacancies with the GB. Figure (3.21c) shows the interaction energy between vacancies and the Ag/Cu misfit interface. The spatial variation of this interaction energy is similar to that of the interstitials, but with opposite sign. The OKMC simulations assume a constant, uniform defect creation rate, \(G\). Defects diffuse until they are absorbed by an interface. Only individual interstitials or vacancies are tracked in the simulations: defect reactions, such as clustering or recombination, are not considered. After a certain simulation time, defect distributions reach a steady state, whereupon the defect concentration is computed as a function of position along the \(z\)-direction (normal to the interface plane) based on the time spent by each defect on a given atomic site. #### Effect of elastic interactions on interface sink strength Figure (3.22) shows steady-state vacancy and interstitial concentrations for the two types of interfaces described above for models with 10 nm-thick Ag and Cu layers. In the absence of elastic interactions between defects and interfaces, steady-state defect concentrations may be computed analytically, which are successfully compared with the simulation results. Elastic interactions have a dramatic effect on defect concentration profiles. In all cases shown in Fig. (3.22) except vacancies near Ag/Cu interfaces, there are nearly no defects within \(\sim 2\) nm-wide zones adjacent to the interfaces. By contrast, without elastic interactions, defect concentrations are zero only at the interfaces themselves. Moreover, even though defect-interface elastic interaction energies are negligible beyond \(\sim 2\) nm, the zones depleted of defects near the interfaces have a pronounced effect on defect concentrations throughout the entire layer, markedly reducing the average defect concentration. For the simulations in Fig. (3.22), elastic interactions reduce defect concentrations by about a factor of two even in the middle of the layers. This effect is even more pronounced for thinner layers. For vacancies in Ag/Cu, local traps are responsible for the sharp increase in concentration near the interface. The simulations account for numerous aspects of defect-interface elastic interactions, such as defect anisotropy or differences in defect ground state and saddle point properties. 
To discover which ones are primarily responsible for the defect concentrations shown in Fig. (3.22), some of these characteristics are artificially "switched off" and the OKMC simulations are repeated to see whether doing so changes the steady-state defect concentrations. These calculations demonstrate that the anisotropy of the \(\mathbf{P}\)-tensor in the saddle point configurations is primarily responsible for the reduced defect concentrations in Figs. (3.22a) and (3.22b). The saddle point anisotropy is "switched off" by replacing the saddle point \(\mathbf{P}\)-tensor with \(\mathbf{P}^{\mathrm{sad}}=p_{\mathrm{sad}}^{\mathrm{h}}\,\mathbf{I}\), where \(\mathbf{I}\) is the identity matrix and \(p_{\mathrm{sad}}^{\mathrm{h}}\) is one third of the trace of the true saddle point \(\mathbf{P}\)-tensor. This assumption is tantamount to modelling defects at saddle points as misfitting spherical inclusions in isotropic media. Concentration profiles obtained with this approximation are markedly different from the anisotropic case, as shown in Fig. (3.22). In the case of the Ag twist GB (Figs. (3.22c) and (3.22d)), isotropic saddle points yield the same defect concentrations as when there are no defect-interface interactions at all. Indeed, since the twist interface generates no hydrostatic strain field, only the deviatoric components of defect \(\mathbf{P}\)-tensors may interact with these interfaces. Ground state vacancies have zero deviatoric \(\mathbf{P}\)-tensor components, so the interaction energy with the Ag twist GB vanishes, similar to ground state interstitials with nearly isotropic \(\mathbf{P}\)-tensors (Table 3.6). The same conclusions hold at saddle positions if saddle point anisotropy is "switched off", as described above. Elastic interactions then do not affect migration energies, explaining why defect concentrations are identical to the case without elastic interactions.

Figure 3.21: Elastic interaction energy between (a) an interstitial and the Ag twist GB (\(E^{\mathrm{PD/int}}<-0.002\) eV in the blue isovolume), and between the Ag/Cu misfit interface and (b) an interstitial and (c) a vacancy (\(E^{\mathrm{PD/int}}<-0.06\) eV in the blue isovolume; \(E^{\mathrm{PD/int}}>0.06\) eV in the red; gray contours are locations with zero interaction energy).

For the Ag/Cu interface, concentration profiles computed without saddle point anisotropy lie between the non-interacting and fully anisotropic cases, as shown in Figs. (3.22a) and (3.22b). Vacancy concentrations are only marginally lower than the non-interacting case (Fig. (3.22a)), demonstrating the overriding importance of saddle point anisotropy in their behavior. Interstitial concentrations obtained without saddle anisotropy lie approximately mid-way between the fully anisotropic and non-interacting cases (Fig. (3.22b)), demonstrating that saddle point anisotropy is at least as important to their behavior as are \(p\,\Delta V\) interactions, which are more commonly investigated. Figure (3.23) gives a more detailed view of defect concentrations at different locations in the Ag layer of the Ag/Cu interface and in the Ag twist GB. Close to these interfaces, concentrations vary as a function of location parallel to the interface plane, following the strain field pattern created by the interfaces. Indeed, the strain field creates preferential paths for defect migration, as shown by the gray trajectories in Fig. (3.23). These paths are in general different for interstitials and vacancies.
For both the Ag/Cu interface and the Ag twist GB, vacancies preferentially migrate to the dislocation lines, while interstitials are mostly absorbed between dislocations. This preferential, non-random-walk drift of point defects to specific locations is responsible for the enhanced interface sink strengths. Knowing the steady-state defect concentrations obtained by OKMC, sink strengths are derived for the two interfaces considered above. In the mean field rate theory formalism [41], "sink strengths" quantify the ability of sinks, such as interfaces, to absorb defects. Within this formalism, the evolution equation for the average defect concentration, \(\overline{C}\), follows \[\frac{d\overline{C}}{dt}=G-k^{2}D\overline{C}\,, \tag{3.67}\] where \(G\) is the defect creation rate and \(D\) is the bulk defect diffusivity. The second term on the right hand side is related to the loss of defects at sinks with associated sink strength, \(k^{2}\). At steady state, the sink strength may be computed from the average concentration: \[k^{2}=\frac{G}{D\overline{C}}\,. \tag{3.68}\]

Figure 3.22: Steady-state point defect concentrations as a function of location normal to interface planes. The black vertical lines represent the interface planes, while the continuous gray lines denote the reference case with no elastic interactions, computed analytically. OKMC results for both isotropic (orange) and anisotropic (blue) saddle point configurations are shown. (a) Vacancy and (b) interstitial profiles near Ag/Cu pure misfit interfaces. (c) Vacancy and (d) interstitial profiles near Ag twist GBs. Concentrations are normalized by the average concentration \(\overline{C}\) obtained when no elastic interactions are taken into account.

Using the average of the concentration profile computed for defect removal at interfaces in the absence of elastic interactions, the interface sink strength is analytically found to be \(k^{2}=12/d^{2}\) [47]. When interactions between interfaces and defects are present, the sink strength is numerically determined through eq. (3.68), using the average steady-state concentration obtained by OKMC simulations and the diffusion coefficient without elastic interactions. The resulting vacancy and interstitial sink strengths for both interfaces are shown in Fig. (3.24a\(-\)f) as a function of layer thickness. In all cases, the sink strength increases significantly when elastic interactions are taken into account. This effect is especially pronounced for thinner layers, as defects undergo elastic interactions with interfaces over a larger fraction of the layer. It is particularly strong for interstitials, regardless of the interface type, and for vacancies at the twist interface. These results also confirm the importance of saddle point anisotropy: compared with OKMC simulations that use isotropic saddle-point \(\mathbf{P}\)-tensors, the fully anisotropic treatment yields order-of-magnitude increases in sink strength in some cases. Another quantity of interest for radiation response is the bias factor, \(B\), which expresses the propensity of a given sink to absorb more interstitials than vacancies. It is defined as \[B=\frac{k_{i}^{2}-k_{v}^{2}}{k_{i}^{2}}\,, \tag{3.69}\] where \(k_{v}^{2}\) and \(k_{i}^{2}\) are the sink strengths for vacancies and interstitials, respectively. For example, small interstitial clusters and dislocations exhibit positive bias factors (typically between 0.01 and 0.3 [45, 117]) and thus absorb more interstitials than vacancies.
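The bookkeeping from eqs. (3.67)-(3.69) is straightforward to reproduce. The following lines are a minimal sketch, with illustrative variable names, of how the sink strengths and bias factor are extracted from the OKMC averages:

```python
def sink_strength(G, D, C_mean):
    """Steady-state sink strength, k^2 = G / (D * C_mean), eq. (3.68)."""
    return G / (D * C_mean)

def reference_sink_strength(d):
    """Analytical value without elastic interactions for a layer of thickness d."""
    return 12.0 / d**2

def bias_factor(k2_interstitial, k2_vacancy):
    """B = (k_i^2 - k_v^2) / k_i^2, eq. (3.69)."""
    return (k2_interstitial - k2_vacancy) / k2_interstitial
```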
The preferential absorption of interstitials by biased sinks leads to an excess of remaining vacancies, which cluster and eventually aggregate into voids [181, 45]. Bias factors for the semicoherent interfaces are shown in Fig. (3.24g\(-\)i). Values larger than 0.2 are obtained for the fully anisotropic interaction model in the case of the Ag/Cu interface. Such interfaces would compete for interstitials with dislocations. The presence of two sinks of differing bias magnitude has been given as a possible cause for void swelling suppression in ferritic steels [174]. Interestingly, for the Ag twist GB the bias factor is negative, meaning that these interfaces tend to absorb more vacancies than interstitials. Similar observations have been made in Ref. [232], where the bias factor for single screw dislocations is negative when using anisotropic elasticity theory and zero in the isotropic approximation. Such GBs may therefore deplete excess vacancy concentrations sufficiently to inhibit void nucleation.

Figure 3.23: Preferential migration paths and local concentrations of (a) vacancies and (b) interstitials on the Ag side of the Ag/Cu interface and of (c) vacancies and (d) interstitials in the Ag twist GB. Migration paths are shown as gray lines originating from 1 nm away from the interface. The square grid of black lines represents interface dislocations. Concentrations are plotted in a plane located two atomic distances away from the interface. The concentrations are normalized by \(\overline{C}\): the average concentration when no interactions are considered. Any normalized concentration values higher than 0.015 are shown as equal to 0.015.

### 3.6 Elastic strain relaxation in interfacial dislocation patterns

The interfacial dislocation-based model described in section 3.2 has been extended to investigate the equilibrium relaxed dislocation microstructures with specified constraints on semicoherent interfaces [258, 259]. The present parametric energy-based framework includes surface/interface stress and elasticity effects as additional constitutive relations; the surfaces and interfaces, viewed as infinitely thin membranes in contact with each individual material, give rise to non-classical boundary conditions. The elastic field solutions are used to compute the corresponding strain energy landscapes for planar hexagonal-shaped configurations containing three sets of misfit dislocations with unextended three-fold nodes.

#### General considerations on hexagonal-shaped dislocation patterns

The mechanical dislocation-based problem for determining the elastic strain relaxation of interfacial patterns formed by joining two linear anisotropic elastic materials A and B is described by adopting the specific notations and conventions in Fig. (3.25). In the global coordinate system \((\mathrm{O},\,x_{1}^{\mathrm{or}},\,x_{2}^{\mathrm{or}},\,x_{3}^{\mathrm{or}})\), corresponding to the orientation relations along fixed crystal directions of the system of interest, the semicoherent interface is located at the coordinate \(x_{2}^{\mathrm{or}}=0\), with \(x_{2}^{\mathrm{or}}>0\) for material A, and \(x_{2}^{\mathrm{or}}<0\) for material B. Such directions are not necessarily related to high symmetry directions, so that the anisotropic elastic constants may be displayed in the most general form.
In the present work, the unit vector normal to the interface is \(\mathbf{n}\parallel x_{2}^{\mathrm{or}}\), and a free surface coplanar with the semicoherent interface may be introduced at \(x_{2}^{\mathrm{or}}=h_{\mathrm{A}}\), whereas B is always a semi-infinite linear elastic crystal.

Figure 3.24: Enhancement in sink strength of Ag/Cu interfaces and Ag twist GBs for (a\(-\)c) vacancies (\(k_{v}^{2}\)) and (d\(-\)f) interstitials (\(k_{i}^{2}\)) in a given layer (Ag or Cu), as a function of layer thickness, \(d\). (g\(-\)i) Bias factors of the Ag/Cu interface and the Ag twist GB. The gray line corresponds to the analytical solution when no interaction is present (\(k^{2}=12/d^{2}\)). Orange and blue lines correspond to OKMC calculations without saddle point anisotropy and with the fully anisotropic interaction model, respectively.

The crystallography of all interfaces is completely specified between close-packed planes of neighboring materials, so that both orientation relations of crystals A and B with relative misorientations (tilt and twist) and differing lattice parameters (misfit) are described using the previous concept of reference/natural states, as defined in section 3.2. As an example, the 2.5\({}^{\circ}\) Ta (tantalum) twist boundary is illustrated in Fig. (3.25a). In the reference state, the interface is coherent, but it is not coherent in the natural state, and the atomic structures of interfaces lead to the formation of periodic networks of misfit dislocations that may undergo local relaxations or reconstructions [95]. The closely related quantized Frank-Bilby equation [94, 29, 30] and the O-lattice theory [31] are crystallographic approaches used to describe intrinsic dislocation structures at semicoherent interfaces, which provide the interfacial dislocation geometries in terms of line directions and spacings for one, two, or three independent, planar, and uniformly spaced parallel sets of infinitely long straight dislocations. As illustrated in the previous sections, however, such purely geometrical approaches are not able to characterize local reactions of crossing dislocations to form dislocation segments with different Burgers vectors in mesh networks that are energetically favorable. The extended formalism for predicting the interface dislocation arrays, linking the quantized Frank-Bilby equation and anisotropic elasticity theory under the condition of vanishing far-field stresses, is used to identify the periodicity of the structures with two sets of dislocations from the pre-determined O-lattice vectors \(\mathbf{p}_{1}^{0}\) and \(\mathbf{p}_{2}^{0}\neq\mathbf{p}_{1}^{0}\), as illustrated in Fig. (3.25b). These two vectors characterize the initial lozenge-shaped unit cell of crossing dislocation sets (red points), for which the translations of the unit cell by the basis vectors \(\mathbf{p}_{1}^{0}\) and \(\mathbf{p}_{2}^{0}\) tessellate the entire interface plane. In the following, the superscript \({}^{\text{un}}\) will be used to indicate quantities related to the unrelaxed dislocation configurations, e.g. \(\boldsymbol{\xi}_{1}^{\text{un}}\parallel\mathbf{p}_{2}^{0}\) and \(\boldsymbol{\xi}_{2}^{\text{un}}\parallel\mathbf{p}_{1}^{0}\) correspond to the initial dislocation directions of the two sets that make up the lozenge-shaped patterns, with Burgers vectors \(\mathbf{b}_{1}\) and \(\mathbf{b}_{2}\), respectively, as stated in section 3.2.
Planar energetically favorable interactions may lead to the formation of dislocation junctions with coplanar Burgers vector \(\mathbf{b}_{3}\), i.e. \[\mathbf{b}_{1}+\mathbf{b}_{2}\to\mathbf{b}_{3}\,, \tag{3.70}\] such that the current semicoherent interfaces contain infinite, planar, and periodic dislocation structures with three sets of misfit dislocations. As illustrated in Fig. (3.25b), the third newly formed set (in black) is associated with the junction formation due to the local rearrangements between two initial crossing dislocation arrays, shown by the blue and red dashed lines. The current directions of the three sets of misfit dislocations are denoted by \(\bar{\xi}_{1}\), \(\bar{\xi}_{2}\), and \(\bar{\xi}_{3}\) for which the latter is associated with the direction of the in-plane dislocation junctions. Figure 3.25: Geometry of a hexagonal-shaped dislocation pattern containing three sets of interface dislocations with the associated individual Burgers vectors. (a) The orientation relationships between the adjacent linear materials are defined with respect to the global coordinate system \((\text{O},\,\mathbf{x}_{1}^{\text{or}},\,\mathbf{x}_{2}^{\text{or}},\,\mathbf{x}_{3}^{ \text{or}})\), within which the semicoherent interface is located at \(\mathbf{x}_{2}^{\text{or}}=0\). For illustration, the current intrinsic dislocation structure is associated with a planar \(\{011\}\parallel\mathbf{n}\) twist GB between two bcc crystals with a 2.5\({}^{\circ}\) rotation angle. (b) Anisotropic elasticity calculations are performed in the non-orthogonal \((\text{O},\,\mathbf{x}_{1}^{\text{\prime}}\parallel\mathbf{p}_{1}^{\text{or}},\,\mathbf{ x}_{2}\parallel\mathbf{n},\,\mathbf{x}_{3}^{\text{\prime}}\parallel\mathbf{p}_{2}^{0})\) frame with fixed basis vectors, where \(\mathbf{p}_{1}^{\text{or}}\) and \(\mathbf{p}_{2}^{\text{or}}\neq\mathbf{p}_{1}^{0}\) are the O-lattice vectors that describe the periodicity of the dislocation structures. The fixed red and blue points characterize the initial lozenge-shaped unit cell and the pivot points for elastic strain relaxations, respectively. The grey points are related to the O-lattice points, separated by the networks of interfacial dislocations with three-fold dislocation junction nodes where the conservation law of Burgers vectors is satisfied, e.g. at the specific orange node \(\mathsf{J}_{1}\) that is parametrized by the dimensionless coordinates \((\eta_{1},\eta_{2})\). For convex hexagonal-shaped dislocation configurations, \(\mathsf{J}_{1}\) may move within the shaded triangular domain \(\mathcal{T}_{\text{ARC}}\) in dark grey. The present reactions yield to hexagonal-shaped patterns with three-fold dislocation nodes, where the centers of the parent dislocation segments from the lozenge-shaped unit cells consist of pinning pivot points (blue points) for glissile planar dislocations. An useful triangular domain \(\mathcal{T}_{\mathrm{ABC}}\) for performing parametric energy-based analyses, is represented by two blue pivot points (B and C) and the red intersection point A in which dislocation reactions occur, as shaded in dark grey in Fig. (3.25b). On the other hand, the newly formed representative hexagonal-shaped unit cell (light grey domain), which contains six vertices (dislocation nodes), indexed and ordered by \(\mathrm{J}_{1},\mathrm{J}_{2},\mathrm{J}_{3},\mathrm{J}_{4},\mathrm{J}_{5}\), and \(\mathrm{J}_{6}\), is denoted by \(\mathcal{H}_{1,h[2]h[3]h[4]h}\). 
The determination of such infinitely repeated dislocation nodes with the type of rearrangement defined by eq. (3.70) produces neither orientation nor magnitude changes in the O-lattice vectors. Thus, the two-dimensional periodicity of the dislocation networks containing three families of straight parallel dislocation segments in the local Cartesian frame \((\mathrm{O},\,\mathrm{x}_{1},\,\mathrm{x}_{2},\,\mathrm{x}_{3})\) with \(\mathrm{\mathbf{x}_{2}}\parallel\mathrm{\mathbf{x}_{2}^{\mathrm{or}}}\parallel \mathrm{\mathbf{n}}\), remains unchanged during the elastic strain relaxation processes. In the previous non-orthogonal (oblique and fixed) frame with basis vectors \((\mathrm{O},\,\mathrm{x}_{1}^{\prime},\,\mathrm{x}_{2},\,\mathrm{x}_{3}^{ \prime})\), where \(\mathrm{\mathbf{x}_{1}^{\prime}}\parallel\mathbf{p}_{1}^{\mathrm{O}}\parallel\mathbf{ \xi}_{2}^{\mathrm{an}}\) and \(\mathrm{\mathbf{x}_{3}^{\prime}}\parallel\mathrm{\mathbf{x}_{3}}\parallel\mathrm{\mathbf{ p}_{2}^{\mathrm{o}}}\parallel\mathbf{\xi}_{1}^{\mathrm{an}}\), the oriented angle between \(\mathbf{\xi}_{2}^{\mathrm{an}}\) and \(\mathbf{\xi}_{1}^{\mathrm{an}}\) is denoted by \(\phi^{\mathrm{an}}\), so that \(\mathrm{\mathbf{x}_{1}^{\prime}}=x_{1}\csc\phi^{\mathrm{an}}\) and \(\mathrm{\mathbf{x}_{3}^{\prime}}=x_{3}-x_{1}\csc\phi^{\mathrm{an}}\). Thus, any position vector in this non-orthogonal frame may be expressed as: \(\mathbf{r}=x_{1}^{\prime}\,\mathbf{p}_{1}^{\mathrm{o}}+x_{3}^{\prime}\,\mathbf{p}_{2}^{ \mathrm{o}}=(x_{1}\csc\phi^{\mathrm{an}})\,\mathbf{p}_{1}^{\mathrm{o}}+(x_{3}-x_{1} \csc\phi^{\mathrm{an}})\,\mathbf{p}_{2}^{\mathrm{o}}\). In particular, the mobile dislocation three-fold node of interest \(\mathrm{J}_{1}\), which is parametrized by the coordinates \((\eta_{1},\eta_{2})\) in the first quadrant of the \((\mathrm{O},\,\mathrm{x}_{1}^{\prime},\,\mathrm{x}_{2},\,\mathrm{x}_{3}^{ \prime})\) frame, is also defined by: \(\mathrm{J}_{1}=\eta_{1}\,\mathbf{p}_{1}^{\mathrm{o}}+\eta_{2}\,\mathbf{p}_{2}^{\mathrm{ o}}\), with \((\eta_{1},\eta_{2})\in[0,\,1/2\)[2, excluding \(0\) and \(1/2\) to describe convex hexagonal-shaped patterns with six distinct dislocation edges. For example, the limiting case of equilibrium arrays with two sets of orthogonal misfit dislocations is given by: \(\phi^{\mathrm{o}\mathrm{a}}=\pi/2\), \(\eta_{1}^{\mathrm{o}}\to 1/2\), and \(\eta_{2}^{\mathrm{o}}\to 1/2\), so that \(\mathrm{J}_{2}\simeq\mathrm{J}_{3}\) and \(\mathrm{J}_{5}\simeq\mathrm{J}_{6}\), as the \((010)\) twist GBs in fcc materials. On the other hand, the regular equilibrium hexagonal network corresponds to the particular case where: \(\phi^{\mathrm{o}\mathrm{a}}=\pi/3\), and \(\eta_{1}^{\mathrm{o}\mathrm{a}}=\eta_{2}^{\mathrm{o}\mathrm{a}}=1/3\), as the \((111)\) twist GBs in fcc crystals. #### 3.6.2 Solution methodology for strain-relaxed rearrangements During the non-random elastic strain relaxations without externally applied stresses, misfit dislocations are rearranged into hexagonal-shaped networks due to local reactions that lower the elastic strain energy at semicoherent interfaces [95, 122]. Such strain-relaxed rearrangements of interfacial dislocation patterns also involve the mechanical problem of finding the minimum-energy paths from a given initial non-equilibrium lozenge-shaped microstructure with two sets of parent misfit dislocations to new unique or multiple (with the same strain energy) stable equilibrium hexagonal-shaped dislocation patterns of lowest energies with possible metastable configurations. 
Without changing the interface crystallographic characters upon the relaxation processes, the prescribed displacement jumps for each periodic hexagonal unit cell are also assumed to vary linearly with the (algebraic) directed distance between the O-lattice points (displayed by the grey points in Fig. (3.25b)) and the nearest neighbor interfacial dislocation segments. At the positions of the dislocation segments, the relative displacements are completely described by the directions and constant magnitudes of the associated individual Burgers vectors. Furthermore, the non-classical boundary conditions due to the free surface excess stress and the semicoherent interface excess stress contributions are therefore applied at: \(\mathrm{\mathbf{x}_{2}^{\mathrm{or}}}=h_{A}\) and \(\mathrm{\mathbf{x}_{2}^{\mathrm{or}}}=0\), respectively. Thus, the minimum-energy paths are entirely obtained by measuring the removal of the short-range elastic strain energy with respect to the coordinates \((\eta_{1},\eta_{2})\) of \(\mathrm{J}_{1}\), along which the long-range elastic strain-free state is not altered by spurious non-zero far-field strains. For a given crystallographic orientation relationship between materials A and B, the methodology for determining the equilibrium dislocation configurations for elastic strain relaxation processes along minimum-energy paths is described below. The two first items summarize the strategy procedure for computing the Burgers vectors of interface dislocations using anisotropic elasticity theory, which have been introduced in section 3.2. 1. The geometries in terms of dislocation spacings and line directions, i.e. \(\xi_{1}^{\mathrm{an}}\) and \(\xi_{2}^{\mathrm{an}}\), related to the initial lozenge-shaped patterns are found by using the quantized Frank-Bilby equation. For such networks containing two sets of straight, parallel, and infinite misfit dislocations, the periodicity of the dislocation structures is also obtained by mapping the O-lattice points at the interfaces. The corresponding computed O-lattice vectors \(\mathbf{p}_{1}^{\mathrm{o}}\) and \(\mathbf{p}_{2}^{\mathrm{o}}\neq\mathbf{p}_{1}^{\mathrm{o}}\) are conveniently associated with the fixed and non-orthogonal basis vectors of the \((\mathrm{O},\,\mathrm{\mathbf{x}_{1}^{\prime}},\,\mathrm{\mathbf{x}_{2}},\,\mathrm{ \mathbf{x}_{3}^{\prime}})\) frame for elasticity analyses, where \(\mathrm{\mathbf{x}_{1}^{\prime}}\parallel\mathbf{p}_{1}^{\mathrm{o}}\parallel\xi_{2}^{ \mathrm{an}}\), \(\mathrm{\mathbf{x}_{2}}\parallel\mathrm{\mathbf{n}}\), and \(\mathrm{\mathbf{x}_{3}^{\prime}}\parallel\mathbf{p}_{2}^{\mathrm{o}}\parallel\mathbf{\xi}_{1}^ {\mathrm{an}}\). 2. The reference state, within which the individual Burgers vectors of both dislocation sets are defined, i.e. \(\mathbf{b}_{1}\) and \(\mathbf{b}_{2}\), is determined by combining the Frank-Bilby equation with anisotropic elasticity theory that meets the constraints of interface crystallographic character and zero long-range strains (or stresses) for infinite bicrystals. Because the latter far-field condition is still fulfilled during the elastic strain relaxation processes, the third Burgers vector \(\mathbf{b}_{3}\) for the newly formed dislocation junctions is also obtained from the conservation eq. (3.70) of the Burgers vector content at the three-fold node \(\mathrm{J}_{1}\). 
In the limiting case where a coplanar free surface is located in material A, the reference state (and therefore also, the three Burgers vectors) is fully associated with material B, e.g. the case of a thin film on a semi-infinite substrate. 3. The specific triangular region \(\mathcal{T}_{\text{ARC}}\) in the representative lozenge-shaped unit cell, formed by the three fixed points A, B, and C in Fig. (3.25b), is discretized into four-node quadrilateral elements with respect to the \(i^{\text{th}}\) nodal points with coordinates \((\eta_{1}^{i},\eta_{2}^{i})\), such that \(\{\eta_{1}^{i},\eta_{2}^{i}\}\in\left]0\right.\), \(1/2\right[^{2}\). This discretization allows to represent any convex hexagonal-shaped dislocation patterns in the non-orthogonal \((\text{O},\,\mathbf{x}_{1}^{i},\,\mathbf{x}_{2},\,\mathbf{x}_{3}^{i})\) frame for mechanics-based calculations of elastic field solutions, e.g. displacements, stresses, traction forces, etc. 4. The elastic strain energy stored at semicoherent interfaces is computed at any mesh point \((\eta_{1}^{i},\eta_{2}^{i})\), by using the persistent short-range stress and strain field solutions for convex and irregular hexagonal-shaped dislocation configurations. Furthermore, the complete elastic energy landscape \(\gamma_{\text{e}}(\eta_{1},\eta_{2})\) is interpolated for any \((\eta_{1},\eta_{2})\in\left]0\right.\), \(1/2\right[^{2}\) with the aid of standard finite element bilinear shape functions for four-node elements. 5. For energetically favorable reactions, the minimum-energy dislocation configurations are numerically obtained by using the conjugate gradient algorithm on the pre-computed energy landscapes with a given prescribed tolerance. Then, the nudged elastic band method [118, 225] is used to provide access to the minimum-energy paths between the initial (non-equilibrium) lozenge-shaped structures and the determined elastically relaxed dislocation patterns with the aid of the elastic forces: \(f_{\text{e}}=-\mathbf{\nabla}\gamma_{\text{e}}(\eta_{1},\eta_{2})\). In practice, all elastic field solutions are recomputed along the curvilinear reaction coordinates of the minimum-energy paths. #### 3.6.3 Parametric energy-based framework This section is concerned with the complete expressions of elastic fields for hexagonal-shaped dislocation patterns located at heterophase interfaces between two dissimilar anisotropic materials. The Stroh sextic formalism of anisotropic linear elasticity combined with the surface/interface treatment in Ref. [108] and a Fourier series-based solution technique is therefore used to compute the elastic fields outside the cores of dislocations. In the general case, all surfaces of interest (i.e. semicoherent interfaces and free surfaces) are distinctly considered as infinitely thin membranes with different, separate, and appropriate constitutive equations than the relations for both (bulk) materials A and B. Again, the pre-subscripts A and B in the elastic properties and also the field expressions will be omitted for clarity in the following if no distinction between materials is required. **Elastic field equations and solutions in bulk materials** In the fixed Cartesian coordinate system \((\text{O},\,\mathbf{x}_{1},\,\mathbf{x}_{2},\,\mathbf{x}_{3})\), the three-dimensional stress field \(\mathbf{\sigma}(\mathbf{x})=\sigma_{ij}(x_{1},x_{2},x_{3})\) and the displacement field \(\mathbf{u}(\mathbf{x})=u_{i}(x_{1},x_{2},x_{3})\) in both crystals A and B are related by the Hooke's law in index form from Eq. 
(3.12b), as follows \[\sigma_{ij}(x_{1},x_{2},x_{3})=c_{ijkl}\ u_{k,l}(x_{1},x_{2},x_{3})\, \tag{3.71}\] where a comma stands for differentiation, with repeated indices denoting summation convention ranging from 1 to 3, unless stipulated otherwise. The anisotropic elastic constants of the fourth-order stiffness tensor C are fully symmetric, i.e. \(c_{ijkl}=c_{ijkl}=c_{ijkl}\), and the classical partial differential eq. (3.8) of mechanical equilibrium that is fulfilled in both crystals in terms of the displacement fields is given by \[\sigma_{ij,ij}(x_{1},x_{2},x_{3})=c_{ijkl}\ u_{k,l}(x_{1},x_{2},x_{3})=0\,. \tag{3.72}\] According to eq. (3.7), the complete displacement field is expressed as the superposition of the linear displacement contribution from the proper selection of reference states for constrained interfaces and the total displacement fields produced by the arrays of interfacial Volterra dislocations. The latter dislocation displacement fields are also given as a biperiodic Fourier series, i.e. \[u_{k}^{\text{dis}}(x_{1},x_{2},x_{3})=\text{Re}\,\sum_{\mathbf{k}\neq\mathbf{0}}\, \text{e}^{2\pi\mathbf{k}\cdot\mathbf{r}}\ u_{k}^{\mathbf{k}}(x_{2})=2\,\text{Re}\sum_{D} \text{e}^{2\pi\mathbf{k}\cdot\mathbf{r}}\ u_{k}^{\mathbf{k}}(x_{2})\, \tag{3.73}\] where the Fourier series expansion involves the harmonics \((n,\,m)\) that belong to the upper two-dimensional half-plane domain, defined by \(D=\{\{n\in\mathbb{N}^{*}\}\cup\{m\in\mathbb{Z}^{*},\,n=0\}\}\). For clarity, the subscript \({}_{\text{dis}}\) in eq. (3.7) has been changed to superscript in eq. (3.73). The components \(k_{1}(n,m)\) and \(k_{3}(m)\) of the wavevectors \(\mathbf{k}\) are given by eq. (3.6) as follows \[\mathbf{k}\,\cdot\,\mathbf{r}=\frac{n}{p_{1}^{\text{o}}}\,\mathbf{x}_{1}^{\prime}+\frac{m}{ p_{2}^{\text{o}}}\,\mathbf{x}_{3}^{\prime}=\left(\frac{n\,\text{csc}\,\phi^{\text{un}} }{p_{1}^{\text{o}}}-\frac{m\,\text{ctg}\,\phi^{\text{un}}}{p_{2}^{\text{o}}} \right)x_{1}+\frac{m}{p_{2}^{\text{o}}}\,x_{3}=k_{1}(n,m)\ x_{1}+k_{3}(m)\ x_{3}\, \tag{3.74}\] with \(p_{1}^{\alpha}=|\,\mathbf{p}_{1}^{\alpha}|\) and \(p_{2}^{\alpha}=|\,\mathbf{p}_{2}^{\alpha}|\). On the other hand, the far-field components are computed for two dislocation sets to determine the correct reference state [249], within which the Burgers vectors \(\mathbf{b}_{1}\) and \(\mathbf{b}_{2}\) (and also \(\mathbf{b}_{3}\), by virtue of eq. (3.70)) are defined. Because the elastic (short-range) strain relaxations do not alter the long-range strain state during the junction formation of the third dislocation sets, the removal of the far-field strains (or stresses) in the natural state is fulfilled by solving the tensorial far-field eqs. (3.1), exhibiting non-zero and heterogeneous short-range elastic fields for interfacial dislocation patterns, only. Thus, substituting the displacement field eq. (3.73) into eq. (3.71), the second-order differential equation applied to both materials is obtained in index form as follows \[-4\pi^{2}\,\mathbf{W}_{1_{k}}\,\,\mathbf{h}_{k}^{\alpha}(x_{2})+i2\pi\left(\mathbf{W}_{2_{ k}}+\mathbf{W}_{2_{k}i}\right)\,\,\bar{n}_{k,2}^{\mathbf{k}}(x_{2})+\mathbf{W}_{3_{k}}\,\,\bar{n}_{k,2 2}^{\mathbf{k}}(x_{2})=0\,, \tag{3.75}\] where \(\mathbf{W}_{1}\), \(\mathbf{W}_{2}\), and \(\mathbf{W}_{3}\) are \(3\times 3\) real matrices defined in eqs. (3.10). In eq. 
(3.75), the superimposed tilde to any quantities will be used to indicate that the corresponding field solutions are consistent with the Frank-Bilby equation under the condition of vanishing far-field strains (or stresses) for any dislocation patterns. For non-zero wavevectors \(\mathbf{k}\), the standard solutions satisfying eq. (3.75) can be written in the following form [62] \[\bar{n}_{k}^{\mathbf{k}}(x_{2})=\mathrm{e}^{i2\pi p^{k}x_{2}}\,\,a_{k}^{\mathbf{k}}\,, \tag{3.76}\] where \(p^{\mathbf{k}}=p\) and \(\mathbf{a}^{\mathbf{k}}=a_{k}\) become the complex scalar and vectorial unknowns of the boundary value problems, respectively, for which the superscripts \(\mathbf{k}\) are omitted, for clarity. Introducing eq. (3.76) into eq. (3.75), the vector \(\mathbf{a}\) is found to satisfy the homogeneous linear system \[\left[\mathbf{W}_{1_{k}}+p\left(\mathbf{W}_{2_{k}}+\mathbf{W}_{2_{k}i}\right)+p^{2}\,\mathbf{ W}_{3_{k}}\right]a_{k}=\Pi_{ik}\,a_{k}=0\,, \tag{3.77}\] which corresponds to the standard eigenvalue problem in anisotropic elasticity theory [237, 243]. A non-zero (non-trivial) solution can be found only if the determinant of \(\mathbf{\Pi}\) is zero, i.e. \[\det\,\Pi_{ik}=0\,, \tag{3.78}\] leading to a sextic equation for \(p\). As mentioned in section 3.6.3, the solutions of eq. (3.78) have six imaginary roots, which are arranged such that the three first eigenvalue solutions \(p^{\alpha}\) have positive imaginary parts, indexed by superscripts \(\alpha=1\), \(2\), \(3\). The remaining three solutions have negative imaginary parts, so that \(p^{\alpha+3}=p_{*}^{\alpha}\). The corresponding eigenvectors \(a^{\alpha}=a_{k}^{\alpha}\) are also complex conjugates with \(a^{\alpha+3}=a_{*}^{\alpha}=a_{k}^{\alpha}\), so that the general solution may be rewritten as a linear combination of the three eigenfunctions, i.e. \[\bar{n}_{k}^{\mathrm{dis}}(x_{1},x_{2},x_{3})=2\,\mathrm{Re}\sum_{D}\mathrm{e }^{i2\pi k\cdot\tau}\,\sum_{\alpha=1}^{3}\lambda^{\alpha}\mathrm{e}^{i2\pi p^ {\alpha}x_{2}}\,\,a_{k}^{\alpha}+\zeta^{\alpha}\mathrm{e}^{i2\pi p^{\alpha}x _{2}}\,\,a_{k}^{\alpha}\,, \tag{3.79}\] which differs from eq. (3.11) by a multiplicative \(i2\pi\) term, without loss of generality. It also follows from eq. (3.71) that \[\tilde{\sigma}_{ij}^{\mathrm{dis}}(x_{1},x_{2},x_{3})=4\pi\,\mathrm{Re}\sum_{ D}\mathrm{i}\mathrm{e}^{i2\pi k\cdot\tau}\,\sum_{\alpha=1}^{3}\lambda^{\alpha} \mathrm{e}^{i2\pi p^{\alpha}x_{2}}\,\,\mathrm{H}_{ij}^{\alpha}+\zeta^{\alpha} \mathrm{e}^{i2\pi p^{\alpha}x_{2}}\,\,\mathrm{H}_{ij}^{\alpha}\,, \tag{3.80}\] where the \(3\times 3\) complex matrices \(\mathbf{\mathrm{H}}^{\alpha}\) are related to the eigenvectors \(\mathbf{a}^{\alpha}\) by \[\mathrm{H}_{ij}^{\alpha}=\left(k_{1}\,c_{ijk1}+k_{3}\,c_{ijk3}+p^{\alpha}c_{ijk 2}\right)a_{k}^{\alpha}\,, \tag{3.81}\] from selected elastic constants of materials A and B. In particular, the surface tractions at the semicoherent interfaces, i.e. \(x_{2}=0\), are reduced to \[\tilde{t}_{k}^{\mathrm{int}}(x_{1},x_{3})=\partial_{ki}^{\mathrm{dis}}(x_{1}, 0,x_{3})\,\,n_{i}=4\pi\,\mathrm{Re}\sum_{D}\mathrm{i}\,\mathrm{e}^{i2\pi k \cdot\tau}\,\sum_{\alpha=1}^{3}\lambda^{\alpha}\mathrm{H}_{2}^{\alpha}+\zeta^{ \alpha}\,\mathrm{H}_{k2}^{\alpha}\,, \tag{3.82}\] as well as the tractions at the free surface, i.e. 
\(x_{2}=h_{\mathrm{A}}\), to \[\tilde{t}_{k}^{\mathrm{fs}}(x_{1},x_{3})=\partial_{ki}^{\mathrm{dis}}(x_{1},h_{ \mathrm{A}},x_{3})\,\,n_{i}=4\pi\,\mathrm{Re}\sum_{D}\mathrm{i}\,\mathrm{e}^{i2 \pi k\cdot\tau}\,\sum_{\alpha=1}^{3}\lambda^{\alpha}\mathrm{e}^{i2\pi p^{ \alpha}h_{\mathrm{A}}}\,\,\mathrm{H}_{k2}^{\alpha}+\zeta^{\alpha}\mathrm{e}^{i2 \pi p^{\alpha}h_{\mathrm{A}}}\,\,\mathrm{H}_{k2}^{\alpha}\,\,. \tag{3.83}\] #### Free surface and semicoherent interface elasticity contributions Combined with the surface tractions in eqs. (3.82) and (3.83), the additional surface/interface stress contributions, due to the work required by applying in-plane forces to elastically stretch the pre-existing free surfaces and interfaces neighboring both materials A and B into the correct reference states, are introduced as follows \[\tau_{\chi\varphi}(x_{1},x_{3})=\gamma\,\delta_{\chi\varphi}+\frac{\partial\, \gamma}{\partial e^{s}_{\chi\varphi}(x_{1},x_{3})}\,, \tag{3.84}\] where \(\tau_{\chi\varphi}(x_{1},x_{3})\) and \(e^{s}_{\chi\varphi}(x_{1},x_{3})\) are the \(2\times 2\) surface stress and strain tensors, and \(\gamma\) is the surface free energy [227, 51]. Because eq. (3.84) is derived for the plane stresses acting in the surface area, the stress and strain fields have only in-plane components, and Greek indices take values \(1\) and \(3\), only. In order to solve the elasticity problems with appropriate constitutive relations between the surface stress and strain components, a linear constitutive equation analogous to eq. (3.71) is used [224], i.e. (3.85) \[\tau_{\chi\varphi}(x_{1},x_{3})=\tau^{0}_{\chi\varphi}+d_{\chi\varphi\eta\, \sigma}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\ where \(z_{1}=z_{1}(x_{1})\) and \(z_{2}=z_{2}(x_{1},x_{3})\) are dimensionless linear functions. Assuming that the displacement jumps are zero at all positions of the O-lattice points, e.g. at O in the representative unit cell in Fig. (3.25b), the prescribed displacement field is also an odd function with respect to \(\mathbf{r}\) in the oblique (O, \(\mathbf{x}_{1}^{\prime}\), \(\mathbf{x}_{2}\), \(\mathbf{x}_{3}^{\prime}\)) frame. According to the linear elasticity theory, these displacement jumps produced by each hexagonal-shaped dislocation cell can therefore be formally expressed as double Fourier series for any dislocation configurations with respect to \((\eta_{1},\eta_{2})\), i.e. 
\[\mathbf{u}^{p}(x_{1},x_{3})=\text{Im}\sum_{\mathbf{k}\neq\mathbf{0}}\text{e}^{2\pi\mathbf{k} \cdot\mathbf{r}}\,\hat{\mathbf{u}}^{p}(\eta_{1},\eta_{2})=-\text{Re}\,i\sum_{\mathbf{k} \neq\mathbf{0}}\text{e}^{2\pi\mathbf{k}\cdot\mathbf{r}}\left(\hat{\mathbf{u}}_{1}^{p}(\eta_{1 },\eta_{2})+\hat{\mathbf{u}}_{2}^{p}(\eta_{1},\eta_{2})\right)\,, \tag{3.88}\] where all real-valued expansion coefficients \(\hat{\mathbf{u}}^{p}(\eta_{1},\eta_{2})\) in eq. (3.88) are additionally decomposed into the individual contributions \(\hat{\mathbf{u}}_{1}^{p}(\eta_{1},\eta_{2})\) and \(\hat{\mathbf{u}}_{2}^{p}(\eta_{1},\eta_{2})\), associated with \(\mathbf{b}_{1}\) and \(\mathbf{b}_{2}\), respectively. In particular, the vector quantity \(\hat{\mathbf{u}}_{1}^{p}(\eta_{1},\eta_{2})\) for \(\mathbf{b}_{1}\) is deduced by solving the double integral with respect to \(z_{1}\) and \(z_{2}\), as follows \[\hat{\mathbf{u}}_{1}^{p}(\eta_{1},\eta_{2})=\text{Re}\left[\,i\,\int_{z_{1}(\eta_ {1})}^{z_{1}(\eta_{1})}\left(z_{1}\int_{z_{2}(z_{1},\eta_{1},\eta_{2})}^{z_{2} (z_{1},\eta_{1},\eta_{2})}\,e^{-2\pi(\eta z_{1}+\eta z_{2})}\,dz_{2}\right)\, dz_{1}\right]\mathbf{b}_{1}\,, \tag{3.89}\] for any \((\eta_{1},\eta_{2})\in\mathcal{H}_{\{1_{1}\}\text{b}_{2}\text{b}_{4}\text{b}_{ 5}\text{b}_{6}}\). Moreover, eq. (3.89) may be integrated over three separate unit domains, e.g. the parallelogram \(\mathcal{P}_{\{1_{1}\}\text{b}_{4}\text{b}_{6}}\) and both triangles \(\mathcal{T}_{\{1_{1}\}\text{b}_{2}\text{b}_{3}}\) and \(\mathcal{T}_{\{1_{4}\}\text{b}_{5}\text{b}_{6}}\), i.e. \[\hat{\mathbf{u}}_{1}^{p}(\eta_{1},\eta_{2})=\hat{\mathbf{u}}_{1}^{p}(\eta_{1},\eta_{2} )|_{\mathcal{H}_{\{1_{1}\}\text{b}_{2}\text{b}_{4}\text{b}_{5}\text{b}_{6}}} =\hat{\mathbf{u}}_{1}^{p}(\eta_{1},\eta_{2})|_{\mathcal{P}_{\{1_{1}\}\text{b}_{4} \text{b}_{6}}}+\hat{\mathbf{u}}_{1}^{p}(\eta_{1},\eta_{2})|_{\mathcal{T}_{\{1_{1} \}\text{b}_{2}\text{b}_{3}}}+\hat{\mathbf{u}}_{1}^{p}(\eta_{1},\eta_{2})|_{ \mathcal{T}_{\{1_{2}\}\text{b}_{3}}}+\hat{\mathbf{u}}_{1}^{p}(\eta_{1},\eta_{2})|_{ \mathcal{T}_{\{1_{4}\}\text{b}_{5}\text{b}_{6}}}\,, \tag{3.90}\] as illustrated by the different vertices in Fig. (3.25b). Because the boundaries of the hexagonal-shaped unit cells are composed of straight dislocation segments, the integral eq. (3.89) is necessarily bounded by affine functions with respect to the coordinates \(\eta_{1}\) and \(\eta_{2}\). The first quantity \(\hat{\mathbf{u}}_{1}^{p}(\eta_{1},\eta_{2})|_{\mathcal{T}_{\{1_{1}\}\text{b}_{4} \text{b}_{4}\text{b}_{5}\text{b}_{6}}}\) in the right-hand side of eq. (3.90) is also computed by using the following bounds, i.e. \[\forall\,\{\eta_{1},\eta_{2}\}\in\mathcal{P}_{\{1_{1}\}\text{b}_{4}\text{b}_{ 6}}:\,\,\begin{cases}\hat{z}_{1}(\eta_{1})=-\eta_{1}\\ \hat{z}_{1}(\eta_{1})=\eta_{1}\end{cases}\,,\,\,\,\text{and}\,\,\,\,\,\begin{cases} \hat{z}_{2}(z_{1},\eta_{1},\eta_{2})=-\frac{1-2\eta_{2}}{2\eta_{1}}z_{1}-\frac {1}{2}\\ \hat{z}_{2}(z_{1},\eta_{1},\eta_{2})=-\frac{1-2\eta_{2}}{2\eta_{1}}z_{1}+\frac {1}{2}\,.\end{cases} \tag{3.91}\] Similarly, the two quantities \(\hat{\mathbf{u}}_{1}^{p}(\eta_{1},\eta_{2})|_{\mathcal{T}_{\{1_{2}\}\text{b}_{3} }}\) and \(\hat{\mathbf{u}}_{1}^{p}(\eta_{1},\eta_{2})|_{\mathcal{T}_{\{1_{4}\}\text{b}_{5} \text{b}_{6}}}\) in eq. 
(3.90) are determined by considering \[\forall\,\{\eta_{1},\eta_{2}\}\in\mathcal{T}_{\{1_{1}\}\text{b}_{2}\text{b}_{ 3}}:\,\,\,\begin{cases}\hat{z}_{1}(\eta_{1})=\eta_{1}\\ \hat{z}_{1}(\eta_{1})=1-\eta_{1}\end{cases}\,,\,\,\,\text{and}\,\,\,\,\,\,\begin{cases} \hat{z}_{2}(z_{1},\eta_{1},\eta_{2})=\frac{(1-2\eta_{2})\,z_{1}-1+\eta_{2}+\eta_ {1}}{1-2\eta_{1}}\\ \hat{z}_{2}(z_{1},\eta_{1},\eta_{2})=\frac{2\eta_{2}\,z_{1}-\eta_{2}}{-1+2\eta_ {1}}\,,\end{cases} \tag{3.92}\] respectively. Thus, after integrating eq. (3.90) analytically with respect to eqs. (3.91) and (3.92), it can also be found that \[\hat{\mathbf{u}}_{1}^{p}(\eta_{1},\eta_{2})=\sin\left(2\pi\,(m\,\eta_{2}+n\,\eta_{1 })\right)\frac{-1+2\eta_{1}}{2\pi^{2}\,(m+n-2\,(m\,\eta_{2}+n\,\eta_{1}))\,(2m \,\eta_{2}+n\,(-1+2\eta_{1}))}\,\,\mathbf{b}_{1}\,, \tag{3.93}\] for any given \((\eta_{1},\eta_{2})\). Analogously to eq. (3.89), the vector quantity \(\hat{\mathbf{u}}_{2}^{p}(\eta_{1},\eta_{2})\) for \(\mathbf{b}_{2}\) is written in the form \[\hat{\mathbf{u}}_{2}^{p}(\eta_{1},\eta_{2})=\text{Re}\left[\,i\,\int_{z_{1}(\eta_{1})}^{ z_{1}(\eta_{1})}\left(\,\int_{z_{2}(z_{1},\eta_{1},\eta_{2})}^{z_{2}(z_{1},\eta_{1},\eta_{2})} \,z_{2}\,e^{-2\pi(\eta z_{1}+\eta z_{2})}\,dz_{2}\right)\,dz_{1}\right]\mathbf{b}_{2}\,, \tag{3.94}\] for which the same integral bounds defined by eqs. (3.91) and (3.92) are used to calculate eq. (3.94) over the hexagonal-shaped dislocation patterns. Hence, it follows \[\hat{\mathbf{u}}_{2}^{p}(\eta_{1},\eta_{2})=\sin\left(2\pi\,(m\,\eta_{2}+n\,\eta_{1 })\right)\frac{-1+2\eta_{2}}{2\pi^{2}\,(m+n-2\,(m\,\eta_{2}+n\,\eta_{1}))\,(m \,(-1+2\eta_{2})+2n\,\eta_{1})}\,\,\mathbf{b}_{2}\,, \tag{3.95}\] for any \((\eta_{1},\eta_{2})\). Combining eq. (3.93) with eq. (3.95), the complete vectorial solution for \(\hat{\mathbf{u}}^{p}(\eta_{1},\eta_{2})\) is given by (3.96) which closely corresponds to the same expression given in Ref. [38], after minor corrections. It is worth noting that three singular values for \(n\) and \(m\) give rise to null denominators in eq. (3.96), so that three cases must be distinguished, i.e. c1: \(m+n-2(m\,\eta_{2}+n\,\eta_{1})\neq 0\), c2: \(n-2(m\,\eta_{2}+n\,\eta_{1})\neq 0\), and c3: \(m-2(m\,\eta_{2}+n\,\eta_{1})\neq 0\). By defining the function \(z(n,m)=nz_{1}+m\,z_{2}\) in the exponential terms of both eqs. (3.89) and (3.94), all corresponding real-valued expansion coefficients are also obtained by replacing \(m\) with \(m^{*}\) in \(z(n,m)\) for all different cases, i.e. \[\text{cl: }m^{*}=-n\,\frac{1-2\eta_{1}}{1-2\eta_{2}}\,\quad\text{c2: }m^{*}=n\,\frac{1-2\eta_{1}}{2\eta_{2}}\,\text{ and }\quad\text{c3: }m^{*}=n\,\frac{2\eta_{1}}{1-2\eta_{2}}\, \tag{3.97}\] for which the expressions for these three cases are given in Appendix A from Ref. [258]. Finally, to exhibit the discontinuity condition in displacement, the prescribed jump in eq. (3.87) with the aid of the eqs. (3.96) may finally be related to the displacement fields generated by the interface dislocation patterns, i.e. \[u_{k}^{p}(x_{1},x_{3})=\left[\bar{u}_{k}^{\text{dis}}(x_{1},0,x_{3})\right]_{ \text{int}}=\,\bar{u}_{k}^{\text{dis}}(x_{1},0,x_{3})-\bar{u}_{k}^{\text{dis} }(x_{1},0,x_{3})\, \tag{3.98}\] where the complete elastic field solutions in both materials A and B are given by eq. (3.79). The symbol \(\left[\bar{y}_{k}\right]_{\text{int}}=\Delta y_{k}=\Delta y_{k}-y\bar{y}_{k}\) corresponds to the vectorial jump of the quantity \(\mathbf{y}\) across the interface at \(x_{2}=0\). 
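Since the denominators in eq. (3.96) vanish for the three singular index combinations listed above, a direct numerical evaluation of the expansion coefficients needs a guard before the dedicated limits associated with eq. (3.97) are substituted. The following minimal sketch transcribes eqs. (3.93) and (3.95) and simply skips the singular harmonics; the tolerance and the illustrative Burgers vectors are assumptions for this example, not values taken from the derivation.

```python
import numpy as np

def u_hat_p(n, m, eta1, eta2, b1, b2, tol=1e-12):
    """Fourier amplitudes of the prescribed displacement jump, eqs. (3.93) and (3.95).
    Returns None when one of the denominators vanishes (the singular cases c1-c3),
    which the text treats separately with dedicated closed-form limits (eq. 3.97)."""
    s = np.sin(2.0 * np.pi * (m * eta2 + n * eta1))
    d0 = m + n - 2.0 * (m * eta2 + n * eta1)        # common factor, related to case c1
    d1 = 2.0 * m * eta2 + n * (-1.0 + 2.0 * eta1)   # set-1 factor, related to case c2
    d2 = m * (-1.0 + 2.0 * eta2) + 2.0 * n * eta1   # set-2 factor, related to case c3
    if min(abs(d0), abs(d1), abs(d2)) < tol:
        return None
    u1 = s * (-1.0 + 2.0 * eta1) / (2.0 * np.pi**2 * d0 * d1) * np.asarray(b1)  # eq. (3.93)
    u2 = s * (-1.0 + 2.0 * eta2) / (2.0 * np.pi**2 * d0 * d2) * np.asarray(b2)  # eq. (3.95)
    return u1 + u2                                   # combined amplitude, eq. (3.96)

# Example: lowest harmonic for a mildly relaxed pattern (eta1 = eta2 = 0.4), with
# illustrative Burgers vectors of magnitude ~0.2686 nm along the x1- and x3-axes.
print(u_hat_p(1, 0, 0.4, 0.4, b1=[0.2686, 0.0, 0.0], b2=[0.0, 0.0, 0.2686]))
```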
Although all physical displacement fields in eq. (3.98) are defined as the real quantities of complex Fourier series-based expressions, the real part designation in eqs. (3.79) and (3.88) are conveniently omitted to express the complex equality, as follows \[-i\,\hat{u}_{k}^{p}(\eta_{1},\eta_{2})=\sum_{\alpha=\,1}^{3}\,{}_{\alpha} \lambda^{\alpha}\,{}_{\alpha}\bar{a}_{k}^{\alpha}+\,{}_{\alpha}\bar{\zeta}^{ \alpha}\,{}_{\alpha}\bar{a}_{k}^{\alpha}-\,\mathbb{B}^{\xi\alpha}\,{}_{\alpha }^{\alpha}\, \tag{3.99}\] so that both real and imaginary parts of eq. (3.99) lead to the equivalent homogeneous linear system \(\Sigma_{1}\) of six real equations, i.e. \[(\Sigma_{1})\ \ \forall k\in\{1,2,3\}:\left\{\begin{aligned} & 0=\text{Re}\,\sum_{\alpha=\,1}^{3}\,{}_{ \alpha}\lambda^{\alpha}\,{}_{\alpha}\bar{a}_{k}^{\alpha}+\,{}_{\alpha}^{\xi \alpha}\,{}_{\alpha}\bar{a}_{k}^{\alpha}-\,\mathbb{B}^{\xi\alpha}\,{}_{\alpha }\bar{a}_{k}^{\alpha},\\ -\hat{u}_{k}^{p}(\eta_{1},\eta_{2})=\text{Im}\,\sum_{\alpha=\,1} ^{3}\,{}_{\alpha}\lambda^{\alpha}\,{}_{\alpha}\bar{a}_{k}^{\alpha}+\,{}_{ \alpha}^{\xi\alpha}\,{}_{\alpha}\bar{a}_{k}^{\alpha}-\,\mathbb{B}^{\xi\alpha} \,{}_{\alpha}\bar{a}_{k}^{\alpha}\,\end{aligned}\right. \tag{3.100}\] where \(\hat{\mathbf{u}}^{p}(\eta_{1},\eta_{2})\) is defined in eq. (3.96), for any given \((\eta_{1},\eta_{2})\in\left]0,\,1/2\right[^{2}\) and for all \(\{n,m\}\in D\). #### Stress conditions at the semicoherent interfaces Due to the presence of the interfacial excess energy close to grain and interphase boundaries, the discontinuity of the tangential stress components is introduced using the generalized Young-Laplace equation [108, 211, 77], as an equilibrium boundary condition to solve the present boundary-value problem with interface stress effects, i.e. \[\left[\bar{v}_{\varphi i}^{\text{dis}}(x_{1},0,x_{3})\,n_{i}\right]_{\text{ int}}+\tau_{\varphi\chi,\chi}=0\, \tag{3.101}\] together with the stress discontinuity in normal direction of the boundaries, as follows \[\left[\bar{v}_{ij}^{\text{dis}}(x_{1},0,x_{3})\,n_{j}\,n_{i}\right]_{\text{ int}}=\,\tau_{\chi\varphi}\,{}_{\chi\varphi}\, \tag{3.102}\] with \(\kappa_{\chi\varphi}\) the curvature tensor of the solid-state interface of interest. Substituting the linear constitutive relation of eq. (3.85) into eqs. (3.101) and (3.102) respectively, the governing non-classical boundary equations lead to \[\left\{\begin{aligned} 0&=\left[\bar{l}_{\varphi}^{ \text{int}}(x_{1},x_{3})\right]_{\text{int}}+d_{\varphi\chi\gamma}\,\bar{u}_{ \gamma,\gamma\chi}^{\text{dis}}(x_{1},0,x_{3})\\ 0&=\left[\bar{l}_{2}^{\text{int}}(x_{1},x_{3})\right]_{ \text{int}}-\left(\tau_{\chi\varphi}^{\text{\partial}}+d_{\chi\varphi\gamma}\, \bar{u}_{\gamma,\gamma\delta}^{\text{dis}}(x_{1},0,x_{3})\right)\left(x_{\chi \varphi}^{\text{\partial}}+x_{\chi\varphi}^{\text{A}}\right)\,\end{aligned}\right. \tag{3.103}\] where \(x_{\chi\varphi}^{\text{\partial}}\) and \(x_{\chi\varphi}^{\text{A}}\) are the deformation-independent curvature and curvature change tensors, respectively. In the classical theory of initially flat and infinitely thin membranes with small out-of-plane deflections [40], as the considered (and interpreted as surface stresses) elastically stretched membranes in Refs. [108, 109], the curvature change tensor may be approximated by \[\kappa_{\chi\varphi}^{\text{A}}=-\bar{u}_{\chi\varphi}^{\text{dis}}(x_{1},0,x_{ 3})\, \tag{3.104}\] without internal moments. 
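As a rough numerical illustration of the normal condition in eq. (3.102) combined with the membrane approximation of eq. (3.104), the sketch below evaluates the Laplace-type traction jump produced by a small sinusoidal corrugation of the boundary. The corrugation amplitude and wavelength, and the use of the Au free-surface stress value from Table 3.7, are assumptions made only for this example.

```python
import numpy as np

# Normal Young-Laplace condition of eq. (3.102) with kappa^Delta = -u2,chiphi from
# eq. (3.104): the normal traction jump equals tau_chiphi * kappa_chiphi.
tau = np.array([[1.49, 0.0],
                [0.0, 1.49]])       # in-plane surface stress (N/m); Au free surface, Table 3.7

A, lam = 0.01e-9, 2.25e-9           # assumed corrugation amplitude and wavelength (m)

def u2(x1):                         # out-of-plane displacement of the membrane-like boundary
    return A * np.sin(2.0 * np.pi * x1 / lam)

def traction_jump(x1, h=1e-12):
    """Scalar jump tau_chiphi * kappa_chiphi, with kappa_11 = -d^2 u2 / dx1^2."""
    kappa11 = -(u2(x1 + h) - 2.0 * u2(x1) + u2(x1 - h)) / h**2
    kappa = np.array([[kappa11, 0.0], [0.0, 0.0]])   # corrugation along x1 only
    return np.tensordot(tau, kappa)                  # double contraction -> jump in Pa

print(traction_jump(0.25 * lam))    # maximum Laplace-type pressure jump for this profile
```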
Under the treatment of such specific boundary conditions normal to the initially flat (but, stretched) membranes, the distortion response caused by the presence of interface dislocations may elastically warp the semicoherent interfaces with radii defined by \(r_{\chi\varphi}=1/\kappa_{\chi\varphi}^{\Delta}\). Thus, the right-hand side of the second equation in eqs. (3.103) is deduced by subsequently imposing no initial curvature and neglecting the second-order effects compared with unity, as follows \[\big{(}\tau_{\chi\varphi}^{0}+d_{\chi\varphi\eta\eta}\,\tilde{u}_{\chi\varphi}^{ \mathrm{dis}}(x_{1},0,x_{3})\big{)}\big{(}\kappa_{\chi\varphi}^{0}+\kappa_{ \chi\varphi}^{\Delta}\big{)}\simeq-\tau_{\chi\varphi}^{0}\,\tilde{u}_{2,\chi \varphi}^{\mathrm{dis}}(x_{1},0,x_{3})\,, \tag{3.105}\] thus, imposing \(\kappa_{\chi\varphi}^{0}=0\) and \(\tilde{u}_{\chi\varphi}^{\mathrm{dis}}(x_{1},0,x_{3})\,\tilde{u}_{2,\chi \varphi}^{\mathrm{dis}}(x_{1},0,x_{3})\,\ll 1\). According to eq. (3.82), both discontinuous stress boundary conditions in eqs. (3.103) can also be recast in matrix form, i.e. \[\big{[}\tilde{l}_{k}^{\mathrm{int}}(x_{1},x_{3})\big{]}_{\mathrm{int}}-4\pi^{ 2}\,\mathrm{V}_{ki}\,\,\tilde{u}_{i}^{\mathrm{int}}(x_{1},0,x_{3})=0\,, \tag{3.106}\] where the \(3\times 3\) real matrix \(\mathbf{V}\) is expressed as \[\mathrm{V}_{ki}=\mathrm{V}_{ik}=\left[\begin{array}{cccc}\tilde{u}_{1}^{ \mathrm{d}}\tilde{u}_{11}+2\tilde{u}_{1}k_{3}u_{15}+\tilde{u}_{3}^{2}d_{35}&0& \tilde{u}_{1}^{\mathrm{d}}\tilde{u}_{15}+\tilde{u}_{1}k_{3}(d_{13}+d_{55})+ \tilde{u}_{3}^{2}d_{35}\\ 0&\tilde{u}_{1}^{2}\tilde{u}_{11}^{0}+2\tilde{u}_{13}k_{3}u_{15}+\tilde{u}_{3} ^{2}\tilde{u}_{33}&0&0\\ \tilde{u}_{2}^{2}\tilde{u}_{15}+\tilde{u}_{1}k_{3}(d_{13}+d_{58})+\tilde{u}_{3} ^{2}d_{35}&0&\tilde{u}_{1}^{2}\tilde{u}_{35}+2\tilde{u}_{1}k_{3}d_{35}+\tilde{ u}_{3}^{2}d_{33}\end{array}\right]\,, \tag{3.107}\] within which the surface/interface elastic constants are indexed using standard contracted notations. Mechanically balanced by the interface stress effects, eq. (3.106) shows that the infinitesimal in-plane strain fields in the membranes may influence the stresses in both bulk materials due to the elasticity contributions at the interphase boundaries. In contrast to the classical continuum elasticity, the tractions across the interface and the displacement fields are related to each other by the interface elasticity properties as well the interface geometries through the wavevector components. Because the materials A and B are mapped separately from the reference state, the coherent regions at the interfaces (separated by the networks of interfacial dislocations) can also be viewed as infinitely thin membranes separately in contact with each individual bulk material. Furthermore, the determination of the reference states yielding (in general) to unequal partitioning of elastic distortions, the tractions that act on each individual upper and lower materials bonded by these coherent interfacial regions are consequently assumed to be different in both magnitude and direction. Using the concept of interface zone by in Ref. [149], the specific traction vector \({}_{\mathrm{coh}}\,\mathrm{f}^{\mathrm{int}}(x_{1},x_{3})\), acting on both neighboring crystals with fictitious infinitely thin inter-layered coherent patches at \(x_{2}=0\), is introduced to transfer traction forces from the upper material to the adjacent lower material. 
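The in-plane constitutive response of eq. (3.85), whose contracted moduli also enter the matrix \(\mathbf{V}\) of eq. (3.107), can be evaluated directly once the residual stresses and interface moduli of Table 3.7 are arranged in contracted (Voigt-like) form. A minimal sketch follows; the (11, 33, 13) ordering of the contracted indices is an assumption about the notation, not something stated in the text.

```python
import numpy as np

# Surface/interface constitutive law of eq. (3.85), tau = tau^0 + d : e^s, in
# contracted form. Assumed ordering of the in-plane components: (11, 33, 13).
# Values: Au side of the semicoherent Au/Cu interface, Table 3.7 (N/m).
tau0 = np.array([-0.0465, -0.0465, 0.0])      # tau^0_11, tau^0_33, tau^0_13
d = np.array([[-6.84, -3.47, 0.0042],         # d_11, d_13, d_15
              [-3.47, -6.84, 0.0042],         # d_13, d_33, d_35
              [0.0042, 0.0042, -1.91]])       # d_15, d_35, d_55

def interface_stress(e_s):
    """Interface stress for an in-plane strain e_s = (e11, e33, 2*e13)."""
    return tau0 + d @ e_s

# Example: a 0.1 % equi-biaxial in-plane stretch of the coherent interface patches.
print(interface_stress(np.array([1e-3, 1e-3, 0.0])))
```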
The equilibrium condition between the interface coherent regions and material A also reads \[\mathrm{\alpha}_{k}^{\mathrm{int}}(x_{1},x_{3})-{}_{\mathrm{coh}}\,\mathrm{f} ^{\mathrm{int}}_{k}(x_{1},x_{3})-4\pi^{2}\,\mathrm{\mathcal{N}}_{ki}^{\mathrm{ int}}\,\,\tilde{u}_{i}^{\mathrm{dis}}(x_{1},0,x_{3})=0\,, \tag{3.108}\] by use of the boundary condition in eq. (3.106), while the equilibrium condition between the interface coherent regions and material B is given by \[{}_{\mathrm{coh}}\,\mathrm{f}^{\mathrm{int}}_{k}(x_{1},x_{3})-\mathrm{g}\, \tilde{l}_{k}^{\mathrm{int}}(x_{1},x_{3})-4\pi^{2}\,\mathrm{\mathcal{N}}_{ki}^ {\mathrm{int}}\,\,\tilde{u}_{i}^{\mathrm{dis}}(x_{1},0,x_{3})=0\,, \tag{3.109}\] where \(\mathrm{\mathcal{N}}^{\mathrm{int}}\) and \(\mathrm{\mathcal{N}}^{\mathrm{int}}\) depend on the elastic properties of the interfaces with respect to each material A and B, respectively. Summing both eqs. (3.108) and (3.109), it also follows that \[\mathrm{\alpha}_{k}^{\mathrm{int}}(x_{1},x_{3})-\mathrm{g}\,\tilde{l}_{k}^{ \mathrm{int}}(x_{1},x_{3})-4\pi^{2}\,\big{(}\mathrm{\mathcal{N}}_{ki}^{ \mathrm{int}}\,\,\mathrm{\mathcal{A}}\,\tilde{u}_{i}^{\mathrm{int}}(x_{1},0,x_ {3})+\mathrm{\mathcal{N}}_{ki}^{\mathrm{int}}\,\,\mathrm{g}\,\tilde{u}_{i}^{ \mathrm{int}}(x_{1},0,x_{3})\big{)}=0\,, \tag{3.110}\] which yields to the non-classical stress discontinuity conditions at the mismatched interfaces. Using eqs. (3.79) and (3.82), eq. (3.110) gives rise to the additional linear system \(\Sigma_{2}\) of six equations, i.e. \[\left(\Sigma_{2}\right):\left\{\begin{aligned} & 0=\mathrm{Re}\Big{[}\,\sum_{\alpha=1}^{3} \mathrm{\alpha}^{A\alpha}\mathrm{\alpha}_{A}^{\alpha}\mathrm{\alpha}_{A}^{ \alpha}\mathrm{\alpha}_{1}^{\alpha}+\mathrm{\alpha}_{S}^{\alpha}\mathrm{ \alpha}_{A}^{\alpha}\mathrm{\alpha}_{1}^{\alpha}+\mathrm{\alpha}_{S}^{\alpha} \mathrm{\alpha}_{A}\mathrm{\alpha}_{1}^{\alpha}\Big{)}+\mathrm{\mathcal{N}}_{13 }^{\mathrm{int}}\big{(}\mathrm{\alpha}^{A\alpha}\mathrm{\alpha}_{A}\mathrm{ \alpha}_{3}^{\alpha}+\mathrm{\alpha}_{S}^{\alpha}\mathrm{\alpha}_{A}\mathrm{ \alpha}_{3}^{\alpha}\big{)}\big{)}\\ &\qquad\qquad-\mathrm{g}\zeta^{\alpha}\big{(}\mathrm{g}\mathrm{ \alpha}_{1}^{A}-\mathrm{i}2\pi\big{(}\mathrm{\mathcal{N}}_{11}^{\mathrm{int}}\, \,\mathrm{\mathcal{A}}_{1}^{\mathrm{int}}+\mathrm{\mathcal{N}}_{13}^{\mathrm{int}} \,\,\mathrm{g}\mathrm{\alpha}_{3}^{\alpha}\big{)}\big{)}\Big{]}=\mathrm{Re} \,\sum_{\alpha=1}^{3}\,\mathrm{\mathcal{V}}_{1}^{\alpha}\\ & 0=\mathrm{Re}\Big{[}\,\sum_{\alpha=1}^{3}\mathrm{\alpha}^{A\alpha} \mathrm{\alpha}_{A}^{\alpha}\mathrm{\alpha}_{A}^{\alpha}\mathrm{\alpha}_{A}^{ \alpha}+\mathrm{\alpha}_{S}^{\alpha}\mathrm{\alpha}_{A}\mathrm{\alpha}_{3}^{ \alpha}+\mathrm{\alpha}_{S}^{\alpha}\mathrm{\alpha}_{A}\mathrm{\alpha}_{1}^{\alpha} \Big{)}+\mathrm{\mathcal{N}}_{33}^{\mathrm{int}}\big{(}\mathrm{\alpha}^{A \alpha}\mathrm{\alpha}_{A}\mathrm{\alpha}_{3}^{\alpha}+\mathrm{\alpha}_{S}^{ \alpha}\mathrm{\alpha}_{A}\mathrm{\alpha}_{3}^{\alpha}\big{)}\big{)}\\ &\qquad\qquad-\mathrm{g}\zeta^{\alpha}\big{(}\mathrm{g}\mathrm{ \alpha}_{3}^{A}-\mathrm{i}2\pi\big{(}\mathrm{\mathcal{N}}_{13}^{\mathrm{int}}\, \,\mathrm{\mathcal{A}}_{1}^{\mathrm{int}}+\mathrm{\mathcal{N}}_{33}^{\mathrm{int}} \,\,\mathrm{g}\mathrm{\alpha}_{3}^{\mathrm{int}}\,\,\mathrm{g}\mathrm{ \alpha}_{3}^{\alpha}\big{)}\big{)}\Big{]}=\mathrm{Re}\,\sum_{\alpha=1}^{3}\, \mathrm{\mathcal{V}}_{3}^{\alpha}\\ & 0=\mathrm{Im}\Big{[}\,\sum_{\alpha=1}^{3}\,\mathrm{\mathcal{V}}_{k}^{ \alpha}\Big{]}\,,\,\,\,\forall 
k\in\{1,2,3\}\,,\end{aligned}\right. \tag{3.111}\] with \(\mathrm{h}_{k}^{\alpha}=\mathrm{H}_{k2}^{\alpha}\), for any given \((\eta_{1},\eta_{2})\in\left]0,\,1/2\right[^{2}\) and for all \(\{n,m\}\in D\). #### Stress conditions at the free surfaces Similarly to the semicoherent interface treatment, the free surfaces experience excess energy and excess energy due to different energy profiles close to such singular membrane-like boundaries. Thus, additional non-classical boundary conditions as eq. (3.110) are introduced on the outer free surface, at \(x_{2}^{\rm cr}=h_{\rm A}\), i.e. \[\alpha_{\rm I}^{\rm fs}(x_{1},x_{3})+4\pi^{2}\,\mathcal{N}_{\rm BJ}^{\rm fs}\ \alpha_{\rm I}^{\rm dis}(x_{1},h_{\rm A},x_{3})=0\,, \tag{3.112}\] where \(\alpha_{\rm I}^{\rm fs}\) depends on the elastic constants of the free surfaces. It also yields to the following system \(\Sigma_{3}\) of six other equations, i.e. \[(\Sigma_{3}):\left\{\begin{aligned} & 0=\text{Re}\Big{[}\sum_{\alpha=1}^{3} \alpha_{\rm A}{}^{\alpha}\text{e}^{i2\pi\eta^{\rm s}h_{\rm A}}\big{(}\lambda h _{1}^{\alpha}-i2\pi\big{(}\mathcal{N}_{\rm BJ}^{\rm fs}\ \alpha_{1}^{\alpha}+\mathcal{N}_{\rm BJ}^{\rm fs}\ \alpha_{3}^{\alpha}\big{)}\big{)}\\ &\qquad\qquad+\alpha_{\rm I}^{\rm s}\text{e}^{i2\pi\eta^{\rm s}h_ {\rm A}}\big{(}\lambda h_{1}^{\alpha}-i2\pi\big{(}\mathcal{N}_{\rm BJ}^{\rm fs }\ \alpha_{1}^{\alpha}+\mathcal{N}_{\rm BJ}^{\rm fs}\ \alpha_{3}^{\alpha}\big{)}\big{)} \Big{]}=\text{Re}\sum_{\alpha=1}^{3}\,w_{1}^{\alpha}\\ & 0=\text{Re}\Big{[}\sum_{\alpha=1}^{3}\,\lambda^{\alpha}\text{e}^{i2 \pi\eta^{\rm s}h_{\rm A}}\big{(}\lambda h_{2}^{\alpha}-i2\pi\,\mathcal{N}_{22 }^{\rm fs}\ \alpha_{2}^{\alpha}\big{)}+\lambda_{\rm I}^{\rm s}\text{e}^{i 2\pi\eta^{\rm s}_{\rm I}h_{\rm A}}\big{(}\lambda h_{2}^{\alpha},-i2\pi\, \mathcal{N}_{22}^{\rm fs}\ \alpha_{2}^{\alpha}\big{)}\Big{]}\\ &\qquad\qquad=\text{Re}\sum_{\alpha=1}^{3}\,w_{2}^{\alpha}\\ & 0=\text{Re}\Big{[}\sum_{\alpha=1}^{3}\,\lambda^{\alpha}\text{e}^{i 2\pi\eta^{\rm s}h_{\rm A}}\big{(}\lambda h_{3}^{\alpha}-i2\pi\big{(}\mathcal{ N}_{\rm BJ}^{\rm fs}\ \alpha_{1}^{\alpha}+\mathcal{N}_{\rm BJ}^{\rm fs}\ \alpha_{3}^{\alpha}\big{)}\big{)}\\ &\qquad\qquad+\alpha_{\rm I}^{\rm s}\text{e}^{i2\pi\eta^{\rm s} _{\rm I}h_{\rm A}}\big{(}\lambda h_{3,*}^{\alpha}-i2\pi\big{(}\mathcal{N}_{\rm BJ }^{\rm fs}\ \alpha_{1}^{\alpha}+\mathcal{N}_{\rm BJ}^{\rm fs}\ \alpha_{3}^{\alpha}\big{)}\big{)} \Big{]}=\text{Re}\sum_{\alpha=1}^{3}\,w_{3}^{\alpha}\\ & 0=\text{Im}\Big{[}\sum_{\alpha=1}^{3}\,w_{2}^{\alpha}\Big{]}\,\ \ \forall k\in\{1,2,3\}\,\end{aligned}\right. \tag{3.113}\] for any given \((\eta_{1},\eta_{2})\in\,]0,\,1/2[^{2}\) and for all \(\{n,m\}\in\,D\). #### Determination of the minimum-energy paths When the linear systems in eqs. (3.100) with (3.111) and (3.113) are combined, the set \(\rm{Est}\) of all eighteen real unknowns (twelve and six for A and B, respectively) are also solved with respect to the prescribed boundary conditions, i.e. \[\text{E}\text{\rm{st}}=\sum_{\alpha=1}^{3}\,\{\,\text{Re}\,_{\rm A}\lambda^{ \alpha},\,\text{Im}\,_{\rm A}\lambda^{\alpha},\,\text{Re}\,_{\rm A}\zeta^{\alpha },\,\text{Im}\,_{\rm A}\zeta^{\alpha},\,\text{Re}\,_{\rm B}\zeta^{\alpha},\, \text{Im}\,_{\rm B}\zeta^{\alpha}\,\}\, \tag{3.114}\] completing the solutions of the elastic displacement and stress fields, given by eqs. (3.79) and (3.80), respectively. 
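Because the systems \(\Sigma_{1}\), \(\Sigma_{2}\), and \(\Sigma_{3}\) mix the real and imaginary parts of the six complex coefficients, a convenient numerical route is to unfold every complex unknown into its real and imaginary parts and solve one real-valued system, which is how the eighteen real unknowns of eq. (3.114) arise. The sketch below shows this unfolding for a placeholder complex system; the actual coefficient matrices follow from eqs. (3.100), (3.111), and (3.113) and are not reproduced here.

```python
import numpy as np

# Generic unfolding of a complex linear system A z = c into an equivalent real one,
# analogous to collecting the real and imaginary parts of Sigma_1-Sigma_3 into the
# eighteen real unknowns of eq. (3.114). The 3x3 matrix here is a random placeholder.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
c = rng.normal(size=3) + 1j * rng.normal(size=3)

# Real block form: [[Re A, -Im A], [Im A, Re A]] [Re z; Im z] = [Re c; Im c].
M = np.block([[A.real, -A.imag], [A.imag, A.real]])
rhs = np.concatenate([c.real, c.imag])
x = np.linalg.solve(M, rhs)
z = x[:3] + 1j * x[3:]
print(np.allclose(A @ z, c))   # True: the unfolded solution solves the complex system
```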
Following the procedure described in section 3.6.2, the upper triangular domain \(\mathcal{T}_{\rm ABC}\) in the representative unit dislocation cell, denoted by ABC in Fig. (3.25b), is discretized into four-node quadrilateral elements with respect to the \(i^{\rm th}\) nodal point coordinates \((\eta_{1}^{i},\eta_{2}^{i})\), such that \(\{\eta_{1}^{i},\eta_{2}^{i}\}\in\,]0,\,1/2[^{2}\) for convex hexagonal-shaped dislocation patterns. Thus, for any dislocation pattern that is geometrically characterized by the given coordinates \((\eta_{1}^{i},\eta_{2}^{i})\), the corresponding elastic strain energy can be computed as a volume integral over the heterostructure of interest, i.e. \[\gamma_{\rm e}^{i}(\eta_{1}^{i},\eta_{2}^{i})=\frac{1}{2A}\iiint_{\mathcal{V}}\ \tilde{\sigma}_{ij}^{\rm dis}(x_{1},x_{2},x_{3})\ \tilde{u}_{i,j}^{\rm dis}(x_{1},x_{2},x_{3})\ \mathrm{d}V\,, \tag{3.115}\] where all persistent short-range field solutions of the integrand depend specifically on \((\eta_{1}^{i},\eta_{2}^{i})\) through the treatment of boundary conditions described in section 3.6.4. For far-field stress-free bicrystals at equilibrium, the standard volume integral in eq. (3.115) may be reduced to a surface integral by integration by parts, together with the divergence theorem without any body forces [249, 226], as follows \[\gamma_{\rm e}^{i}(\eta_{1}^{i},\eta_{2}^{i})=\frac{1}{2A}\iint_{A(r_{0})}\ \tilde{t}_{k}^{\rm int}(x_{1},x_{3})\ \big{[}\tilde{u}_{k}^{\rm dis}(x_{1},0,x_{3})\big{]}_{\rm int}\ \mathrm{d}S\,, \tag{3.116}\] where \(A(r_{0})\) is the hexagonal-shaped unit cell. In eqs. (3.115) and (3.116), the expressions of elastic strain energy are conveniently expressed per unit area, for which \(A=A(r_{0}=0)\), and account for several different contributions, i.e. the interaction between the Volterra-type dislocations and the misfit strain state, the self-energy induced by the individual hexagonal-shaped dislocation configurations, as well as the interaction of the hexagonal-shaped unit cell with all infinitely repeated cells. Finally, the complete elastic strain energy landscape \(\gamma_{\rm e}(\eta_{1},\eta_{2})\) is interpolated for any \((\eta_{1},\eta_{2})\in\,]0,\,1/2[^{2}\), as follows \[\gamma_{\rm e}(\eta_{1},\eta_{2})=\sum_{i=1}^{4}N_{i}(\eta_{1},\eta_{2})\,\gamma_{\rm e}^{i}(\eta_{1}^{i},\eta_{2}^{i})\,, \tag{3.117}\] where \(N_{i}(\eta_{1},\eta_{2})\) are the standard finite element bilinear shape functions for four-node elements. For elastic strain landscapes that favor the formation of dislocation junctions, the minimum-energy configurations are determined by using the conjugate gradient algorithm, while the nudged elastic band method is used to find the corresponding minimum-energy paths. The nudged elastic band method is a chain-of-states method in which a string of images is used to describe the reaction pathways. These configurations are connected by spring forces to ensure equal spacing along the paths of interest. The ensemble of configurations is then relaxed through a force projection scheme to converge toward the most energetically favorable pathway [118, 225]. To identify the minimum-energy paths between the initial (non-equilibrium) lozenge-shaped pattern and the final elastically relaxed configurations (previously computed by the conjugate gradient algorithm), all images, indexed by \(s_{\eta}\), are simultaneously evolved to equilibrium under a nudged elastic band force that contains two independent components, i.e.
\[f^{\rm NEB}_{s_{\eta}}=f^{\perp}_{s_{\eta}}+f^{\parallel}_{s_{\eta}}\,, \tag{3.118}\] where \(f^{\perp}_{s_{\eta}}\) is the component of the force derived from the elastic energy landscape acting normal to the local path tangent, as follows \[f^{\perp}_{s_{\eta}}=-\mathbf{\nabla}\gamma_{\rm e}(\eta_{1},\eta_{2})+\left(\mathbf{\nabla}\gamma_{\rm e}(\eta_{1},\eta_{2})\cdot\mathbf{\hat{\tau}}_{s_{\eta}}\right)\mathbf{\hat{\tau}}_{s_{\eta}}\,, \tag{3.119}\] with \(\mathbf{\hat{\tau}}_{s_{\eta}}\) the unit tangent to the path at image \(s_{\eta}\). In addition, the spring force \(f^{\parallel}_{s_{\eta}}\) in eq. (3.118), acting parallel to the path [118, 225], is defined by \[f^{\parallel}_{s_{\eta}}=k\left(|\mathbf{r}_{s_{\eta}+1}-\mathbf{r}_{s_{\eta}}|-|\mathbf{r}_{s_{\eta}}-\mathbf{r}_{s_{\eta}-1}|\right)\mathbf{\hat{\tau}}_{s_{\eta}}\,, \tag{3.120}\] where \(\mathbf{r}_{s_{\eta}}=(\eta_{1},\eta_{2})\) is the position of the \(s_{\eta}\)th image and \(k\) the spring constant. The spring interaction between adjacent images is added to ensure continuity of the chain. The present numerical procedure is identical to nudged elastic band calculations recently performed to analyze the attempt frequency for a dislocation bypassing an obstacle [234], using a nodal dislocation dynamics simulation with non-singular treatments of the isotropic elastic fields [50].

#### Application to Au/Cu heterosystems

This section applies the general parametric energy-based framework to two examples. The first, simple and limiting, case concerns two dislocation sets in the pure misfit \((010)\) Au/Cu interface, for which the strain energy landscape is unfavorable to the formation of dislocation junctions. The subsequent investigation of the effects of surface/interface stress and elasticity properties with different boundary conditions in \((010)\) Au/Cu interfaces can be found in Ref. [258]. On the other hand, the second case deals with the minimum-energy reaction pathway on the pre-computed \((111)\) Au/Cu elastic energy landscape, where the initial and unrelaxed dislocation pattern is described by the Frank-Bilby equation. The materials properties used in these examples are listed in Table 3.7.

### Case 1: The (010) Au/Cu interface with two sets of dislocations

As a limiting case, the atomically sharp \((010)\) Au/Cu misfit interface contains two sets of orthogonal dislocations in the cube-cube orientation relationship, i.e. \(x^{\rm cr}_{1}=[10\bar{1}]\), \(x^{\rm cr}_{2}=\mathbf{n}=[010]\), and \(x^{\rm cr}_{3}=[101]\).

Figure 3.26: Dependence of the total far-field stress component \(\sigma^{\rm c}_{11}+\sigma^{\pm\infty}_{11}\) on \(\delta\) in the Au and Cu materials for the \((010)\) and \((111)\) Au/Cu semicoherent interfaces.

Similarly
\begin{table} \begin{tabular}{l c c c c} \hline \hline Symbols & Au (material A) & Cu (Material B) & Units & References \\ \hline \hline \multicolumn{5}{l}{Lattice parameters} \\ \(a\) & \(0.4078\) & \(0.3615\) & nm & [105] \\ \hline \multicolumn{5}{l}{Elastic components (Voigt notation)} \\ \(c_{11}\) & \(187.0\) & \(168.4\) & GPa & [122] \\ \(c_{12}\) & \(157.0\) & \(121.4\) & GPa & [122] \\ \(c_{44}\) & \(43.6\) & \(75.4\) & GPa & [122] \\ \hline \multicolumn{5}{l}{Elasticity properties for the semicoherent interfaces (Voigt notation)} \\ \(*\) Interface stress & & & \\ \(\tau_{11}\) & \(-0.0465\) & \(0.645\) & N/m & [149] \\ \(\tau_{13}\) & \(0\) & \(0\) & N/m & [149] \\ \(\tau_{33}\) & \(-0.0465\) & \(0.645\) & N/m & [149] \\ \(*\) Interface modulus & & & \\ \(d_{11}\) & \(-6.84\) & \(-5.99\) & N/m & [149] \\ \(d_{13}\) & \(-3.47\) & \(0.6540\) & N/m & [149] \\ \(d_{33}\) & \(-6.84\) & \(-5.99\) & N/m & [149] \\ \(d_{15}\) & \(0.0042\) & \(0.0032\) & N/m & [149] \\ \(d_{35}\) & \(0.0042\) & \(0.0032\) & N/m & [149] \\ \(d_{55}\) & \(-1.91\) & \(-3.67\) & N/m & [149] \\ \hline \multicolumn{5}{l}{Elasticity properties for the free surface (Voigt notation)} \\ \(*\) Surface stress & & & \\ \(\tau_{11}\) & \(1.49\) & \(-\) & N/m & [188] \\ \(\tau_{13}\) & \(0\) & \(-\) & N/m & [188] \\ \(\tau_{33}\) & \(1.49\) & \(-\) & N/m & [188] \\ \(*\) Surface modulus & & & \\ \(d_{11}\) & \(-7.10\) & \(-\) & N/m & [188] \\ \(d_{13}\) & \(-5.67\) & \(-\) & N/m & [188] \\ \(d_{33}\) & \(-3.17\) & \(-\) & N/m & [188] \\ \hline \end{tabular} \end{table} Table 3.7: Lattice parameters \(a\) of Au and Cu crystals, material properties \(c_{ij}\) of both bulk materials, surface stress \(\tau_{\chi\varphi}\) and surface modulus \(d_{\chi\varphi}\) of the semicoherent Au/Cu heterophase interface and the \((010)\) free surface in Au. to eq. (3.37), the net Burgers vectors are expressed by using the quantized Frank-Bilby equation [94, 29, 30], as follows (3.121) where \(d_{1}^{\rm un}\) and \(d_{2}^{\rm un}\) are the regularly spaced inter-dislocation spacings, and the interface Burgers vectors \(\mathbf{b}_{1}\parallel[10\bar{1}]\) and \(\mathbf{b}_{2}\parallel[101]\) are both parallel to \(\mathbf{x}_{1}\)- and \(\mathbf{x}_{3}\)-axis, respectively. As a result of arbitrarily selecting the reference state identical to the Au (or Cu) natural state, for which the geometry of inter-face dislocations (line directions and spacings) is independent of the choice of reference state, the line directions are defined by \(\mathbf{\xi}_{1}^{\rm un}\parallel[101]\) and \(\mathbf{\xi}_{2}^{\rm un}\parallel[10\bar{1}]\), and the inter-dislocation spacings are given by \(d_{1}^{\rm un}=d_{2}^{\rm un}=p_{1}^{\rm o}=p_{2}^{\rm o}=2.25144\) nm. Thus, the Frank-Bilby equation predicts that an orthogonal network of straight parallel dislocations with pure edge characters is also needed to accommodate the pure misfit \((010)\) Au/Cu interface. The geometry of such orthogonal grid of dislocations can also be characterized by \(\eta_{1}\to 1/2\) and \(\eta_{2}\to 1/2\) in the general parametric framework, because \(\phi^{\rm un}=\pi/2\). According to the bilinear function \(\mathbf{u}^{p}(\eta_{1}\to 1/2,\eta_{2}\to 1/2)\) for the prescribed displacement field in eq. (3.87), the corresponding real-valued expansion functions in eq. 
(3.88) for the individual set \(1\) can be computed by imposing \(m=0\), as follows (3.122) exhibiting that \(\hat{\mathbf{u}}^{p}(\eta_{1}\to 1/2,\eta_{2}\to 1/2)=\hat{\mathbf{u}}^{p}_{1}(\eta_{1}\to 1/2,\eta_{2}\to 1/2)\) is evidently written as a function of \(\mathbf{b}_{1}\), for set \(1\). By superposing the similar contribution of set \(2\) with \(n=0\), the final prescribed displacement field produced by an orthogonal network of dislocations in eq. (3.88) is therefore written in the form of two distinct one-dimensional sawtooth-shaped functions with Fourier sine series, as follows (3.123) where \(k_{1}(n,0)\) and \(k_{3}(m)\) are defined in eq. (3.74), with \(\phi^{\rm un}=\pi/2\). Here, the sawtooth-shaped functions in eq. (3.123) differ from eq. (3.19) by individual translations of magnitude \(d_{i}^{\rm un}/2\). Similarly to \(\Sigma_{1}\) and \(\Sigma_{2}\) in Figure 3.27: Elastic strain energy landscapes \(\gamma_{\rm e}\) in J.m\({}^{-2}\) of the hexagonal-shaped patterns with three-fold dislocation nodes as a function of \(\eta_{1}\) and \(\eta_{2}\), for the (a) \((010)\) and (b) \((111)\) Au/Cu heterophase interface cases. The large points at \(\eta_{1}=\eta_{2}=1/2\) correspond to the initial lozenge-shaped patterns, for which the two crossing dislocation sets are related to equilibrium and non-equilibrium dislocation configurations for the \((010)\) and \((111)\) interface planes, respectively. The latter case gives rise to the presence of a minimum-energy path (in black) between the initial pattern and the fully elastically strain-relaxed dislocation structure (magenta point) at stable equilibrium state. An intermediate state is displayed by the orange point. eqs. (3.20) and (3.25), the simplest limiting case of bicrystals without any surface/interface elasticity effects leads to a set of twelve real and linear equations, i.e. (3.124) with respect to the six associated complex unknown quantities, i.e. \({}_{\rm A}\lambda^{\alpha}\) and \(\mathrm{g}_{\rm g}^{\alpha}\). Following the procedure described in section 3.3.3, the two deformation gradients \(\mathbf{F}_{\rm Au}^{-1}\) and \(\mathbf{F}_{\rm Cu}^{-1}\) in eq. (3.121) (also, the magnitudes of both \(\mathbf{b}_{1}\) and \(\mathbf{b}_{2}\)) are determined by ensuring the condition of vanishing far-field stresses along a transformation pathway between both materials Au and Cu. For cube-cube orientation relation, this condition is met by continuously adjusting the reference lattice parameter \(a_{\rm ref}\) along a specified reaction pathway coordinate \(\delta\), starting with the pure lattice parameter of Au to Cu, i.e. \[a_{\rm ref}=(1-\delta)\,\,a_{\rm Au}+\delta\,a_{\rm Cu}\,, \tag{3.125}\] where \(0\leq\delta\leq 1\) is a dimensionless variable that interpolates linearly between \(a_{\rm Au}\) and \(a_{\rm Cu}\). According to the far-field eq. (3.1), the dependence of the total large-range stress components \(\sigma_{11}^{\rm c}+\sigma_{11}^{\pm\infty}\) in Au (black line with symbols) and Cu (red line with symbols) on the transformation pathway coordinate \(\delta\) is plotted in Fig. (3.26). For the (010) misfit case, both far-field stress components vanish for \(\delta_{(010)}=0.60392\), so that the corresponding reference state is closer to Cu than to Au, i.e. \(\delta_{(010)}>0.5\), where \(c_{\rm Cu}c_{11}<{}_{\rm Au}c_{11}\) and \(c_{\rm Cu}c_{12}<{}_{\rm Au}c_{12}\), but \(c_{\rm Cu}c_{44}>{}_{\rm Au}c_{44}\). 
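To make the construction of the coherent reference state concrete, the sketch below interpolates the reference lattice parameter with eq. (3.125) and locates the zero of the total far-field stress by bisection. The stress function used here is only a linear placeholder through the two Au end-point values quoted in the text; the actual dependence follows from eq. (3.1) and the full field solution, so the root found below is illustrative and does not reproduce \(\delta_{(010)}=0.60392\) exactly.

```python
# Reference-state search: adjust a_ref along the transformation pathway of
# eq. (3.125) until the total far-field stress component vanishes.
a_Au, a_Cu = 0.4078, 0.3615          # lattice parameters (nm), Table 3.7

def a_ref(delta):                    # eq. (3.125)
    return (1.0 - delta) * a_Au + delta * a_Cu

def farfield_stress_Au(delta):
    # PLACEHOLDER: linear interpolation through the quoted Au end-point
    # values (+6.29 GPa at delta = 0, -3.66 GPa at delta = 1).
    return 6.29 * (1.0 - delta) + (-3.66) * delta

lo, hi = 0.0, 1.0                    # bisection for the zero-stress reference state
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if farfield_stress_Au(lo) * farfield_stress_Au(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
print(mid, a_ref(mid))               # delta and the associated reference lattice parameter
```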
All other elastic components are consistent with the absence of strains in the long range and no rotations are induced along the transformation path. Thus, it gives rise to the reference lattice parameter \(a_{\rm ref}=0.37984\) nm, and also the magnitudes of correct Burgers vectors, i.e. \(b_{1}=b_{2}=0.26859\) nm, selected by the coherent reference state. When an incorrect reference state is arbitrary chosen, the corresponding Burgers vectors deviate in magnitude and non-zero spurious stress fields exist in the microstructure. For instance, a residual stress state in Au persists with \(\mathrm{{}_{\rm A}\omega^{\rm c}_{11}}+\mathrm{{}_{\rm A}\omega^{\rm c}_{11}}\simeq 6.29\) GPa and \(\simeq-3.66\) GPa, for \(\delta_{(010)}=0\) and \(1\), respectively. A larger residual stress field exists in Cu as well, where \(c_{\rm Cu}\sigma_{11}^{\rm c}+c_{\rm Cu}\sigma_{11}^{-\infty}\simeq-8.77\) GPa, for \(\delta_{(010)}=0\), and \(\simeq 5.09\) GPa, for \(\delta_{(010)}=1\). For the following calculations in interfacial hexagonal-shaped dislocation patterns, the upper half-plane domain \(D=\{\{0\leq n\leq n_{\rm max}\}\cup\{|m|\leq n_{\rm max}\}\}\setminus\{m\leq 0,\,n=0\}\}\) is defined by setting \(n_{\rm max}=50\), which is large enough to ensure accurate solutions in truncated elastic stress fields with three sets of dislocations. Figure (3.27a) shows the elastic strain energy landscape for the \((010)\) Au/Cu misfit interface with classical boundary conditions between both neighboring semi-infinite Au and Cu crystals, for simplicity. To determine such energy landscape, the triangular domain \(T_{\rm acx}\) is first discretized into 121 nodal points with coordinates \((\eta_{1}^{i},\eta_{2}^{i})\), such that \(\{\eta_{1}^{i},\eta_{2}^{i}\}\in[0]\), \(1/2^{2}\), as depicted by the gray dots in Fig. (3.27a). Using the persistent short-range elastic fields, the finite (guaranteed by the zero far-field stresses) stored elastic energy per unit area is computed for any \((\eta_{1}^{i},\eta_{2}^{i})\) using eq. (3.116) with \(r_{0}=b_{1}/4\). Following the standard interpolation procedure of eq. (3.117), the elastic strain energy for any given \((\eta_{1},\eta_{2})\in[0,1/2]^{2}\) shows a smooth and symmetric landscape with respect to the median \((\eta_{1}=\eta_{2})\) of the triangular domain, within which the unique strain energy minima is obtained at \(\eta_{1}\to 1/2\) and \(\eta_{2}\to 1/2\), with \(\gamma_{\rm e}^{\rm min}=\gamma_{\rm e}(\eta_{1}\to 1/2,\eta_{2}\to 1/2)\simeq 0.57344\) J.m\({}^{-2}\). Planar dislocation reactions and junctions for \((010)\) misfit interfaces are also shown to be energetically unfavorable. It is therefore demonstrated that the initial orthogonal grid of uniformly spaced edge dislocations corresponds to the equilibrium structures for the \((010)\)-type misfit interfaces, which satisfies the condition of vanishing far-field stresses as well as the minimum-energy criterion for predicting the most favorable dislocation structures. Near the unreacted state of the \((010)\) Au/Cu system, the present energy landscape shows concave slope profiles at \(\eta_{1}\simeq\eta_{2}\simeq 1/2\). For calculations with other fcc/fcc heterosystems in the \((010)\) cube-cube orientation relationship (not shown here), the corresponding unreacted state can exhibit convex energy profiles, which suggest different bound crossed states of dislocation reactions for the \((010)\) twist GBs. 
Thus, the parent dislocations could also exhibit strong repulsive interactions or crossed states where local bend and twist of dislocations may locally occur at the short-range distances, as observed in non-coplanar dislocations [179]. ### Case 2: The \((111)\) Au/Cu interface with three sets of dislocations In contrast to the \((010)\) Au/Cu case, the \((111)\)-oriented habit interface planes exhibit different arrangements of atoms, which yield to more complex interface dislocation patterns and also to general elastic states where both constituent strains and rotations are unequally partitioned between the crystals [125]. The present orientation relations associated with the \((111)\) Au/Cu misfit case are defined by \(\mathbf{x}_{1}^{\rm{tr}}=[11\bar{2}]\), \(\mathbf{x}_{2}^{\rm{cr}}=[111]\parallel\mathbf{n}\), and \(\mathbf{x}_{2}^{\rm{cr}}=[1\bar{1}0]\), within which the fcc \(\{111\}\) close-packed planes contain \(a_{\rm{ref}}/2\langle 110\rangle\)-type Burgers vectors. Similarly to the \((010)\) case, such Burgers vectors must be defined in the proper reference state under the condition of vanishing far-field stresses in the \((111)\) Au/Cu bicrystal. By arbitrarily choosing \(\mathbf{b}_{1}=a_{\rm{Au}}/2[10\bar{1}]\) and \(\mathbf{b}_{2}=a_{\rm{Au}}/2[0\bar{1}]\) as the reference Burgers vectors, the quantized Frank-Bilby eq. (3.121) gives rise to the lozenge-shaped dislocation structure that is specifically comprised of two arrays of parallel dislocations (with no local reactions at nodes): the initial line directions are defined by \(\mathbf{\xi}_{1}^{\rm{in}}\parallel[01\bar{1}]\) and \(\mathbf{\xi}_{2}^{\rm{un}}\parallel[10\bar{1}]\), so that the individual characters are \(\phi_{1}^{\rm{un}}=\phi_{2}^{\rm{un}}=60^{\circ}\), and the angle between these two unrelaxed sets of dislocations is \(\phi^{\rm{un}}=60^{\circ}\). In addition, \(p_{1}^{\rm{0}}=p_{2}^{\rm{0}}=2.25144\) nm, so that the inter-dislocation spacings are given here by \(d_{1}^{\rm{un}}=d_{2}^{\rm{un}}=1.94980\) nm. As illustrated in Fig. (3.26), the dependence of the total far-field stress components in the \((111)\) system, i.e. in both Au (black line) and Cu (red line) on \(\delta\), yields to a predicted reference state for \(\delta_{(111)}=0.57962\), so that \(a_{\rm{ref}}=0.38096\) nm, and also to the magnitudes of correct Burgers vectors are defined by \(b_{1}=b_{2}=0.26938\) nm. Moreover, Fig. (3.26) shows stronger spurious stress values for the \((111)\) than \((010)\) system cases, by a factor of 2.33 (2.15) in Au (Cu) when \(\delta=0\), i.e. when Au is improperly selected as the reference state. The same qualitative conclusion regarding the spurious stress state can be drawn for \(\delta=1\). Using the aforementioned Frank-Bilby solution as the initial dislocation structure for possible elastic strain relaxation, Fig. (3.27b) shows the pre-computed elastic landscape as function of \(\eta_{1}\) and \(\eta_{2}\), associated with the \((111)\) misfit interface case. The symmetric landscape has been computed using the same number of nodal points than in Fig. (3.27a), for which the orientations of both plots are different for clarity. The elastic energy per unit interface area for the unrelaxed lozenge-shaped dislocation pattern is given by \(\gamma_{\rm{e}}(\eta_{1}\to 1/2,\eta_{2}\to 1/2)\simeq 0.49568\) J.m\({}^{-2}\), with \(\tau_{0}=b_{1}/4\), which is slightly lower than the stored energy for the \((111)\) Au/Cu system for \(\eta_{1}\to 1/2\) and \(\eta_{2}\to 1/2\). 
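Two quick geometric consistency checks on the unrelaxed \((111)\) configuration follow from the quantities quoted above. The spacing relation \(d^{\rm un}=p^{\rm o}\sin\phi^{\rm un}\) is inferred here from the quoted numbers rather than stated explicitly in the text, and the character-angle convention (0\({}^{\circ}\) for screw, 90\({}^{\circ}\) for edge) is the standard one.

```python
import math
import numpy as np

# Check 1: inter-dislocation spacings, assuming d_un = p0 * sin(phi_un) with p0 the
# O-lattice spacing and phi_un the angle between the two unrelaxed dislocation sets.
p0 = 2.25144                                   # nm, both interface cases
for label, phi_deg, d_quoted in [("(010) Au/Cu", 90.0, 2.25144),
                                 ("(111) Au/Cu", 60.0, 1.94980)]:
    d = p0 * math.sin(math.radians(phi_deg))
    print(f"{label}: d_un = {d:.5f} nm (quoted {d_quoted} nm)")

# Check 2: character angle of set 1 of the (111) interface (b1 || [10-1], xi1 || [01-1]),
# folded into [0, 90] degrees; the text quotes phi_1^un = 60 degrees.
def character_angle(b, xi):
    c = abs(np.dot(b, xi)) / (np.linalg.norm(b) * np.linalg.norm(xi))
    return np.degrees(np.arccos(np.clip(c, 0.0, 1.0)))

print(character_angle(np.array([1.0, 0.0, -1.0]), np.array([0.0, 1.0, -1.0])))  # 60 deg
```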
Here, the landscape for the \((111)\) system is qualitatively and quantitatively different than the \((010)\) case, since the former gives rise to the existence of a unique minimum-energy dislocation configuration with three sets of dislocations resulting from junction reactions. The energy minimization procedure that involves the conjugate gradient algorithm is performed by using a prescribed convergence criterion in the pre-computed energy landscape. The interface dislocation structures with the lowest elastic energy are considered to be found when the difference between the values of the stored elastic energy for two subsequent iterations is less than \(10^{-4}\) J.m\({}^{-2}\). The corresponding minimum-energy path is determined by using the nudged elastic band method between the initial non-equilibrium and the minimum-energy states, for which the spring constant \(k\) in eq. (3.120) has been varied over several orders of amplitude without noticeable effects on the computed path. The obtained minimum-energy path is displayed in Fig. (3.27b) by the black curved chain with equidistantly positioned images (i.e. intermediate states), where the final configuration state is designated as the final elastically strain-relaxed Figure 3.28: (a) Dependence on \(\eta_{1}=\eta_{2}\) of the elastic energy \(\gamma_{\rm{e}}\) in J.m\({}^{-2}\), i.e. along the bisecting lines of the admissible triangular domains \(T_{\rm{A}\rm{C}}\), as displayed in Figs. (3.27). The blue and the red curves correspond to the mismatched \((010)\) and \((111)\) Au/Cu interfaces, respectively. The latter exhibits black dots, indexed by \(s_{\eta}=1,\ldots,16\), which represent the minimum-energy path from Fig. (3.27b). The large points at \(\eta_{1}=\eta_{2}=1/2\) are related to the initial lozenge-shaped patterns with two crossing sets of dislocations, whereas the vertical arrow shows the minimum-energy configuration associated with the \((111)\) semicoherent interface. (b) Dependence on \(s_{\eta}\) of the dislocation characters \(\phi_{l}\) for the three sets and the angle \(\phi\) between the two parent dislocations for the corresponding \((111)\) Au/Cu case. All these quantities are expressed in \({}^{\circ}\). Figure 3.29: Plots of initial, intermediate, and final states along the computed minimum-energy path for the \((111)\) Au/Cu heterophase interface. The initial periodic network of lozenge-shaped misfit dislocations undergoes local relaxations and also leads to a final elastically relaxed hexagonal-shaped dislocation pattern with lowest short-range strain energy. (a) Dislocation structures. Distribution of (b) the normal displacement component \(u_{2}\) and (c) the displacement norm \(u\). See text for the displacement field expressions. dislocation pattern. Here, the smooth path has no energy barrier (therefore also, no saddle point) and 15 intermediate states, which connect the initial and final states, are constructed. The minimum strain energy related to the relaxed dislocation pattern is given by \(\gamma_{\rm e}^{\rm min}=\gamma_{\rm e}(\eta_{1}\to 0.31981,\eta_{2}\to 0.31981)=0.44733 \,{\rm J.m^{-2}}\), which corresponds to a significant decrease in strain energy of 9.75%. The variations of strain energy along the median (\(\eta_{1}=\eta_{2}\)) of the two (\(010\)) and (\(111\)) Au/Cu landscapes, as displayed by the blue and red dotted lines in the insets of Fig. 
(3.28a), start from their initial corresponding lozenge-shaped dislocation structures at \(\eta_{1}=\eta_{2}=1/2\) with different stored energy values. The red (blue) line illustrates the (un)favorable elastic energy profile for junction formation that continuously decreases (increases) with decreasing both values of \(\eta_{1}\) and \(\eta_{2}\) from \(1/2\) at the \((111)\) (\((010)\)) Au/Cu heterophase interface. The intermediate states between the lozenge-shaped and the relaxed hexagonal-shaped dislocation configurations for the \((111)\) case are indexed by \(s_{\eta}=1,\dots,15\). Such considerable saving in strain energy along \(s_{\eta}\) is related to the change in dislocation structures, e.g. dislocation characters \(\phi_{i}\) and the angle \(\phi\) between \(\xi_{2}\) and \(\xi_{1}\), which can be examined along the determined minimum-energy path. Figure (3.28b) plots these geometrical characteristics in terms of \(\phi\) (in green), \(\phi_{1}\) (blue), \(\phi_{2}\) (red), and \(\phi_{3}\) (black, for the newly formed set of dislocation junction) as a function of \(s_{\eta}\). It is also found that the geometrical equilibrium configuration of the minimum-energy dislocation pattern is characterized by \(\phi^{\rm eq}\simeq 128.4^{\circ}\), \(\phi^{\rm eq}_{1}=\phi^{\rm eq}_{2}\simeq 85.8^{\circ}\), and \(\phi^{\rm eq}_{3}=90^{\circ}\). Both sets 1 and 2 deviate by \(4.2^{\circ}\) from pure edge characters, and the dislocation structure deviates by \(8.4^{\circ}\) from regular hexagonal-shaped configuration. Such dislocation arrangement is in agreement with atomistic analysis in iron, where deviations from pure screw dislocations in \((110)\) bcc twist GBs with comparable order of dislocation spacings have been reported using molecular statics simulations [292]. Figures (3.29) illustrate the strain-relaxed rearrangements of the interfacial dislocations from the lozenge-shaped configurations on the \((111)\) heterophase interface using different elastic quantities, which can, for example, be used to analyze the likely regions for nucleating interface dislocations or absorbing and annihilating point defects (interstitials and vacancies). All contour plots are displayed at \(x_{2}=3\,a_{\rm Au}\) with respect to the three dislocation configurations shown in Figs. (3.29a), i.e. the "initial"1 at \(s_{\eta}=1\), intermediate (\(s_{\eta}=8\)), and the final relaxed (\(s_{\eta}=16\)) states, for which the specific intermediate case is located exactly halfway between both initial and final states, as depicted by the orange point along the computed minimum-energy path in Fig. (3.27b). A schematic representation of the atomically sharp \((111)\) Au/Cu interface with current periodic dislocation lines is shown in Figs. (3.29a), where the Au (Cu) atoms are plotted by white (gray) dots. The three corresponding Burgers vectors on the \((111)\) close-packed plane are represented as well. Footnote 1: Here, “initial” means the first admissible configuration with three sets of dislocations, where an initially small dislocation segment for the junction has been introduced (in the direction of the steepest descent between the two parent sets) to solve the corresponding solutions for hexagonal-shaped dislocation patterns. Figures (3.29b) and (c) illustrate the normal displacement component \(u_{2}=\bar{u}_{2}^{\rm dis}(x_{1},3\,a_{\rm Au},x_{3})\) and the displacement norm \(u=|\bar{u}^{\rm dis}(x_{1},3\,a_{\rm Au},x_{3})|\), respectively. 
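The minimum-energy path and the intermediate images discussed above follow from the force projection of eqs. (3.118)-(3.120). A minimal, self-contained sketch of such a relaxation on a toy landscape is given below; the central-difference tangent estimate and the plain steepest-descent update are simplifications of the actual implementation, and the quadratic surface standing in for \(\gamma_{\rm e}(\eta_{1},\eta_{2})\) is purely illustrative.

```python
import numpy as np

def neb_forces(images, grad, k=1.0):
    """Nudged elastic band force of eqs. (3.118)-(3.120) for a chain of images in
    the (eta1, eta2) plane; the forces on the two fixed endpoints are left at zero."""
    forces = np.zeros_like(images)
    for s in range(1, len(images) - 1):
        tau = images[s + 1] - images[s - 1]      # simple central-difference tangent
        tau /= np.linalg.norm(tau)
        g = grad(images[s])
        f_perp = -g + np.dot(g, tau) * tau                                   # eq. (3.119)
        f_par = k * (np.linalg.norm(images[s + 1] - images[s])
                     - np.linalg.norm(images[s] - images[s - 1])) * tau      # eq. (3.120)
        forces[s] = f_perp + f_par                                           # eq. (3.118)
    return forces

# Toy quadratic landscape with its minimum at (0.32, 0.32), standing in for gamma_e.
grad = lambda r: 2.0 * (r - np.array([0.32, 0.32]))

images = np.linspace([0.5, 0.5], [0.32, 0.32], 16)          # 16 states, as along the (111) path
images[1:-1] += 0.02 * np.random.default_rng(1).normal(size=(14, 2))  # perturb interior images
for _ in range(500):                                        # crude steepest-descent relaxation
    images += 0.05 * neb_forces(images, grad)
print(np.round(images[8], 4))                               # an intermediate, re-equilibrated image
```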
Figures (3.29b) show that the minimum values of \(u_{2}=-0.01\) nm are located in the centers of the dislocation patterns, while the maximum values lie close to the dislocation junctions for the initial unrelaxed pattern. In the final relaxed dislocation configuration, the maximum values are unequally distributed at the three-fold dislocation nodes, e.g. \({\rm J}_{\rm I}=\{{\rm J}_{1},{\rm J}_{3},{\rm J}_{5}\}\) versus \({\rm J}_{\rm II}=\{{\rm J}_{2},{\rm J}_{4},{\rm J}_{6}\}\), for which the set of junction nodes \({\rm J}_{\rm I}\) gives rise to larger amplitudes of \(u_{2}\) than \({\rm J}_{\rm II}\). Figures (3.29c) display the complex relief of the displacement norm \(u\), with the largest magnitudes at \({\rm J}_{\rm I}\), for illustration.

#### 3.6.6 Comparison with atomistic simulations

The model interfaces for the present comparisons with atomistic simulations are selected according to the following criteria:

1. The structure of the interface is describable as a dislocation network. The present study is concerned with dislocation-based models of interface structure. Thus, interfaces to which these models do not apply are not suitable.

2. This dislocation network undergoes a relaxation through the dissociation of four-fold junctions into three-fold junctions. Some interfacial dislocation networks are not suitable for the study because they contain stable four-fold junctions that do not undergo any relaxation.

3. The interface dislocation network is initially periodic and remains so as it relaxes. Moreover, the dislocations in the network do not dissociate into partials. These choices are necessitated by current limitations in modeling capabilities [258, 259]. The requirement of periodicity is met by selecting special interfaces that may be modeled by two overlapping sets of misfit dislocations, whereas general interfaces involve three overlapping dislocation sets [1]. The requirement of no dissociation excludes from consideration GBs in low stacking fault energy materials.

4. The final structure of the relaxed interface is not the outcome of any inherent symmetry that the interface possesses. For example, while twist boundaries on \(\{111\}\) planes in aluminum meet all the foregoing conditions, they are excluded from consideration because the relaxed dislocation structure in these interfaces has the same \(p6m\) symmetry as the underlying, unrelaxed dichromatic pattern [67, 68]. Such a symmetry-driven relaxation does not constitute a stringent test of the elasticity-based relaxation model.

5. Differences between the relaxed and unrelaxed dislocation networks must be discernible in atomistic simulations. Thus, the dislocations should not be so closely spaced that they are difficult to distinguish, yet not so far apart that they would require very large atomistic models. This criterion is met through judicious selection of the interface crystallographic character (misorientation, misfit, and plane orientation).

All of the foregoing criteria are met by the two classes of model interfaces selected for the present comparison: low-angle twist GBs on \(\{110\}\)-type planes in niobium (Nb t-GBs) as well as heterophase interfaces between \(\{111\}\)-type planes of silver and \(\{110\}\)-type planes of vanadium (Ag/V interfaces). For both interface types, a series of structures is considered by varying the twist angle \(\theta\) in the range \(0^{\circ}\leq\theta\leq 10^{\circ}\).
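Criterion 5 ties the spacing of the network to the size of the atomistic cell. For a low-angle twist boundary, the spacing of each dislocation set follows the standard Frank relation \(d\simeq|\mathbf{b}|/(2\sin(\theta/2))\), quoted here from classical dislocation theory rather than from the text; the Nb \(\frac{1}{2}\langle 111\rangle\) Burgers vector magnitude used below is an approximate assumed value.

```python
import math

# Frank's relation d ~ |b| / (2 sin(theta/2)) for a low-angle boundary, used here
# only to indicate how the network spacing varies over the studied range of theta.
b = 0.286   # nm, assumed magnitude of the Nb 1/2<111> Burgers vector

for theta_deg in (1.0, 2.0, 5.0, 10.0):
    d = b / (2.0 * math.sin(math.radians(theta_deg) / 2.0))
    print(f"theta = {theta_deg:4.1f} deg -> d ~ {d:6.2f} nm")
```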
When \(\theta=0^{\circ}\), the Nb t-GB reduces to a perfect single crystal, while the Ag/V interface is in the NW OR [281, 195], where \(\left\langle 110\right\rangle_{\text{fcc}}\) and \(\left\langle 100\right\rangle_{\text{bcc}}\) are parallel within the interface plane. Ag/V interfaces formed in magnetron sputtered multilayers have been characterized extensively [282]. They are observed in a variety of ORs and with a wide range of interface planes. Among the structures reported are Ag/V interfaces in the KS and NW ORs, both along Ag \(\{111\}\) and V \(\{110\}\) planes. They have been previously modeled using elasticity theory, albeit without accounting for network relaxations, as well as using classical potentials [175]. Comparisons with atomistic simulations revealed discrepancies that were hypothesized to arise from nodal reconstructions of the kind investigated here. The dislocation-based model is presented in detail in section 3.6, while embedded atom method potentials are used to model atomic interactions in both Nb [298] and Ag/V [283]. No experimental investigations of Nb \(\{110\}\) t-GBs have been reported. Nevertheless, these interfaces were previously investigated by atomistic simulations [176], by anisotropic linear elasticity theory, and most recently using phase field models [213]. However, no quantitative comparison between structures predicted by the elasticity theory and atomistic modeling has been previously conducted.

##### Nb \(\{110\}\) t-GBs

Figure (3.30) compares the energy of Nb t-GBs computed from atomistic models with values obtained using the dislocation-based model, the latter using two different core cutoff radii. Both atomistics and the elasticity theory reveal similar trends, with energies increasing monotonically as a function of \(\theta\) within the range of twist angles investigated. Comparison of the elastic results before and after relaxation of the dislocation network shows that this step in the calculation yields a relatively modest reduction in elastic energies. For example, for \(\theta=2^{\circ}\), the reduction is approximately 8% of the initial energy. Energies computed from atomistic models are higher than those obtained from the elasticity theory. This difference is due to dislocation core energies, which are inherently captured in the atomistic calculation, but are not accounted for in the dislocation approach. The larger the core cutoff, the lower the energy computed by the present calculations. Interestingly, regardless of the cutoff radius, the values are smaller than the atomistic ones by an apparently \(\theta\)-independent factor, consistent with both the elastic and core energies scaling in proportion to the total length of dislocation segments in the network, to a first approximation.

Figure 3.30: Nb t-GB energies computed as a function of \(\theta\) using the dislocation-based model and atomistic modeling.

Figure (3.31) compares the structure of Nb t-GB dislocation networks determined from atomistic modeling to ones found with the elasticity theory, using \(\theta=2^{\circ}\) as an example. Other twist angles give rise to qualitatively similar structures. The atomistic structure in Fig. (3.31a) consists of a 2-D tiling of hexagonal regions separated by a connected network of misfit dislocation segments of predominantly screw character. Two types of segments are present: ones with \(\frac{1}{2}\left\langle 111\right\rangle\)-type Burgers vectors as well as ones with \(\left\langle 100\right\rangle\)-type Burgers vectors.
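The coexistence of the two segment types is consistent with Burgers vector conservation at the three-fold nodes, where a \(\langle 100\rangle\)-type junction can form from two \(\frac{1}{2}\langle 111\rangle\)-type parents; the particular pair used in the check below is an illustrative assumption.

```python
import numpy as np

# Burgers vector conservation at a three-fold node: an a<001> junction segment can
# result from the reaction of two 1/2<111>-type parent dislocations (components in
# units of the bcc lattice parameter a; this specific pair is only an example).
b1 = 0.5 * np.array([1.0, 1.0, 1.0])
b2 = 0.5 * np.array([-1.0, -1.0, 1.0])
b3 = b1 + b2
print(b3)                                                # -> [0. 0. 1.], an a<001> junction
# Frank's |b|^2 criterion: favorable if |b3|^2 < |b1|^2 + |b2|^2.
print(np.dot(b3, b3), np.dot(b1, b1) + np.dot(b2, b2))   # 1.0 < 1.5
```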
As shown in Fig. (3.31a), the former are approximately twice as long as the latter. Consistent with previous studies in bcc Nb [176] and iron [291], both segment types have compact cores of atomic-scale dimensions. The hexagonal regions making up the t-GB are symmetric with respect to reflections about mirror lines parallel and perpendicular to the shorter segments (with \(\left\langle 100\right\rangle\)-type Burgers vectors). Similar to the atomistic structure, the network predicted by the elasticity theory\(-\)shown in Fig. (3.31b) and (3.31c)\(-\)also consists of predominantly screw character dislocation segments with \(\frac{1}{2}\left\langle 111\right\rangle\)- and \(\left\langle 100\right\rangle\)-type Burgers vectors, the former with approximately twice the length of the latter. However, unlike in the atomistic structure, for a core cutoff radius of one quarter of the Burgers vector, the dislocation-based model network consists of slightly distorted hexagons with no lines of mirror symmetry, as evidenced by the unequal values of the angles \(\alpha_{1}\) and \(\alpha_{2}\) between the short and long dislocation segments in Fig. (3.31b). In addition to the geometry shown in Fig. (3.31b), the elasticity theory also predicts another stable dislocation network configuration with identical energy by reversing the circulation of the Burgers vectors and with the values of the two angles \(\alpha_{1}\) and \(\alpha_{2}\) reversed. Thus, the complete elastic energy landscape of the Nb t-GB dislocation network with this cutoff radius\(-\)expressed in terms of the nodal positions\(-\)shares the same symmetry as the GB dichromatic pattern itself, but the individual dislocation configurations corresponding to the minima in that landscape do not. The discrepancy between the elasticity-based and atomistic structures has been analyzed in detail and confirmed that, for core cutoff radii of one quarter of the Burgers vector, it occurs systematically for all the twist angles and is not due to inadequate relaxation of either model. Its cause ultimately traces back to the character dependence of dislocation strain energies in bcc crystals [20, 11]. In elastically isotropic bcc crystals, screw dislocations have the lowest energy per unit length. By contrast, in elastically anisotropic materials, the dislocation energy per unit length is lowest for mixed dislocations. For instance, in Nb, dislocation arrays with \([111]\)-type Burgers vectors exhibit a deviation of \(\sim 10^{\circ}\) with respect to perfect screw character [259]. The asymmetry of the distorted hexagons in Fig. (3.31b) increases the edge component of the constituent dislocation segments, thereby reducing the elastic strain energy, as compared to the perfectly symmetric hexagons in Fig. (3.31a). Interestingly, when the core cutoff radius is increased to two times the Burger vector, the elastic prediction of the relaxed dislocation network is symmetric, as shown in Fig. (3.31c). At first, it might be tempting to say that using a larger core cutoff changes the character dependence of the dislocation elastic energy, e.g. by lowering the energy of the pure screw relative to a mixed character. However, the form of the elastic field around an isolated dislocation has no characteristic length scale, so changing the core cutoff cannot lead to any change in the character dependence of dislocation properties [122]. Rather, the difference between the patterns in Fig. 
(3.31b) and (c) is likely due to the length scale of the GB dislocation network itself, in particular to features of its elastic field within a distance of \(\sim 2b\) from the three-fold junctions that, when excluded from the elastic energy calculation, shift the elastic energy minimum to the symmetric state. Evidence for such near-node effects at twist GBs along \(\left\{110\right\}\)-type planes in bcc metals has been found in phase field simulations of dislocation networks, where dislocations are seen to acquire a slight curvature near three-fold junctions in some materials [275].

Figure 3.31: Dislocation networks in Nb t-GBs obtained by (a) atomistic modeling and the dislocation-based model with core cutoff radii of (b) one quarter of the Burgers vector magnitude and (c) two times the Burgers vector magnitude. Atoms in (a) are colored by their potential energy. The pattern is symmetric with respect to the dashed mirror lines in (a). In (b) and (c), dislocation line segments are superimposed on the unrelaxed dichromatic pattern of the GB.

##### Ag/V interfaces

Figure (3.32) plots the energies of Ag/V interfaces as a function of twist angle, \(\theta\). As discussed in section 3.4.6, the energies of heterophase interfaces, such as Ag/V, may be viewed as the sum of a chemical contribution, which is due to the difference in bonding between the two elements in the coherent reference state, and a contribution from the misfit dislocation network, which is associated with the relaxation of coherency. Only the latter depends on the twist angle, while the former is a constant, independent of \(\theta\). The elasticity-based model only computes the elastic contribution to the misfit dislocation network energy. Thus, to ease comparison of energies computed from atomistics to those computed using the elasticity theory, all plots in Fig. (3.32) have been shifted so that their minima are at an energy value of zero. For the atomistic calculations, a downward shift of \(0.85\) J/m\({}^{2}\) was imposed, while all the elastic calculations were shifted downward by \(0.24\) J/m\({}^{2}\). The difference between these shift values, i.e. \(0.61\) J/m\({}^{2}\), is due to the chemical bonding contribution to the total interface energy. It is substantially larger than the elastic contribution. This conclusion is consistent with previous first-principles calculations, such as the one on Fe/VN interfaces reported in Ref. [138]. The Ag/V interface energies computed using the atomistic model exhibit local maxima at \(\theta=0^{\circ}\) (the NW OR) and \(\theta=5.25^{\circ}\) (the KS OR), two nearly degenerate minima at \(\theta=4.5^{\circ}\) and \(\theta=6^{\circ}\), and a monotonically increasing energy for twist angles greater than \(6^{\circ}\). Figure (3.32) plots the elastic energies for two unrelaxed dislocation network configurations, labeled "case 1" and "case 2". These cases correspond to two different solutions of the Frank-Bilby equation obtained by selecting two different combinations of misfit dislocation Burgers vectors, following the terminology introduced in section 3.4.6. Thus, case 1 is identified as the solution obtained using Burgers vectors \(\mathbf{b}_{1}\) and \(\mathbf{b}_{2}\) and case 2 as that obtained using \(\mathbf{b}_{1}\) and \(\mathbf{b}_{3}\), with \(\mathbf{b}_{1}=a_{\rm ref}/2[\overline{101}]\), \(\mathbf{b}_{2}=a_{\rm ref}/2[\overline{11}]\), and \(\mathbf{b}_{3}=a_{\rm ref}/2[\overline{11}0]\) in the reference crystal.
Case 1 has the lower energy for all twist angles, except in the interval \(\sim 4.25^{\circ}<\theta<\sim 5.25^{\circ}\), where the energy of the latter is lower. One point of intersection between the case 1 and case 2 plots in Fig. (3.32) occurs near a local energy maximum, close the KS OR. Each of the unrelaxed configurations predicts one energy minimum, with twist angles and energies in reasonable agreement with one of the minima in the atomistic model. Moreover, the energies of case 1 are in quantitative agreement with the atomistic model for \(\theta>6^{\circ}\). However, at lower twist angles (\(\theta<\sim 6^{\circ}\)), case 1 systematically overpredicts the interface energy by approximately \(20\) mJ/m\({}^{2}\). As demonstrated in Fig. (3.32), relaxation of the dislocation network structure in the elasticity-based model removes nearly all discrepancies in energy between the two models. Quantitatively accurate predictions of interface energies are achieved over the full range of twist angles, with the greatest differences being on the order of \(10\) mJ/m\({}^{2}\) and occurring within a relatively narrow range of twist angles centered approximately on \(\theta=5.5^{\circ}\). Most notably, the match in the energy and twist angle of the minimum near \(\theta=4.5^{\circ}\) improves and the discrepancy between the elasticity-based model and atomistic energies for \(\theta<\sim 4^{\circ}\) is removed. The dislocation network geometries predicted by atomistics and the elasticity theory are compared in Fig. (3.33) for several representative twist angles. All these examples exhibit good qualitative agreement between the two modeling methods, with the general shape of the relaxed patterns matching that of the atomistic models. Small quantitative discrepancies in the lengths and angles of individual dislocations are nevertheless apparent, e.g., in the orientation of the dislocation segments colored blue for \(\theta=2^{\circ}\) in Fig. (3.33b). Interestingly, the elasticity-based model predicts no dissociation of four-fold dislocation nodes for \(\theta=8^{\circ}\), thereby explaining why the unrelaxed network labeled "case 1" in Fig. (3.32) predicts the energy of this interface so accurately. Figure 3.32: Ag/V interface energies computed as a function of \(\theta\) using the dislocation-based approach and atomistic modeling. ## Discussion It is found that incorporating relaxation leads to improved predictions of interface energies for Ag/V interfaces, yielding nearly perfect quantitative agreement with atomistic models. In particular, it did not appear that these predictions might be improved by incorporating a dislocation core model into the elasticity theory. This apparent insensitivity to core structure may be due to the relatively large (1 nm-scale [250]) width of misfit dislocation cores in such interfaces as well as their confinement to the interface plane [70]. It indicates that the effectiveness of linear elastic, dislocation-based models in predicting interface energies and structures is greater than anticipated for some heterophase interfaces, such as the Ag/V interface studied here. The investigation also shows that, while incorporating relaxations improved quantitative predictions of Ag/V energies, it did not alter the qualitative features of the energy versus twist angle plots obtained from unrelaxed dislocation models. 
In particular, it appears that qualitative features of this curve\(-\)such as the number of energy minima and maxima as well as their twist angles and relative energies\(-\)may be obtained via consideration of unrelaxed structures alone. Relaxation of the dislocation network is not essential for predicting those aspects of the interface energy dependence on twist angle. However, dislocation network relaxation is essential for correctly predicting the geometry of the dislocation networks in these interfaces. For Nb t-GBs, marked differences between energies computed using the elasticity-based model and atomistic models are observed. These differences are attributed to the significant contribution of dislocation cores to the total energy of these interfaces: a contribution naturally accounted for in the atomistic model, but not in the elasticity theory. In particular, the incorporation of dislocation network relaxations in the elasticity theory does not resolve the observed discrepancies. Moreover, use of short (\(\sim b/4\)) core cutoff radii leads to asymmetric lowest energy network geometries, contrary to atomistic models. Thus, to better predict the energies of Nb t-GBs, the dislocation approach should be augmented with a core model. Overall, the introduction of core-spreading dislocations in continuum mechanics is a long-standing problem. One approach might be to include a Peierls-Nabarro type calculation with gamma-surfaces obtained from first principles calculations [68]. Some phase field models of dislocation behavior already use such techniques to model dislocation core spreading as well as the corresponding core energy [275; 213]. A second branch is based on generalized higher-order continuum dislocation mechanics [84; 241; 210; 84], which provides length-scale dependent field solutions. Besides these approaches, a recent core-spreading treatment to the present interfacial dislocation networks has been proposed, as described in section 3.8. Figure 3.33: The dislocation structures of the Ag/V interface obtained by the atomistic modeling (atoms coloured by their local potential energies using the same color bar as in Fig. (3.31) from \(-5.13\) ev (blue) to \(-5.11\) ev (red) and dislocation-based model (dislocation segments superimposed on dichromatic patterns). In (a), (b), and (c), the black line segment is the dislocation segment created upon dissociation of a four-fold node into two three-fold nodes. There is no such segment in (d) because the four-fold node in this network does not dissociate. The thin black lines in (a) illustrate the shape of the initial, unrelaxed dislocation network. ### 3.7 Interaction with extrinsic dislocations in bimaterials In this section, lattice dislocation interactions with semicoherent interfaces are studied by means of anisotropic field solutions in homo- and hetero-structures. The Stroh formalism cover different shapes and dimensions of various extrinsic and intrinsic dislocations2. As illustrated in Fig. (3.34), equi-spaced arrays of straight lattice dislocations and finite arrangements of piled-up dislocations as well as polygonal and elliptical dislocation loops are considered using the superposition principle in three dimensions. Interaction and driving forces are derived to compute the equilibrium dislocation positions in pile-ups, including the internal structures and energetics of the interfacial dislocations. 
For illustration, the effects due to the elastic and lattice mismatches are discussed for the pure misfit Au/Cu and heterophase Cu/Nb systems, where the discrepancies introduced by the approximation of isotropic elasticity are shown.

Footnote 2: In accordance with the former derivations established by Pan and co-workers, two explicit conventions have been changed with respect to the foregoing formalism: the interface normal \(\mathbf{n}\parallel\mathbf{x}_{2}\) is replaced with \(\mathbf{n}\parallel\mathbf{x}_{3}\), and the positive sign of the exponential in the Fourier transforms in eqs. (3.5) and (3.7) is replaced with the negative sign, without loss of generality. These conventions are adopted through the end of the manuscript.

#### Extrinsic dislocation arrays and loops

In classical dislocation dynamics calculations [248, 252], the material volumes of interest are usually regarded as a representative part of infinitely large crystals that are replicated by periodic boundary conditions to preserve the translational invariance. It also becomes useful to derive accurate field solutions for infinite dislocation arrays, for which the periodicity of lattice dislocations is consistent with infinitely periodic boundary conditions applied to the elementary representative volumes, without introducing truncation in replicating simulation cells. The elastic solutions for arrays of lattice dislocations and for dislocation pile-ups located in bimaterials are obtained analytically using anisotropic elasticity. Without loss of generality, the following solutions are given for singularities located either in material A or B.

##### Elastic fields for infinitely periodic dislocation arrays

The linear elasticity problem of the elastic fields in both materials A and B due to a Volterra-type lattice dislocation array with periodic inter-dislocation distance \(h\) in bicrystals is solved by superposing the single-dislocation solutions [200, 201, 203] as infinite series [61]. For lattice dislocations indexed by \(n\) running from \(-\infty\) to \(\infty\), which are located in material A at \((x_{1}^{\text{lat}}+nh,x_{3}^{\text{lat}})\), the corresponding displacement field in material A, i.e.
with \(x_{3}>0\), is therefore expressed by eq. (3.126) as an infinite series over the index \(n\) of the corresponding single-dislocation solutions written in the Stroh formalism, where the diagonal complex matrices of the formalism carry the dependence on the field point and on the inter-dislocation spacing \(h\).

##### Elastic fields for piled-up dislocation arrays
Since the single dislocation pile-up system can be viewed as a discrete set of identical parallel lattice dislocations lying in the same slip plane, with combinations of attractive and repulsive forces on each dislocation until a barrier (here, the semicoherent interface) is encountered, the particular boundary-value problem consists of superposing the elastic stress fields produced by each dislocation of the entire pile-up. This summation over the \(N\) dislocations, individually located at \(\mathbf{x}^{\text{lat}\,s}=(x_{1}^{\text{lat}\,s},x_{3}^{\text{lat}\,s})\) for the \(s^{\text{th}}\) piled-up lattice dislocation of interest (here, invariant along the \(x_{2}\)-axis), all with the same (positive or negative) sign, is carried out over the single stress field solutions, as follows

\[\sigma^{\text{lat\,pile-up}}_{ij}(\mathbf{x})=\sum_{s=1}^{N}\sigma^{\text{lat}\,s}_{ij}(\mathbf{x})\,,\tag{3.137}\]

where \(\sigma^{\text{lat}\,s}_{ij}\) is the stress field produced by the \(s^{\text{th}}\) dislocation of the pile-up (a minimal numerical illustration of this superposition is given below).

##### Elastic fields for extrinsic dislocation loops

Since the point force (called source point) acts at \(\mathbf{y}\) in the upper half-space of a bimaterial, i.e. \(y_{3}>0\), the general Green's function tensor at \(\mathbf{x}\) (called field point) is separated into two parts, i.e.

\[\forall\,y_{3}>0:\ \mathbf{G}(\mathbf{y},\mathbf{x})=\begin{cases}\,{}_{\Lambda}\mathbf{G}^{\uparrow\infty}(\mathbf{y},\mathbf{x})+{}_{\Lambda}\mathbf{G}^{\uparrow\text{image}}(\mathbf{y},\mathbf{x})\,,&x_{3}>0\\ \,{}_{\Theta}\mathbf{G}^{\uparrow}(\mathbf{y},\mathbf{x})={}_{\Theta}\mathbf{G}^{\uparrow\text{image}}(\mathbf{y},\mathbf{x})\,,&x_{3}<0\,,\end{cases}\tag{3.143}\]

where \(\mathbf{G}^{\uparrow\infty}\) corresponds to the full-space part and \(\mathbf{G}^{\uparrow\text{image}}\) to the complementary image part, for which the latter is associated with the elastic mismatch in dissimilar materials. Here and in the following, the symbol \(\uparrow(\downarrow)\) is introduced to unambiguously specify that the tensorial Green's functions are associated with a dislocation loop that is located in the upper (lower) material. For instance, \({}_{\Lambda}\mathbf{G}^{\uparrow\infty}(\mathbf{y},\mathbf{x})\) represents the full-space Green's function tensor computed in the semi-infinite linear elastic crystal \(\Lambda\) at \(\mathbf{x}\) when the point force at \(\mathbf{y}\) acts in the upper crystal \(\uparrow\).
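As a minimal numerical illustration of the pile-up superposition in eq. (3.137), the sketch below sums the stress contributions of \(N\) individual dislocations at a field point. The per-dislocation stress is deliberately replaced by a simple isotropic screw-dislocation expression, since the anisotropic bimaterial solutions of the preceding paragraphs cannot be reproduced in a few lines; all numerical values are placeholders.

```python
import numpy as np

MU, BURGERS = 48.3e9, 0.2556e-9   # placeholder shear modulus (Pa) and Burgers vector (m)

def sigma12_single_screw(x1, x3, x1s, x3s):
    """Placeholder sigma_12 shear stress of an isotropic screw dislocation located
    at (x1s, x3s); stands in for the anisotropic bimaterial single-dislocation field."""
    dx, dz = x1 - x1s, x3 - x3s
    return -MU * BURGERS / (2.0 * np.pi) * dz / (dx**2 + dz**2)

def sigma12_pileup(x1, x3, positions):
    """Eq. (3.137): total pile-up stress = sum of the single-dislocation stresses."""
    return sum(sigma12_single_screw(x1, x3, x1s, x3s) for (x1s, x3s) in positions)

# Hypothetical usage: 5 dislocations piled up along x1 on the plane x3 = 0.
positions = [(i * 2.0e-9, 0.0) for i in range(5)]
print(sigma12_pileup(15.0e-9, 1.0e-9, positions))
```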
For a specific surface \(S_{\Lambda}\) bounded by a dislocation loop in the upper material \(\Lambda\) with constant elastic stiffness and uniform Burgers vector, the corresponding displacement gradients can straightforwardly be separated into two parts as follows

\[{}_{\Lambda}u^{\text{loop}\,\uparrow}_{k,q}(\mathbf{y})={}_{\Lambda}c_{ijml}\,{}_{\Lambda}b^{\text{loop}}_{j}\,n_{i}^{\text{loop}}\int_{S_{\Lambda}}\left[{}_{\Lambda}G^{\uparrow\infty}_{mk,x_{l}y_{q}}(\mathbf{y},\mathbf{x})+{}_{\Lambda}G^{\uparrow\text{image}}_{mk,x_{l}y_{q}}(\mathbf{y},\mathbf{x})\right]\,\mathrm{d}S(\mathbf{x})\,,\tag{3.144}\]

where differentiation on the left-hand side is with respect to \(\mathbf{y}\). Only the corresponding derivatives of the point-force Green's function tensors are therefore needed to determine the elastic distortion (also, the elastic stress) fields, which is discussed and detailed in Appendix A of Ref. [261]. Thus, the complete point-force Green's displacement tensor in real space is given by

\[{}_{\Lambda}\mathbf{G}^{\uparrow\infty}(\mathbf{y},\mathbf{x})=\begin{cases}-\dfrac{1}{2\pi^{2}}\int_{0}^{\pi}{}_{\Lambda}\mathbf{A}_{*}\cdot\mathbf{F}({}_{\Lambda}\mathbf{\rho}_{*}^{\dagger})\cdot{}_{\Lambda}\mathbf{A}_{*}^{\dagger}\,\mathrm{d}\theta\,,&x_{3}>y_{3}\\ \dfrac{1}{2\pi^{2}}\int_{0}^{\pi}{}_{\Lambda}\mathbf{A}\cdot\mathbf{F}({}_{\Lambda}\mathbf{\rho}^{\dagger})\cdot{}_{\Lambda}\mathbf{A}^{\dagger}\,\mathrm{d}\theta\,,&0\leq x_{3}<y_{3}\,,\end{cases}\quad\text{and}\quad{}_{\Lambda}\mathbf{G}^{\uparrow\text{image}}(\mathbf{y},\mathbf{x})=\dfrac{1}{2\pi^{2}}\int_{0}^{\pi}{}_{\Lambda}\mathbf{A}_{*}\cdot{}_{\Lambda}\mathbf{C}^{\uparrow}\cdot{}_{\Lambda}\mathbf{A}^{\dagger}\,\mathrm{d}\theta\,,\tag{3.145}\]

where both matrices \(\mathbf{F}({}_{\Lambda}\mathbf{\rho}^{\dagger})\), or \(\mathbf{F}({}_{\Lambda}\mathbf{\rho}_{*}^{\dagger})\) by substitution, and \({}_{\Lambda}\mathbf{C}^{\uparrow}\) for the full-space and complementary image parts, respectively, are defined by

\[\begin{split}\mathbf{F}({}_{\Lambda}\mathbf{\rho}^{\dagger})&=\left((x_{1}-y_{1})\cos\theta+(x_{2}-y_{2})\sin\theta+{}_{\Lambda}\mathbf{\rho}^{k}(x_{3}-y_{3})\right)^{-1}\mathbf{I}\\ {}_{\Lambda}\mathbf{C}^{\uparrow}&=\left((x_{1}-y_{1})\cos\theta+(x_{2}-y_{2})\sin\theta+{}_{\Lambda}\mathbf{\rho}_{*}^{k}x_{3}-{}_{\Lambda}\mathbf{\rho}^{j}y_{3}\right)^{-1}\left({}_{\Lambda}\mathbf{A}_{*}^{-1}\,({}_{\Lambda}\mathbf{M}_{*}+{}_{\Lambda}\mathbf{M})^{-1}({}_{\Lambda}\mathbf{M}-{}_{\Theta}\mathbf{M})\cdot{}_{\Lambda}\mathbf{A}\right)\,.\end{split}\tag{3.146}\]

By virtue of eq. (3.71) and eq. (3.144) with eqs. (3.145) and (3.146), the stress fields produced in \(\Lambda\) due to dislocation loops located in the upper material \(\Lambda\) are expressed by eq. (3.147) in terms of line integrals over \([0,\pi]\) of the corresponding derivatives of the Green's functions,
so that the completing stress field solutions in the lower material B are finally given by eq. (3.151). The integral expressions in eqs. (3.147) and (3.151) yield the final inhomogeneous stresses in both neighboring crystals A and B with different anisotropic elastic constants, respectively, which are induced by arbitrary polygonal-shaped as well as elliptical dislocation loops in the upper material. These expressions are suitable for numerical treatments (e.g., using the weighted Chebyshev-Gauss quadrature method) since line integrals over \([0,\,\pi]\) are needed for both the full-space and image parts of the stresses.

#### 3.7.2 Internal forces on intrinsic and extrinsic dislocations

A general concept in classical dislocation dynamics simulations is based on the assumption of equilibrium of the forces that act along the dislocation lines or loops (e.g., polygonally discretized into segments) at each time increment. These forces arise from sources external to the dislocation of interest (as long-range forces) and from the dislocation itself (as a self-force). In the following, these two contributions are described.

##### The long-range Peach-Koehler formula

Many features of crystalline solids can be explained based on the conservative driving forces acting on dislocation lines and loops. In the present work, to determine the driving force exerted on dislocations by the total stress field \(\sigma^{\text{tot}}(\mathbf{x}^{P})\) applied at the coordinate \(\mathbf{x}^{P}\) of point \(P\), the long-range Peach-Koehler force \(\mathbf{f}^{\text{PK}}(\mathbf{x}^{P})\) per unit length is used [205], i.e.

\[f^{\text{PK}}_{k}(\mathbf{x}^{P})=\epsilon_{kjl}\,\bar{\sigma}^{\text{tot}}_{ij}(\mathbf{x}^{P})\,b_{i}\,\xi_{l}\,,\tag{3.152}\]

where \(\epsilon_{kjl}\) is the permutation tensor, \(b_{i}\) the Burgers vector, \(\xi_{l}\) the unit tangent to the dislocation line, and the superimposed bars on any stress quantities are used to indicate that the stress fields exclude the singular self-stress components, which would otherwise yield unrealistic divergent contributions. Thus, the local stress fields in eq. (3.152) originate from the externally applied (here, uniform) stresses \(\sigma^{\text{app}}\) and from the internal stresses produced by the other dislocations, i.e. the misfit dislocation stresses \(\sigma^{\text{int}}\), the lattice dislocation structures (arrays and pile-ups) \(\sigma^{\text{lat}}\), and the dislocation loops \(\sigma^{\text{loop}}\), where all field solutions include the complementary image stresses arising from the presence of the dissimilar interfaces.

##### Self-force on planar dislocation loops

Without considering any nonlinear interatomic interactions due to the dislocation cores, the standard self-force is thought of as resulting from the elastic self-energy of the dislocation loops. From Ref. [99], the self-energy of a planar dislocation loop with arbitrary shape is defined to be the strain energy exterior to a tube of cut-off radius \(\epsilon\) surrounding the dislocation loop \(L\), such that the corresponding stored energy value is formally finite. The total self-energy \(W^{\text{self}}\) for a given dislocation loop is also separated into two contributions [13, 46, 99], i.e.
\[W^{\text{self}}=W^{\text{self}}_{T_{e}}+W^{\text{self}}_{S_{e}}=\frac{1}{2} \int_{T_{e}}\sigma^{\text{loop}}_{ij}(\mathbf{x})\,u^{\text{loop}}_{i}(\mathbf{x})\, n^{\text{tube}}_{j}\,\,\text{d}S+\frac{1}{2}\int_{S_{e}}\sigma^{\text{loop}}_{ij}( \mathbf{x})\,b^{\text{loop}}_{i}\,n^{\text{tube}}_{j}\,\,\text{d}S\,, \tag{3.153}\] where \(\mathbf{n}^{\text{tube}}\) is the inner normal on the tube \(T_{e}\) surrounding \(L\), referred as the tube self-energy contribution, and \(S_{e}\) is the portion of an open surface \(S\) bounded by \(L\) not enclosed by \(T_{e}\), i.e. the cut self-energy part. After substantial manipulation [98, 99], the tube contribution to the self-energy is given by \[W^{\text{self}}_{T_{e}}=\frac{1}{2}\int_{L}\text{d}\ell\int_{0}^{2\pi}\sigma ^{\text{loop}}_{ij}(\omega)\,u^{\text{loop}}_{i}(\omega)\,n^{\text{tube}}_{j }\,\,\epsilon\,\,\text{d}\omega=H(\alpha)\int_{L}\text{d}\ell\,, \tag{3.154}\] where \[H(\alpha)=-\frac{1}{2}\int_{0}^{2\pi}\frac{\partial\phi_{i}}{\partial\omega} \text{d}\omega=\frac{1}{2}\int_{0}^{2\pi}\phi_{i}\frac{\partial u^{\text{ loop}}_{i}}{\partial\omega}\text{d}\omega\,, \tag{3.155}\] with \(\omega\) the polar angle about the dislocation in any plane cross-section of the tube, and \(\phi_{i}\) the Airy stress function vector. The rigorous derivation in Ref. [99] by varying the self-energy in eq. (3.153) with respect to an arbitrary in-plane virtual normal displacement of the dislocation loop gives rise to the self-consistent treatment of the total distributed (and also signed) self-force on the dislocation loop at point \(P\) of interest. The complete line-tension self-force expression is also defined as follows \[f^{\Gamma}(\mathbf{x}^{P})=\underbrace{-\frac{1}{2}\,b_{i}\,n_{j}(\sigma_{ij}^{ \text{loop}}(\mathbf{x}^{P}+\epsilon\,\mathbf{m})+\sigma_{ij}^{\text{loop}}(\mathbf{x}^{P }-\epsilon\,\mathbf{m}))+\kappa\,E(\phi)}_{\text{cut self-force}}\underbrace{- \kappa\left[H(\mathbf{\alpha})+\frac{\partial^{2}H}{\partial\mathbf{\alpha}^{2}} \right]}_{\text{tube self-force}}, \tag{3.156}\] where \(\mathbf{m}=\mathbf{n}\times\mathbf{\tau}\) points inward for convex planar dislocation loops (i.e. toward the centers) with \(\mathbf{\tau}\) the unit local tangent vector to the planar loop, \(\kappa\) is the local curvature at point \(P\), \(E\) is the standard pre-logarithmic energy factor of an infinite straight dislocation with tangent \(\mathbf{\xi}\), and \(H\) is a tube integral around the same latter dislocation. The term \(E\) in eq. (3.156) is expressed as function of the polar angle \(\phi\), which is measured between the Burgers vector and the line direction \(\mathbf{\xi}\), i.e. \(\phi\) denotes a given character angle, where \(\phi=0^{\circ}\) corresponds to a pure screw dislocation [222], while \(H\) depends on the angle \(\alpha\) between \(\mathbf{\tau}\) and an arbitrary datum [99, 13]. The datum is conveniently chosen here along the fixed Burgers vector of the corresponding planar dislocation loop. Hence, \(\mathbf{\alpha}\) is also the local character angle, measured counter-clockwise from the Burgers vector to the local tangent vector \(\mathbf{\tau}\) at every point on the loop. In particular, \(\mathbf{\alpha}=\phi+90^{\circ}\) for circular shear dislocation loops, where more complex geometrical relations arise for arbitrarily-shaped dislocation loops. 
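Because the relation between the local character angle and the loop shape is only simple for circles, the following sketch parametrizes a planar elliptical loop and evaluates, at each point, the unit tangent, the local curvature \(\kappa\), and the character angle \(\alpha\) measured counter-clockwise from the Burgers vector to the tangent, i.e. the geometric inputs of eq. (3.156). The semi-axes and the in-plane Burgers vector direction are assumed values for illustration only.

```python
import numpy as np

def ellipse_geometry(a, b, n=360, burgers_dir=(1.0, 0.0)):
    """Unit tangent, curvature and character angle alpha along x(t) = (a cos t, b sin t)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dx, dy = -a * np.sin(t), b * np.cos(t)             # first derivatives
    ddx, ddy = -a * np.cos(t), -b * np.sin(t)          # second derivatives
    speed = np.hypot(dx, dy)
    tau = np.stack([dx, dy], axis=1) / speed[:, None]  # unit tangent vectors
    kappa = np.abs(dx * ddy - dy * ddx) / speed**3     # local curvature
    bvec = np.asarray(burgers_dir, dtype=float)
    bvec /= np.linalg.norm(bvec)
    # character angle alpha, measured counter-clockwise from the Burgers vector to the tangent
    alpha = (np.arctan2(tau[:, 1], tau[:, 0])
             - np.arctan2(bvec[1], bvec[0])) % (2.0 * np.pi)
    return t, tau, kappa, np.degrees(alpha)

# Hypothetical usage with assumed semi-axes a = 2, b = 1 (arbitrary units):
t, tau, kappa, alpha = ellipse_geometry(a=2.0, b=1.0)
print(alpha[0], kappa[0])   # alpha = 90 deg and kappa = a/b**2 at t = 0
```

For a circular loop (\(a=b\)) this construction reduces to \(\alpha=\phi+90^{\circ}\), consistent with the special case noted above.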
Importantly, because these energy contributions vary with the dislocation character (e.g., edge and screw dislocations have different energies), the line-tension self-forces can exert a torque on the line portion of the curved dislocation of interest in order to rotate it into its orientation of lowest energy. It is worth noting that the first part of the cut self-force contribution in eq. (3.156) is given in Ref. [43], where the singularity is removed by defining the stress as an average of the stresses evaluated at two points on opposite sides of, and at a short distance \(\epsilon\) away from, the dislocation line. The correction term \(\kappa\,E(\phi)\), proportional to the curvature \(\kappa\), has been consistently obtained in Ref. [99]. Here, if the force calculated from eq. (3.156) is positive, then it acts along \(-\mathbf{m}\) and vice versa, so that

\[\mathbf{f}^{\Gamma}(\mathbf{x}^{P})=-f^{\Gamma}(\mathbf{x}^{P})\,\,\mathbf{m}\,.\tag{3.157}\]

For large values of \(\kappa^{-1}\) and \((\kappa\epsilon)^{-1}\), the dominant cut self-force contribution in eq. (3.156) is \(-\kappa\,\Gamma\), where \(\Gamma\) represents the classical local line-tension approximation [13], i.e.

\[\Gamma=\left(E(\phi)+\frac{\partial^{2}E}{\partial\phi^{2}}\right)\ln\!\left((\kappa\epsilon)^{-1}\right)\,,\tag{3.158}\]

for which \(-\kappa\,\Gamma\) vanishes for any \(P\) along infinitely long straight dislocations, i.e. when \(\kappa\to 0\). In the following, \(\epsilon\) is chosen as the pre-determined cutoff distance \(r_{0}\) that excludes the dislocation singularities, i.e. \(\epsilon=r_{0}\).

#### On the piled-up dislocations in the (111)Cu/(011)Nb bimaterial

The considered interaction problems are related to the force equilibrium of piled-up systems of infinitely long straight dislocations under the action of an externally applied shear stress that drives the lattice dislocations toward the impenetrable interfaces (i.e. without slip transmission across the boundaries) in (111)-type glide planes. Without loss of generality, the dislocations with Burgers vectors \({}_{\rm B}\mathbf{b}^{\text{lat}}\) are embedded in the material B. Several cases are presented, from the simple single-crystal case of equilibrium pile-ups to dislocations piled up between the bimetallic semicoherent Cu/Nb interface (with relaxed interfacial dislocation arrangements) and a shear dislocation loop. The material properties used in the calculations are listed in Table 3.1.

#### Computational procedures

The present calculations of the equilibrium positions of \(N\) piled-up dislocations embedded in the lower materials are carried out by using a numerical iterative relaxation scheme under constant applied stresses. As commonly used in dislocation dynamics simulations, a linear mobility law for all piled-up dislocations (except the leading dislocations), individually located at \(\mathbf{x}^{\text{lat}\,s}\) in single glide planes, is phenomenologically introduced, i.e.

\[\mathbf{v}_{\text{glide}}(\mathbf{x}^{\text{lat}\,s})=B^{-1}\mathbf{f}_{\text{glide}}^{\text{PK}}(\mathbf{x}^{\text{lat}\,s})\,,\tag{3.159}\]

where \(B\) is an isotropic viscous drag coefficient of order \(10^{-5}\) Pa·s for fcc solids [156]. In the following calculations, \(B=5\times 10^{-5}\) Pa·s.
In eq. (3.159), the Peach-Koehler force is used to compute the velocity \(\mathbf{v}_{\text{glide}}\) for dislocation glide, which, in turn, is used to predict the new positions of the dislocations by adopting a standard time integration scheme (the explicit forward Euler time discretization), such that \(\mathbf{x}^{\text{lat}\,s}(t+\Delta t)=\mathbf{x}^{\text{lat}\,s}(t)+\Delta t\,\mathbf{v}_{\text{glide}}(\mathbf{x}^{\text{lat}\,s})\). The piled-up dislocations are initially aligned and arbitrarily equi-spaced on the same glide plane, and the relaxation is iterated until the net resolved force on each dislocation (except the first leading dislocations at the interfaces) is less than \(10^{-5}\) N/m. For the specific leading dislocations, strictly located at the interfaces, i.e. \(\mathbf{x}^{\mathrm{lat\,1st}}=\mathbf{0}\), a zero velocity is therefore imposed, while the corresponding Peach-Koehler force \(f^{\mathrm{PK}}(\mathbf{x}^{\mathrm{lat\,1st}}=\mathbf{0})\) is not necessarily equal to zero. These forces acting on the leading dislocations at the interfaces are therefore separated into resolved glide and climb components, i.e.

\[f^{\mathrm{PK,\,1st}}_{\mathrm{glide}}=f^{\mathrm{PK}}(\mathbf{x}^{\mathrm{lat\,1st}}=\mathbf{0})\cdot\boldsymbol{\nu}_{\mathrm{glide}}\,,\quad\text{and}\quad f^{\mathrm{PK,\,1st}}_{\mathrm{climb}}=f^{\mathrm{PK}}(\mathbf{x}^{\mathrm{lat\,1st}}=\mathbf{0})\cdot\boldsymbol{\nu}_{\mathrm{climb}}\,,\tag{3.160}\]

where \(\boldsymbol{\nu}_{\mathrm{glide}}=\boldsymbol{\nu}_{\mathrm{climb}}\times\boldsymbol{\xi}\), and \(\boldsymbol{\nu}_{\mathrm{glide}}\) and \(\boldsymbol{\nu}_{\mathrm{climb}}\) are the direction of the dislocation pile-up and the normal to the slip plane, respectively. Here, the piled-up dislocation line directions \(\boldsymbol{\xi}\) are chosen such that \(\boldsymbol{\nu}_{\mathrm{glide}}\) points away from the interfaces, so that a positive value of the glide force in eq. (3.160) indicates a repulsive force from the interface. In order to illustrate the numerical procedure for dislocation pile-ups, single-crystal Cu systems (which constitute a particular case where both materials A and B are identical, i.e. with neither lattice misfit nor image forces) are considered with the following orientation: \(\mathbf{x}_{1}=[1\bar{1}0]\), \(\mathbf{x}_{2}=[110]\), and \(\mathbf{x}_{3}=\mathbf{n}=[001]\). Two piled-up systems with different dislocation characters are examined: piled-up dislocations with \(60^{\circ}\) mixed and with pure screw characters. The line directions \(\boldsymbol{\xi}\parallel[110]\) are defined in the planar glide plane with normal \(\boldsymbol{\nu}_{\mathrm{climb}}\parallel[1\bar{1}\bar{1}]\), which is not orthogonal to the impenetrable interface since the glide plane makes an angle of \(54.7^{\circ}\) with the interface plane. The calculations are performed in Cu, with the moderately high anisotropy ratio \(A_{\mathrm{Cu}}=3.21\), and the associated Burgers vectors are defined by \({}_{\mathrm{B}}\mathbf{b}^{\mathrm{lat}}=a_{\mathrm{Cu}}/2\,[101]\) and \({}_{\mathrm{B}}\mathbf{b}^{\mathrm{lat}}=a_{\mathrm{Cu}}/2\,[110]\parallel\mathbf{x}_{2}\), for the 6-, 9-, or 12-dislocation mixed and screw pile-ups, respectively. As quantified in the foregoing sections, the elastic coherency stress states that characterize the semicoherent interfaces can be very large in otherwise stress-free far fields; here, simple shear stresses are applied to the nanostructured elastic problems with mixed and with pure screw piled-up dislocations, respectively.
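The iterative relaxation procedure described above can be summarized in a short script: every dislocation except the pinned leading one is advanced with the overdamped mobility law of eq. (3.159) and an explicit forward Euler step until the largest residual glide force falls below the stated tolerance of \(10^{-5}\) N/m. The sketch below assumes, for simplicity, isotropic elasticity and a set of like-signed screw dislocations pushed toward a barrier at \(x=0\) by an applied resolved shear stress; the material parameters are illustrative and the anisotropic bimaterial force evaluation of this work is not reproduced.

```python
import numpy as np

MU, B_VEC, B_DRAG = 48.3e9, 0.2556e-9, 5.0e-5   # Pa, m, Pa*s (illustrative values)
TAU_APP = 20.0e6                                 # applied resolved shear stress (Pa)
TOL, DT = 1.0e-5, 1.0e-11                        # force tolerance (N/m) and time step (s)

def glide_forces(x):
    """Glide force per unit length on like-signed screw dislocations: mutual
    isotropic repulsion plus the applied stress pushing them toward x = 0."""
    f = np.full_like(x, -TAU_APP * B_VEC)        # applied force, toward the barrier
    dx = x[:, None] - x[None, :]                 # pairwise separations
    np.fill_diagonal(dx, np.inf)                 # exclude self-interaction
    f += (MU * B_VEC**2 / (2.0 * np.pi)) * np.sum(1.0 / dx, axis=1)
    return f

x = np.linspace(0.0, 400.0e-9, 6)                # leading dislocation pinned at x[0] = 0
for _ in range(500000):
    f = glide_forces(x)
    f[0] = 0.0                                   # zero imposed velocity on the leader
    if np.max(np.abs(f)) < TOL:
        break
    x += DT * f / B_DRAG                         # eq. (3.159) with a forward Euler update
print(np.round(x * 1e9, 2))                      # relaxed positions in nm
```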
Figures (3.35a), (b) and (c) illustrate the stress field components \(\sigma_{22}^{\mathrm{lat}}\) or \(\sigma_{12}^{\mathrm{lat}}\) produced by the equilibrium piled-up arrangements of 6, 9 or 12 mixed and screw dislocations, respectively. For comparison with Fig. (3.35c), Fig. (3.35d) shows the equilibrium 12-screw piled-up dislocations and the corresponding stress component \(\sigma_{12}^{\mathrm{lat}}\) obtained by using the isotropic elastic approximation based on the Voigt averages of the elastic constants. Viewed in terms of the operation and eventual exhaustion of Frank-Read sources, these plots illustrate the back-stress concentrations generated by the pile-ups, which are considerably affected by the number of piled-up dislocations, the individual dislocation characters, as well as the anisotropic elasticity. Table (3.8) reports the corresponding dislocation positions in the different equilibrium pile-ups and the forces on the fixed leading dislocations in anisotropic and isotropic media. As found in the earliest studies of discrete edge or screw dislocation pile-ups on simple single glide planes by use of the Laguerre polynomials as routine procedures [85, 116], the present results essentially illustrate a \(\sim x^{-1/2}\) dependence of the dislocation density on the distance \(x\) to the impenetrable interfaces. For the mixed piled-up dislocations, the results from anisotropic and isotropic elastic calculations are practically indistinguishable with 6 and 9 dislocations. For screw piled-up dislocations, however, the results exhibit discrepancies in dislocation spacings and forces (in magnitude in N/m and in sign, especially for the climb components) resulting from the approximation of isotropic elasticity, with relative errors that vary between 16.4% and 31.3% in the dislocation positions.

Figure 3.35: Specific internal stress components produced by the equilibrium (a) 6-mixed, (b) 9-mixed, and (c) 12-screw piled-up dislocations against the interface in Cu using anisotropic elasticity. The same component as in (c) is shown in (d) using the approximation of isotropic elasticity (obtained by the Voigt averages). The horizontal lines denote the impenetrable interfaces.

### Dislocation geometries and orientations

An arbitrary microstructure is chosen here to demonstrate the ability of the present elastic superposition scheme in complex piled-up dislocation problems. The short-range stress fields generated by the infinitely periodic misfit dislocation pattern and interacting with various types of lattice defects are investigated for the semi-infinite Cu/Nb system in the NW OR. As defined in eq. (3.44), the following specific NW relations are used, i.e. \(\mathbf{x}_{1}\parallel[1\bar{1}0]_{\mathrm{fcc}}\parallel[100]_{\mathrm{bcc}}\), \(\mathbf{x}_{2}\parallel[11\bar{2}]_{\mathrm{fcc}}\parallel[01\bar{1}]_{\mathrm{bcc}}\), and \(\mathbf{x}_{3}\parallel\mathbf{n}\parallel[111]_{\mathrm{fcc}}\parallel[011]_{\mathrm{bcc}}\). In the upper dislocated Cu material, two sets of infinitely periodic mixed dislocations with identical line directions \(\boldsymbol{\xi}\parallel[11\bar{2}]_{\mathrm{fcc}}\parallel\mathbf{x}_{2}\), but different Burgers vectors \({}_{\mathrm{A}}\mathbf{b}^{\mathrm{lat}\,(1)}\parallel[101]\) (i.e. almost edge) and \({}_{\mathrm{A}}\mathbf{b}^{\mathrm{lat}\,(2)}\parallel[01\bar{1}]\) (i.e. \(30^{\circ}\) mixed), are randomly introduced near the Cu/Nb interface.
The inter-dislocation distance \(h\), which consists of the translationally periodic boundary conditions with respect to the \(\mathbf{x}_{1}\) direction, and each set of the infinitely long dislocations has the same number of positive and negative signed dislocation characters, such that these dislocations are viewed as statistically-stored dislocations [10], without producing long-range stress effects. On the other hand, the misfit dislocations that are characterized by a non-zero Burgers vector content (to necessarily realize the compatibility at the NW \(\mathrm{Cu/Nb}\) interface) are, by definition, geometrically-necessary dislocations with zero far-field stresses as well. Following the procedure from section 3.6, these geometrically-necessary dislocations are analyzed in Fig. (3.36). Furthermore, an elliptical shear dislocation loop that lies on the \((111)_{\mathrm{fcc}}\parallel\mathbf{n}\) glide plane, with \(\mathbf{\lambda}\mathbf{b}^{\mathrm{loop}}\parallel[1\dot{1}0]_{\mathrm{fcc}}\parallel \mathbf{x}_{1}\), is embedded in the Cu material. In the lower bcc Nb material, a pile-up system with \(N=5\) pure edge dislocations is introduced with line directions \(\mathbf{\xi}\parallel[01\dot{1}]_{\mathrm{loc}}\parallel\mathbf{x}_{2}\), on the glide plane normal to \(\mathbf{v}_{\mathrm{climb}}\parallel[211]\) and Burgers vectors \(\mathbf{\mathrm{s}}\mathbf{b}^{\mathrm{lat}}\parallel[111]_{\mathrm{loc}}\). Furthermore, a circular shear dislocation loop resides in the \((011)_{\mathrm{bcc}}\parallel\mathbf{n}\) glide plane in Nb, such that the pile-up of edge dislocations is comprised between the circular shear dislocation loop and the semicoherent Cu/Nb interface. The complete dislocated microstructure in the present Cu/Nb bimaterial is displayed in Fig. (3.37a). According to these dislocation geometries and orientations, both interacting shear dislocation loops in Cu and Nb are not periodically replicated. Moreover, the extrinsic lattice dislocations are fixed in Cu, while the piled-up dislocations can glide only, without bowing around the loops. ### Interfacial dislocation structures The proper reference state under the condition of vanishing far-field stresses in the NW \(\mathrm{Cu/Nb}\) bicrystal has been determined in section 3.4, where the magnitude of correct Burgers vectors are given by \(b_{1}=b_{2}=0.28301\) nm. With respect to the Burgers vectors, the quantized Frank-Bilby eq. (3.121) gives rise to different solutions, with a particular lozenge-shaped dislocation structure that is specifically comprised of two arrays of parallel dislocations (with no local reactions at nodes) with mixed characters \(\phi_{1}^{\mathrm{in}}=\phi_{2}^{\mathrm{in}}=37.51^{\circ}\), and the angle between these two unrelaxed sets of dislocations is \(\phi^{\mathrm{in}}=15.03^{\circ}\). In addition, \(p_{1}^{\mathrm{in}}=p_{2}^{\mathrm{in}}=4.33249\) nm, so that the inter-dislocation spacings are given by \(d_{1}^{\mathrm{un}}=d_{2}^{\mathrm{un}}=1.12341\) nm. This specific dislocation structure is considered as an initial non-equilibrium state, where local reactions of crossing dislocations to form dislocation segments with a third Burgers vector in hexagonal-shaped networks can be energetically favorable. As described in section 3.7.1, the intrinsic dislocation pattern in the NW orientation is obtained by pre-computing the elastic strain energy landscape that corresponds to the lozenge-shaped solution predicted by the quantized Frank-Bilby equation. 
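The pre-computed energy landscape strategy can be sketched as a plain grid scan over two reaction coordinates describing the nodal positions of the network, followed by a greedy descent from the unrelaxed Frank-Bilby configuration toward the nearest minimum. The energy evaluator below is a stand-in callable with a toy two-well form; in this work it would be the anisotropic elastic strain energy of the interfacial dislocation network.

```python
import numpy as np

def landscape_scan(energy, s1_vals, s2_vals):
    """Tabulate energy(s1, s2) on a grid of two reaction coordinates."""
    return np.array([[energy(s1, s2) for s2 in s2_vals] for s1 in s1_vals])

def steepest_descent_path(grid, start):
    """Greedy walk on the tabulated landscape from 'start' = (i, j) to the
    nearest local minimum (a crude stand-in for a minimum-energy path search)."""
    path, (i, j) = [start], start
    while True:
        nbrs = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if 0 <= i + di < grid.shape[0] and 0 <= j + dj < grid.shape[1]]
        i2, j2 = min(nbrs, key=lambda ij: grid[ij])
        if grid[i2, j2] >= grid[i, j]:
            return path
        (i, j) = (i2, j2)
        path.append((i, j))

# Hypothetical usage with a toy two-well energy surface (arbitrary units):
toy = lambda s1, s2: (s1**2 - 1.0)**2 + 0.5 * (s2 - 0.3 * s1)**2
grid = landscape_scan(toy, np.linspace(-1.5, 1.5, 61), np.linspace(-1.0, 1.0, 41))
print(steepest_descent_path(grid, start=(30, 20))[-1])   # grid indices of the reached minimum
```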
Figure (3.36a) displays the specific landscape for the Cu/Nb system that is related to the initial lozenge-shaped dislocation structure with two sets of dislocations, as depicted by the red and blue lines in Figs. (3.36a) and (c). The minimum-energy dislocation configuration exhibits also a hexagonal-shaped structure with three sets of dislocations, as designated by the fully strain-relaxed \begin{table} \begin{tabular}{|c|c c c c c c c c c c|c|c|} \hline \# Mixed dislocation & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & \(f_{\mathrm{loc}}^{\mathrm{lat}\,(1)}\) & \(f_{\mathrm{loc}}^{\mathrm{lat}\,(1)}\) \\ \hline Isotropic & 0 & 0.52 & 1.80 & 3.91 & 7.08 & 11.97 & & & & & & 2.67 & 0.13 \\ Anisotropic & 0 & 0.49 & 1.67 & 3.62 & 6.55 & 11.06 & & & & & & & 2.67 & 0.13 \\ Isotropic & 0 & 0.35 & 1.20 & 2.54 & 4.42 & 6.92 & 10.18 & 14.48 & 20.51 & & & & 4.01 & \(-\)0.61 \\ Anisotropic & 0 & 0.33 & 1.11 & 2.35 & 4.09 & 6.41 & 9.43 & 13.41 & 19.01 & & & 4.01 & 0.61 \\ Isotropic & 0 & 0.27 & 0.90 & 1.89 & 3.25 & 5.02 & 7.22 & 9.91 & 13.18 & 17.18 & 22.88 & 28.89 & 5.38 & \(-\)0.55 \\ Anisotropic & 0 & 0.24 & 0.82 & 1.74 & 3.01 & 4.65 & 6.69 & 9.19 & 12.23 & 15.95 & 20.60 & 26.86 & 5.36 & 1.44 \\ \hline \hline \# Screw dislocation & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & \(f_{\mathrm{loc}}^{\mathrm{lat}\,(1)}\) & \(f_{\mathrm{loc}}^{\mathrm{lat}\,(1)}\) \\ \hline Isotropic & 0 & 1.09 & 3.72 & 8.14 & 14.83 & 25.11 & & & & & & & 14.70 & \(-\)0.19 \\ Anisotropic & 0 & 0.76 & 2.66 & 5.78 & 10.35 & 27.64 & & & & & & 14.70 & \(-\)7.45 \\ Isotropic & 0 & 0.62 & 2.33 & 5.12 & 9.05 & 14.31 & 21.20 & 30.29 & 43.21 & & & & 22.05 & \(-\)7.78 \\ Anisotropic & 0 & 0.59 & 1.67 & 3.72 & 6.40 & 10.01 & 14.92 & 21.11 & 30.28 & & & & 22.04 & \(-\)10.26 \\ Isotropic & 0 & 0.55 & 1.79 & 3.86 & 6.67 & 10.38 & 15.04 & 20.75 & 27.73 & 36.38 & 47.23 & 62.17 & 29.40 & 0.92 \\ Anisotropic & 0 & 0.46 & 1.23 & 2.67 & 4.68 & 7.33 & 10.54 & 14.45 & 19.36 & 25.42 & 33.03 & 43.22 & 29.39 & \(-\)13.54 \\ \hline \end{tabular} \end{table} Table 3.8: Equilibrium positions (in nm) of 60\({}^{\circ}\) mixed and pure screw piled-up dislocations measured obliquely along the piled-up slip direction from the interface in Cu. pattern. The corresponding minimum-energy path is plotted in Fig. (3.36b). It is therefore shown that the elastic relaxation is accompanied by a decrease in strain energy of \(\sim 4\%\), yielding the formation of large set of dislocation junctions with pure edge characters (black lines in Fig. (3.36d)). Changes in the associated normal stresses \(\sigma_{33}^{\text{int}}\) between the initial and the final configurations in the Cu/Nb system are computed at \(x_{3}=1\) nm, and illustrated on the right-hand side. It is observed that the third new set of dislocation junctions exhibits almost zero normal stresses, while the maximum compressive stress values are reached at the parent dislocation sets. #### Internal forces on lattice dislocations Table (3.9) summarizes the results (positions to the interface and forces) computed at \(x_{2}=0\) nm for the dislocation pile-up in anisotropic and isotropic Cu/Nb dislocated bimaterial at equilibrium, for which the explicit positions are depicted in Fig. (3.37a) for the anisotropic case. Values in brackets indicate the algebraic relative error (in %) due to the isotropic approximation of elasticity (with respect to the full anisotropic case as reference) with significant errors of \(\sim\pm 20\%\) in both glide and climb force magnitudes. 
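The glide and climb force components reported in Tables (3.8) and (3.9) follow from projecting the Peach-Koehler force onto the glide direction and onto the slip-plane normal, as in eq. (3.160). A minimal sketch of this bookkeeping is given below for a single straight dislocation under a uniform stress; the stress tensor, Burgers vector, and orientations are assumed example values, not those of the Cu/Nb calculation.

```python
import numpy as np

def peach_koehler(sigma, b, xi):
    """Peach-Koehler force per unit length, f = (sigma . b) x xi."""
    return np.cross(sigma @ np.asarray(b), np.asarray(xi))

def glide_climb_components(f, nu_climb, xi):
    """Project the force onto the glide direction nu_glide = nu_climb x xi and
    onto the slip-plane normal nu_climb, in the spirit of eq. (3.160)."""
    nu_climb = np.asarray(nu_climb, dtype=float)
    nu_climb /= np.linalg.norm(nu_climb)
    nu_glide = np.cross(nu_climb, np.asarray(xi, dtype=float))
    nu_glide /= np.linalg.norm(nu_glide)
    return float(f @ nu_glide), float(f @ nu_climb)

# Hypothetical usage: an edge dislocation (b along x1, line along x2, slip-plane normal x3)
# under a uniform resolved shear stress sigma_13 = 50 MPa.
sigma = np.array([[0.0, 0.0, 50e6],
                  [0.0, 0.0, 0.0],
                  [50e6, 0.0, 0.0]])
b = np.array([2.556e-10, 0.0, 0.0])     # Burgers vector (m)
xi = np.array([0.0, 1.0, 0.0])          # unit line direction
f = peach_koehler(sigma, b, xi)
print(glide_climb_components(f, nu_climb=[0.0, 0.0, 1.0], xi=xi))
# -> pure glide force tau*b ~ 1.3e-2 N/m and zero climb component
```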
Nucleation and multiplication of dislocations in microstructure are traditionally described by applying dislocation criteria, e.g., resolved shear stresses, Hertzian principal shear stresses, von Mises strains or stresses. For illustration, Fig. (3.37b) plots the von Mises stress distribution cut from the mid-section of the microstructure, where the local stress is mainly concentrated around the upper dislocation loop. The Peach-Koehler forces along the leading piled-up dislocation and lattice dislocations are computed at Figure 3.36: Representation of the arrangements of atoms and dislocations in the semicoherent Cu/Nb interface, which yields hexagonal-shaped meshes of dislocations with the lowest strain energies. (a) The pre-computing elastic energy landscape for the Cu/Nb system in the NW orientation shows a minimum-energy path (i.e. the horizontal chain in black) from the initial lozenge-shaped Frank-Bilby solution (i.e. a pattern with two sets of dislocations, represented by the red and blue lines) in (c) to the final (fully-relaxed) dislocation structures of the lowest strain energy in (d). The corresponding variation of energy along the minimum-energy path is plotted with respect to the dimensionless coordinate \(s_{\eta}\) in (b). The initial Frank-Bilby solution and the final hexagonal dislocation structure for the Cu/Nb system are shown in terms of dislocation structure (left-hand sides) by forming the third new set of dislocation junctions (in black) and of the normal stress \(\sigma_{33}^{\text{int}}\) in GPa (right-hand sides). \begin{table} \begin{tabular}{|c|c c c c c c c c c c|c c c c|} \hline \# Edge dislocation & 1 & 2 & \multicolumn{3}{c}{3} & 4 & \multicolumn{3}{c|}{5} & \multicolumn{3}{c|}{\(\ell_{\text{pile-up}}\)} & \multicolumn{3}{c|}{\(f_{\text{pile-up}}^{\text{int}}\)} & \multicolumn{3}{c|}{\(f_{\text{pile-up}}^{\text{int}}\)} \\ \hline Isotropic & 0 & 0.28 & \(\pm\)1. & 1.20 & \(\pm\)0. & 2.33 & \(-\)3.6 & 3.62 & \(-\)7.1 & 7.44 & \(-\)3.9 & 14.01 & 18.3 & 7.99 & \(-\)19.5 \\ Anisotropic & 0 & 0.27 & \(\mathit{ref}\) & 1.16 & \(\mathit{ref}\) & 2.42 & \(\mathit{ref}\) & 3.90 & \(\mathit{ref}\) & 7.74 & \(\mathit{ref}\) & 11.85 & \(\mathit{ref}\) & 9.92 & \(\mathit{ref}\) \\ \hline \end{tabular} \end{table} Table 3.9: Equilibrium positions (in nm) of pure edge piled-up dislocations in Nb measured obliquely along the piled-up direction from the interface in Cu/Nb. Values in brackets indicate the algebraic relative error (in %) due the isotropic approximation of elasticity (with respect to the true anisotopic case as reference). equi-spaced positions along the \(\mathbf{x}_{2}\)-axis, and are displayed in Fig. (3.37c), including the full-space and the complementary image parts. For clarity, the reference vector that scales the magnitude of the Peach-Koehler force is 10 N/m for the leading dislocation, and 5 N/m for the lattice dislocations. According to the large magnitude in von Mises stress field, the dislocation features may serve as dislocation emission in the bi-material or/and the semicoherent interface. Despite the present idealized situations in dislocation features, the complex and heterogeneously distributed force profiles on straight dislocations provide insights into the behavior of dislocations, e.g. 
the largest forces in magnitude are experienced on the leading piled-up dislocation (L), which would also bow-out away from the interface since the heterogeneous step-like force profile is mainly due to the presence of the intrinsic hexagonal-shaped dislocation structure at the semicoherent interface. From these distributions, the resolved shear stress forces on specific glide planes could therefore be projected to extend the computational procedure of motion for all glide dislocations in bimaterials. ##### Self- and Peach-Koehler forces on shear dislocation loops In order to compute the complete self-forces \(f^{T}\) associated with the dislocation loops, the pre-logarithmic energy factor \(E\) in eq. (3.156) is determined by asymptotically reducing the parametric energy-based framework for one set of dislocations [258]. For a single set of Volterra-type dislocations, the corresponding energy per unit length of dislocation is viewed as the work done in forming the dislocation network by cutting and displacing the habit plane at \(x_{2}=0\) between \(x_{1}=r_{0}\) and \(x_{1}=d_{1}-r_{0}\), as follows \[E=d_{1}\;\gamma_{\rm e}=\frac{1}{2}\int_{r_{0}}^{d_{1}-r_{0}}\;t_{j}(x_{1},x_ {2}=0)\;u_{j}^{p}(x_{1},x_{2}=0)\;{\rm d}x_{1}\,, \tag{3.161}\] according to eq. (3.30), where the prescribed displacement jump is \(u^{p}(x_{1},x_{2}=0)=-\mathbf{b}_{1}\) for Volterra-type dislocations [249]. However, the inter-distance spacings must be sufficiently large to represent the equivalent energy state for one infinite straight dislocation, as requested by the line tension formulation in section 3.7.2. Here, the inter-distance spacings \(d_{1}\) for one single set of dislocations is chosen such that the corresponding stress field is equivalent to the stress state produced by one single dislocation. As discussed in Ref. [260], when \(d_{1}\) is fictitiously increased by a multiplicative factor \(10^{3}\), the discrepancy in stress state Figure 3.37: (a) Geometries and orientations among various dislocations in the anisotropic \((111)\)Cu/\((011)\)Nb bimaterial with interface at \(x_{3}=0\) nm. The upper material Cu contains eight infinitely long straight and uniformly spaced parallel dislocation arrays along the \(x_{2}\)-axis, with different characters (in dark gray (almost edge) and in red (30\({}^{\circ}\) mixed)) and an elliptical shear dislocation loop, while the lower material Nb is comprised of a pile-up system with 5 pure edge dislocations and a circular shear dislocation loop. (b) The corresponding von Mises stress field, associated with the equilibrium piled-up dislocations and all other dislocations, including the hexagonal-shaped dislocation structures at the semicoherent Cu/Nb interface. (c) Force distribution along all lattice dislocations (e\({}_{i}\)) and (m\({}_{j}\)) in Cu as well as along the leading dislocation (L). The reference vector 10 N/m (5 N/m) represents the magnitude scale of the Peach-Koehler force exerted on the leading (lattice) dislocation(s). All lattice and piled-up dislocations are defined along the \(x_{2}\)-axis. between such dislocation array with large spacings and the single dislocation case is almost zero. Thus, substituting \(d_{1}\) with \(10^{3}\,d_{1}\) in eq. (3.161), \(E\) and \(E^{\prime\prime}\), where \({}^{\prime}\) stands for differentiation with respect to \(\phi\), can be numerically be evaluated for infinite character-dependent dislocations in the present anisotropic Cu/Nb material. 
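Once the pre-logarithmic energy factor \(E(\phi)\) has been tabulated, e.g. via eq. (3.161) with a sufficiently large spacing \(d_{1}\), its orientation derivative \(E^{\prime\prime}\) and the local line tension of eq. (3.158) can be estimated by central finite differences on the tabulated curve. The sketch below assumes a user-supplied \(E(\phi)\) sampled on a uniform, periodic grid and uses a toy orientation dependence in the usage example; it is not the anisotropic Cu/Nb evaluation itself.

```python
import numpy as np

def line_tension(phi_deg, E, kappa, eps):
    """Gamma(phi) = (E + E'') ln(1/(kappa*eps)), with E'' obtained from central
    finite differences on a uniform, periodic phi grid (cf. eq. (3.158))."""
    phi = np.radians(phi_deg)
    h = phi[1] - phi[0]
    E_pp = (np.roll(E, -1) - 2.0 * E + np.roll(E, 1)) / h**2   # periodic second derivative
    return (E + E_pp) * np.log(1.0 / (kappa * eps))

# Hypothetical usage with a toy orientation dependence E(phi) in J/m:
phi_deg = np.arange(0.0, 360.0, 1.0)
E = 1.0e-9 * (1.0 + 0.3 * np.cos(np.radians(2.0 * phi_deg)))    # illustrative only
gamma = line_tension(phi_deg, E, kappa=1.0 / 50.0e-9, eps=0.5e-9)
print(gamma.min(), gamma.max())
```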
As a measure of the stiffness of the dislocations, the term \(E+E^{\prime\prime}\) for Cu and Nb is plotted in Fig. (3.38a) as a function of \(\phi\), such that pure screw (edge) character is characterized by \(\phi=0^{\circ}\) (\(\phi=90^{\circ}\)) for infinite dislocations, respectively. These plots are in agreement with the distinguishing classification of the anisotropic curves in Ref. [13], e.g., the appearance of maxima and minima for values of \(\phi\) between \(\phi=0^{\circ}\) and \(180^{\circ}\), as well as the asymmetric (symmetric) behavior about \(\phi=90^{\circ}\) in the bcc Nb (fcc Cu) material. Furthermore, the term \(H+H^{\prime\prime}\) that arises in eq. (3.156) for the tube self-force contribution, where \(H\) is defined in eq. (3.154), is displayed in Fig. (3.38b), while the superposition \(E-H-H^{\prime\prime}\) is plotted in Fig. (3.38c). Here, the symbol \({}^{\prime}\) denotes differentiation with respect to \(\alpha\). Using the geometrical features in terms of the curvatures \(\kappa\) and the relations between \(\phi\) and \(\alpha\) for both the elliptical and circular dislocation loops in Cu and Nb, which can easily be parametrized, the complete algebraic self-forces \(f^{\Gamma}\) are shown in Fig. (3.38d). Interestingly, the self-force \(f^{\Gamma}\) is positive close to the edge orientations in Nb, i.e. \(90^{\circ}\leq\alpha\leq 107^{\circ}\) and \(253^{\circ}\leq\alpha\leq 270^{\circ}\) (which corresponds to \(\alpha=90\pm 17^{\circ}\) and \(\alpha=270\pm 17^{\circ}\) by symmetry properties), as shown by the shaded blue regions: there, the line tension provides an expansion reaction in the near-edge orientations that acts along the \(-\mathbf{m}\) directions, i.e. pointing outward from the center of the circular dislocation loop, while a global shrinking behavior is observed for all other non-edge characters, especially for the local near-screw character elements that usually have the lower elastic energy [122, 156]. This result is in good qualitative agreement with predictions in Ref. [12] for shear dislocations with similar characteristics (i.e. Burgers vector and habit plane) in highly anisotropic \(\alpha\)-iron. In Cu, however, the elliptical dislocation loop tends to shrink under the action of the heterogeneously distributed line-tension self-forces.

Figure 3.38: Determination of the self-forces, given by eq. (3.156), on the planar elliptical and circular shear dislocation loops that reside in the upper material Cu (red curves) and the lower material Nb (blue curves), respectively, as a function of the angles \(\phi\) and \(\alpha\) in \({}^{\circ}\). (a) \(E+E^{\prime\prime}\), (b) \(H+H^{\prime\prime}\), (c) \(E-H-H^{\prime\prime}\), and (d) the complete algebraic self-forces \(f^{\Gamma}\) that are continuously distributed around both dislocation loops. The symbol \({}^{\prime}\) stands for differentiation with respect to the proper angles that are depicted in the inset of (b). The fcc (bcc) case exhibits symmetric (asymmetric) behavior with respect to the median axes (i.e. the vertical dotted lines), and \(f^{\Gamma}\) can also be positive in Nb (depicted by the shaded regions in blue), which means that the self-forces tend locally to expand the corresponding dislocation loop at the near-edge dislocation elements, i.e. \(\alpha=90\pm 17^{\circ}\) and \(\alpha=270\pm 17^{\circ}\), while \(f^{\Gamma}\) is always negative for the elliptical shear loop in Cu.
Figures (3.39a) and (b) illustrate the self-force profiles (black arrows), which are larger in magnitude in the \(\mathbf{x}_{2}\parallel[112]_{\rm fcc}\parallel[01\bar{1}]_{\rm bcc}\) direction in Cu and Nb, respectively, while the self-force contribution tends to locally expand the lower dislocation loop in Nb in the \(\mathbf{x}_{1}\parallel[100]_{\rm bcc}\) direction (as displayed by the dotted circles in gray). Furthermore, the blue arrows illustrate the interaction force contribution between the two dislocation loops. For instance, the interaction force acting on the upper dislocation loop in Cu is determined by superposing its complementary image force and the full-space part produced by the lower dislocation loop in Nb. It is shown that this force component pulls the elliptical dislocation loop toward the semicoherent interface, i.e. toward the softer material Nb, with the largest magnitude on the minor-axis region with screw character elements. On the other hand, the dislocation loop force contribution is almost in-plane for the lower dislocation loop in Nb. Finally, the total Peach-Koehler forces, which include the dislocation loop force contribution and all other contributions from the lattice dislocation arrays (including the piled-up dislocations), are shown in Fig. (3.39) with orange arrows. It can therefore be observed that the Peach-Koehler force tends to rotate the upper dislocation loop out of the \((111)_{\rm fcc}\) glide plane about the \([\bar{1}10]_{\rm fcc}\) direction, and also to shear it by a climb-assisted dislocation-glide process, while the same force in Nb tends to expand the lower circular dislocation loop preferentially in the specific \([01\bar{1}]_{\rm bcc}\) direction on the \((011)_{\rm bcc}\) glide plane.

Figure 3.39: Discrete distribution of the local forces (black arrows), of the dislocation loop forces (blue) that include the corresponding full-space solutions and complementary (image) contributions of the loops, and of the complete Peach-Koehler forces (orange), which act along both shear dislocation loops. These force distributions are exerted on the elliptical dislocation loop in Cu (a) and on the circular loop in Nb (b). For both shear dislocation loops, the associated fcc and bcc Burgers vectors lie along the \(\mathbf{x}_{1}\)-axis. The two small dotted circles in (b) illustrate the local self-stress expansion of the loop in the lower material Nb by the near-edge character elements, for which the local character angle \(\alpha\) between the Burgers vector and the local tangent is that depicted in Fig. (3.38).

#### Limitations

Based on the previous sections 3.4 and 3.6, and especially on section 3.7, it should be recognized that the procedure for determining the driving forces is not easily tractable for arbitrarily-shaped dislocation loops in large-scale dislocation dynamics simulations. Moreover, the inherent assumption that follows from the linear elasticity theory is the introduction of a core cutoff radius to eliminate the divergence of the dislocation field solutions. Furthermore, the comparison between the elasticity theory and atomistic predictions leads to discrepancies in the interfacial stored energies, mainly due to the singular treatment of the dislocation cores, as quantified in sections 3.4.6 and 3.6.6. Thus, the remedy to the difficulties encountered and to the discrepancies observed lies in the derivation of non-singular solutions for extrinsic and intrinsic dislocation structures. In the context of classical elasticity, singularity-free fields are obtained by convoluting the prescribed displacement jumps with isotropic Gaussian distributions, as illustrated in the minimal sketch below.
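A one-dimensional illustration of this regularization: the sharp Volterra displacement jump across the cut is convolved with a normalized Gaussian whose standard deviation plays the role of the core width, which removes the discontinuity (and hence the stress singularity at the dislocation line) while preserving the total Burgers vector content far from the core. The Burgers vector magnitude and core width below are assumed values for illustration.

```python
import numpy as np

def spread_jump(x, b=0.2556e-9, core_width=0.5e-9):
    """Convolve a Volterra step of height b (the prescribed displacement jump)
    with an isotropic Gaussian of standard deviation 'core_width'."""
    step = np.where(x >= 0.0, b, 0.0)                      # sharp Volterra jump at x = 0
    kern = np.exp(-0.5 * (x / core_width) ** 2)
    kern /= kern.sum()                                     # normalized Gaussian kernel
    return np.convolve(step, kern, mode="same")            # smooth, core-spread jump

x = np.linspace(-5e-9, 5e-9, 2001)
u = spread_jump(x)
print(u[500], u[1500])   # ~0 and ~b away from the core, so the Burgers vector is preserved
```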
The dotted two small circles in (b) illustrates the local self-stress expansion of the loop in the lower material Nb by the near-edge (since the local character between the Burgers vector and the local tangent is characterized by a in Fig. (3.38)) character elements. inal Peierls-Nabarro approach [206, 193], this procedure overcomes the long-standing dislocation problems of singular elastic fields in the core regions, which has been applied to interfacial dislocations [262] and more recently to extrinsic dislocation loops [268]. In addition, a second emphasis has been placed on the extension to multilayered magneto-electro-elastic plates with multiple semicoherent interfaces, such that the single semicoherent homo- and hetero-phase interface in pure elastic bimaterials becomes a particular case of the general approach, as described in the following section 3.8. ### 3.8 Extension to non-singular fields in multilayered magneto-electro-elastic plates Multiphysics analyses in man-made (piezoelectric, piezomagnetic, and magneto-electro-elastic (MEE)) multiferroics have attracted tremendous interest of many researchers because of their widespread and advanced applications involving intelligent topological structures, energy harvesting and green energy production, optoelectronics, and self-powered biomedical devices. External surfaces and internal interfaces between alternating dissimilar materials play special roles in magnetism [78, 178, 270], electrical transport [178, 131, 270], mechanical properties [302, 190], and also in multiple coupled magnetic, electric, and mechanical fields, which become crucial to design novel nanostructured composites with outstanding functional and enhanced MEE properties [196, 214, 60, 288]. One significant technological problem during the growth of nanoscale multilayers is related to the lattice mismatches between different layers [57, 300, 81, 295], which induce spurious MEE field concentrations that can markedly enhance or degrade the materials properties, and in the latter case, causing crack initiation and growth, dielectric breakdown and magnetic failure. The present section focuses on atomistically informed conditions for crystalline interfaces with lattice-mismatched dislocation structures in multilayered MEE materials made of CoFe\({}_{2}\)O\({}_{4}\) (magnetostrictive cobalt ferrite, CFO) and BaTiO\({}_{3}\) (piezoelectric barium titanate, BTO). #### Boundary-value problem and singularity-free field solutions The classical six-dimensional Stroh formalism is extended to a ten-dimensional formalism combined with a Fourier series-based solution procedure to determine the displacement and traction fields in anisotropic multilayered MEE solids under external loads. Such multilayers are composed of semicoherent interfaces, for which each pure misfit interface consists of two different sets of infinitely long straight, uniformly Figure 3.40: Superposition principle for the semicoherent interfaces in miscible MEE multilayers subjected to external loads. (a) Representative linear and anisotropic free-standing multilayered system that consists of \(w\) rectangular layers with a couple of semicoherent interfaces at \(z=z_{j}\) and \(z=z_{k}\), while the others are perfectly bonded between adjacent layers. The heterophase interfaces possess different internal structures comprised of two planar arrays of infinitely long straight, and periodically-spaced dislocations. 
The open and filled symbols represent the atomic structure of the lattice-mismatched semiconherent interface, while the solid segments indicate the corresponding misfit dislocations. The imperfect interface at \(z_{j}\) is bonded between layers \(j\) and \(j+1\), with discontinuity quantities between the upper and lower sides indicated by \(+\) and \(-\), and is therefore coplanar to two flat free surfaces at \(z=z_{0}=0\) (bottom) and \(z=z_{w}\) (top). (b) Using the superposition principle, general mechanical, electric, and magnetic boundary conditions are externally and vertically applied on both the top and bottom surfaces of MEE solid with perfectly bonded interfacial boundary conditions (i.e., coherent internal interfaces). spaced, and parallel core-spreading dislocations. Practical recursive operations are explicitly derived with respect to specific internal and external boundary conditions for multilayered solids with one and two semicoherent heterophase interfaces. #### Basic equations Figure (3.40a) shows the representative multilayered system that consists of \(w\) dissimilar, linear and anisotropic MEE layers with individual finite thickness \(h_{k}=z_{k}-z_{k-1}\) for the \(k^{\text{th}}\) layer, with \(k=1,\ldots,w\). A global and fixed Cartesian coordinate system with basis vectors \((\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3})=(\mathbf{x},\mathbf{y},\mathbf{z})\) is conveniently attached to the multilayers (and alternatively used for clarity in the further notations), where the unit vector normal to the interfaces is \(\mathbf{n}\parallel\mathbf{x}_{3}\parallel\mathbf{z}\), while all layered rectangular plates are located in the positive \(\mathbf{z}\)-region. Thus, the flat bottom and top surfaces are located at \(z=z_{0}=0\) and at \(z=z_{w}=H=\sum_{k=1}^{w}h_{k}\), respectively, within which (mechanical, electric, and magnetic) loads can therefore be applied on these external surfaces, as illustrated in Fig. (3.40b). Furthermore, \(s\)\((\leq w-1)\) semicoherent interfaces of given crystallographic characters (misorientation and interface plane orientation) containing each up to two different sets of infinitely periodic dislocation patterns are explicitly described by solving the quantized Frank-Bilby equation, as detailed in section 3.2. In the absence of body forces, thermal effects, electric current densities, electric and magnetic charge densities, the unified formulation of the governing equations of mechanical equilibrium with the Maxwell equations is represented by a single set of partial differential equation [168, 169, 198, 199, 203] as follows \[\sigma_{ij,i}=0\,, \tag{3.162}\] where the index runs from 1 to 3 (from 1 to 5) over repeated lowercase (uppercase) subscripts, unless stipulated otherwise. The extended stress field \(\sigma_{ij}\) in eq. 
(3.162) is defined by \[\sigma_{ij}=\begin{cases}\sigma_{ij}&J=j=1,2,3\\ D_{i}&J=4\\ B_{i}&J=5\,,\end{cases} \tag{3.163}\] with \(\sigma_{ij}\) the components of the mechanical stress (in \(\text{N}/\text{m}^{2}\)), \(D_{i}\) the electric displacement (in \(\text{C}/\text{m}^{2}\)), and \(B_{i}\) the magnetic induction (in \(\text{N}/\text{A}.\text{m}\)), which satisfy the static constitutive relations for each linear and anisotropic layer of the fully coupled MEE materials, \[\begin{cases}\sigma_{ij}=c_{ijlm}\gamma_{lm}-e_{ikj}E_{k}-q_{kij}H_{k}\\ D_{i}=e_{ik}\gamma_{jk}+e_{ij}E_{j}+\mu_{ij}H_{j}\\ B_{i}=q_{ijk}\gamma_{jk}+\alpha_{ji}E_{j}+\mu_{ij}H_{j}\,,\end{cases} \tag{3.164}\] where all materials properties are position-dependent in the multilayers, but homogeneously defined in each layer. In particular, \(\gamma_{lm}\) is the elastic strain (dimensionless), \(E_{k}\) is the electric field (in \(\text{V}/\text{m}\)), \(H_{k}\) is the magnetic field (in \(\text{A}/\text{m}\)), and \(c_{ijlm}\), \(e_{kij}\), \(q_{kij}\) and \(\alpha_{ij}\) are the elastic moduli (in \(\text{N}/\text{m}^{2}\)), piezoelectric (in \(\text{C}/\text{m}^{2}\)), piezomagnetic (in \(\text{N}/\text{A}.\text{m}\)), and magnetoelectric (in \(\text{C}/\text{A}.\text{m}\)) coefficients, respectively. Furthermore, \(e_{ij}\) and \(\mu_{ij}\) are the dielectric permittivity (in \(\text{C}^{2}/\text{N}.\text{m}^{2}\)) and magnetic permeability (in \(\text{N}.s^{2}/\text{C}^{2}\)) coefficients, respectively, for which all repeated indexes are ranged in \(\{1,2,3\}\). Various particular and uncoupled cases (e.g., pure elastic and piezoelectric) can evidently be reduced from eq. (3.164) by setting the appropriate coupling coefficients to zero. Using the shorthand notation, the constitutive relations can be recast as follows \[\sigma_{ij}=c_{ijKL}\gamma_{KL}=c_{ijKL}u_{K,l}\,, \tag{3.165}\] where the materials constants are defined by \[c_{ijKL}=\begin{cases}c_{ijkl}&J,\,K=j,\,k=1,2,3\\ e_{ilj}&J=j=1,2,3,\,K=4\\ e_{ikl}&J=4,\,K=k=1,2,3\\ q_{ilj}&J=j=1,2,3,\,K=5\\ q_{ikl}&J=5,\,K=k=1,2,3\\ -\alpha_{il}&J=4,\,K=5,\,\text{or},\,\,K=4,\,J=5\\ -e_{il}&J,\,K=4\\ -\mu_{il}&J,\,K=5\,,\end{cases} \tag{3.166}\] which satisfy the following symmetries: \(c_{ijlm}=c_{ijlm}=c_{ljml}=c_{lmij},\,e_{kij}=e_{kji},\,q_{kij}=q_{kji},\,\,\alpha_{ ij}=\alpha_{ji},\,\,\epsilon_{ij}=\epsilon_{ji}\), and \(\mu_{ij}=\mu_{ji}\). Both extended strain \(\gamma_{KI}\) and displacement \(u_{K}\) fields in eq. (3.165) are given by \[\gamma_{KI}=\begin{cases}\gamma_{kl}=\frac{1}{2}\left(u_{k,l}+u_{l,k}\right) \qquad K=k=1,2,3\\ -E_{l}=\phi_{J}\qquad\qquad\qquad\qquad\qquad K=4\;,\quad\text{and},\quad\,u_ {K}=\begin{cases}u_{k}&K=k=1,2,3\\ \phi&K=4\\ \psi&K=5\,,\end{cases}\end{cases} \tag{3.167}\] with \(u_{k}\), \(\phi\), and \(\psi\), being the elastic displacement (in m), the electrostatic potential (in V), and the magneto-static potential (in A), respectively. From eq. 
(3.163), the extended traction \(t_{J}\) with normal is \(n_{i}\) is therefore given by \[t_{J}=\sigma_{ij}n_{i}=\begin{cases}\sigma_{ij}n_{i}&J=j=1,2,3\\ D_{i}n_{i}&J=4\\ B_{i}n_{i}&J=5\,.\end{cases} \tag{3.168}\] #### The dual variable and position procedure in multilayered systems For in-plane multilayered MEE plates in presence of periodically-spaced interfacial dislocations, the extended displacement vector \(u_{J}\) in the physical domain is written in terms of a biperiodic Fourier series expansion, as follows \[u_{J}\left(x_{1},x_{2},x_{3}=z\right)=\text{Re}\sum_{\eta}\text{e}^{-i2\pi\eta _{k}x_{\alpha}}\,\,\bar{u}_{J}\left(\eta,z\right)\,, \tag{3.169}\] with \(\alpha=\{1,\,2\}\), as already defined in eq. (3.73) with a negative sign of the exponential in the Fourier transforms. For clarity in the notation, the wavevectors \(\mathbf{k}\) in eq. (3.73) have been changed with \(\eta\) in eq. (3.169), as well as the superscript \({}^{\mathbf{k}}\) with the superimposed tilde for all fields expressed in the frequency domain. Substitution of eq. (3.169) to eq. (3.165) and then to eq. (3.162) results in a system that consists of five homogeneous second-order differential equations in the Fourier-transformed domain, i.e. (3.170) \[4\pi^{2}c_{l after substituting eq. (3.174) into eq. (3.171). The Stroh eigenvalues of eq. (3.175) and the corresponding eigenvectors are conveniently arrange such that \(\operatorname{Im}p_{I}>0\), and \(p_{I+5}=p_{I}\), because the remaining five solutions have negative imaginary parts due to the positive definiteness of the magnetic, electric, and elastic strain energy densities. Here and in the following, the overbar denotes the complex conjugate. By superposing the ten eigensions, general expressions of the extended displacements and tractions in the Fourier-transformed domain can therefore be expressed in terms of the Stroh formalism in any given layer \(j\) bonded by interfaces \(z_{j-1}\) and \(z_{j}\), as follows \[\begin{bmatrix}-i2\pi\eta\ \bar{\mathbf{u}}\left(\eta,z\right)\\ \bar{\mathbf{t}}\left(\eta,z\right)\end{bmatrix}=\begin{bmatrix}\mathbf{A}&\bar{ \mathbf{A}}\\ \mathbf{B}&\bar{\mathbf{B}}\end{bmatrix}\begin{bmatrix}\left\langle\mathrm{e} ^{-i2\pi\eta+\eta\left(z-z_{j}\right)}\right\rangle&\mathbf{0}_{5,5}\\ \mathbf{0}_{5,5}&\langle\mathrm{e}^{-i2\pi\eta+\eta\left(z-z_{j-1}\right)} \rangle\end{bmatrix}\begin{bmatrix}\mathbf{K}_{1}\\ \mathbf{K}_{2}\end{bmatrix}\,, \tag{3.176}\] with \(z_{j-1}<z<z_{j}\), while \(\mathbf{A}\) and \(\mathbf{B}\) the \(5\times 5\) eigenvector matrices defined by \[\mathbf{A} =\left[\mathbf{a}_{1},\ \mathbf{a}_{2},\ \mathbf{a}_{3},\ \mathbf{a}_{4},\ \mathbf{a}_{5}\right]\] \[\mathbf{B} =\left[\mathbf{b}_{1},\ \mathbf{b}_{2},\ \mathbf{b}_{3},\ \mathbf{b}_{4},\ \mathbf{b}_{5} \right]=\mathbf{R}^{\mathbf{t}}\mathbf{A}+\mathbf{T}\mathbf{A}\,\langle \mathrm{e}^{i2\pi\eta+\eta\left(z-z_{j}\right)}\rangle\,, \tag{3.177}\] where the \(z\)-dependent diagonal and exponential matrix in eq. (3.177) is represented by \[\langle\mathrm{e}^{i2\pi p_{I}\eta\left(z-z_{j}\right)}\rangle=diag\Big{[} \mathrm{e}^{i2\pi p_{I}\eta\left(z-z_{j}\right)},\ \mathrm{e}^{i2\pi p_{I}\eta\left(z-z_{j}\right)},\ \mathrm{e}^{i2\pi p_{I}\eta\left(z-z_{j}\right)},\ \mathrm{e}^{i2\pi p_{I}\eta\left(z-z_{j}\right)},\ \mathrm{e}^{i2\pi p_{I}\eta \left(z-z_{j}\right)}\Big{]}\,, \tag{3.178}\] and \(\mathbf{K}_{1}\) and \(\mathbf{K}_{2}\) in eq. 
(3.176) are two \(5\times 1\) complex (and constant) column matrices to be determined by specific boundary conditions in dislocated ME multilayers. Once the extended displacement \(\bar{\mathbf{u}}\) and traction \(\bar{\mathbf{t}}\) vectors in the transformed domain are obtained by solving eq. (3.176), the remaining \(7\times 1\) extended in-plane stresses \(\hat{\sigma}^{s}\) in the transformed domain, i.e., consisting of the in-plane elastic stresses, electric, and magnetic displacements, can be found by using the following relation, i.e. \[\hat{\sigma}^{s}_{ij}\left(\eta,z\right)=-i2\pi\eta\ c_{ijK}m_{i}\,\hat{u}_{K} \left(\eta,z\right)+c_{ijK}n_{i}\,\hat{u}_{K,3}\left(\eta,z\right)\,, \tag{3.179}\] with \(i=\{1,\ 2\}\), \(J=\{1,\ 2,\ 4,\ 5\}\), and \(i\leq J\). The derivative term on the right-hand side of eq. (3.179) is given in terms of the extended displacements and tractions in the transformed domain by \[\hat{u}_{K,3}\left(\eta,z\right)=\left[c_{ijK}n_{i}\eta_{i}\right]^{-1}\left( \bar{\mathbf{t}}_{j}\left(\eta,z\right)+i2\pi\eta\ c_{ijK}m_{i}\,\eta_{i}\,\hat{u} _{K}\left(\eta,z\right)\right)\,, \tag{3.180}\] for which eqs. (3.179) and (3.180) read in vector-tensor form as \[\begin{split}\hat{\sigma}^{s}\left(\eta,z\right)&=-i2\pi \eta\,\mathbf{M}_{1}\,\bar{\mathbf{u}}\left(\eta,z\right)+\mathbf{M}_{2}\,\bar{\bm {u}}_{3}\left(\eta,z\right)\\ &\equiv\left\{\bar{\mathbf{\gamma}}_{1},\ \bar{\mathbf{\gamma}}_{12},\ \bar{\mathbf{\gamma}}_{22},\ \bar{\mathbf{\gamma}}_{14}=\bar{\mathbf{\gamma}}_{1},\ \bar{\mathbf{\gamma}}_{24}=\bar{\mathbf{\gamma}}_{2},\ \bar{\mathbf{\gamma}}_{15}=\bar{\mathbf{\beta}}_{1},\ \bar{\mathbf{\gamma}}_{25}=\bar{\mathbf{\beta}}_{2}\right\}\\ &\bar{\mathbf{u}}_{3}\left(\eta,z\right)=\mathbf{T}^{-1}\left(\bar{\bm {t}}\left(\eta,z\right)+i2\pi\eta\,\mathbf{R}^{\mathbf{t}}\,\bar{\mathbf{u}}\left( \eta,z\right)\right)\,,\end{split} \tag{3.181}\] respectively. The two \(7\times 5\) matrices \(\mathbf{M}_{1}\) and \(\mathbf{M}_{2}\) in eq. (3.181) are explicitly given by (3.182) \[\mathbf{M}_{1}=\begin{bmatrix}c_{11K1}\,m_{1}+c_{11K2}m_{2}\\ c_{12K1}\,m_{1}+c_{12K2}m_{2}\\ c_{12K1}\,m_{1}+c_{22K2}m_{2}\\ c_{2K1}\,m_{1}+c_{22K2}m_{2}\\ c_{14K1}\,m_{1}+c_{14K2}m_{2}\\ c_{24K1}\,m_{1}+c_{24K2}m_{2}\\ c_{24K1}\,m_{1}+c_{24K2}m_{2}\\ c_{24K1}\,m_{1}+c_{24K2}m_{2}\\ c_{25K1}\,m_{1}+c_{15K2}m_{2}\\ c_{25K2}\,m_{2}\\ c_{26K3}\end{bmatrix}=\begin{bmatrix}c_{11K1}+c_{11K2}m_{2}&c_{15}m_{1}+c_{14 1}m_{2}&c_{11}m_{1}+c_{21}m_{2}&c_{11}m_{1}+c_{21}m_{2}&c_{11}m_{1}+c_{21}m_{2}&q_{1 1}m_{1}+q_{21}m_{2}\\ c_{11}m_{1}+c_{26K2}m_{2}&c_{66}m_{1}+c_{62}m_{2}&c_{65}m_{11}+c_{64}m_{2}&c_{65}m_{11}+c_{6 1}m_{2}m_{2}&c_{61}m_{1}+q_{26}m_{2}\\ c_{21}m_{1}+c_{20}m_{2}&c_{26}m_{11}+c_{22}m_{2}&c_{25}m_{11}+c_{24}m_{2}&c_{21}m_{1}+ c_{26}m_{2}&q_{12}m_{1}+q_{22}m_{2}\\ c_{14K1}m_{1}+c_{16K2}m_{2}&c_{16}m_{1}+c_{12}m_{2}&c_{15}m_{1}+c_{14}m_{2}&-c_{11 }m_{1}-c_{12}m_{2}&-\ -\ respectively. Both unknown complex vectors \(\mathbf{K}_{1}\) and \(\mathbf{K}_{2}\) in eq. (3.183) can then be eliminated to establish the relation between the expansion coefficients on both interfaces at \(z_{j-1}\) and \(z_{j}\) of the layer \(j\) of interest, i.e. 
\[\begin{bmatrix}-i2\pi\eta\,\tilde{\boldsymbol{\mu}}(\eta,z_{j-1})\\ \tilde{\boldsymbol{\tau}}(\eta,z_{j})\end{bmatrix}=\begin{bmatrix}\mathbf{S}^ {j}_{10\times 10}\end{bmatrix}\begin{bmatrix}-i2\pi\eta\,\tilde{\boldsymbol{\mu}} (\eta,z_{j})\\ \tilde{\boldsymbol{\tau}}(\eta,z_{j-1})\end{bmatrix}=\begin{bmatrix}\mathbf{S}^ {j}_{11}&\mathbf{S}^{j}_{12}\\ \mathbf{S}^{j+1}_{21}&\mathbf{S}^{j+1}_{22}\end{bmatrix}\begin{bmatrix}-i2\pi \eta\,\tilde{\boldsymbol{\mu}}(\eta,z_{j})\\ \tilde{\boldsymbol{\tau}}(\eta,z_{j-1})\end{bmatrix}\,, \tag{3.184}\] within which the ten-dimensional matrix \(\mathbf{S}^{j}_{10\times 10}\) is formulated as follows (3.185) For the adjacent layer \(j+1\), the corresponding propagation of the expansion coefficient solutions at both interfaces \(z_{j}\) and \(z_{j+1}\) yields therefore similar relations as eq. (3.184), i.e. \[\begin{bmatrix}-i2\pi\eta\,\tilde{\boldsymbol{\mu}}(\eta,z_{j+1})\\ \tilde{\boldsymbol{\tau}}(\eta,z_{j+1})\end{bmatrix}=\begin{bmatrix}\mathbf{S} ^{j+1}_{10\times 10}\end{bmatrix}\begin{bmatrix}-i2\pi\eta\,\tilde{ \boldsymbol{\mu}}(\eta,z_{j+1})\\ \tilde{\boldsymbol{\tau}}(\eta,z_{j})\end{bmatrix}=\begin{bmatrix}\mathbf{S}^ {j+1}_{11}&\mathbf{S}^{j+1}_{21}\\ \mathbf{S}^{j+1}_{21}&\mathbf{S}^{j+1}_{22}\end{bmatrix}\begin{bmatrix}-i2\pi \eta\,\tilde{\boldsymbol{\mu}}(\eta,z_{j+1})\\ \tilde{\boldsymbol{\tau}}(\eta,z_{j})\end{bmatrix}\,, \tag{3.186}\] which can be combined with eq. (3.184) by properly assuming that the interface at \(z_{j}\) between the two layers is perfectly bonded, i.e., the transformed displacement and traction vectors are continuous at \(z=z_{j}\), as specified by \[\mathrm{C}:\,\begin{cases}\left\{\tilde{\boldsymbol{\mu}}\left(\eta,z=z_{j} \right)\right\}_{-}^{+}=\tilde{\boldsymbol{\mu}}\left(\eta,z_{j+}\right)- \tilde{\boldsymbol{\mu}}(\eta,z_{j-})=\boldsymbol{0}_{5\times 1}\\ \left\{\tilde{\boldsymbol{\tau}}\left(\eta,z=z_{j}\right)\right\}_{-}^{+}= \tilde{\boldsymbol{\tau}}\left(\eta,z_{j+}\right)-\tilde{\boldsymbol{\tau}} \left(\eta,z_{j-}\right)=\boldsymbol{0}_{5\times 1}\,.\end{cases} \tag{3.187}\] Thus, the following recursive relations between interfaces \(z_{j-1}\) and \(z_{j+1}\) can be derived as \[\begin{bmatrix}-i2\pi\eta\,\tilde{\boldsymbol{\mu}}(\eta,z_{j-1})\\ \tilde{\boldsymbol{\tau}}(\eta,z_{j+1})\end{bmatrix}=\begin{bmatrix}\mathbf{S} ^{j+1}_{10\times 10}\end{bmatrix}\begin{bmatrix}-i2\pi\eta\,\tilde{ \boldsymbol{\mu}}(\eta,z_{j+1})\\ \tilde{\boldsymbol{\tau}}(\eta,z_{j-1})\end{bmatrix}=\begin{bmatrix}\mathbf{S} ^{j+1}_{11}&\mathbf{S}^{j+1}_{12}\\ \mathbf{S}^{j+1}_{21}&\mathbf{S}^{j+1}_{22}\end{bmatrix}\begin{bmatrix}-i2\pi \eta\,\tilde{\boldsymbol{\mu}}(\eta,z_{j+1})\\ \tilde{\boldsymbol{\tau}}(\eta,z_{j-1})\end{bmatrix}\,, \tag{3.188}\] where the superscripts \({}^{jj+1}\) means the resulting propagation matrix from layer \(j\) to layer \(j+1\), with the submatrices \(\mathbf{S}^{j+1}_{10\times 10}\) being expressed as \[\begin{cases}\left[\mathbf{S}^{j+1}_{11}\right]&=\left[\mathbf{S}^{j}_{11} \mathbf{S}^{j+1}_{11}\right]+\left[\mathbf{S}^{j}_{11}\mathbf{S}^{j+1}_{12} \right]\left[\mathbf{I}_{5\times 5}-\mathbf{S}^{j}_{21}\mathbf{S}^{j+1}_{12}\right]^{-1} \left[\mathbf{S}^{j}_{21}\mathbf{S}^{j+1}_{11}\right]\\ \left[\mathbf{S}^{j+1}_{12}\right]&=\left[\mathbf{S}^{j}_{12}\right]+\left[ \mathbf{S}^{j+1}_{11}\mathbf{S}^{j+1}_{12}\right]\left[\mathbf{I}_{5\times 5}-\mathbf{S}^{j}_{21} \mathbf{S}^{j+1}_{12}\right]^{-1}\left[\mathbf{S}^{j}_{22}\right]\\ 
\left[\mathbf{S}^{j+1}_{21}\right]&=\left[\mathbf{S}^{j+1}_{21}\right]+\left[ \mathbf{S}^{j+1}_{22}\right]\left[\mathbf{I}_{5\times 5}-\mathbf{S}^{j}_{21} \mathbf{S}^{j+1}_{12}\right]^{-1}\left[\mathbf{S}^{j}_{21}\mathbf{S}^{j+1}_{11}\right] \\ \left[\mathbf{S}^{j+1}_{22}\right]&=\left[\mathbf{S}^{j+1}_{22}\right]\left[ \mathbf{I}_{5\times 5}-\mathbf{S}^{j}_{21}\mathbf{S}^{j+1}_{12}\right]^{-1}\left[ \mathbf{S}^{j}_{22}\right].\end{cases} \tag{3.189}\] For multilayers with a single semicoherent interface, the recursive relations in eq. (3.188) with eqs. (3.189) can be propagated from the bottom surface to the semicoherent interface and then from the semicoherent interface to the top surface, without causing numerical instability issues as obtained by the traditional propagation matrix method [265]. Using the specific displacement discontinuity conditions at the semicoherent interfaces and the traction-free boundary conditions at bottom and top surfaces, the involved unknown expansion coefficients can be numerically solved and propagated to any \(z\)-level to determine all \(z\)-dependent expansion coefficients of both the Fourier-transformed displacement and traction vectors. This procedure is explicitly derived for two practical traction-free multilayered structures with one and two semicoherent interfaces in the next section. By use of the superposition principle, external uniform loads acting on the bottom and/or top surfaces in the associated MEE solids can consistently be applied using similar recursive relations with perfectly bonded interfacial conditions and subsequently be superposed to the previous dislocation-induced field solutions. When these coefficients are solved by imposing internal and external boundary conditions, the ultimate operations are related to the summation of all the Fourier components altogether to obtain the general and complete full-field solutions in the physical domains by inverse Fourier transforms. #### Disregistry at semicoherent interfaces with core-spreading dislocation structures Due to the two-dimensional periodicity of the interface dislocation structures for a given neighboring atomic plane, the relative displacement discontinuity condition at semicoherent interfaces at \(z=z_{j}\) between layers \(j\) and \(j+1\), is defined in both physical and Fourier-transformed domains using a similar biperiodic Fourier series expansion to eq. (3.169), as follows (3.190) with \(\mathbf{u}^{p}\) and \(\bar{\mathbf{u}}^{p}\) being the prescribed relative displacement vectors (magnitudes and directions), expressed in the physical and Fourier-transformed domains [258, 259], respectively. The components of the wavevectors \(\mathbf{\eta}\) parallel to the interface in eq. (3.190) must fulfill the following relation, i.e. \[\eta_{\alpha}x_{\alpha}=\eta_{1}(n)\ x_{1}+\eta_{2}(m)\ x_{2}=\frac{n}{|\mathbf{p} _{1}|}\ x_{1}+\frac{m}{|\mathbf{p}_{2}|}\ x_{2}\,, \tag{3.191}\] by virtue of eq. (3.6), with \(\phi=\pi/2\). For pure misfit heterophase interfaces that consist of orthogonal edge dislocation networks with zero interaction energy, the complete displacement jump is described by the superposition of two distinct one-dimensional sawtooth-shaped functions \(\mathbf{u}_{1}^{p}\) and \(\mathbf{u}_{2}^{p}\) with Fourier sine series in the physical domain, as defined in eq. (3.123). 
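As a brief implementation note on the recursion in eqs. (3.188)-(3.189), before continuing with the disregistry conditions: the composition of two adjacent layer matrices reduces to a handful of \(5\times 5\) block operations. A minimal sketch is given below, assuming the blocks of \(\mathbf{S}^{j}\) and \(\mathbf{S}^{j+1}\) (eq. (3.185)) are available as NumPy arrays; the function names are hypothetical.

```python
import numpy as np

def combine_layers(Sj, Sj1):
    """Compose the layer matrices of layers j and j+1 into S^{j:j+1}, eq. (3.189).

    Sj and Sj1 are dicts holding the four 5x5 blocks {'11', '12', '21', '22'}
    of S^j and S^{j+1}; the returned dict holds the blocks of S^{j:j+1}.
    """
    I5 = np.eye(5)
    # Common factor [I - S21^j S12^{j+1}]^{-1} appearing in all four blocks.
    K = np.linalg.inv(I5 - Sj['21'] @ Sj1['12'])

    S = {}
    S['11'] = Sj['11'] @ Sj1['11'] + Sj['11'] @ Sj1['12'] @ K @ Sj['21'] @ Sj1['11']
    S['12'] = Sj['12'] + Sj['11'] @ Sj1['12'] @ K @ Sj['22']
    S['21'] = Sj1['21'] + Sj1['22'] @ K @ Sj['21'] @ Sj1['11']
    S['22'] = Sj1['22'] @ K @ Sj['22']
    return S

def stack(layers):
    """Fold a list of layer matrices into the matrix of the whole sub-structure."""
    S = layers[0]
    for S_next in layers[1:]:
        S = combine_layers(S, S_next)
    return S
```

Because the composed matrix \(\mathbf{S}^{j:j+1}\) has the same structure as a single-layer matrix, the fold in `stack` can be repeated layer by layer, which is how a sub-structure between a free surface and a semicoherent interface is assembled without the instabilities of the traditional propagation matrix method.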
The corresponding Fourier-transformed displacement jumps \(\bar{\mathbf{u}}_{1}^{p}\) and \(\bar{\mathbf{u}}_{2}^{p}\) for each set of dislocations are therefore related to the total disregistry in eq. (3.190) by \[\bar{\mathbf{u}}^{p}(n,m,z_{j})=\bar{\mathbf{u}}_{1}^{p}(n,z_{j})+\bar{\mathbf{u}}_{2}^{p} (m,z_{j})=-i\frac{(-1)^{n+1}}{\pi n}\ \mathbf{b}_{1}(z_{j})-i\frac{(-1)^{n+1}}{\pi m}\ \mathbf{b}_{2}(z_{j})\,, \tag{3.192}\] where the \(z\)-dependent Burgers vectors are discretely localized at the interfaces. However, the cores of the misfit dislocations can spread at dissimilar boundaries for interfaces with low shear resistances [191, 127, 172]. Such compact dislocation cores can therefore be spread out by convoluting the discontinuity displacement conditions with specific spreading function on the interface plane to form a continuous distribution of the Burgers vectors. In the context of linear elasticity theory, two isotropic weighted Burgers vector density functions \(\mathbf{\omega}_{\gamma}(x_{\gamma})\), with \(\gamma=\{1,\,2\}\), are also introduced as follows \[{}^{*}\mathbf{b}_{\gamma}=\int_{-\infty}^{\infty}\!\!\mathbf{\omega}_{\gamma}(x_{ \gamma})\,\mathrm{d}x_{\gamma}=\mathbf{b}_{\gamma}\int_{-\infty}^{\infty}\!\!\bm {\omega}_{\gamma}(x_{\gamma})\,\mathrm{d}x_{\gamma}\,, \tag{3.193}\] which ensures that both the magnitude and the direction of the Burgers vectors remain unchanged, and \({}^{*}\mathbf{b}_{\gamma}=\mathbf{b}_{\gamma}\) when the density function is reduced to the delta function, i.e., \(\omega_{\gamma}(x_{\gamma})=\delta(x_{\gamma})\). In eq. (3.193) the pre-superscript \({}^{*}\) indicates the quantities that have been distributed (also, convoluted) by the weighted core-spreading functions. One-dimensional Gaussian distributions of dislocation cores are conveniently prescribed to represent the core-spreading dislocations for each independent set of interfacial dislocations, i.e. \[\omega_{\gamma}(x_{\gamma})=\frac{\mathrm{e}^{-x_{\gamma}^{2}/t_{\gamma}^{2}} }{r_{\gamma}\sqrt{\pi}}\,, \tag{3.194}\] where the standard deviation is \(\sigma_{\gamma}=r_{\gamma}\sqrt{2}\), with \(r_{\gamma}>0\) being the dislocation core radius parameters that regularize the classical compact dislocation cores. In practice, the same weighted core-spreading functions are applied to both sets of interfacial dislocations, so that \(\omega_{1}=\omega_{2}\), with \(r_{1}=r_{2}=r_{0}\). Using the advantages offered by the convolution properties of Fourier series expansions, the weighted displacement jump for set \(1\) from eqs. 
(3.123) and (3.192) is defined as \[\begin{split}{}^{*}\mathbf{u}_{1}^{p}(x_{1},z_{j})&= \mathbf{u}_{1}^{p}(x_{1},z_{j})\ *\omega_{1}(x_{1})\equiv\int_{-\infty}^{\infty}\mathbf{u}_{1}^{p}(x_{1}-x_{1}^{ \prime},z_{j})\ \omega_{\gamma}(x_{1}^{\prime})\,\mathrm{d}x_{1}^{\prime}\\ &=\sum_{\begin{subarray}{c}n\,\geq\,1\\ m\,=\,0\end{subarray}}\frac{(-1)^{n+1}\ \mathrm{e}^{-(\pi n\,r_{0}/p_{1})^{2}}}{\pi n}\sin \left(\frac{2\pi n\,x_{1}}{p_{1}}\right)\ \mathbf{b}_{1}(z_{j})\\ {}^{*}\bar{\mathbf{u}}_{1}^{p}(n,z_{j})&=-i\frac{(-1)^{n+1} \ \mathrm{e}^{-(\pi n\,r_{0}/p_{1})^{2}}}{\pi n}\ \mathbf{b}_{1}(z_{j})\,,\end{split} \tag{3.195}\] respectively, while the corresponding displacement jumps for set \(2\) are analogously given by \[\begin{split}{}^{*}\mathbf{u}_{2}^{p}(x_{2},z_{j})&= \sum_{\begin{subarray}{c}n\,=\,0\\ m\,\geq\,1\end{subarray}}\frac{(-1)^{m+1}\ \mathrm{e}^{-(\pi n\,r_{0}/p_{2})^{2}}}{\pi m}\sin \left(\frac{2\pi m\,x_{2}}{p_{2}}\right)\ \mathbf{b}_{2}(z_{j})\\ {}^{*}\bar{\mathbf{u}}_{2}^{p}(m,z_{j})&=-i\frac{(-1)^{m+1} \ \mathrm{e}^{-(\pi n\,r_{0}/p_{2})^{2}}}{\pi m}\ \mathbf{b}_{2}(z_{j})\,,\end{split} \tag{3.196}\] in both the physical and Fourier-transformed domains. The non-regularized discontinuous displacement vectors (given by eqs. (3.123) and (3.192)) are also obtained for \(r_{0}=0\) in eqs. (3.195) and (3.196). Thus, the interface conditions \(\mathsf{S}\) on the semicoherent interface associated with core-spreading dislocations in the Fourier-transformed domain are imposed by \[\mathsf{S}:\begin{cases}\left\{\begin{aligned} \bar{\mathbf{\mathsf{h}}}\left( \eta,z=z_{j}\right)\right\}_{-}^{\dagger}&=\bar{\mathbf{\mathsf{h}}}( \eta,z_{j+})-\bar{\mathbf{\mathsf{h}}}(\eta,z_{j-})\\ &=-i\frac{(-1)^{n+1}\,\mathbf{e}^{-(mr_{0}/p_{1})^{2}}}{\pi n}\, \mathbf{\mathsf{b}}_{1}(z_{j})-i\frac{(-1)^{m+1}\,\mathbf{e}^{-(mr_{0}/p_{2})^{2} }}{\pi m}\,\mathbf{\mathsf{b}}_{2}(z_{j})\\ \left[\bar{\mathbf{\mathsf{I}}}\left(\eta,z=z_{j}\right)\right]_{-}^{ \dagger}&=\bar{\mathbf{\mathsf{I}}}(\eta,z_{j+})-\bar{\mathbf{\mathsf{I}}}( \eta,z_{j-})=\mathbf{\mathsf{0}}_{5\times 1}\,,\end{aligned}\right.\end{cases} \tag{3.197}\] for any \(\left\{n,\,m\right\}\geq 1\), which are evidently reduced to eqs. (3.187) for coherent interfaces with zero Burgers vector content. Figure (3.41a) shows the Gaussian density distribution \(b_{1}\,\omega_{1}(x_{1})\) of the single discrete Burgers vector \(\mathbf{b}_{1}\), with arbitrarily given values for \(b_{1}=0.32\) nm and \(r_{0}=2.5\,b_{1}\), while Fig. (3.41b) illustrates the corresponding displacement jumps across the interface with core-spreading dislocations: \({}^{*}\mathbf{\mathsf{n}}_{1}^{p}\) (red curve, given by eq. (3.195)), and without: \(\mathbf{\mathsf{u}}_{1}^{p}\) (black, with \(r_{0}=0\)). These curves are plotted with 20 harmonics only, with also arbitrarily dislocation spacings \(p_{1}=7\) nm, exhibiting that the Fourier series expansion with the core-spreading treatment for interface dislocations converges conditionally and numerically faster than the original expansions without treatment. The relative displacement profile becomes therefore continuously smooth close to the regularized dislocation cores unlike the jump occurring in the original description with compact dislocation cores. Once the specific interface conditions \(\mathsf{S}\) in eqs. 
(3.197) dedicated to interface dislocation patterns are defined, the recursive relations in the layered sub-structures between the semicoherent interfaces and external surfaces can be propagated to obtain the field solutions in the Fourier domain. The following two practical examples give rise to the explicit recursive relations between the transformed displacement and traction vectors that are used for numerical application examples in multilayers with (i) one semicoherent interface, and (ii) two semicoherent interfaces. The multilayered cases of interest with three and more interfaces consist of a straightforward continuation of both subsequent situations with three and more additional recursive sequences. (i) For a single semicoherent interface in multilayers, the transformed displacement and traction vectors are propagated from the bottom surface at \(z=z_{0}\) to the lower side where the semicoherent is located, i.e., at \(z=z_{j-}\), so that eq. (3.188) leads to \[\begin{bmatrix}-i2\pi\eta\,\bar{\mathbf{\mathsf{h}}}(\eta,z_{0})\\ \bar{\mathbf{\mathsf{I}}}(\eta,z_{j-})\end{bmatrix}=\begin{bmatrix}\mathbf{S}_{11 }^{1;j}&\mathbf{S}_{12}^{1;j}\\ \mathbf{S}_{21}^{1;j}&\mathbf{S}_{22}^{1;j}\end{bmatrix}\begin{bmatrix}-i2\pi \eta\,\bar{\mathbf{\mathsf{h}}}(\eta,z_{j-})\\ \bar{\mathbf{\mathsf{I}}}(\eta,z_{0})\end{bmatrix}\,, \tag{3.198}\] and also, from the upper side of the interface at \(z=z_{j+}\) to the top surface at \(z=z_{w}\), i.e. \[\begin{bmatrix}-i2\pi\eta\,\bar{\mathbf{\mathsf{h}}}(\eta,z_{j+})\\ \bar{\mathbf{\mathsf{I}}}(\eta,z_{w})\end{bmatrix}=\begin{bmatrix}\mathbf{S}_{11 }^{j+1:w}&\mathbf{S}_{12}^{j+1:w}\\ \mathbf{S}_{21}^{1;w}&\mathbf{S}_{21}^{1;w}\end{bmatrix}\begin{bmatrix}-i2\pi \eta\,\bar{\mathbf{\mathsf{h}}}(\eta,z_{w})\\ \bar{\mathbf{\mathsf{I}}}(\eta,z_{j+})\end{bmatrix}\,, \tag{3.199}\] where \(\mathbf{S}_{10\times 10}^{1;j}\) and \(\mathbf{S}_{10\times 10}^{j+1:w}\) are individually defined by eqs. (3.189). Equations (3.198) and (3.199) together with the given boundary/interface conditions \(\mathsf{S}\) in eqs. (3.197) solve all the involved transformed-Fourier unknowns. An example for the bilayered system is provided in eq. (3.213) where the system of equations is reordered by arranging all the given quantities to the right-hand side and all the unknowns to be solved to the left-hand side. The field solutions can therefore be propagated to any \(z\)-level to determine the transformed propagating values of interest, e.g., using a relation similar to eq. (3.198) if the field point is above the semicoherent interface, or using a relation similar to eq. (3.199) if the field point is below the interface. Finally, operating the summation of the transformed solutions in the Fourier series expansions, the full-field solutions in the physical domain are obtained. 
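For illustration, the core-spread Fourier coefficients of eq. (3.195), which enter the interface conditions \(\mathsf{S}\) used in case (i) above, and the resulting smoothed disregistry of Fig. (3.41b) can be reproduced in a few lines; the numerical values below simply mirror the arbitrary choices quoted earlier (\(b_{1}=0.32\) nm, \(r_{0}=2.5\,b_{1}\), \(p_{1}=7\) nm, 20 harmonics).

```python
import numpy as np

b1, p1 = 0.32, 7.0          # Burgers vector magnitude and dislocation spacing (nm)
r0 = 2.5 * b1               # core-spreading radius (nm); r0 = 0 recovers compact cores
N = 20                      # number of harmonics retained

def disregistry(x1, r0=r0):
    """Relative displacement *u1^p(x1) across the interface, eq. (3.195)."""
    u = np.zeros_like(x1)
    for n in range(1, N + 1):
        coeff = (-1.0)**(n + 1) / (np.pi * n) * np.exp(-(np.pi * n * r0 / p1)**2)
        u += coeff * np.sin(2.0 * np.pi * n * x1 / p1)
    return b1 * u

x1 = np.linspace(-p1, p1, 801)
u_compact = disregistry(x1, r0=0.0)   # original sawtooth profile (compact cores)
u_spread = disregistry(x1)            # smoothed profile (core-spreading dislocations)
# The Gaussian damping factor exp(-(pi n r0 / p1)^2) makes the spread series
# converge noticeably faster than the compact-core expansion, as noted for Fig. (3.41b).
```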
(ii) For two semicoherent interfaces in multilayers, located at \(z=z_{j}\) (corresponding to the previous case (i)) and \(z=z_{k}\) with \(z_{j}<z_{k}\), the prescribed relative displacement is, in general, different than the interface at \(z=z_{j}\) in terms of dislocation structures, so that \[\begin{split}\left[\bar{\mathbf{\mathsf{h}}}\left(\eta,z=z_{k} \right)\right]_{-}^{+}&=\bar{\mathbf{\mathsf{h}}}\left(\eta,z=z_{k+}\right)- \bar{\mathbf{\mathsf{h}}}\left(\eta,z=z_{k-}\right)={}^{*}\mathbf{\mathsf{n}}_{1}^{p} \left(n,z_{k}\right)+{}^{*}\mathbf{\mathsf{n}}_{2}^{p}\left(m,z_{k}\right)\\ &\left(\neq\left[\bar{\mathbf{\mathsf{h}}}\left(\eta,z=z_{j}\right) \right]_{-}^{+}\right)\,,\end{split} \tag{3.200}\] with, for instance, different dislocation spacings and magnitudes of both Burgers vectors (but, with similar directions in the present pure misfit interface cases). Thus, while eq. (3.198) is unchanged, eq. (3.199) is split into two propagation relations, from the upper side \(z_{j+}\) of the first semicoherent interface at \(z_{j}\) to the lower side of the second semicoherent interface at \(z=z_{k-}\), i.e. \[\begin{bmatrix}-i2\pi\eta\,\tilde{\mathbf{\mu}}(\eta,z_{j+})\\ \tilde{\mathbf{\imath}}(\eta,z_{k-})\end{bmatrix}=\begin{bmatrix}\mathbf{S}_{11}^{j +1:k}&\mathbf{S}_{12}^{j+1:k}\\ \mathbf{S}_{21}^{j+1:k}&\mathbf{S}_{22}^{j+1:k}\end{bmatrix}\begin{bmatrix}-i2 \pi\eta\,\tilde{\mathbf{\imath}}(\eta,z_{k-})\\ \tilde{\mathbf{\imath}}(\eta,z_{j+})\end{bmatrix}\,, \tag{3.201}\] and then, from the upper side of the second semicoherent interface at \(z_{k+}\) to the top surface at \(z=z_{w}\), i.e. \[\begin{bmatrix}-i2\pi\eta\,\tilde{\mathbf{\imath}}(\eta,z_{k+})\\ \tilde{\mathbf{\imath}}(\eta,z_{w})\end{bmatrix}=\begin{bmatrix}\mathbf{S}_{11}^{k +1:w}&\mathbf{S}_{12}^{k+1:w}\\ \mathbf{S}_{21}^{k+1:w}&\mathbf{S}_{22}^{k+1:w}\end{bmatrix}\begin{bmatrix}-i2 \pi\eta\,\tilde{\mathbf{\imath}}(\eta,z_{w})\\ \tilde{\mathbf{\imath}}(\eta,z_{k+})\end{bmatrix}\,. \tag{3.202}\] Again, eqs. (3.198) and (3.201 -3.202) combining with the prescribed boundary conditions in eqs. (3.197) and (3.200) are applied for solving the involved unknowns for the given boundary and interface conditions. After determining the involved boundary and interface values, the transformed displacement and traction vectors at any \(z\)-level are obtained by merely propagating the suitable recursive relation (depending upon the relative location of the field point with respect to the two semicoherent interface locations), while the physical-domain solutions are finally deduced by taking the summation of all the Fourier series components. #### Traction boundary conditions at external surfaces The extended traction boundary conditions are vertically and uniformly applied on the top and bottom surfaces, at \(z=z_{w}\) and \(z=z_{0}\), respectively, as referred to Fig. (3.40b). The mechanical normal traction is described by imposing \(t_{3}\), while the electric and magnetic components are characterized by \(t_{4}\) and \(t_{5}\), respectively. For simplicity, the normal traction components are homogeneously distributed along the \(\mathbf{x}_{2}\)-axis, and uniformly imposed along the \(\mathbf{x}_{1}\)-axis only, so that the present case is treated as a two-dimensional plane-strain deformation problem in the \((x_{1},x_{3})\)-plane. 
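Before specifying the prescribed traction profiles, note that the bookkeeping of case (ii) above, i.e. selecting which propagated relation is evaluated depending on where the field point lies relative to the two semicoherent interfaces, might be organized as in the short sketch below; the container and function names are hypothetical and stand in for the solved recursive relations of eqs. (3.198) and (3.201)-(3.202).

```python
def fields_at(z, z_j, z_k, solved):
    """Evaluate the transformed displacement/traction vectors at height z.

    `solved` is a hypothetical container holding the propagated recursive
    relations for the three sub-stacks delimited by the two semicoherent
    interfaces at z_j < z_k (cf. eqs. (3.198) and (3.201)-(3.202)).
    """
    if z < z_j:          # between the bottom surface and the first interface
        return solved['z0_to_zj'].evaluate(z)
    elif z < z_k:        # between the two semicoherent interfaces
        return solved['zj_to_zk'].evaluate(z)
    else:                # between the second interface and the top surface
        return solved['zk_to_zw'].evaluate(z)
```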
In terms of the Cartesian coordinates attached to the present multilayered systems, the prescribed traction \(t_{I}^{p}\) at the top surface are expressed as \[t_{I}^{p}\left(x_{1},z=z_{w}\right)=\begin{cases}-\varGamma&\dfrac{L-l}{2} \leq x_{1}\leq\dfrac{L+l}{2}\\ 0&\text{otherwise}\,,\end{cases} \tag{3.203}\] for \(J=3,4,5\), only, and at the bottom surface as \[t_{I}^{p}\left(x_{1},z=z_{0}\right)=t_{I}^{p}\left(x_{1},z=z_{w}\right)\,. \tag{3.204}\] Thus, the same distribution is applied on the bottom surface to ensure the equilibrium condition of zero in vertical direction. In eqs. (3.203) and (3.204) the uniform mechanical, electric, or magnetic fields \(\varGamma\) are applied over the interval with total length \(l\), while \(L\) is a reference size to translate the center of the loading area from the global coordinate center to avoid the singularity in the series expansion as can be observed below. Furthermore, the present numerical calculation indicates that \(L=5l\) leads to rapid convergent series. Using the similar discrete Fourier series representation with the previous derivation for the semicoherent interfaces, the surface traction relation at \(z=z_{w}\) is consistently given by \[t_{I}^{p}\left(x_{1},z=z_{w}\right)=\text{Re}\ i\sum_{n\geq 1}\text{e}^{-im\,x _{1}/L}\ t_{I}^{p}\left(n,z=z_{w}\right)\,, \tag{3.205}\] Figure 3.41: The core-spreading operation for the internal dislocation networks at semicoherent interfaces. (a) The weighted Burgers vector distribution function \({}^{*}b_{1}\), with \(r_{0}=2.5\,b_{1}\) and \(b_{1}=0.32\) nm, as a function of \(x_{1}\). (b) Disregistries in terms of the original relative displacement \(u_{1}^{p}\) with compact dislocation cores (in black) and the convoluted displacement \({}^{*}u_{1}^{p}\) (red) by the core-spreading dislocation function. Both illustrations are carried out with 20 harmonics and arbitrary dislocation spacings \(p_{1}=7\) nm. (c) The prescribed traction boundary condition on the upper surface with 300 harmonics, \(l=5\) nm, \(L=5l\), and \(\varGamma=1\) (in N\(/\)m\({}^{2}\), C\(/\)m\({}^{2}\), or N\(/\)A.m). where the expansion coefficients \(I_{j}^{p}\) can analytically be obtained by multiplying both sides of eq. (3.205) by \(\sin(\pi m\,x_{1}/L)\), with \(m\) integer (i.e., making use of the periodicity of the sine function over the interval \([0,L]\)) and integrating along \(x_{1}\) from \(0\) to \(L\) at \(z=z_{w}\), i.e. \[\int_{0}^{L}\underbrace{t_{j}^{p}\left(x_{1},z=z_{w}\right)}_{=-\Gamma}\, \sin\Bigl{(}\frac{\pi m\,x_{1}}{L}\Bigr{)}\,\,\mathrm{d}x_{1}=\mathrm{Re}\int_ {0}^{L}i\sum_{n\geq 1}\mathrm{e}^{-i\pi m\,x_{1}/L}\,\sin\Bigl{(}\frac{\pi m\,x_{1}}{L} \Bigr{)}\,\,\tilde{t}_{j}^{p}\left(n,z=z_{w}\right)\,\,\mathrm{d}x_{1}\,, \tag{3.206}\] which also gives rise to \[\Gamma\,\,\left[\frac{L}{\pi m}\!\cos\Bigl{(}\frac{\pi m\,x_{1}}{L}\Bigr{)} \right]_{L/2-l/2}^{L/2+l/2}=\frac{L}{2}\,\tilde{t}_{j}^{p}\left(m,z=z_{w} \right)\,\,\Leftrightarrow\,\tilde{t}_{j}^{p}\left(m,z=z_{w}\right)=-\frac{4 \Gamma}{\pi m}\sin\Bigl{(}\frac{\pi m\,l}{2L}\Bigr{)}\,\,. \tag{3.207}\] Thus, the traction boundary condition on the top surface can be expressed as \[t_{j}^{p}\left(x_{1},z=z_{w}\right)=-\mathrm{Re}\sum_{n=1,3,5,..}i\frac{4T}{ \pi n}\sin\Bigl{(}\frac{\pi n\,l}{2}\Bigr{)}\,\sin\Bigl{(}\frac{\pi n\,l}{2L} \Bigr{)}\,\,\mathrm{e}^{-i\pi m\,x_{1}/L}\,, \tag{3.208}\] exhibiting a sum over positive odd integers, only. Figure (3.41c) illustrates the prescribed traction \(t_{j}^{p}\) from eq. 
(3.208) with 300 harmonics, and arbitrary values for \(l=5\) nm, \(L=5l\), and \(\Gamma=1\) (in \(\mathrm{N}/\mathrm{m}^{2}\) if \(l=3\), \(\mathrm{C}/\mathrm{m}^{2}\) if \(l=4\), or \(\mathrm{N}/\mathrm{A}\).m if \(l=5\)). It is shown that the external traction boundary condition that acts on the top surface is well-represented in terms of the Fourier series expansion, so that the external loads can consistently and similarly be described with respect to the boundary-value problem as for the semicoherent interface case. Therefore, for \(J=3,4,5\), only, the external load conditions \(\mathrm{L}\) on both the top and the bottom surfaces in the Fourier-transformed domain are finally given by \[\mathrm{L}:\,\begin{cases}\tilde{t}_{j}\left(n,z=z_{w}\right)=-\frac{4\Gamma }{\pi n}\sin\Bigl{(}\frac{\pi n}{2}\Bigr{)}\sin\Bigl{(}\frac{\pi n\,l}{2L} \Bigr{)}\\ \tilde{t}_{j}\left(n,z=z_{0}\right)=\tilde{t}_{j}\left(z=z_{w}\right)\,,\end{cases} \tag{3.209}\] where identical expansion coefficients under uniform pressure are applied on the bottom surface, at \(z=z_{0}\). The particular boundary conditions \(\mathrm{F}\) for free surfaces can also be taken into account by imposing \(\Gamma=0\) in eqs. (3.209), i.e. \[\mathrm{F}:\,\begin{cases}\tilde{t}_{j}\left(n,z=z_{w}\right)=0\\ \tilde{t}_{j}\left(n,z=z_{0}\right)=0\,.\end{cases} \tag{3.210}\] Similar procedure as the internal semicoherent interfaces in section 3.8.1 can be derived for the present external load case to explicitly determine the corresponding displacement and traction field solutions at any \(z\)-level in all layers with perfectly bonded (i.e., coherent) interface conditions. Thus, the solutions in the Fourier-transformed domain at \(z_{j}\) in layer \(j\) can be obtained from the following set of equations \[\begin{split}\left[-i2\pi\eta\,\tilde{\mu}(n,z_{0})\right]& =\begin{bmatrix}\mathbf{S}_{11}^{1,j}&\mathbf{S}_{12}^{1,j}\\ \mathbf{S}_{21}^{1,j}&\mathbf{S}_{21}^{1,j}\end{bmatrix}\left[-i2\pi\eta\, \tilde{\mu}(n,z_{j})\right]\\ \mathbf{S}_{21}^{1,j}&\mathbf{S}_{21}^{1,j}\end{bmatrix}\left[-i2\pi\eta\, \tilde{\mu}(n,z_{0})\right]\\ \left[\begin{matrix}-i2\pi\eta\,\tilde{\mu}(n,z_{j})\right]&=\begin{bmatrix} \mathbf{S}_{11}^{j,jw}&\mathbf{S}_{12}^{j,w}\\ \mathbf{S}_{21}^{j,w}&\mathbf{S}_{22}^{j,w}\end{bmatrix}\left[-i2\pi\eta\, \tilde{\mu}(n,z_{w})\right]\\ \mathbf{S}_{12}^{j,w}&\mathbf{S}_{22}^{j,w}\end{bmatrix}\,,\end{split} \tag{3.211}\] which can be recast into the following linear system to be analytically solved for any \(n\geq 1\), i.e. 
\[\begin{bmatrix}\mathbf{S}_{11}^{1:j}&\mathbf{0}_{5\times 5}&-\mathbf{I}_{5\times 5}&\mathbf{0}_{5\times 5}\\ \mathbf{S}_{21}^{1:j}&-\mathbf{I}_{5\times 5}&\mathbf{0}_{5\times 5}&\mathbf{0}_{5\times 5}\\ -\mathbf{I}_{5\times 5}&\mathbf{S}_{12}^{j:w}&\mathbf{0}_{5\times 5}&\mathbf{S}_{11}^{j:w}\\ \mathbf{0}_{5\times 5}&\mathbf{S}_{22}^{j:w}&\mathbf{0}_{5\times 5}&\mathbf{S}_{21}^{j:w}\end{bmatrix}\begin{bmatrix}-i2\pi\eta\,\tilde{\boldsymbol{u}}(n,z_{j})\\ \tilde{\boldsymbol{t}}(n,z_{j})\\ -i2\pi\eta\,\tilde{\boldsymbol{u}}(n,z_{0})\\ -i2\pi\eta\,\tilde{\boldsymbol{u}}(n,z_{w})\end{bmatrix}=\begin{bmatrix}-\mathbf{S}_{12}^{1:j}\,\tilde{\boldsymbol{t}}(n,z_{0})\\ -\mathbf{S}_{22}^{1:j}\,\tilde{\boldsymbol{t}}(n,z_{0})\\ \mathbf{0}_{5\times 1}\\ \tilde{\boldsymbol{t}}(n,z_{w})\end{bmatrix}\,, \tag{3.212}\]

where the prescribed traction coefficients \(\tilde{\boldsymbol{t}}(n,z_{0})\) and \(\tilde{\boldsymbol{t}}(n,z_{w})\) follow from the load conditions \(\mathrm{L}\) in eqs. (3.209). Illustrative application examples are now presented for representative MEE multilayered systems with and without applied mechanical loads. The first two-dimensional illustrative case deals with one semicoherent interface only, while the subsequent three-dimensional systems are made of two semicoherent interfaces, for which each one contains two sets of interfacial dislocations. In the following, all terminal planes between two adjacent crystals are defined by \(\mathbf{n}\parallel(001)\) in the cube-on-cube orientation, where both the \(\mathbf{x}_{1}=[100]\) and \(\mathbf{x}_{2}=[010]\) directions are parallel to the interface planes. The corresponding materials properties used in these examples are listed in Table 3.10.

#### 3.8.2 A primary case: 2D bilayered composites

In the present two-dimensional bilayered structure without external loads, the lower layer consists of the ferromagnetic (spinel, layer 1) CFO and the upper layer of the ferroelectric (perovskite, layer 2) BTO, with \(h_{\text{CFO}}=h_{\text{BTO}}=10\) nm. Due to the moderately large 5% lattice mismatch in the CFO/BTO system, with lattice parameters \(a_{\text{CFO}}=0.838\) nm and \(a_{\text{BTO}}=0.399\) nm for CFO and BTO, respectively [300], a specific semicoherent interface with discrete edge dislocations is located between these two adjacent crystals. The straight parallel dislocations are defined along the \(\mathbf{x}_{1}\)-axis, for which the infinitely long, straight, parallel dislocations are uniformly spaced by \(p_{1}=p_{\text{FB}}=8.378\) nm, as predicted by the quantized Frank-Bilby equation. Here, the Burgers vectors are given by \(\mathbf{b}_{1}=a_{\parallel}[1\,0\,0]^{\text{t}}\) along the \(\mathbf{x}_{1}\)-axis, where the reference in-plane lattice parameter \(a_{\parallel}\) is determined using the procedure proposed in section 3.2.4 for purely elastic CFO/BTO bilayers, i.e., with electric and magnetic constants equal to zero. For this simple case, eqs. (3.198) and (3.199), combined with the specific interface conditions \(\mathsf{S}\) in eq.
(3.197) for a single set of dislocations, can be recast into the following global linear system, as \[\left[\begin{array}{cccc}\mathbf{0}_{5:5}&-\mathbf{1}_{5:5}&\mathbf{S}_{110}^{\text{BTO} }&\mathbf{S}_{110}^{\text{BTO}}\\ \mathbf{0}_{5:5}&\mathbf{0}_{5:5}&\mathbf{S}_{210}^{\text{BTO}}&\mathbf{S}_{210}^{\text{BTO}} \\ -\mathbf{1}_{5:5}&\mathbf{S}_{110}^{\text{BTO}}&\mathbf{S}_{5:5}&\mathbf{S}_{5:5}^{\text{BG}} \\ \mathbf{0}_{5:5}&\mathbf{S}_{210}^{\text{BG}}&\mathbf{0}_{5:5}&-\mathbf{1}_{5:5}\end{array} \right]\left[\begin{array}{c}-i2\pi\eta\,\tilde{\mathbf{u}}\left(n,z_{0}\right) \\ -i2\pi\eta\,\tilde{\mathbf{u}}\left(n,z_{1-}\right)\\ -i2\pi\eta\,\tilde{\mathbf{u}}\left(n,z_{2-}\right)\\ \tilde{\mathbf{u}}\left(n,z_{1}\right)\end{array}\right]=\left[\begin{array}{c}-i 2\pi\eta\,\tilde{\mathbf{u}}\left(n,z_{1}\right)\\ \mathbf{0}_{5:5}\\ \mathbf{0}_{5:5}\\ \mathbf{0}_{5:5}\end{array}\right]\,, \tag{3.213}\] which solves the Fourier-transformed unknowns, i.e., \(\left\{\tilde{\mathbf{u}}\left(n,z_{0}\right)\text{, }\tilde{\mathbf{u}}\left(n,z_{1-}\right)\text{, }\tilde{\mathbf{u}}\left(n,z_{2}\right)\text{, }\tilde{\mathbf{u}}\left(n,z_{1}\right)\text{,}\right\}\), on both external boundaries as well as on the internal interface for all \(n\geq 1\), with respect to the corresponding submatrices \(\mathbf{S}_{\alpha\beta}^{\text{CFO}}\) and \(\mathbf{S}_{\alpha\beta}^{\text{BTO}}\) for both individual materials, with also \(z_{0}=0\), \(z_{1}=h_{\text{CFO}}\), and \(z_{2}=h_{\text{CFO}}+h_{\text{BTO}}\). Figure (3.42) shows the dislocation-induced fields with three different core-spreading parameters, i.e., \(r_{0}=0\), \(r_{0}=0.3\) and \(r_{0}=2.5\) nm, with 64 harmonics that are sufficient to accurately compute the elastic, electric, and magnetic field solutions. For illustration, Fig. (3.42a) displays the periodical contours of the elastic displacement component \(u_{3}\) (in nm), the electrostatic \(\phi\) (in V) and magnetostatic \(\psi\) (in mC/s) potentials, within the area of \((x_{1},x_{3})\in[-10\text{ nm},10\text{ nm}]^{2}\), for the intermediate value \(r_{0}=0.3\) nm. It is also depicted that the presence of misfit dislocations generates strong short-range electrostatic and magnetostatic potentials in the neighborhood of the semicoherent interface. In particular, these profile should dramatically affect the magnetoelectric effect (induction of magnetization (polarization) by an electric (magnetic) field) and, in general, the coupling between the electric and magnetic fields in laminated piezoelectric/piezomagnetic layers, e.g., the influence of the interfacial dislocations on the effective magnetoelectric coupling coefficients \(\alpha_{11}\) and \(\alpha_{33}\) in such two-phase systems. 
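A per-harmonic solve of a bilayer system of the type of eq. (3.213) can be sketched as follows, assuming the four \(5\times 5\) blocks of the single-layer matrices for CFO and BTO have already been formed; the row ordering differs slightly from eq. (3.213) but encodes the same traction-free surfaces and interface jump conditions, and the function name is hypothetical.

```python
import numpy as np

def solve_bilayer_harmonic(S_cfo, S_bto, jump_u):
    """Solve one Fourier harmonic of the CFO/BTO bilayer problem (cf. eq. (3.213)).

    S_cfo, S_bto : dicts of 5x5 blocks {'11','12','21','22'} for the layer
                   propagations z0 -> z1- (CFO) and z1+ -> z2 (BTO).
    jump_u       : prescribed 5-vector displacement jump [[u]](n, z1) from the
                   interface conditions S of eq. (3.197), already scaled by
                   -i*2*pi*eta to match the displacement entries.
    Returns the transformed unknowns u(z0), u(z1-), u(z2), t(z1).
    """
    I5, O5 = np.eye(5), np.zeros((5, 5))
    # Unknowns x = [u(z0); u(z1-); u(z2); t(z1)], with traction-free surfaces
    # t(z0) = t(z2) = 0 and traction continuity t(z1-) = t(z1+) = t(z1).
    A = np.block([
        [-I5, S_cfo['11'], O5,          O5          ],  # u(z0) from CFO propagation
        [ O5, S_cfo['21'], O5,         -I5          ],  # t(z1) from CFO propagation
        [ O5, -I5,         S_bto['11'], S_bto['12'] ],  # displacement jump at z1
        [ O5,  O5,         S_bto['21'], S_bto['22'] ],  # traction-free top surface
    ])
    rhs = np.concatenate([np.zeros(5), np.zeros(5), jump_u, np.zeros(5)])
    u_z0, u_z1m, u_z2, t_z1 = np.split(np.linalg.solve(A, rhs), 4)
    return u_z0, u_z1m, u_z2, t_z1
```

Summing the solved harmonics, weighted by the exponential factors of eq. (3.169), then reproduces the physical-domain fields shown in Fig. (3.42).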
\begin{table} \begin{tabular}{|c c||c c c c c c|} \hline \multicolumn{2}{|c||}{Properties} & \multicolumn{6}{c|}{Materials} \\ Symbol & Unit & LNO & BTO & 0.75 BTO & 0.50 BTO & 0.25 BTO & CFO \\ \hline \(c_{11}\) & GPa & 203 & 166 & 196 & 225 & 256 & 286 \\ \(c_{12}\) & GPa & 53 & 77 & 101 & 125 & 149 & 173 \\ \(c_{13}\) & GPa & 75 & 78 & 101 & 124 & 147 & 170 \\ \(c_{33}\) & GPa & 243 & 162 & 189 & 216 & 243 & 269 \\ \(c_{44}\) & GPa & 60 & 43 & 44 & 44 & 48 & 45 \\ \(c_{31}\) & C/m\({}^{2}\) & 0.2 & \(-4.4\) & \(-3.3\) & \(-2.2\) & \(-1.1\) & 0 \\ \(c_{33}\) & C/m\({}^{2}\) & 1.3 & 18.6 & 14.0 & 9.3 & 4.6 & 0 \\ \(c_{15}\) & C/m\({}^{2}\) & 3.7 & 11.6 & 8.7 & 5.8 & 2.9 & 0 \\ \(e_{11}\) & \(10^{-9}\) C\({}^{2}\)/N\({}^{2}\)/m\({}^{2}\) & 0.39 & 11.2 & 8.4 & 5.6 & 2.9 & 0.1 \\ \(e_{33}\) & \(10^{-9}\) C\({}^{2}\)/N\({}^{2}\)/m\({}^{2}\) & 0.26 & 12.6 & 9.5 & 6.3 & 3.2 & 0.1 \\ \(\mu_{11}\) & \(10^{-4}\) N\(s^{2}\)/C\({}^{2}\) & 0.05 & 0.05 & 1.51 & 2.97 & 4.44 & 5.90 \\ \(\mu_{33}\) & \(10^{-4}\) N\(s^{2}\)/C\({}^{2}\) & 0.10 & 0.10 & 0.46 & 0.83 & 1.20 & 1.57 \\ \(q_{31}\) & N/A.m & 0 & 0 & 145 & 290 & 435 & 580 \\ \(q_{33}\) & N/A.m & 0 & 0 & 175 & 350 & 525 & 700 \\ \(q_{15}\) & N/A.m & 0 & 0 & 137 & 275 & 412 & 550 \\ \(a_{11}\) & C/A.m & 0 & 0 & 0 & 0 & 0 & 0 \\ \(a_{33}\) & C/A.m & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \end{tabular} \end{table} Table 3.10: Properties of MEE materials [203] used in the application examples, with LiNbO\({}_{3}\) (piezoelectric lithium niobate, LNO), BaTiO\({}_{3}\) (pure piezoelectric barium titanate, BTO), and CoFe\({}_{2}\)O\({}_{4}\) (pure magnetostrictive cobalt ferrite, CFO). Three MEE material compositions made of BTO and CFO are indicated by \(x\) BTO, where \(x\) is the volume fraction ratio of BTO. Figure (3.42b) exhibits the distribution of \(u_{3}\), \(\phi\) and \(\psi\) for the three core-spreading parameters at \(x_{1}=-p_{1}/2\) nm, i.e., along the vertical \(z\)-axis, as depicted by the white lines in Fig. (3.42a). It is clearly demonstrated that the core widths reduce the intensity of all internal elastic, electric, and magnetic solution fields. For \(r_{0}=2.5\) nm, the derivatives of the normal displacement close to the interfaces change in sign compared to the dislocation networks with compact cores, i.e., \(r_{0}=0\) nm. All depicted solutions are continuous across the CFO/BTO interface with \(r_{0}\neq 0\), which is also the case for the elastic stress component \(\sigma_{33}\) in Fig. (3.42c) that originally diverges for \(r_{0}=0\) using the classical theory of dislocations. However, not all the field quantities are continuous across the semicoherent interfaces, due to the discrete definition of the materials properties along the \(z\)-direction, as illustrated in the next section 3.8.3. For \(r_{0}=2.5\) nm, the elastic stress \(\sigma_{33}\), electric displacement \(D_{3}\) and magnetic induction \(B_{3}\) concentrations are dramatically decreased close to the dislocations as well, as displayed in Fig. (3.42c), which reveals the main role of the spreading cores to release field concentrations produced by the topological interface defects. #### 3.8.3 Energy-based criterion for interlayers in A/B/A trilayers This section aims at deriving an energy-based criterion of zero net work that computes the critical dislocation spacings and thicknesses of interlayers in heterogeneous MEE materials. Such nanoscale inhomogeneities are typical in strain-induced martensitic transformation domains in ferroelectric systems [130, 170, 97]. 
More generally, the misfit stabilization of these interlayers with intrinsic dislocation networks can play a decisive role in the self-assembled structural (e.g., precipitation hardening) and functional (e.g., conversion of energies stored in electric and magnetic fields) properties of nanoscale MEE heterostructures. The heterogeneous mechanical problem is also to estimate the complete strain energy stored in the inter Figure 3.42: Illustration of some (elastic, electric, and magnetic) field solutions in CFO/BTO bilayers. (a) Cross-sectional contours of the elastic displacement component \(u_{3}\) (in nm), the electrostatic \(\phi\) (in V) and magnetostatic \(\psi\) (in mC/s) potentials, with the regularized dislocation core parameter \(r_{0}=0.3\) nm. Minimum (maximum) values are linearly displayed in blue (red), while the field solution values are equal to zero in gray. (b) The corresponding distribution of \(u_{3}\), \(\phi\), and \(\psi\) with respect to \(x_{3}\) at \(x_{1}=-p_{1}/2\), i.e., along the vertical \(z\)-axis depicted by the white lines in (a). The calculations are performed for the compact dislocation core case, i.e., \(r_{0}=0\) (black curves), and the core-spreading case, with \(r_{0}=0.3\) (red), and \(r_{0}=2.5\) nm (blue). (c) Similar distribution of the stress component \(\sigma_{13}\) (in GPa), the electric displacement component \(D_{3}\) (in C/m\({}^{2}\)), and the magnetic induction component \(B_{3}\) (in Wb/m\({}^{3}\)). layers in the presence of the lattice and stiffness mismatches as well as the interfacial dislocations. ##### From coherent to semicoherent state in trilayers As shown in Fig. (3.43a) a trilayered A/B/A composite is considered, where the adjacent layers A (interlayer B) are characterized by a finite thickness \(h_{\text{A}}\) (\(I_{\text{B}}\)) and the corresponding anisotropic MEE constants. In the present analysis, \(h_{\text{A}}\gg h_{\text{B}}\), so that the thin rectangular-shaped interlayer is assumed to be associated with a large lattice mismatch with flat interfaces and small surface tension. Depending on the lattice constants, the layers A and B are under biaxial tension or compression, such that the coherency strain field in the interlayer B is defined by \(\epsilon_{\text{B}}(a_{||})=(a_{||}-a_{\text{B}})/a_{\text{B}}\), with \(a_{||}\) the reference in-plane lattice parameter for both interfaces in trilayers, which should be different from the previous reference lattice parameter \(a_{||}\) in bilayered systems. In the present analysis, the "reference state" is conceptually created by separating the three layers and by applying uniform distortions to each individual material, as depicted in Fig. (3.43b). After structurally (not chemically) bonding these three distorted layers, the ideal commensurate trilayer is formed, within which forces are needed on both fictitious interface planes to maintain the uniform coherent state, and also the corresponding one-to-one correspondence between lattice planes on the two sides of each interface. In this reference state, which has the interface structure of a single perfect crystal, the interface is also coherent, so that the three layers are in perfect registry with each other across the interface planes. It is illustrated in Fig. (3.43c) that the continuity of the reference lattice is virtually maintained across the interfaces by the presence of continuous distributions of infinitesimal extrinsic dislocations. 
Because these two continuous distributions of fictitious infinitesimal dislocations are defined by the same magnitude but opposite signs, the non-zero distortions are added and uniformly distributed in the interlayer B only, and are compensated (also canceled) in both adjacent layers A. Finally, the discrete intrinsic dislocation arrays with short-range elastic fields only (i.e., free of far-field stresses) are superposed to reproduce the "natural state" that defines the semicoherent interfaces with non-uniform internal structures comprised of misfit dislocations, as depicted in Fig. (3.43d) with opposite signs compared to continuous distributions of infinitesimal dislocations. By deviating locally the continuity in the reference configuration, these discrete dislocations emerge to release the elastic stored energy in the heterostructures by alleviating the residual lattice-misfit strains from the ideal commensurable trilayers. In practice, both semicoherent interfaces in this natural state have the same internal structures with two sets of dislocations, i.e., in terms of the dislocation spacings \(p_{1}=p_{2}\) and the magnitude of both Burgers Figure 3.43: Schematic illustration of the decomposition of the total strain energy \(E_{t}\) in a representative finite-thickness A/B/A trilayers. (a) A strained three-layered structure with specific anisotropic MEE properties is composed of two different types of (virtual and misfit) dislocation networks at the upper and lower semicoherent interfaces. The region \(R\) represents the unit three-layered cell, within which the total strain energy is decomposed and computed. (b) The three materials are separated, rotated, and strained, such that the common reference configuration (depicted in green) with the same in-plane lattice is described by uniform displacement gradients applied to A (blue arrows) and B (red arrows). (c) The ideal commensurate trilayer is formed after bonding the three individual solids with the presence of continuous infinitesimal dislocations (i.e., virtual dislocations) to maintain the uniform coherent state, i.e., the three materials are in perfect registry with each other across both interface planes. These two continuous distributions of fictitious infinitesimal dislocations with the same magnitude but opposite signs generate uniform distortions that are non-zero in the interlayer B (orange arrows) and are compensated (also, zero) in both materials A. (d) The atomic structures of both semicoherent interfaces lead to formation of networks of discrete misfit dislocations separated by the regions of coherency that decrease the stored strain energy. The corresponding superposition of the three operations gives rise to non-zero stresses that are short-ranged and heterogeneously distributed in the three layers. The relaxed mismatch strain energy that consists of separating (b) and bonding (c) the three layers is denoted by \(E_{m}\), while \(E_{d}\) is associated with the work done in forming the discrete dislocation networks. The white symbols in (d) correspond to the shifted upper dislocation network, i.e., to the specific cases 5 and 6 in Fig. (3.44). 
vectors \(b_{1}=b_{2}=b\), except that the directions are defined by \(\mathbf{b}_{1}^{\prime}=a_{\parallel}[1\,0\,0]^{\text{t}}\) and \(\mathbf{b}_{2}^{\prime}=a_{\parallel}[0\,1\,0]^{\text{t}}\) at the lower interface, and with opposite signs, \(\mathbf{b}_{1}^{\prime\prime}=-a_{\parallel}[1\,0\,0]^{\text{t}}\) and \(\mathbf{b}_{2}^{\prime\prime}=-a_{\parallel}[0\,1\,0]^{\text{t}}\) at the upper interface, such that the interlayers are formed by periodic arrays of dislocation dipoles in MEE trilayers. ### Coherency and dislocation-induced energies In accordance with the aforementioned three-step strategy to characterize the heterophase interlayer B, the total energy \(E_{t}\) per unit area that is contained in the elementary region \(R\) in Fig. (3.43a) is conveniently expressed as \[E_{t}=2E_{d}+E_{m}\,, \tag{3.214}\] where \(E_{d}\) is the stored dislocation-induced energy due to the heterogeneous short-range stresses generated by a single set of intrinsic dislocation dipoles at the upper and lower interfaces. On the other hand, \(E_{m}\) in eq. (3.214) is the relaxed mismatch strain energy due to the differences in lattice parameter between layers A and B by introducing the continuous distributions of fictitious infinitesimal dislocations. The factor 2 in front of \(E_{d}\) is associated with the second set of dislocation dipoles that is orthogonal to the first set with zero interaction energy, so that the following calculations can conveniently be described in two dimensions. By taking the advantages of the translational periodicity for one set of interfacial dislocations and using the divergence theorem, the dislocation-induced energy contribution per unit area \(E_{d}\) in eq. (3.214) reads \[E_{d}=\frac{1}{2p_{1}}\int_{R}\left(\sigma_{ij}\mu_{i,j}-D_{i}\phi_{j}-B_{i} \psi_{j}\right)\,dx_{1}dx_{3}=\frac{1}{2p_{1}}\int_{\partial R}\sigma_{ij}\hat {n}_{i}u_{j}d\ell=-\frac{1}{2p_{1}}\int_{\rho(r)}\sigma_{ij}\hat{n}_{i}b_{j}d \ell\,, \tag{3.215}\] without electric and magnetic charge densities. The complete dislocation-induced stress field in eq. (3.215) has been derived in the previous section 3.8.1, while \(\partial R\) corresponds to the boundary of the periodic region \(R\) and \(\rho\) to the cut along the line between two discrete dislocations from the lower and upper interfaces, as depicted in Fig. (3.43a). The proper cut \(\rho\) excludes the regions of compact dislocation cores by introducing an out-of-plane cutoff parameter \(r\), so that the stress divergence near the dislocation cores is removed, with in practice: \(r=r_{0}/4\). In the following calculations with core-spreading dislocations, however, this exclusion is not necessary to compute the line integrals, so that \(r=0\). Due to the periodicity of the traction and displacement on the external boundary \(\partial R\) and the zero-traction conditions at the free surfaces, the specific traction is also reduced to the limiting stress \(\sigma_{ij}\) acting on \(\rho\), where \(\hat{n}_{j}\) denotes the unit vector normal to \(\rho\), as displayed in Fig. (3.43a). Evaluation of the integral in eq. (3.215) can also be performed using the appropriate dislocation-induced stresses, which intrinsically depend on the coupled elastic/electric/magnetic field solutions by virtue of eq. (3.164) and on the thicknesses of the three layers as well as the internal dislocation spacings. Similarly to the work done by Willis and co-workers [285, 286], the relaxed mismatch energy \(E_{m}\) in eq. 
(3.214) is considered as a result of the elastic superposition of the lattice-mismatched strain and the strain-annihilator fields generated by the continuous distribution of infinitesimal dislocations. On the one hand, the determination of the coherent reference state in nano-trilayers (in general, nano-multilayers) would necessitate atomistic simulations because of the complexity of inhomogeneous anisotropic MEE trilayered systems with finite thicknesses. For large (but finite) thicknesses of layers A, however, the lattice parameter of material A can reasonably be selected as the reference state, so that \(a_{\parallel}=a_{\text{A}}\), and also the coherency strain field in A is \(\epsilon_{\text{A}}(a_{\text{A}})=0\), yielding zero uniform distortions applied in both layers A in Fig. (3.43b), while the corresponding field in B is \(\epsilon_{\text{B}}(a_{\text{A}})=(a_{\text{A}}-a_{\text{B}})/a_{\text{B}}=f_{m}\). As discussed in section 3.2, the continuous distribution of fictitious dislocations with infinitesimal Burgers vectors and spacings can be represented by a linear (macroscopic) displacement field in \(x_{1}\), as \(b_{i}x_{1}/p_{1}\), which generates a corresponding uniform distortion, i.e., \(\left(b_{i}\hat{n}_{j}+b_{j}\hat{n}_{i}\right)/2p_{1}\). Hence, the genuine mismatch energy \(E_{m}\) is given by

\[E_{m}=\tfrac{1}{2}\,c_{ijkl}^{\text{B}}\,\epsilon_{kl}^{m}\,\epsilon_{ij}^{m}\,h_{\text{B}}\,, \tag{3.216}\]

with \(c_{ijkl}^{\text{B}}\) the elastic constants of the interlayer B, and \(\epsilon_{ij}^{m}\) the relaxed mismatch strain field defined by

\[\epsilon_{ij}^{m}=f_{m}\delta_{ij}+\frac{b_{i}\hat{n}_{j}+b_{j}\hat{n}_{i}}{2p_{1}}\,, \tag{3.217}\]

which is homogeneously distributed in the interlayer B for two sets of continuous distributions of dislocation dipoles.

### Critical dislocation spacings and interlayer thicknesses

In the following calculations, the layered MEE structure is made of three layers, with A = BTO and B = CFO, for which two different stacking sequences are discussed, i.e., the BTO/CFO/BTO and CFO/BTO/CFO trilayers. The thicknesses of both adjacent layers A are fixed and sufficiently large compared to the dislocation spacings predicted by the Frank-Bilby equation in bilayered systems, i.e., \(h_{\rm A}=40\) nm. In extremely thin (but stable) multiferroics and miniaturized magnetoelectric memory devices, the relations and size effects between the interfacial dislocation spacings and the interlayer thickness \(h_{\rm B}\) become desirable for novel technological paradigms by dislocation engineering. In particular, the estimates of the critical quantities \(\kappa_{c}\), e.g., dislocation spacings and interlayer thicknesses, are obtained by finding the values \(\kappa_{c}\) such that eq. (3.214) yields

\[E_{t}(\kappa_{c})=2E_{d}(\kappa_{c})+E_{m}(\kappa_{c})=0\,, \tag{3.218}\]

exhibiting an energy balance criterion between the dislocation-induced energy contribution and the relaxed mismatch strain energy from the perfectly coherent trilayered state. Based on a comparison of energy states in eq. (3.218), the critical values for the inter-dislocation distances and thicknesses correspond to the situation where the background relaxed mismatch stress is completely balanced by the stresses generated by the misfit dislocations.
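The criterion of eq. (3.218) is a one-dimensional root-finding problem once \(E_{d}\) and \(E_{m}\) are available. A minimal numerical sketch is given below; the two energy contributions are placeholder analytic trends chosen only so that a balance exists (with \(E_{d}\) assumed negative, i.e., the energy change associated with forming the discrete dislocations), and are not the actual field solutions of eqs. (3.215)-(3.217).

```python
import numpy as np
from scipy.optimize import brentq

# Sketch of the energy-balance criterion of eq. (3.218): the critical quantity
# kappa_c (an inter-dislocation spacing p1 for cases 1-2, or an interlayer
# thickness h_B for cases 3-6) is the root of E_t = 2 E_d + E_m = 0.
# E_d and E_m below are placeholder trends, not the actual MEE field solutions.
def E_d(kappa_nm):
    # dislocation-induced energy (J/m^2): assumed negative, decaying in magnitude
    return -0.8 * np.exp(-kappa_nm / 10.0)

def E_m(kappa_nm):
    # relaxed mismatch (coherency) energy (J/m^2): assumed positive
    return 0.25 * (1.0 - np.exp(-kappa_nm / 5.0))

def E_t(kappa_nm):
    # total energy per unit area, eq. (3.214)
    return 2.0 * E_d(kappa_nm) + E_m(kappa_nm)

kappa_c = brentq(E_t, 1.0, 50.0)   # zero crossing of eq. (3.218)
print(f"critical value: {kappa_c:.2f} nm")
```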
Thus, the zero total energy criterion leads to the structural characteristics in trilayers for which the coherently strained interlayer is stabilized by the presence of discrete misfit dislocations at both interfaces of the interlayers. Figures (3.44a) and (b) illustrate the determination of the critical dislocation spacings and interlayer thicknesses, respectively, by plotting the dislocation stored energy \(2E_{d}\) (black curves), the relaxed mismatch strain energy \(E_{m}\), and the total energy \(E_{t}=2E_{d}+E_{m}\), for six different cases. All energy contributions are expressed in J/m\({}^{2}\). For the given \(i^{\rm th}\) case, the solid (dotted) curves are associated with the BTO/CFO/BTO (CFO/BTO/CFO) trilayers, while the sub-case \(i^{*}\) (\(i^{**}\)) is related to field solutions with a core parameter \(r_{0}=0.3\) nm (\(r_{0}=2.5\) nm), also called here "condensed dislocation cores" ("core-spreading dislocations"). Thus, the present results exclude the calculations with the unrealistic compact dislocation cores, i.e., calculations with \(r_{0}=0\) nm. The considered cases are:

* Cases 1 and 2 exhibit the effect of the dislocation spacings \(p_{1}\) on the energy profiles, with fixed finite thickness for the interlayers B (= CFO or BTO, depending on the stacking sequence), i.e., \(h_{\rm B}=2\) nm and \(h_{\rm B}=12\) nm, respectively. For example, the specific calculations of case \(2^{**}\) are performed with \(r_{0}=2.5\) nm and \(h_{\rm B}=12\) nm.
* Cases 3 and 4 illustrate the influence of the intermediate thicknesses \(h_{\rm B}\) on the energy profiles, with fixed dislocation spacings, i.e., \(p_{1}=p_{\rm FB}=8.378\) nm and \(p_{1}=12\) nm, respectively.
* Case 5 (case 6) corresponds to the previous case 4, within which the upper dislocation array is shifted by half the dislocation spacings \(p_{1}\) with respect to the unchanged lower dislocation network in the BTO/CFO/BTO (CFO/BTO/CFO) trilayer, as displayed by dislocations in white at the upper interface in Fig. (3.43d).

Table 3.11 summarizes the aforementioned configurations, while Table 3.12 reports the predictions of the critical quantities for the different cases, obtained from Fig. (3.44) when \(E_{t}(\kappa_{c})=0\).

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
 & case \(1^{*}\) & case \(1^{**}\) & case \(2^{*}\) & case \(2^{**}\) & case \(3^{*}\) & case \(3^{**}\) & case \(4^{*}\) & case \(4^{**}\) \\
\hline
Fixed quantity & \(h_{\rm B}=2\) nm & \(h_{\rm B}=2\) nm & \(h_{\rm B}=12\) nm & \(h_{\rm B}=12\) nm & \(p_{1}=8.378\) nm & \(p_{1}=8.378\) nm & \(p_{1}=12\) nm & \(p_{1}=12\) nm \\
Core parameter & \(r_{0}=0.3\) nm & \(r_{0}=2.5\) nm & \(r_{0}=0.3\) nm & \(r_{0}=2.5\) nm & \(r_{0}=0.3\) nm & \(r_{0}=2.5\) nm & \(r_{0}=0.3\) nm & \(r_{0}=2.5\) nm \\
Varied quantity & \(p_{1}\) & \(p_{1}\) & \(p_{1}\) & \(p_{1}\) & \(h_{\rm B}\) & \(h_{\rm B}\) & \(h_{\rm B}\) & \(h_{\rm B}\) \\
\hline
\end{tabular}
\end{table}
Table 3.11: Different configurations in trilayered A/B/A composites, where the specific characteristics are schematically illustrated in Fig. (3.43a) with \(h_{\rm B}\) being the middle layer thickness, \(p_{1}\) the inter-dislocation spacing, and \(r_{0}\) the core-spreading parameter. These configurations are applied to both BTO/CFO/BTO and CFO/BTO/CFO stacking sequences. In addition, cases 5 and 6 correspond to case 4 in the BTO/CFO/BTO and CFO/BTO/CFO trilayers, respectively, within which the upper dislocation array is shifted by half dislocation spacing with respect to the lower dislocation array, as depicted in Fig. (3.43d).

\begin{table}
\begin{tabular}{|c|c c c c c c c c c c c c|}
\hline
 & \multicolumn{12}{c|}{Cases} \\
Trilayers & \(1^{*}\) & \(1^{**}\) & \(2^{*}\) & \(2^{**}\) & \(3^{*}\) & \(3^{**}\) & \(4^{*}\) & \(4^{**}\) & \(5^{*}\) & \(5^{**}\) & \(6^{*}\) & \(6^{**}\) \\
\hline
BTO/CFO/BTO & 22.28 & 14.95 & 12.20 & 10.01 & \(\sim\infty\) & \(\sim\infty\) & 12.92 & 4.44 & 17.26 & 4.45 & -- & -- \\
CFO/BTO/CFO & 28.53 & 17.75 & 28.53 & 10.73 & \(\sim\infty\) & \(\sim\infty\) & 23.54 & 9.26 & -- & -- & 26.99 & 12.06 \\
\hline
\end{tabular}
\end{table}
Table 3.12: Critical values (in nm) for the six different cases (see text for details), i.e., the critical dislocation spacings for cases 1 and 2, while the others deal with the critical thicknesses of the interlayer. Numerical calculations are performed for the layered MEE structure made of three layers, with both BTO and CFO materials.

Comparing rows 2 and 3 in Table 3.12, it is concluded that the largest critical values are always associated with the
CFO/BTO/CFO sequences, as displayed by the vertical dotted versus solid arrows in Fig. (3.44). Here, BTO is elastically softer than CFO, which therefore reduces the magnitude of \(E_{m}\) for the CFO/BTO/CFO trilayers, compared to the BTO/CFO/BTO ones. On the other hand, the stacking MEE sequence has less influence on the dislocation-induced energy \(E_{d}\) than on the coherency energy \(E_{m}\), especially for case 2, even though the elastic constants of these two materials are considerably different, as listed in Table 3.10. In contrast to purely elastic calculations, it is worth remembering that the present predictions result from the elastic/electric/magnetic coupling phenomenon, which relies on the coupled constitutive relation in eq. (3.164) with three distinct (elastic, electric, and magnetic) contributions. While the purely elastic part gives rise to different stress distributions for the two stacking sequences, the piezoelectric and piezomagnetic terms are able to counterbalance the stress difference that would be generated by the purely elastic constitutive relations alone. For both cases 1 and 2 with fixed thicknesses, the positive coherency energy decreases (increases) when \(p_{1}<\hat{p}_{1}\) (\(>\hat{p}_{1}\)), and is equal to zero when \(p_{1}=\hat{p}_{1}\), i.e., \(E_{m}(\hat{p}_{1})=0\). The latter corresponds to the fully relaxed mismatch strain case, i.e., \(\epsilon_{ij}^{m}=0\) in eq. (3.217), for which the interlayers are entirely accommodated by the continuous distribution of virtual dislocations. Because the core-spreading regions affect the short-range stress concentration close to the interfaces (not the coherency energy \(E_{m}\)), the dislocation-induced energy \(E_{d}\) is reduced in magnitude when the core-spreading parameter \(r_{0}\) increases, so that \(E_{m}\) becomes more dominant than \(E_{d}\) for large values of the regularized dislocation cores.
Furthermore, the energy variations show that \(E_{d}\) decreases monotonically in magnitude when increasing \(p_{1}\) for the condensed cores (e.g., cases \(1^{*}\) and \(2^{*}\)), while \(E_{d}\) becomes fairly constant with respect to \(p_{1}\) for core-spreading dislocations (cases \(1^{**}\) and \(2^{**}\)). Significant differences between these two profiles are observed for very small dislocation spacings (equivalently, for high interfacial dislocation densities). Case 1 versus 2 illustrates that the critical inter-dislocation distance (density) decreases (increases) with increasing interlayer thickness for both sequences, which is qualitatively in accordance with experimental investigations in bilayers [184].

Figure 3.44: Estimate of critical quantities in MEE trilayers, i.e., dislocation spacings in (a) and thicknesses in (b), for different cases (see text for details of these cases) with condensed dislocations and core-spreading cores. The coherency energy \(E_{m}\) (red curves) can be recovered (except for case 3) by the work done in forming the discrete dislocation networks \(E_{d}\) (black curves), such that the critical quantities are obtained when the total strain energy \(E_{t}\) (blue curves) is zero, as depicted by the vertical solid and dotted arrows. The results for the BTO/CFO/BTO (CFO/BTO/CFO) systems are indicated with solid (dotted) lines. The specific case 5 (6) corresponds to case 4, within which the upper dislocation array is shifted by half the dislocation spacings in the BTO/CFO/BTO (CFO/BTO/CFO) system, as displayed in Fig. (3.43d). For comparison, the thin solid and dotted lines in cases 5 and 6 indicate the results from cases 4\({}^{*}\) and 4\({}^{**}\), respectively.

Figure 3.45: Comparison of elastic, electric, and magnetic field quantities induced by dislocation networks with condensed dislocation cores (contours in left-hand sides, with \(r_{0}=0.3\) nm) and with core-spreading regions (right-hand sides, with \(r_{0}=2.5\) nm) in A/B/A trilayers, with A = BTO and B = CFO, i.e., the elastic displacement components \(u_{1}\) and \(u_{3}\) (from \(-0.01\) to \(0.01\) nm), the electrostatic \(\phi\) (from \(-0.01\) to \(0.01\) V) and magnetostatic \(\psi\) (\(-0.01\) to \(0.01\) mC/s) potentials, the electric displacement component \(D_{3}\) (\(-0.2\) to \(0.2\) C/m\({}^{2}\)), the magnetic induction component \(B_{3}\) (\(-8\) to \(8\) Wb/m\({}^{3}\)), and the stress components \(\sigma_{11}\), \(\sigma_{33}\), \(\sigma_{12}\), and \(\sigma_{13}\) (in GPa). Minimum (maximum) values are linearly displayed in blue (red), while the field solution values are equal to zero in gray.

Case 3 shows that \(E_{m}\approx 0\) when \(p_{1}=p_{\text{FB}}\), while \(E_{d}\) decreases slowly in magnitude with increasing \(h_{\text{B}}\), so that no critical thicknesses are reached. This theoretical result suggests that the equilibrium inter-dislocation distances are larger in finite-thickness trilayers than in the semi-infinite bicrystals. Case 3 versus 4 demonstrates that the critical interlayer thicknesses decrease when increasing the dislocation spacings for both MEE sequences and both condensed and core-spreading dislocations, which is due to stronger elastic interactions for high dislocation spacings. Again, case 4 illustrates that the core-spreading regions have a great influence on the determination of the critical interlayer thicknesses.
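For orientation, the fixed spacing \(p_{\text{FB}}=8.378\) nm used in cases 3 corresponds to the spacing at which the mismatch is fully accommodated, which for a cube-on-cube interface reduces to the elementary estimate \(p\approx|b|/|f_{m}|\). The sketch below evaluates this estimate with the rounded lattice parameters quoted later in the text (\(a_{\text{BTO}}=0.399\) nm and \(a_{\text{CFO}}/2=0.419\) nm), so it only approximates the quoted value obtained with unrounded constants.

```python
# Elementary Frank-Bilby-type estimate of the fully accommodating spacing,
# p ~ |b| / |f_m|, with f_m = (a_A - a_B)/a_B and b = a_parallel = a_A.
# Rounded lattice parameters are used here, so the result is approximate.
a_A = 0.399                     # nm, BTO (reference lattice parameter)
a_B = 0.419                     # nm, CFO/2
f_m = (a_A - a_B) / a_B         # lattice mismatch, about -0.048
p_FB = abs(a_A / f_m)           # nm
print(f"f_m = {f_m:.4f}, p_FB ~ {p_FB:.2f} nm")   # ~ 8.36 nm vs. the quoted 8.378 nm
```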
For the shifted case 5 versus 6, the critical values for thicknesses are larger than the previous unshifted cases, except for case 4\({}^{**}\) that has the same value as case 5\({}^{**}\) for large core-spreading dislocation widths. Furthermore, \(E_{d}\) is close to zero for small thickness values, as qualitatively expected using the classical theory of dislocation dipoles.

#### Size effects on the coupled MEE field solutions in trilayers

Figure (3.45) illustrates the influence of the spreading dislocation cores on various elastic, electric, and magnetic solution field components for both particular cases 3\({}^{*}\) and 3\({}^{**}\) (i.e., with \(r_{0}=0.3\) nm and \(r_{0}=2.5\) nm, respectively), where \(h_{\text{B}}=h_{\text{CFO}}=12\) nm is comparable with the internal inter-dislocation spacings. This MEE system is identified by the red asterisk in Fig. (3.44). Similarly to the previous primary bilayered case, the general tendency is that the spreading-core regions significantly release the aforementioned solution fields in magnitude, such that the larger the spreading widths are, the lower the complex distribution and concentration of these elastic, electric, and magnetic quantities become (especially close to the interfaces). The present core-spreading treatment can therefore be regarded as flattening and stretching operations in the \(z\)- and \(x\)- directions, respectively, of the released elastic, electric, and magnetic concentrations originating from the compact cores. Interestingly, whereas the electrostatic and magnetostatic potentials are non-zero in both materials, the electric displacement \(D_{3}\) and the magnetic induction \(B_{3}\) are strictly equal to zero in the magnetostrictive CFO and piezoelectric BTO layers, respectively. The theoretical coexistence of these highly localized electric and magnetic characteristics that emerge from the interfacial dislocations should unambiguously produce remarkable effects on the electric and magnetic properties in MEE heterostructures, such as substantial energy electron fluxes in laminated structures. This also suggests that interfacial dislocation networks cannot always be considered as detrimental, but can present an opportunity to enhance the material performance, and to produce exceptional/exotic performances through dislocation technological concepts.

Figure 3.46: Variation of elastic displacement component \(u_{3}\) (in nm) and stress components \(\sigma_{11}\), \(\sigma_{12}\), and \(\sigma_{33}\) (in GPa) with respect to \(x_{3}\) at \(x_{1}=0\), i.e., along the vertical \(z\)-axis, midway between two interfacial dislocation dipoles in Fig. (3.45). Both misfit dislocation arrays have Burgers vectors with the same magnitudes, but with opposite signs. Calculations are performed for interfacial dislocations with condensed dislocation cores (plots in left-hand sides) and spread cores (right-hand sides) in trilayered BTO/CFO/BTO (blue/red/blue) and CFO/BTO/CFO (red/blue/red) systems. The results are illustrated for cases 3\({}^{*}\) and 3\({}^{**}\), with specific thicknesses, i.e., \(h_{\text{B}}=5\) nm (black curves) and \(h_{\text{B}}=12\) nm (red curves), as depicted by the black \(*\) and red \(*\) asterisks in Fig. (3.44).

All red curves in Fig. (3.46) illustrate the variation of the elastic displacement \(u_{3}\) and stress components \(\sigma_{11}\), \(\sigma_{12}\), and \(\sigma_{33}\) at \(x_{1}=0\), i.e., along the vertical \(z\)-axis between two interfacial dislocations in Fig.
(3.45), for which the interlayer thickness in both stacking sequences is \(h_{\rm B}=12\) nm. For comparison, the black curves, which correspond to the similar MEE system with \(h_{\rm B}=5\) nm, identified by the black asterisk in case 3 from Fig. (3.44), are plotted as well (converted to the same interface locations for easy comparison, with a similar treatment in Fig. (3.47)). Calculations are performed for interfacial dislocations with condensed dislocation cores (plots in left-hand sides, with \(r_{0}=0.3\) nm) and spreading-core dislocations (right-hand sides, with \(r_{0}=2.5\) nm) in both trilayered BTO/CFO/BTO (blue/red/blue) and CFO/BTO/CFO (red/blue/red) systems. It can quantitatively be shown that the magnitudes of the normal displacement and stress field components are dramatically released by the core-spreading operations. The normal displacement between two misfit dislocations is continuous across both interfaces with similar characteristic double-well shaped profiles close to the internal boundaries as in Fig. (3.42b), with positive (negative) values at the lower (upper) interfaces. The elastic stress component \(\sigma_{33}\) is continuous across the interfaces as well, as expected from the required boundary conditions in the MEE trilayers. The corresponding profiles of \(\sigma_{33}\) are different in the interlayers for \(h_{\rm B}=5\) nm (black curve) and \(h_{\rm B}=12\) nm (red curve), for which the former (latter) exhibit parabolic (double-well) profiles in both CFO and BTO interlayers, with large differences in magnitude (that are reduced by spreading the dislocation cores). On the other hand, \(\sigma_{11}\) and \(\sigma_{12}\) in Fig. (3.46) are discontinuous across the interfaces, which is intrinsically ascribed to the heterogeneous elastic properties of the adjacent layers that differ from the interlayers. It can also be observed that the shear component in the middle layers is very sensitive to the associated thicknesses, and increases with decreasing thicknesses due to the strong elastic interactions (and also, the superposition of \(\sigma_{12}\)) between both adjacent dislocation networks with opposite signs. The \(\sigma_{11}\) component, however, results from the superposition of positive and negative regions produced by these adjacent networks, which weakens the size effects in the interlayers. It is reasonable to point out that such size effects in the interlayer thicknesses would considerably affect the glide and climb components of the Peach-Koehler force acting on lattice dislocations in the interlayers, and also the corresponding microscopic plastic deformation mechanisms and related macroscopic mechanical properties in MEE multilayers. Figure (3.47) illustrates similar plots as in Fig. (3.46), but for electric quantities (electrostatic potential \(\phi\), and electric displacement \(D_{3}\)) and for magnetic quantities (magnetostatic potential \(\psi\) and magnetic induction \(B_{3}\)). All quantities are continuous across both semicoherent interfaces, and differences in profiles are more discernible between both stacking sequences for the electric and magnetic measures than for the relatively small variations in the elastic fields, as expected. Interestingly, the electric displacement \(D_{3}\) and the magnetic induction \(B_{3}\) have alternately analogous profiles in CFO and BTO interlayers depending on the stacking sequence.
Similar features as in the elastic variations, namely the parabolic versus double-well distributions, are emphasized with respect to the interlayer thicknesses, and these features are strongly reduced by the spreading-core operations. Important size effects on the electric displacement \(D_{3}\) (the magnetic induction \(B_{3}\)) are observed in the intermediate layer BTO (CFO) in the CFO/BTO/CFO (BTO/CFO/BTO) trilayers.

Figure 3.47: Similar illustration as in Fig. (3.46) for the electrostatic potential \(\phi\) (in V), electric displacement \(D_{3}\) (in C/m\({}^{2}\)), magnetostatic potential \(\psi\) (in mC/s), and magnetic induction \(B_{3}\) (in Wb/m\({}^{3}\)).

#### Dislocation-induced response under applied external loading

Two three-dimensional MEE systems are investigated and compared, i.e., the three-layered LNO/BTO/CFO (green/orange/maroon) and the six-layered LNO/BTO/0.75BTO/0.50BTO/0.25BTO/CFO systems in the same cube-on-cube orientations as previously discussed, for which the lead-free ferroelectric LiNbO\({}_{3}\) (LNO) is a piezoelectric material as well as the widely used BTO material. Here, 0.25BTO means 25% of BTO in the MEE composite made of BTO and CFO, so that the intermediate 0.75BTO/0.50BTO/0.25BTO trilayer can be regarded as a buffer sequence to progressively accommodate the lattice mismatch between BTO and CFO. For these two cases, the same mechanical load is applied to both external surfaces and two semicoherent interfaces are considered, so that the lower and upper interfaces in the six-layered system are located between LNO and BTO, and between 0.25BTO and CFO, respectively. The following calculations aim to illustrate the capabilities of the present framework to investigate the distribution of the elastic, electric, and magnetic field solutions in complex MEE multilayers under externally applied loads, with buffer sequences in the presence of topological defects at two semicoherent interfaces.

**Interaction between internal dislocation fields and external mechanical loads**

Both interfaces have different internal structures in terms of dislocation spacings and Burgers vectors. Here, the internal dislocation structure at the lower LNO/BTO interface is described by the same dislocation spacings as in the former studies in trilayers, i.e., \(p_{1}^{l}=p_{2}^{l}=8.378\) nm, with \(b_{1}^{l}=b_{2}^{l}=a_{\text{BTO}}=0.399\) nm, while the upper 0.25BTO/CFO interface is characterized by a higher dislocation density, where \(p_{1}^{u}=p_{2}^{u}=5.911\) nm, and \(b_{1}^{u}=b_{2}^{u}=a_{\text{CFO}}/2=0.419\) nm. The cross-sectional illustration in Fig. (3.48a) shows both internal structures in the tri- and six-layered systems of interest. Importantly, all dislocations have Burgers vectors with the same sign, and all three-dimensional calculations are performed with \(r_{0}=0.3\) nm. Because of the miniaturized dimensions of ultrathin multiferroics in the experimental literature [88, 24],
nominal nanoscale thicknesses are arbitrarily chosen: \(h_{\text{LNO}}=h_{\text{CFO}}=3\) nm, \(h_{\text{BTO}}=1.5\) nm, and the thickness of each intermediate buffer layer (i.e., for the layers of 0.25 BTO, 0.50 BTO, and 0.75 BTO) is equal to 1 nm. The mechanical load that is applied to both external surfaces is \(\Gamma=1\) GPa, over \(l=10\) nm, while the corresponding responses are computed using 1024 harmonics in both directions. To complete the present results, external electric and magnetic loadings could also be applied and compared to the mechanical loads in the six-layered heterostructure.

##### Distribution of the MEE field solutions in the six-layered heterostructure

Figure (3.48) focuses on the variations of some elastic, electric, and magnetic field components in the six-layered heterostructure, resulting also from the superposition (red curves) of the external load (blue) and the dislocation-induced (black) solutions, along two vertical (solid and dotted) lines in the \(z\)-direction, as depicted in Fig. (3.48a).

Figure 3.48: Distribution of superposed field quantities in the six-layered MEE materials along two lines in the \(x_{3}\)-direction, i.e., at \(x_{1}=0\) (solid lines) and \(x_{1}=(p_{1}+p_{2})/4\) (dotted lines), as displayed in (a). The total field solutions (red curves) result from the superposition of the external load (blue curves) and the dislocation-induced (black curves) fields. Elastic (b) displacement \(u_{3}\) (in nm) and (c) stress \(\sigma_{33}\) (in GPa) components. Electric (d) potential \(\phi\) (in V), (e) displacement \(D_{3}\) (in C/m\({}^{2}\)), and (f) field \(E_{3}\) (in V/\(\mu\)m) components. Magnetic (g) potential \(\psi\) (in mC/s), (h) induction \(B_{3}\) (in Wb/m\({}^{3}\)), and (i) field \(H_{1}\) (in A/\(\mu\)m) components.

The solid lines are located at \(x_{1}=0\), while the dotted lines are located midway between two adjacent dislocations, at \(x_{1}=(p_{1}+p_{2})/4\). All considered elastic, electric, and magnetic quantities are continuous across the five internal interfaces, except the vertical electric \(E_{3}\) and horizontal magnetic \(H_{1}\) fields that reveal strong and sharp discontinuities at the interfaces. The latter are computed by inverting eq. (3.164) and solving for the extended strains, as

\[\begin{bmatrix}\mathbf{\gamma}\\ \mathbf{E}\\ \mathbf{H}\end{bmatrix}=\begin{bmatrix}\mathbf{c}&-\mathbf{e}^{\text{t}}&-\mathbf{q}^{\text{t}}\\ \mathbf{e}&\mathbf{\epsilon}&\mathbf{\alpha}\\ \mathbf{q}&\mathbf{\alpha}&\mathbf{\mu}\end{bmatrix}^{-1}\begin{bmatrix}\mathbf{\sigma}\\ \mathbf{D}\\ \mathbf{B}\end{bmatrix}\, \tag{3.219}\]

written in the vector-tensor form. All non-homogeneous solutions are more disturbed and disrupted close to the dislocation cores (dotted lines) with pronounced changes at the interfaces, revealing that the long-range interactions between adjacent dislocation networks have important effects on the distribution of the MEE field components. Significant differences between solid and dotted lines show that the solution fields are not uniformly distributed, with dramatic changes in sign (e.g., the vertical electric displacement \(D_{3}\) and the horizontal magnetic field \(H_{1}\), which are both discontinuously distributed), so that the atomic-scale measure of the layered magnetoelectric effects in dislocated composites with semicoherent interfaces should be interpreted with considerable caution. As a conclusive illustration, three-dimensional visualizations of three quantities, i.e., elastic (von Mises stress), electric (positive horizontal displacement \(D_{1}\)), and magnetic (positive vertical magnetic induction \(B_{3}\)), in the six-layered multilayer are exhibited in Fig. (3.49). These figures illustrate the highly localized nature of these field components, which are predominantly located at the lower and upper interfaces, respectively.
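In practice, eq. (3.219) amounts to a single linear solve of the coupled constitutive relation at each evaluation point. A minimal sketch is given below, in Voigt notation, where all block entries and load values are placeholders rather than the actual LNO/BTO/CFO material constants.

```python
import numpy as np

# Sketch of eq. (3.219): recover the extended strains gamma, the electric field E
# and the magnetic field H from the stresses sigma, the electric displacement D
# and the magnetic induction B. Block entries are placeholder values only.
c   = np.eye(6) * 200e9                 # elastic stiffness (Pa), placeholder
e   = np.zeros((3, 6)); e[2, 0] = e[2, 1] = -4.4; e[2, 2] = 18.6     # piezoelectric (C/m^2)
q   = np.zeros((3, 6)); q[2, 0] = q[2, 1] = 580.0; q[2, 2] = 700.0   # piezomagnetic (N/(A m))
eps = np.eye(3) * 1e-8                  # dielectric permittivity, placeholder
alp = np.zeros((3, 3))                  # magnetoelectric coupling, placeholder
mu  = np.eye(3) * 1e-4                  # magnetic permeability, placeholder

# extended constitutive matrix of eq. (3.164): [sigma; D; B] = M [gamma; E; H]
M = np.block([[c, -e.T, -q.T],
              [e,  eps,  alp],
              [q,  alp,  mu ]])

rhs = np.concatenate([
    [1e8, 0, 0, 0, 0, 0],               # sigma (Voigt, Pa)
    [0.0, 0.0, 0.05],                   # D (C/m^2)
    [0.0, 0.0, 0.5],                    # B (Wb/m^2)
])

x = np.linalg.solve(M, rhs)             # eq. (3.219) without forming the explicit inverse
gamma, E, H = x[:6], x[6:9], x[9:]
print(gamma, E, H)
```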
For instance, the in-plane von Mises stress concentration along the misfit dislocations indicate the possible nucleation sources for plastic deformation mechanisms and cracks, preferentially at upper interface with the highest dislocation density. The horizontal displacement \(D_{1}\) is asymmetrically concentrated at the lower interface and the lower LNO layer, while the misfit dislocation intersections introduce preferential sites with maximum magnetic induction \(D_{3}\) that is also diffused in the upper CFO layer. Figure 3.49: Three-dimensional spatial distribution in the six-layered MEE composite of the (a) von Mises stress (in GPa), (b) electric displacement \(D_{1}\) (in C/s\({}^{2}\)), and (c) magnetic induction \(B_{3}\) (in Wb/m\({}^{3}\)) components. ## Chapter 4 Conclusion and future works ### 4.1 Concluding remarks During my first years at the French Alternative Energies and Atomic Energy Commission, a three-dimensional continuum thermodynamically consistent formalism for combining elastoplasticity and phase-field theories has been developed for displacive phase transformations in finite strains. In accordance with the Clausius-Duhem inequality, explicit expressions for the Helmholtz free energy and constitutive relations have been used to determine the displacive driving forces for pressure-induced martensitic phase transitions. Inelastic forces are obtained by a representation of the energy landscape using the concept of reaction pathways for multivariants with respect to the point group symmetry properties of crystalline lattices. In particular, the Mao-Bassett-Takahashi transition path is used to characterize the transformational distortion along the reaction pathways for iron. On the other hand, the elastic forces are formulated for the general case that accounts for large strains and rotations, nonlinear and anisotropic elasticity with different pressure-dependent properties of stable and intermediate phases. Implemented in a fully Lagrangian code, the nonlinear formalism is applied to analyze the forward and reverse polymorphic phase transformations under high pressure compression in single-crystal iron, within which the multiple lattice-related variants for (low-pressure) cubic and and (high-pressure) hexagonal structures are distinctly generated. Two loading conditions are investigated, i.e. the quasi-static and shock-wave regimes in Refs. [257] and [264], respectively. The first application shows that a forward bcc \(\rightarrow\) hcp transformation of the initial single-crystal bcc phase into a polycrystal of hcp variants is energetically unfavorable due to the large amplitude of the stored elastic energy interactions between phases, and also remains incomplete without plasticity. However, the polymorphism bcc \(\rightarrow\) hcp \(\rightarrow\) bcc martensitic transformations occurs when plasticity is active. This simulation result is due to the effect of the plastic dissipation that releases considerably the elastic strain energy in the formation of a polycrystalline iron with an unexpected selection of variants. On the other hand, the second dynamics simulations with plasticity accurately reproduce important observable characteristics reported by the experimental literature. For instance, the free-surface velocity exhibits that the shock wave is unstable, which breaks up into elastic, plastic and phase-transition waves for which the bcc-to-hcp phase transformation pressure is in agreement with experiments. 
The present split three-wave structure is characterized by the dynamical evolution of the strain from one- to three-dimensional compression with a local stress state that also relaxes to a nearly hydrostatic state. Similar plastic relaxations, however without structural phase transformation, have already been revealed in shock-compressed copper by diffraction experiments and large-scale molecular dynamics simulations. Furthermore, the microstructural stress-informed analyzes complement the extensive studies hitherto examined by molecular dynamics simulations with multi-million-atoms in the last two decades. The heterogeneous plastic deformation is quantitatively found to play a significant role in nucleating and selecting the shock-induced variants at high pressure, which significantly differs from samples loaded under hydrostatic external compression. The Lagrangian time-position diagrams reveal that the prompt plastic relaxation to a nearly hydrostatic local state from uniaxial shock-compression is responsible for the peculiar multiphase microstructure with a gradient selection of high-pressure variants behind the phase-transition wave front. The existence of two sets of variants, so-called "release" and "reload" variants appearing in separated zones, results from a nucleation instability that leads to a specific fingerprint of the nonlinear dynamics of unstable shock waves induced by structural phase transformations. The continuum formalism for phase transitions is, however, incomplete. In particular, the formation of homo-phase grain boundaries and the heterophase interfaces between low- and high-pressure phases during the coexistence of the solid-solid phases should be accompanied by a loss the lattice coherence. This lattice mismatch by rotations and strains is ignored in the aforementioned simulations, while experimental observations have revealed that coherent interfaces break down the perfectly-matching interfaces through the presence of misfit dislocation structures at such (semicoherent) interfaces in a variety of conditions. A lattice-based approach has therefore been developed to overcome this significant limitation, first dedicated to materials that are mapped to a common reference state using displacement gradients alone. The ad-hoc strategy has been conveniently applied to interface between fcc and bcc crystals, which could be formed during the temperature-driven polymorphic bcc-fcc phase transition in iron as well as the pressure-driven bcc-fcc-hcp transitions, with the fcc phases as intermediate phases. The lattice-based model combines the closely related Frank-Bilby and O-lattice techniques with the Stroh sextic formalism for the anisotropic elasticity theory of interfacial dislocation networks [249]. Starting from my postdoctoral position at the Massachusetts Institute of Technology, the formalism is used by means of a Fourier series-based analysis to determine the reference states of semicoherent interfaces that gives rise to dislocations whose far-field elastic fields meet the condition of vanishing far-field strains and prescribed misorientations. These interface dislocations are viewed as Volterra dislocations that have been inserted into the reference state, subject to the stated constraints at long range. The complete elastic fields of these dislocations are calculated using heterogeneous anisotropic linear elasticity and interface dislocation configurations consistent with the quantized Frank-Bilby equation. 
The present model resolves the ambiguity arising from the infinite number of reference states available when the Frank-Bilby equation is analyzed based on geometry alone, i.e. without consideration of the elastic fields. The importance of accounting for the reference state has been illustrated in Refs. [253, 251], for which the selection of incorrect reference states leads to non-zero far-field stresses, spurious far-field rotations, or both. Overall, all results reflect the importance of considering the anisotropy of elastic constants in the materials joined at the interface, where unequal partitioning of elastic fields is found. The corresponding energetics have been quantified and used for rapid computational design of interfaces with tailored misfit dislocation patterns [250, 256]. In particular, the coupled approach with an object kinetic Monte Carlo code has revealed that elastic interactions between radiation-induced point defects and semicoherent interfaces lead to significant increases in interface sink strength, compared to the case with no defect-interface interactions [256]. The original version has also been extended to bilayers of finite thickness terminated with free surfaces [254], layered superlattices with differing layer thicknesses [255], and multilayered MEE solids [262], as well as to semicoherent interfaces with relaxed dislocation patterns [258, 259] and core-spreading dislocation networks [268]. For many complicated lattice structures, the elastic full-field solutions have been compared with atomistic calculations [250, 263], which provide an opportunity for rigorous validation of the anisotropic elasticity theory of interfacial dislocations as well as for collaborations with individuals outside the home laboratory. Recently, a unified formalism for intrinsic dislocation arrays and extrinsic dislocation loops has been developed in Ref. [269], improving the first investigation on the estimation of elastic interactions between both types of defects from Refs. [260, 261]. Regarding the active research topic on the role played by the dislocations in interface-dominated materials, three inspiring routes are currently emerging, which are focused mainly on the thermoelasticity of imperfect interfaces as well as the interactions between dislocations and cracks by use of theoretical (continuously distributed dislocations based) and numerical (finite-element based) approaches.

### 4.2 Perspectives

#### 4.2.1 Thermoelasticity of semicoherent interfaces

* **A. Vattre**, E. Pan, V. Chiaruttini. _Free vibration of fully coupled thermoelastic multilayered composites with imperfect interfaces._ Composite Structures, 113203, 2021.
* **A. Vattre**, E. Pan. _Thermoplasticity of multilayered plates with imperfect interfaces._ International Journal of Engineering Science, 158, 103409, 2021.

In this research line, the thermoelastic response of the most advanced dislocation-based model from the previous chapter 3 is targeted in the near future, including the presence of intrinsic and extrinsic dislocations in multilayered materials subjected to external thermoelastic loads. A first effort has recently been made in Refs. [265, 266], within which the imperfect interfaces are described by phenomenological constitutive relations.
In the former reference, the three-dimensional solutions for time-harmonic temperature and thermoelastic stresses in multilayered anisotropic layers are derived with imperfect boundary conditions at internal interfaces using the extended Stroh formalism combined with nonlocal effects. For illustration, the residual stress fields in graphite fiber-reinforced epoxy matrix composites are investigated in Fig. (4.1). In particular, a unidirectional graphite-epoxy composite with fibers oriented along the \(x_{1}\)-direction (material depicted in grey) and a soft core is considered, where the thermoelastic properties and dimensions of both materials are reported in Ref. [265]. Following Savoia and Reddy [219], the steady-state thermoelastic bending of the three-layered sandwich square plates with \(L_{x}/L_{y}=1\) are subjected to a sinusoidal temperature that rises at both bottom and top surfaces, with \(\hat{T}^{B}=1\) K, and \(\hat{T}^{T}=-1\) K, respectively. Figure (4.1) shows the effects of the ratios of \(L_{x}/H\) and \(l/H\) on various thermoelastic field solutions, by varying the lateral length \(L_{x}\) as well as the nonlocal Eringen-based parameter \(l\), where the entire thickness \(H\) is kept fixed. For thinner plates, the temperature profile tends to a linear distribution through each individual layer, as illustrated in Fig. (4.1a), while nonlinear exponential branches appear in the graphite-epoxy composite plates for larger thicknesses. This trend indicates that when the aspect ratio is small, namely \(L_{x}/H<5\), the standard thin-plate result may be invalid, even though the temperature remains linear (close to zero) in the middle layer. The corresponding curves associated with the heat flux in Fig. (4.1b) are different from the temperature variation along the vertical \(z\)-direction. In particular, the normal heat flux is continuous across the interfaces and tends also to be steeper for thinner systems, while significant gradient emerges at the external surfaces as \(L_{x}/H\) decreases. The in-plane normal stress components \(\sigma_{11}\) and \(\sigma_{22}\) are displayed in Figs. (4.1c-f) for both extreme aspect ratios with further consideration of nonlocal effect. Three ratios for the nonlocal analysis are examined. It is worth noting that with reference to the composite stiff faces, higher in-plane stress levels occur in the direction perpendicular to the fibers. Moreover, due to material property mismatch between the layers, these in-plane normal stresses are discontinuous at both interfaces, with significant discontinuities in \(\sigma_{11}\) when the aspect ratio is small, as shown in Fig. (4.1d). The amplitudes of these discontinuities at internal interfaces are therefore less pronounced for thinner plates, with negligible effect by the nonlocal parameter. However, the nonlocal parameter \(l\) has a significant influence on the stress field for extremely thick plates subjected to thermal loads only, where the nonlocal parameter can completely change the variation trend of the stresses, switching their signs and altering their magnitudes, as depicted by the blue curves in Figs. (4.1d) and (4.1f). #### 4.2.2 Distributed dislocations for periodic networks of cracks [P22]**A. Vattre**. _Kinked and forked crack arrays in anisotropic elastic bimaterials._ Journal of the Mechanics and Physics of Solids, 104744, 2022. In Ref. 
[267], the fracture problem of multiple branched crack arrays in anisotropic bimaterials has recently been formulated by using the linear elasticity theory of lattice dislocations with compact cores described in section 3.7.1. Yet, the general full-field solutions are obtained from the standard technique of continuously distributed dislocations along finite-sized cracks of arbitrary shapes, which are embedded in dissimilar anisotropic half-spaces under far-field stress loading conditions. The bimaterial boundary-value problem leads to a set of coupled integral equations of Cauchy-type that is numerically solved by using the Gauss-Chebyshev quadrature scheme with appropriate boundary conditions for kinked and forked crack arrays. The path-independent \(l_{k}\)-integrals as crack propagation criterion are therefore evaluated for equally-spaced cracks, while the limiting configuration of individual cracks is theoretically described by means of explicit expressions of the local stress intensity factors \(K\) for validation and comparison purposes on several crack geometries. The non-zero, singular and dimensionless stress components resulting from the idealized configurations of infinitely periodic cracks are illustrated in Fig. (4.2), for which the application setups are given in Ref. [267]. Specially, the \(\sigma_{22}^{\text{\tiny{array} cracks}}\) exhibits a small compressive zone along the crack pointing to the upper surface. Figure (4.2) shows the large discontinuities of the in-plane stress component \(\sigma_{11}^{\text{\tiny{array} cracks}}\) across both crack and interface planes as well as the traction-free conditions for \(\sigma_{22}^{\text{\tiny{array} cracks}}\) and \(\sigma_{12}^{\text{\tiny{array} cracks}}\) along the main crack plane that are therefore fully satisfied, as required. The corresponding non-singular elasticity problem (using the core-spreading treatment from section 3.8.1) for interfacial cracks has recently and successfully been addressed in collaboration with Andreas Kleefeld from the University of Applied Sciences Aachen by use of the Tikhonov method for Fredholm integral equations of the first kind. The novel stress field solutions at interfacial crack tip do not exhibit oscillatory singularity induced by mismatching of the dissimilar materials, while the traction-free conditions are completely fulfilled along the discontinuities. In the near future, the influences of anisotropic elasticity, elastic mismatch, applied stress direction, inter-crack spacings and crack length ratios on the predictions from the crack opening displacement, as well as \(l_{k}\)- and \(K\)- based fracture criteria could therefore be examined in the light of different configurations from the single kinked crack case in homogeneous media to the network of closely-spaced interfacial cracks at bimaterial interfaces. #### 4.2.3 Towards a general treatment for (interfaces, dislocations, cracks) [P23]**A. Vattre**, V. Chiaruttini. _Singularity-free theory and adaptive finite element computations of arbitrarily-shaped dislocation loop dynamics in 3D heterogeneous material structures._ Journal of the Mechanics and Physics of Solids, 104954, 2022. The long-standing problem of arbitrarily-shaped dislocation loops in three-dimensional heterogeneous material structures has been addressed by introducing novel singularity-free elastic field solutions as well as developing adaptive finite element computations for dislocation dynamics simulations in Ref. [268]. 
The first framework uses the Stroh formalism in combination with the biperiodic Fourier-transform and dual variable and position techniques to determine the finite-valued Peach-Koehler force acting on curved dislocation loops. On the other hand, the second versatile mixed-element method proposes to capture the driving forces through dissipative energy considerations with domain integrals by means of the virtual extension principle of the surfacial discontinuities. Excellent agreement between theoretical and numerical analyses is illustrated from simple circular shear dislocation loops to prismatic dislocations with complicated simply-connected contours in linear homogeneous isotropic solids and anisotropic elastic multimaterials, which also serves as improved benchmarks for dealing with more realistic boundary-value problems with evolving dislocations. For illustration, the singularity-free Peach-Koehler magnitudes for a prismatic dislocation loop with a complex butterfly-shaped front are presented in Fig. (4.3a), using a given core-spreading radius. The theoretical (numerical) solutions are shown as solid lines (with symbols), while the corresponding driving forces are drawn in pink along the contours in Fig. (4.3b), with and without the two-dimensional shear stress \(\sigma_{12}(x_{1},x_{2},z_{s})\) maps in the background for further comparison. The signed magnitudes of the Peach-Koehler forces are plotted against the polar angle \(\theta\), for which \(\theta=0^{\circ}\) corresponds to the points \(M\) in the schematics. In general, the very good agreement in terms of stresses and forces in sign and magnitude is also demonstrated, although slight deviations in direction are noticeable when the local radius of curvature changes drastically in sign. These discrepancies are mainly due to the different core-spreading schemes that have been appropriately adopted for mathematical convenience in each of the theoretical and numerical formulations.

Figure 4.1: Steady-state thermoelastic bending of a three-layered structure with square plates subjected to a sinusoidal temperature rise at the two external faces. The first terms in the temperature expansion are considered, thus \(m=n=1\). The light grey regions are the unidirectional graphite-epoxy composites with fibers oriented along the \(x_{1}\)-direction, which are bonded by a soft core material. The through-the-thickness distributions for different values of the aspect ratios \(L_{x}/H\) and of the nonlocal parameters \(l/H\) are depicted for (a) the temperature \(T\), (b) the normal heat flux \(q_{3}\), (c-d) the in-plane stresses \(\sigma_{11}\), and (e-f) \(\sigma_{22}\). The standard local case corresponds to the field solutions with \(l=0\).

Figure (4.4a) illustrates a large-scale three-dimensional finite element computation that cannot, to the knowledge of the authors, be achieved by existing numerical approaches in the broader literature, corresponding to the Orowan dislocation-precipitate bypass mechanism in a compressed micropillar of polycrystalline copper. An anisotropic copper polycrystalline micropillar with 80 grains is automatically generated from the intersection of a cubical Voronoi tessellation with a representative pillar specimen, in which a shear dislocation loop with a Burgers vector glides in the \((111)\) slip plane of a specific host grain. The latter lies outside the microstructure, so that the outer grain boundary corresponds to the free surface of the computational sample.
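The driving force plotted along the loop contours in Fig. (4.3) is the Peach-Koehler force. For orientation, a minimal sketch of its pointwise evaluation is given below, using an illustrative local stress state, Burgers vector, and line direction; in Ref. [268] the stress entering this product is the singularity-free full-field solution evaluated along the discretized loop front.

```python
import numpy as np

# Pointwise Peach-Koehler force per unit length, F = (sigma . b) x t.
# The stress, Burgers vector and tangent below are illustrative values only.
tau = 50e6                                       # resolved shear stress (Pa), assumed
sigma = np.array([[0.0, 0.0, tau],
                  [0.0, 0.0, 0.0],
                  [tau, 0.0, 0.0]])              # local Cauchy stress (Pa)
b = np.array([2.5e-10, 0.0, 0.0])                # Burgers vector (m)
t = np.array([0.0, 1.0, 0.0])                    # unit tangent of the loop front

F = np.cross(sigma @ b, t)                       # force per unit length (N/m)
print(F)   # glide force of magnitude tau*|b| lying in the slip plane
```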
A high compressive strain of 7.1% is applied and maintained constant on one external face of the specimen, while the opposite face is blocked. At the grain scale, the Orowan bypass mechanism is described by the presence of the infinitely stiff, also elastically mismatched precipitate of arbitrary shape, for which the elastic constants are fictitiously multiplied by a factor of ten, with impenetrable boundaries and without consideration of cross-slip events. The internal grain boundaries are also considered as impenetrable barriers to dislocation motion, so that the dislocation loop is strictly confined to the host grain. The initial number of degrees of freedom associated with the full mesh is \(\sim 193\)k, while the multiscale problem exhibits three orders of magnitude between the polycrystalline sample length and the representative size of the precipitate. The snapshot in Fig. (4.4b) shows the elastic dislocation/precipitate interaction, and especially the dislocation propagation by bowing around the inclusion as well as the self-coalescence of the dislocation loop once the arms pass the particle in the intermediate configuration. Thus, an Orowan-like dislocation loop is left around the infinitely strong inclusion, providing a new route towards understanding the Bauschinger effect in realistic precipitation-strengthened material structures. The planar propagation of a dislocation loop completely cuts the host grain and also leaves a surface step of the Burgers vector magnitude on the free surface of the micropillar sample, while the slip transmission of the dislocation loops across the neighboring grain boundaries is left for promising future development. Figure (4.4b) summarizes the various stages of the dislocation loop propagation bypassing the inclusion in the polycrystalline copper micropillar, for which the final configuration mesh is composed of \(\sim 1007\)k degrees of freedom. The corresponding animation of the Orowan precipitate bypass mechanism is referred to as "Orowan bypass mechanism in a micropillar", computed in less than 20 hours with 291 adaptive remeshing events with an average discrete time step of 0.46 ns.

Figure 4.2: Contours of non-zero and dimensionless stress field components produced by a network of equally- and closely-spaced forked cracks in an anisotropic bimaterial under traction using the concept of the continuously distributed dislocations.

At first glance and in the current form, the finite-element framework should be considered as a computational tool to carry out calculations with several types of discontinuities, such as grain boundaries, free surfaces, dislocation loops and cracks, in multiphase finite material structures. The main interesting feature of the approach is to unify these discontinuities into a single finite-element entity to revisit the fundamental problems concerning the interactions between dislocation loops and cracks, in particular the emission of dislocations from crack fronts in three dimensions, as well as the interactions between dislocations and stress concentrations at grain boundaries and heterophase interfaces, especially the nucleation and emission of dislocation loops from the internal material boundaries. Although the computational approach undoubtedly opens many perspectives, also with close links to experiments, some extensions can be introduced.
A current limitation is related to the use of a single regularization rule at the dislocation fronts, whether the dislocation loops are located in the core of the grains or near the internal interfaces. A more physics-based rule could be provided to offer a better description of the short-range elastic fields close to the grain boundaries to analyze the transmission of dislocation loops into neighboring grains, thus overcoming the current impermeability conditions. Furthermore, although the current simulations are performed on a workstation, the numerical framework could benefit from the robust iterative and domain decomposition solvers to handle the discretization of several tens of millions of unknowns. By the use of a parallel mesh generation algorithm for robust domain decomposition techniques, high-performance calculations with a hundred dislocation loops are anticipated to characterize standard dislocation microstructures with typical densities of \(10^{12}/10^{14}\) m\({}^{-2}\) in the 1-to-100 micrometer mesoscale range. Finite element calculations with hundreds of millions of degrees of freedom are therefore expected to achieve such numerical experiments for multiple dislocation loops in three-dimensional material structures. These subsequent boundary-value problems should be accompanied by consideration of additional dislocation junctions, such as the Lomer-Cottrell lock, the Hirth lock and the glissile junction as well as the implementation of the dislocation cross-slip mechanism and energetics, which are left for future investigations. In an extrapolation scenario, computations of several thousand dislocation loops on supercomputers could be carried out with the aim of better understanding dislocation-based strain hardening mechanisms in realistic structures at the macroscale. Figure 4.3: Prismatic dislocation loops with complex simply-connected fronts as the (a) butterfly- and (b) skull-shaped contours. The corresponding magnitude of the singularity-free Peach-Koehler forces are computed at \(z_{s}\) and are displayed on the left-hand side with respect to the polar angle \(\theta\), for which \(\theta=0^{\circ}\) is represented by the point \(M\) in the plots of the right-hand side. The direction and amplitude of the driving forces as well as the shear stress component \(\sigma_{12}(x_{1},x_{2},z_{s})\) are depicted for both theoretical and numerical finite element solutions. For the sake of clarity, the Peach-Koehler forces along both dislocation contours are also shown in pink without the stress maps in the background. Figure 4.4: From the initial dislocation loop embedded in a given grain of the polycrystalline copper micropillar with \(\sim 193\)k degrees of freedom in (a) to the various propagation steps followed by the shear dislocation loop in (b), thus leaving residual dislocation edges around the bypassed heterophase precipitate. The final computational mesh involves \(\sim 1007\)k degrees of freedom.
2308.03690
Safe Multimodal Communication in Human-Robot Collaboration
The new industrial settings are characterized by the presence of human and robots that work in close proximity, cooperating in performing the required job. Such a collaboration, however, requires to pay attention to many aspects. Firstly, it is crucial to enable a communication between this two actors that is natural and efficient. Secondly, the robot behavior must always be compliant with the safety regulations, ensuring always a safe collaboration. In this paper, we propose a framework that enables multi-channel communication between humans and robots by leveraging multimodal fusion of voice and gesture commands while always respecting safety regulations. The framework is validated through a comparative experiment, demonstrating that, thanks to multimodal communication, the robot can extract valuable information for performing the required task and additionally, with the safety layer, the robot can scale its speed to ensure the operator's safety.
Davide Ferrari, Andrea Pupa, Alberto Signoretti, Cristian Secchi
2023-08-07T16:08:21Z
http://arxiv.org/abs/2308.03690v1
# Safe Multimodal Communication in Human-Robot Collaboration ###### Abstract The new industrial settings are characterized by the presence of human and robots that work in close proximity, cooperating in performing the required job. Such a collaboration, however, requires to pay attention to many aspects. Firstly, it is crucial to enable a communication between this two actors that is natural and efficient. Secondly, the robot behavior must always be compliant with the safety regulations, ensuring always a safe collaboration. In this paper, we propose a framework that enables multi-channel communication between humans and robots by leveraging multimodal fusion of voice and gesture commands while always respecting safety regulations. The framework is validated through a comparative experiment, demonstrating that, thanks to multimodal communication, the robot can extract valuable information for performing the required task and additionally, with the safety layer, the robot can scale its speed to ensure the operator's safety. Keywords:human-robot communication, multimodal fusion, safety ## 1 Introduction Effective communication between humans and robots is a crucial element in collaborative robotics. As robots become increasingly present in work and home environments, the ability to communicate naturally and efficiently with humans becomes a determining factor for the success of the interaction and the achievement of common goals. According to literature [1][2], human communication is based on the coexistence and fusion of multiple different communicative modes (or channels), leading to the realization of a **multimodal communication model**. These modes can include verbal language, body language, gestures, facial expressions, and even the use of lights or sounds. In the context of collaborative robotics, implementing multimodal human-robot communication refers to the robot's ability to use different sensory modes to perceive, interpret, and generate communicative signals. This allows the robot to acquire more complete and accurate information about the intentions, desires, and emotions of the human it interacts with, facilitating a deeper understanding and an appropriate response to the needs of the human interlocutor. However, multimodal communication alone is not sufficient to ensure accurate understanding and appropriate response in human-robot collaboration (HRC) [3]; this is where multimodal fusion comes into play [4]. **Multimodal fusion** is the process by which information from different sensory modalities is combined and integrated to obtain a coherent and complete representation of the surrounding environment and interactions with humans. This fusion of information allows the robot to benefit from the different perspectives provided by each modality, improving its understanding of human intentions and enabling a more precise and adaptable response. For example let's consider a situation where a collaborative robot is working in tandem with a human operator in a manufacturing company. During the interaction, the robot may use voice recognition to understand verbal instructions from the human operator, but simultaneously it can also monitor the operator's body language and facial expressions to detect any signs of stress or dissatisfaction. 
The fusion of information from these different sensory modalities allows the robot to have a more comprehensive understanding of the operator's intentions and emotions, enhancing its ability to provide an appropriate response and adapt to the needs of the human interlocutor. In [5], a deep learning-based multimodal fusion architecture for robust multimodal HRC manufacturing systems is proposed. Experiments using speech commands, hand motion, and body motion show that the proposed multimodal fusion model outperforms the three unimodal models. Other works [6] have proposed multiple combinations of unimodal input and fusion architectures, focusing solely on improving the robot's communication capabilities, but lacking consideration for the safety of the operator. In conclusion, it is evident that multimodal human-robot communication and multimodal fusion play a central role in collaborative robotics; however, **safety** remains a crucial aspect to consider during human-robot interactions. In particular, [7] introduces a computationally efficient control scheme for safe human-robot interaction based on the Explicit-Reference-Governor formalism, ensuring that the robot can safely work in close proximity to humans. In [8], the authors propose a trajectory planning algorithm to produce safe minimum-time trajectories in a shared workspace with humans, with the addition of a re-planning module that optimally adapts the generated trajectory online to respect the safety limits. In this paper we integrate a multimodal fusion architecture with a two-layer framework, proposed in [9], that plans a trajectory ensuring a collision-free path by monitoring the skeleton of the operator. Our goal is to _develop an architecture that can ensure natural and efficient multimodal human-robot communication while also ensuring safety_ during operations by utilizing communication channels to seek input from the operator on how to resolve potential errors or unforeseen circumstances. This combination of multimodal communication and integrated safety represents a significant step towards advanced and efficient collaborative robotics, where the collaboration between humans and robots occurs harmoniously and safely. We hope that this work can provide a solid foundation for future research and developments in the field of collaborative robotics, leading to new solutions that improve the effectiveness, interaction, and safety of human-robot systems. Thus, the contributions of this paper are: * A Multimodal Fusion Architecture using 3D Gestures and Voice. * The integration of the Fusion with a Safety Layer to ensure that the safety measures are respected. * An experimental validation comparing the safe and unsafe architectures during a pick-and-place job. This paper is organized in the following way: in Section II, we introduce the problem statement. In Sections III and IV, we describe the architecture, focusing on the communication channels and the safety layer. In Section V, we present the experimental validation, providing detailed information about the implementation, and analyze the obtained results. We conclude and outline some ideas for future work in Section VI. ## 2 Problem Statement Consider a scenario of Human-Robot Collaboration in which a 6-dof velocity-controlled manipulator needs to collaborate and communicate with a human operator to fulfill a shared objective.
The collaborative robot can be modeled as: \[\dot{\mathbf{q}}=\mathbf{u}, \tag{1}\] where \(\dot{\mathbf{q}}\in\mathbb{R}^{n}\) represents the joint velocities and \(\mathbf{u}\in\mathbb{R}^{n}\) the controller input. We consider scenarios where the human operator, who is the most expert member, actively leads the collaboration. This is because we want to reproduce a situation in which the robot has to assist the user in performing a series of tasks, by following their instructions and replying to their questions. To this aim, the collaborative scenario is endowed with a set of sensors that enable communication between the two actors, implementing a communication strategy that can guarantee a natural and efficient exchange of information, in order to achieve good performance and approach the results of human-human collaboration. Multimodal fusion allows combining information from multiple communication channels, including voice and gestures, which have been the foundations of human communication since early communicative development [10]. Each communication command can be modeled as a desired final configuration \(\mathbf{q}_{des}(t_{f})=\mathbf{q}_{f}\in\mathbb{R}^{n}\) that the robot has to reach by executing safe trajectories \(\mathbf{q}_{des}(t)\in\mathbb{R}^{n}\) from an initial configuration \(\mathbf{q}_{des}(t_{i})=\mathbf{q}_{i}\in\mathbb{R}^{n}\), ensuring compliance with ISO/TS 15066, which imposes a limit on the maximum speed in the direction of the operator [9]: \[v_{rh}(t)\leq\sqrt{v_{h}(t)^{2}+(a_{max}T_{r})^{2}-2a_{max}(C+Z_{d}+Z_{r}-S_{p}(t))}-a_{max}T_{r}-v_{h}(t), \tag{2}\] where \(v_{rh}(t)\in\mathbb{R}\) represents the velocity of the robot towards the human and \(v_{h}(t)\in\mathbb{R}\) represents the velocity of the human. \(a_{max}\in\mathbb{R}\) and \(T_{r}\in\mathbb{R}\) are the maximum deceleration and the robot reaction time, respectively. In order to ensure compliance with the safety standards while keeping the overall path, we can explicitly isolate the magnitude of the velocity along the trajectory by applying a path-velocity decomposition and acting on the derivative \(\dot{s}\) of the curvilinear abscissa \(s\) that parameterizes the geometric path \(\mathbf{q}_{des}(s(t))\): \[\mathbf{q}_{des}(t)=\mathbf{q}_{des}(s(t)),\qquad t\in\left[t_{i},t_{f}\right], \tag{3}\] \[\dot{\mathbf{q}}_{des}(t)=\mathbf{q}^{\prime}_{des}(s(t))\dot{s},\qquad t\in\left[t_{i},t_{f}\right]. \tag{4}\] In this paper we propose a multimodal fusion architecture that: * Enables natural and efficient communication by utilizing two of the main channels of human communication. * Performs multimodal fusion to combine information from multiple communication modes and extract an overall meaning from them. * Ensures safety compliance during human-robot collaboration. ## 3 Proposed Architecture The proposed architecture, summarized in Figure 1, consists of a **vocal communication** channel and a **gesture recognition** channel, which are then fused by a **multimodal fusion** algorithm, enabling bidirectional and dynamic communication between humans and robots. The information coming from these two channels is first collected by a **time manager**, which is responsible for synchronizing and merging it into a single tensor, and subsequently fused with a **neural classifier** to obtain a coherent and comprehensive representation of the communicated message.
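As a brief aside, returning to the safety constraint of Section 2: the speed limit in Eq. (2) can be evaluated directly from the current human-robot state. The sketch below is a minimal, hypothetical illustration; the roles of \(C\), \(Z_{d}\), \(Z_{r}\) and \(S_{p}(t)\) (intrusion distance, position uncertainties and current separation) are assumptions taken from ISO/TS 15066 rather than definitions given in the paper, and every numerical value is a placeholder.

```python
import math

def iso_velocity_limit(v_h, a_max, T_r, C, Z_d, Z_r, S_p):
    """Evaluate the maximum robot speed towards the human allowed by Eq. (2).

    v_h   : current human speed towards the robot [m/s]
    a_max : maximum robot deceleration [m/s^2]
    T_r   : robot reaction time [s]
    C, Z_d, Z_r : intrusion distance and position uncertainties [m] (assumed meaning)
    S_p   : current human-robot separation distance [m]
    """
    radicand = v_h**2 + (a_max * T_r)**2 - 2.0 * a_max * (C + Z_d + Z_r - S_p)
    if radicand < 0.0:
        return 0.0  # separation already insufficient: the robot must stop
    return max(0.0, math.sqrt(radicand) - a_max * T_r - v_h)

# Illustrative placeholder values only
print(iso_velocity_limit(v_h=0.3, a_max=2.0, T_r=0.1,
                         C=0.2, Z_d=0.05, Z_r=0.05, S_p=1.0))
```

A non-positive result simply means that the only compliant behaviour is to stop, which is the same worst case handled by the velocity-scaling problem of Section 4.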
Additionally, a text-to-speech channel has been added to enable the robot to provide information to the operator, such as feedback on the status of the task or the occurrence of errors or issues. The commands obtained from multimodal fusion are sent to the safety layer, which is responsible for planning and executing a trajectory while ensuring compliance with safety distances from the operator. The vocal communication channel, built around a commercial voice assistant, has been created through a custom application that connects a front-end interface for operator interaction with a back-end running locally to exchange information with the rest of the architecture. Additionally, a Text-To-Speech channel has been added to enable vocal feedback, allowing the robot to communicate with the operator. The gesture communication channel was created using a gesture recognition algorithm based on a neural network classifier. A real-time video stream is captured by a webcam and each frame is processed by a skeletonization algorithm that extracts a series of keypoints containing the spatial coordinates of the skeleton, face, and hands. These keypoints are then encoded into a tensor representing a 3D gesture (a gesture that extends over time) and passed through the neural classifier to categorize the performed gesture. Figure 1: Multimodal Architecture. The classifier consists of an LSTM (Long Short-term Memory) layer [11], followed by several fully connected layers. It provides an output vector of probabilities indicating which of the trained gestures is most likely to have been executed. In addition, we have developed some "raw functions", which are functions that enhance the meaning of certain gestures. For example, the "Point At" gesture requires the direction in which the user is pointing to provide meaningful information. When this gesture is recognized by the neural network, a specific function is triggered to calculate the direction by tracing a line intersecting the shoulder, elbow, and wrist keypoints. ### Multimodal Fusion Multimodal fusion allows for obtaining a command by combining information from multiple unimodal communication channels that must be synchronized and merged together, since different communication modalities have varying operating times and frequencies. For example, if the operator asks the robot "Bring me that object" using a "Point At" gesture, the gestural information is captured almost instantaneously and multiple times (at approximately 15 frames per second), while the vocal information must wait for the operator to complete the sentence before being processed. Therefore, it is necessary to pre-process the information through a **time manager**, whose task is to synchronize and combine it into a tensor that is then passed through a neural network classifier, which performs the multimodal fusion. The time manager, inspired by the recognition lines discussed in Cacace et al. [12], handles these delays and repetitions to combine and synchronize information related to the same command by receiving information from each channel for a specific _time window_ and encoding it into a tensor to be passed to the neural network. The time window opens when the first piece of information is received and has an arbitrarily chosen duration. The tensor synchronized by the time manager is then passed to a _neural classifier_ responsible for multimodal fusion, along with any additional information generated by the _raw functions_ if necessary.
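To make the role of the time manager concrete, the following minimal Python sketch gathers the gesture and voice items that arrive within one temporal window and stacks them into a single tensor for the fusion classifier. The queue interface, the `encode` helper and the fixed 2 s window (the recognition time \(R_{T}\) used in Algorithm 1 later on) are illustrative assumptions, not the actual implementation.

```python
import queue
import time
import numpy as np

WINDOW_S = 2.0  # recognition time R_T, matching Algorithm 1

def collect_window(events: "queue.Queue", encode):
    """Aggregate every gesture/voice item arriving inside one temporal window.

    events : queue of (channel, payload) tuples, channel in {"gesture", "voice"}
             (hypothetical interface)
    encode : maps one item to a fixed-length feature vector (assumption)
    """
    first = events.get()                 # the window opens on the first item
    t_open = time.monotonic()
    rows = [encode(first)]
    while True:
        remaining = WINDOW_S - (time.monotonic() - t_open)
        if remaining <= 0:
            break
        try:
            rows.append(encode(events.get(timeout=remaining)))
        except queue.Empty:
            break
    return np.stack(rows)                # the tensor passed to the neural classifier
```

The stacked array plays the role of the tensor that the fusion classifier turns into a single multimodal command.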
The network, trained with a dataset created by collecting possible outcomes of the task, produces an output that represents a single multimodal command. This command can be an instruction to be given to the robot, a signal that an error occurred or that information is missing to form a complete meaning, or a response to be provided to the operator using the text-to-speech feedback channel. ## 4 Safety Layer Once the message is received and forwarded to the robot, it is crucial for the robot to perform the desired task in a secure and efficient manner. To accomplish this, the overall framework incorporates a well-defined motion planning strategy, called the safety layer, responsible for planning trajectories that are safe for the human operator [9]. The implemented strategy operates in two stages. Initially, it computes a collision-free trajectory \(\mathbf{q}_{des}(t)\), allowing the robot to ideally execute it at maximum speed. Subsequently, it dynamically adjusts the velocity along the path in real-time to ensure safety. This is achieved by employing a path-velocity decomposition technique, see equations (3)-(4), and solving online the following optimization problem: \[\begin{split}&\min_{\alpha}-\alpha,\\ \text{s.t.}&\\ & J_{r_{i}}(\mathbf{q})\mathbf{q}^{\prime}(s)\dot{s}\alpha\leq v_{max_{i}}\qquad\forall i\in\{1,\ldots,n\},\\ &\dot{\mathbf{q}}_{min}\leq\mathbf{q}^{\prime}(s)\dot{s}\alpha\leq\dot{\mathbf{q}}_{max},\\ &\ddot{\mathbf{q}}_{min}\leq\frac{\mathbf{q}^{\prime}(s)\dot{s}\alpha-\dot{\mathbf{q}}}{T_{r}}\leq\ddot{\mathbf{q}}_{max},\\ & 0\leq\alpha\leq 1.\end{split} \tag{5}\] \(\alpha\in\left[0,1\right]\) is the optimization variable and represents the scaling factor. \(J_{r_{i}}(\mathbf{q})\in\mathbb{R}^{1\times n}\) is a _modified Jacobian_ that takes into account only the scalar velocity of the \(i\)-th link towards the human operator. \(v_{max_{i}}\) is the velocity limit imposed by the ISO/TS 15066 [13]. \(\dot{\mathbf{q}}_{min}\in\mathbb{R}^{n}\) and \(\dot{\mathbf{q}}_{max}\in\mathbb{R}^{n}\) are the joint velocity lower bounds and the joint velocity upper bounds, respectively, while \(\ddot{\mathbf{q}}_{min}\in\mathbb{R}^{n}\) and \(\ddot{\mathbf{q}}_{max}\in\mathbb{R}^{n}\) are the acceleration limits. \(\dot{\mathbf{q}}\in\mathbb{R}^{n}\) is the actual robot velocity and \(T_{r}\) is the robot execution time. The goal of the optimization problem (5) is to maximize the scaling factor so that the robot moves at the planned velocity whenever possible, i.e. \(\alpha=1\), corresponding to the maximum speed. However, when the human approaches the robot, the safety standards require the velocity to be decreased until, in the worst case, the robot is stopped. This is guaranteed by the solution \(\alpha=0\). ## 5 Experimental Validation The experimental validation1 was carried out through a comparative experiment, simulating a daily task in a home environment. The objective of the experiment was to assist a person in gathering items from the pantry to prepare a meal, using a collaborative manipulator to perform pick-and-place tasks. Communications were provided through the multimodal fusion architecture, utilizing both vocal and gesture channels to instruct the robot about which object it should pick. The first experiment was conducted without the safety layer, while it was enabled during the second experiment, in order to compare the results obtained and highlight the differences and issues that may arise when disregarding safety regulations.
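Referring back to the scaling problem in Eq. (5): since the only decision variable is the scalar α, every constraint is linear in α and the problem reduces to a very small linear program. The sketch below illustrates that reduction under assumed symmetric joint limits and with hypothetical variable names; it is not the solver used in the experiments.

```python
import numpy as np
from scipy.optimize import linprog

def scale_velocity(dq_planned, dq_prev, J_rh, v_max, dq_lim, ddq_lim, T_r):
    """Largest alpha in [0, 1] such that alpha * dq_planned satisfies Eq. (5).

    dq_planned : q'(s) * s_dot, nominal joint velocity along the planned path
    dq_prev    : current joint velocity (enters the acceleration constraint)
    J_rh       : modified Jacobian rows, one per monitored link, projecting the
                 link velocity towards the human
    v_max      : ISO/TS 15066 limits, one per monitored link (from Eq. (2))
    dq_lim, ddq_lim : symmetric joint velocity / acceleration limits (assumption)
    """
    # Every constraint has the form a * alpha <= b for the scalar alpha.
    a = np.concatenate([J_rh @ dq_planned,                    # speed towards the human
                        dq_planned, -dq_planned,              # +/- joint velocity bounds
                        dq_planned / T_r, -dq_planned / T_r]) # acceleration bounds
    b = np.concatenate([v_max,
                        dq_lim, dq_lim,
                        ddq_lim + dq_prev / T_r, ddq_lim - dq_prev / T_r])
    res = linprog(c=[-1.0], A_ub=a.reshape(-1, 1), b_ub=b, bounds=[(0.0, 1.0)])
    return res.x[0] if res.success else 0.0  # infeasible: stop the robot
```

If no feasible scaling exists the sketch returns α = 0, matching the worst-case behaviour of stopping the robot described above.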
Footnote 1: Video of experiments available at [https://doi.org/10.5281/zenodo.8083948](https://doi.org/10.5281/zenodo.8083948) [14] ### Implementation Details The architecture was built using ROS [15], dividing the various components into independent nodes in order to ensure the modularity of the architecture and make it compatible with multiple communication channels and multimodal fusion algorithms. ### Voice Communication Channel The voice communication channel has been built by developing an Amazon Alexa custom skill, which is an application that enables the voice assistant to perform customized tasks or provide information on specific topics. The skill consists of a _front-end_ that contains a set of customized _intents_ (request-response structures) that are connected to a _back-end_ responsible for gathering information or performing the requested tasks. The back-end was developed to run locally and integrated within ROS, using _Flask Ask_, a Flask [16] extension that allows the creation of skills in Python, and _ngrok_ to expose the back-end and connect it to the front-end through HTTPS tunneling. The Text-To-Speech channel has been created by integrating _Node-RED_ with ROS using the 'node-red-contrib-ros' node package. These nodes leverage the JavaScript library 'roslibjs' and connect to a ROS bridge through a WebSocket. The Text-to-Speech node enables the conversion of a text string into speech output, which is played on the Echo Dot device. Figure 2: Voice Communication Channel. ### Gesture Recognition Channel The gesture recognition channel was created using the Holistic landmarks detection solution API from MediaPipe [17], a framework developed by Google that provides a suite of libraries and tools for applying artificial intelligence (AI) and machine learning (ML) techniques. Holistic combines components of pose, face, and hand landmarkers to create a comprehensive landmarker for the human body on a continuous stream of images in real-time. The landmarks extracted from each image are encoded into a tensor to represent the 3D gesture and then passed to a neural network classifier, composed of an LSTM (Long Short-term Memory) layer followed by several fully connected layers, to classify gestures based on the landmarks. The neural network model was trained with a dataset of communicative gestures for human-robot collaboration, based on the work by Tan et al. [18]. For each gesture, a series of videos were recorded and processed using MediaPipe Holistic to extract frame-by-frame landmarks, which were then encoded into matrices or tensors representing all the recordings of that specific gesture. These matrices and tensors were used for training the neural network model. Figure 3: MediaPipe Holistic. Figure 4: Gesture Recognition. ### Multimodal Fusion The multimodal fusion is implemented using a time manager, which aggregates and synchronizes the flow of information into a tensor that represents the unimodal information pertaining to the same command, and a classifier neural network that takes the aforementioned tensor as input and outputs the fused multimodal command. As described in Algorithm 1, upon the arrival of new information on either the voice or gesture channel, a new temporal window \(\mathcal{W}\) is opened, and the information is encoded into a tensor \(\mathcal{T}\). During the duration of the temporal window, any new received information is appended to the tensor.
Afterwards, the tensor is passed through a pre-trained neural network \(\mathcal{N}\), which outputs the fused multimodal command \(\mathcal{M}\). ``` Input: Vectors \(G\) and \(V\) containing Gesture and Voice information Output: Multimodal Command \(\mathcal{M}\) \(\mathcal{T}\leftarrow\) empty tensor Recognition Time: \(R_{T}\leftarrow 2s\) while new value of \(G\) or \(V\) is received do Open a Temporal Window \(\mathcal{W}\) \(\mathcal{T}\leftarrow\) Received Value (\(G\) or \(V\)) while \(\mathcal{W}<R_{T}\) do if new gesture/voice \(G\) or \(V\) is received then Append new \(G\) or \(V\) to tensor \(\mathcal{T}\) endif endwhile Pass tensor \(\mathcal{T}\) through Neural Classifier \(\mathcal{N}\) \(\rightarrow\) Multimodal Command \(\mathcal{M}\) Send the Multimodal Command \(\mathcal{M}\) to the Safety Layer endwhile ``` **Algorithm 1** Multimodal Fusion Algorithm ### Safety Layer The safety layer is implemented using the _fmincon_ Matlab solver and works at the frequency of the robot controller, i.e. 500 Hz for the UR10e which has been used in this experiment. The monitoring of the human operator in the scene is achieved using 6 Optitrack Prime\({}^{x}\) cameras with the Motive software, which works at 240 Hz. ### Experiment Description and Results The experiment consists of a series of pick-and-place tasks where the operator can request an object by describing it and indicating the corresponding area either through the voice channel alone (e.g., _"Fetch me the pasta in the right area"_) or by combining the vocal command _"bring me the pasta"_ with the gesture command _"Point-At"_ indicating the area. Figure 5 shows the operator requesting an object while indicating the corresponding area. By using the "raw functions", it is possible to calculate the direction in which the operator is pointing by interpolating the shoulder, elbow, and wrist landmarks. Consequently, this allows the multimodal fusion to obtain a complete command such as _"take object x in the right/left area"_. Multimodal Fusion combines the received information and sends the command to the safety layer, which is responsible for planning the trajectory and monitoring the operator to perform movements that are always safe and compliant with the ISO standards. In Figure 6, we can see that during the execution of the trajectory, the maximum speed of the manipulator is always below the speed limit required by the ISO. This speed limit is calculated based on the minimum distance from the operator and the direction of the robot's movement. If the robot is moving towards the human, the speed will be scaled down to ensure safety. However, if the robot is moving in a direction that does not pose any risk, it can operate at full speed. Figure 5: Multimodal Fusion Object Request. On the contrary, when the safety layer is deactivated, the robot is not aware of the operator's position and performs unsafe trajectories that force the user to retract their arm to avoid a collision (Figure 7). In more critical cases where the collision cannot be avoided, the robot triggers an emergency stop, requiring the operator to manually reinitialize the control algorithm. Figure 6: Maximum Allowed Velocity in relation to ISO Velocity Limit. Figure 7: Unsafe Experiment with Safety Layer Deactivated. ## 6 Conclusions This paper presented a multimodal communication architecture that integrates voice and gestures to achieve a simpler and more natural interaction with the robot, with particular attention to ensuring safety during collaboration.
To validate its effectiveness, we conducted a comparative experiment, with and without the safety layer, simulating a daily task in a home environment. This experiment confirmed the ability of the architecture to correctly fuse the operator's communications and successfully complete all the required tasks. Furthermore, we highlighted how the lack of attention to safety regulations can jeopardize the operator's safety during close collaboration with a robot. Our architecture, prioritizing safety at all times, enables a simple and natural interaction with the robot, avoiding situations of danger for the operator. As further extensions of this architecture, we are evaluating the idea of adding more communication channels to improve and expand the ability of the system to interact with the operator. We also plan to leverage the communication channels to collaborate with the safety layer in resolving any errors, such as obstacles in the path or alerts of critical situations where the robot needs to move into the area occupied by the operator. This approach aims to increase the complexity of the exchanged information, bringing us closer to achieving communication that is more similar to that between human beings.
2306.04453
A Bijection between Unbalanced Dyck Path and NE Lattice Path
Lattice paths are important tools for solving some combinatorial identities. This note gives a new bijection between unbalanced Dyck paths (paths that never reach the diagonal of the lattice) and NE (North and East only) lattice paths from (0,0) to (n,n) by several partial reflections.
Yannan Qian
2023-06-06T15:07:22Z
http://arxiv.org/abs/2306.04453v2
**A Bijection between Unbalanced Dyck Path and NE Lattice Path** **Yannan Qian** University of Exeter, Penryn Campus, Penryn, TR10 9FE, UK Nanjing University of Information Science and Technology, Ningliu Road 219, Nanjing, 211544, China June 6, 2023 **Abstract:** Lattice paths are important tools for solving some combinatorial identities. This note gives a new bijection between unbalanced Dyck paths (paths that never reach the diagonal of the lattice) and NE (North and East only) lattice paths from (0,0) to (n,n) by several partial reflections. ## Introduction The combinatorial identity \(\sum_{i=0}^{n}\binom{2i}{i}\binom{2n-2i}{n-i}=2^{2n}\) can be proved easily by matching up the coefficients of generating functions [1]. However, direct combinatorial proofs are not so obvious. According to Stanley [1], the first combinatorial proof was given by G. Hajos, 1830s. Sved [2] gave a lattice-path view of this problem in 1984. The basic idea is to separate an NE lattice path of \(2n\) steps (there are \(2^{2n}\) such paths) into an NE lattice path from (0,0) to (i, i), of which there are \(\binom{2i}{i}\), and a path of length \(2n-2i\) from (i, i) that never reaches the diagonal \(y=x\). Such paths are called unbalanced Dyck paths in this note because they have different numbers of N-steps and E-steps. The difficulty is to prove that there are \(\binom{2k}{k}\) unbalanced Dyck paths of length \(2k\). A natural thought is that there are some bijections between unbalanced Dyck paths and NE lattice paths. Sved [2] gave a bijection by cutting and replacing the paths. This note gives another bijection by several partial reflections. ## Notations For convenience, set the diagonal as a new \(y\) axis. Then the N-step becomes (1,1) and the E-step becomes (1,-1). Call them upstep and downstep. Now a lattice path should end on the line \(y=0\). An unbalanced Dyck path will not touch the line \(y=0\) except at (0,0).
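As a quick, purely illustrative sanity check of the counts used above, one can enumerate all \(\pm 1\) step sequences directly. The short Python sketch below confirms that exactly \(\binom{2k}{k}\) walks of length \(2k\) never return to the line \(y=0\), and verifies the identity \(\sum_{i=0}^{n}\binom{2i}{i}\binom{2n-2i}{n-i}=2^{2n}\) for a small value of \(n\).

```python
from itertools import product
from math import comb

def never_returns(steps):
    """True if the walk never comes back to height 0 after the start."""
    h = 0
    for s in steps:
        h += s
        if h == 0:
            return False
    return True

# Unbalanced Dyck paths of length 2k are counted by C(2k, k).
for k in range(1, 7):
    unbalanced = sum(never_returns(w) for w in product((1, -1), repeat=2 * k))
    assert unbalanced == comb(2 * k, k)

# The identity obtained from the decomposition at the last diagonal point.
n = 6
assert sum(comb(2 * i, i) * comb(2 * n - 2 * i, n - i)
           for i in range(n + 1)) == 2 ** (2 * n)
```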
2308.12945
A Thom Spectrum Model for $C_2$-Integral Brown--Gitler Spectra
A Thom spectrum model for a $C_2$-equivariant analogue of integral Brown--Gitler spectra is established and shown to have a multiplicative property. The $C_2$-equivariant spectra constructed enjoy properties analogous to classical nonequivariant integral Brown--Gitler spectra and thus may prove useful for producing $C_2$-equivariant analogues of splittings of $BP \langle 1 \rangle \wedge BP \langle 1 \rangle$ and $bo \wedge bo.$
Guchuan Li, Sarah Petersen, Elizabeth Ellen Tatum
2023-08-24T17:34:23Z
http://arxiv.org/abs/2308.12945v2
# A Thom spectrum model for \(C_{2}\)-integral Brown-Gitler spectra ###### Abstract. A Thom spectrum model for a \(C_{2}\)-equivariant analogue of integral Brown-Gitler spectra is established and shown to have a multiplicative property. The \(C_{2}\)-equivariant spectra constructed enjoy properties analogous to classical nonequivariant integral Brown-Gitler spectra and thus may prove useful for producing splittings of \(BP\langle 1\rangle\wedge BP\langle 1\rangle\) and \(bo\wedge bo\) in the \(C_{2}\)-equivariant setting. ###### Contents * 1 Introduction * 2 Statement of Theorems * 3 Equivariant Preliminaries * 4 A Thom spectrum model for \(H\underline{\mathbb{Z}}_{2}\) * 5 A product fibration * 6 Proof of Main Theorem ## 1. Introduction In the 1970's, Brown and Gitler constructed a family of spectra realizing certain sub-comodules of the dual Steenrod algebra at the prime \(p=2\)[1]. Brown-Gitler's motivation for constructing the original spectra was to study immersions of manifolds. These spectra were indeed useful for that purpose, in particular allowing Cohen to prove the immersion conjecture for differentiable manifolds [10]. They have also been useful for studying maps out of classifying spaces. For example, they were used by Miller to prove the Sullivan Conjecture [14]. Brown-Gitler used an obstruction-theoretic approach, first constructing an algebraic resolution, then constructing a tower of spectra realizing that resolution, and Introduction The \(C_{2}\)-equivariant setting of the \(C_{2}\)-equivariant setting is the \(C_{2} zero by endowing both \(S^{1}\) and the bundle with trivial action. Note we follow the convention that \(\mathbb{Z}_{(2)}\) denotes the 2-local integers and \(\mathbb{Z}_{2}\) the 2-adic integers. In [10], Hahn-Wilson generalize the result of Behrens-Wilson. They show the \(G\)-equivariant mod \(p\) Eilenberg-MacLane spectrum arises as an equivariant Thom spectrum for any finite, \(p\)-power cyclic group \(G\). Hahn-Wilson also establish a Thom spectrum model for \(H\underline{\mathbb{Z}}_{(p)}\), building a base space from the \(G\)-space \(\Omega^{\lambda}S^{\lambda+1}\), where \(\lambda\) denotes the standard representation of \(G\) on the complex numbers with a generator acting by \(e^{2\pi i/p^{n}}.\) They observe that this space carries the arity filtration from the \(E_{\lambda}\)-operad and thus one could define equivariant Brown-Gitler spectra as the spectra coming from this filtration. This is the perspective we take and extend to the integral setting in this paper. Non-equivariantly, two conditions characterize Brown-Gitler spectra. First, these spectra realize certain sub-comodules of the dual Steenrod algebra. Additionally, they satisfy a surjectivity condition coming from the geometry involved in Brown-Gitler's original construction [1]. While the Thom spectrum model does satisfy this additional condition in the non-equivariant setting, it is not clear, neither from the geometry nor the Thom spectra filtration models, how to generalize this second condition in the equivariant setting. We discuss this in Remark 6.2, taking the philosophy proposed by Hahn-Wilson: one can simply define equivariant Brown-Gitler spectra using filtrations on the Thom spectra models and see if these spectra are computationally useful. Several factors motivate our choice to study \(C_{2}\)-equivariant, as opposed to \(C_{p}\)-equivariant for all primes \(p\), spectra. 
First, \(RO(C_{2})\)-graded \(H\underline{\mathbb{F}}_{2}\)-homology is free over its coefficients in important examples such as the dual Steenrod algebra \(H\underline{\mathbb{F}}_{2\star}H\underline{\mathbb{F}}_{2}\)[11] and \(H\underline{\mathbb{F}}_{2\star}H\underline{\mathbb{Z}}\)[12]. This is helpful because integral Brown-Gitler spectra topologically realize certain sub-comodules of \(H\underline{\mathbb{F}}_{2\star}H\underline{\mathbb{Z}}\). In contrast, \(RO(C_{p})\)-graded \(H\underline{\mathbb{F}}_{p}\)-homology is often not free over its coefficients when \(p>2\). In particular, the \(C_{p}\)-dual Steenrod algebra is not free [13], which suggests the odd primary \(C_{p}\)-equivariant story may be more complicated, requiring techniques beyond those developed in this paper. Further, when \(G=C_{2}\), there is a non-obvious \(C_{2}\)-equivalence of spaces \(\Omega^{\lambda}S^{\lambda+1}\simeq\Omega^{\rho}S^{\rho+1}\)[10]. Thus there are filtrations of \(\Omega^{\lambda}S^{\lambda+1}\simeq\Omega^{\rho}S^{\rho+1}\) by both the \(E_{\lambda}\) and \(E_{\rho}\) operads, leading to at least two definitions of Brown-Gitler spectra. In this paper, we choose to work with the \(E_{\rho}\)-filtration because it is the most computationally accessible. However, in [14], Levy unified and extended the work of Behrens, Hahn, and Wilson constructing the Eilenberg-MacLane spectrum \(H\underline{\mathbb{F}}_{p}.\) Studying how the \(E_{\lambda}\) and \(E_{\rho}\) filtrations arise from faithful representations of \(p\)-groups could unify the study of these two definitions of Brown-Gitler spectra, and may prove an interesting direction for future work. ## 2. Statement of Theorems Our main result is a \(C_{2}\)-equivariant analogue of [15, Theorem 1.5(i), (ii)]. To state this theorem precisely, we recall \(H\underline{\mathbb{F}}_{2}\) has distinguished elements \(a\in H\underline{\mathbb{F}}_{2\{-\sigma\}}\) and \(u\in H\underline{\mathbb{F}}_{2\{1-\sigma\}}\), where \(\sigma\) is the one-dimensional sign representation of \(C_{2}\). We define a weight filtration on \[H\underline{\mathbb{F}}_{2\centerdot},H\underline{\mathbb{Z}}\cong H\underline{ \mathbb{F}}_{2\centerdot}[\bar{\xi}_{1},\bar{\xi}_{2},\bar{\xi}_{3},\cdots,c( \tau_{1}),c(\tau_{2}),\cdots]/(c(\tau_{i}^{2})=ac(\tau_{i+1})+u\xi_{i+1}^{-}),\] where \(|\tau_{j}|=2^{j}\rho-\sigma\), \(|\bar{\xi}_{i}|=(2^{i}-1)\rho\), and \(c\) denotes the antiautomorphism of the dual Steenrod algebra \(\mathcal{A}\cong\pi_{\centerdot}H\underline{\mathbb{F}}_{2}\wedge H\underline{ \mathbb{F}}_{2}\) (the computation of \(H\underline{\mathbb{F}}_{2\centerdot}H\underline{\mathbb{Z}}\) follows from [1, Theorem 3.8]). 
The weight filtration is defined by \[\operatorname{wt}(c(\tau_{j}))=\operatorname{wt}(\bar{\xi}_{j})=2^{j},\qquad \operatorname{wt}(xy)=\operatorname{wt}(x)+\operatorname{wt}(y).\] **Theorem** (Theorem 6.1).: _For \(n>0,\) there is an \(H\underline{\mathbb{F}}_{2}\)-complete spectrum \(B_{1}(n)\) and a map_ \[B_{1}(n)\xrightarrow{g}H\underline{\mathbb{F}}_{2}\] _such that_ * \(g_{\centerdot}\) _sends_ \(H\underline{\mathbb{F}}_{2\centerdot}(B_{1}(n))\) _isomorphically onto the span of monomials of weight_ \(\leq 2n;\)__ * _there are pairings_ \[B_{1}(m)\wedge B_{1}(n)\to B_{1}(m+n)\] _whose homology homomorphism is compatible with the multiplication in_ \(H\underline{\mathbb{F}}_{2\centerdot}(H\underline{\mathbb{Z}}_{2}).\)__ The spectra \(B_{1}(n)\) are \(C_{2}\)-equivariant analogues of Cohen, Davis, Goerss and Mahowald's integral Brown-Gitler spectra in the sense that they realize certain sub-comodules of \(H\underline{\mathbb{F}}_{2\centerdot}H\underline{\mathbb{Z}}.\) This result follows from the more technical Theorem 5.1, which is a \(C_{2}\)-equivariant analogue of [1, Theorem 1.3]. To state Theorem 5.1, we introduce both an increasing filtration of the space \(\Omega^{\rho}S^{\rho+1}\) and a homotopy fiber sequence. Work of Rourke-Sanderson [10] shows the space \(\Omega^{\rho}S^{\rho+1}\) admits an increasing filtration by spaces \[F_{n}\Omega^{\rho}S^{\rho+1}\simeq\copro_{0\leq k<n}C_{k}(\rho)\underset{ \Sigma_{k}}{\times}(S^{1})^{\times k}/\sim, \tag{2.1}\] where \[C_{k}(\rho)=\{m_{1},m_{2},\cdots,m_{k}|m_{i}\neq m_{j}\text{ if }i\neq j\} \subset\rho^{k}\] is the configuration space of \(k\) ordered points in the \(C_{2}\)-regular representation \(\rho\). Note when \(k=0\), this gives the base point \(x_{0}.\) If \(x_{n}=x_{0},\) the relation \(\sim\) identifies \[(m_{1},\cdots,m_{n};x_{1},\cdots,x_{n})\sim(m_{1},\cdots,m_{n-1};x_{1},\cdots, x_{n-1}).\] Let \(X_{2}\) denote the Bousfield localization of \(X\) with respect to \(H\underline{\mathbb{F}}_{2}\) and let \(\mathcal{F}_{n}=(F_{n}\Omega^{\rho}S^{\rho+1})_{2}.\) Then there are product maps \[\mathcal{F}_{m}\times\mathcal{F}_{n}\xrightarrow{\mu}\mathcal{F}_{m+n}\] induced by the corresponding maps for the filtration spaces and the fact that localization preserves finite products. Define \(A_{n}\) by the homotopy fiber sequence \[A_{n}\to\mathcal{F}_{2n+1}\to S^{1}_{2} \tag{2.2}\] where the second map is the localization of the composite \[F_{2n+1}\Omega^{\rho}S^{\rho+1}\to\Omega^{\rho}S^{\rho+1}\to S^{1}.\] **Theorem** (Theorem 5.1).: _The fiber sequence (2.2) is equivalent to a product fibration. Indeed, there is a \(C_{2}\)-equivariant map \(A_{n}\stackrel{{\phi}}{{\to}}\mathcal{F}_{2n}\) and a commutative diagram of fibrations_ _which is an equivalence on total spaces and on fibers._ Our argument for deducing Theorem 6.1 from Theorem 5.1 follows that of Cohen-Davis-Goerss-Mahowald and thus relies on a Thom spectrum construction of \(H\underline{\mathbb{Z}}_{2}\) with a \(2\)-complete base space (Theorem 4.1). **Remark 2.3**.: Hahn-Wilson [10] establish a construction of \(H\underline{\mathbb{Z}}_{(2)}\) as a \(C_{2}\)-Thom spectrum. However, they observe this construction does not yield \(H\underline{\mathbb{Z}}_{(p)}\) as a \(C_{2}\)-spectrum for other primes \(p\), and thus does not readily generalize to a full integral construction. 
In Section 4, we extend Hahn-Wilson's arguments to construct \(H\underline{\mathbb{Z}}_{2}\) as a \(C_{2}\)-equivariant Thom spectrum with a \(2\)-complete base space, resulting in Theorem 4.1, an equivariant analogue of [11, Theorem 1], which was originally proposed by Mahowald in the nonequivariant setting. To state Theorem 4.1, we require the following maps. Consider the \(\rho\)-loops of the unit map \(S^{\rho+1}\to K(\mathbb{Z},\rho+1).\) Composing with the adjoint to \(-1\in\pi_{0}^{C_{2}}(S_{2}^{0})\) we get a map \[\Omega^{\rho}S^{\rho+1}\to\Omega^{\rho}K(\underline{\mathbb{Z}},\rho+1)\to BGL _{1}(S_{2}^{0}).\] Note \(\Omega^{\rho}K(\underline{\mathbb{Z}},\rho+1)\simeq S^{1}\) with trivial \(C_{2}\)-action. Let \(\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle\) denote the homotopy fiber of the map \(\Omega^{\rho}S^{\rho+1}\to S^{1}\) and consider the composition \[\mu:\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle\to\Omega^{\rho}S^{\rho+1}\to S ^{1}\to BGL_{1}(S_{2}^{0}).\] **Theorem** (4.1).: _There is an equivalence of \(C_{2}\)-spectra_ \[(\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle_{2})^{\mu}\to H\underline{ \mathbb{Z}}_{2}.\] ### Acknowledgments The authors would like to thank to Agnes Beaudry, Tobias Barthel, Mark Behrens, David Chan, Bert Guillou, Paul Goerss, and Jeremy Hahn for enlightening conversations. The first and second authors would also like to thank the Max Planck Institute for Mathematics in Bonn for its hospitality and financial support. The third author thanks the Hausdorff Institute in Bonn for its hospitality and financial support, as well as the Knut and Alice Wallenberg Foundation for financial support. All three authors are grateful to the Hausdorff Institute in Bonn for its hospitality during the Spectral Methods in Algebra, Geometry, and Topology trimester program in Fall of 2022, funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy- EXC-2047/1-390685813. This material is based upon work supported by the National Science Foundation under Grant No. DMS 2135884. ### Structure of Argument We extend the arguments of Cohen-Davis-Goerss-Mahowald to the \(C_{2}\)-equivariant setting. To make this extension clear, we first recall Cohen-Davis-Goerss-Mahowald's nonequivariant argument, then outline the structure of our equivariant extension. In [12], Cohen-Davis-Goerss-Mahowald make a Thom spectrum model for integral Brown-Gitler spectra precise by explicitly defining the base space. Specifically, they consider the space \(\Omega^{2}S^{3}\langle 3\rangle,\) the double loop space of the homotopy fiber of the unit map \(S^{3}\to K(\mathbb{Z},3),\) and show that after localization with respect to mod \(p\) homology, a weight filtration on the homology of \(\Omega^{2}S^{3}\langle 3\rangle\) is in fact induced by an actual filtration of the space \(\Omega^{2}S^{3}\langle 3\rangle.\) The filtration pieces of the space \(\Omega^{2}S^{3}\langle 3\rangle_{2}\) are the base spaces in the Thom spectrum model for integral Brown-Gitler spectra. 
To extend Cohen-Davis-Goerss-Mahowald's nonequivariant argument to the \(C_{2}\)-equivariant setting, we localize with respect to \(H\underline{\mathbb{E}}_{2}\)-homology, and then show that a weight filtration on the homology of \(\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle\) is in fact induced by an actual filtration of the space \(\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle.\) This requires a model of \(H\underline{\mathbb{Z}}_{2}\) as a Thom spectrum with 2-complete base space \(\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle_{2}.\) We provide equivariant preliminaries in Section 3. We construct \(H\underline{\mathbb{Z}}_{2}\) as a \(C_{2}\)-equivariant Thom spectrum with 2-complete base space (Theorem 4.1) in Section 4. In Section 5, we extend Cohen-Davis-Goerss-Mahowald's technical argument making the filtration of the base space rigorous. This culminates in the proof of Theorem 5.1, from which our main result concerning a Thom spectrum model for \(C_{2}\)-integral Brown-Gitler spectra (Theorem 6.1) immediately follows. ## 3. Equivariant Preliminaries As in [1, Section 4], we will make use of an identification of the \(C_{2}\)-fixed points of the space \(\Omega^{\rho}S^{\rho+1}.\) Consider the cofiber sequence \[C_{2+}\to S^{0}\hookrightarrow S^{\sigma}.\] Mapping out of this cofiber sequence gives a fiber sequence \[\Omega N^{\times}\Omega S^{\rho+1}\to\Omega^{\rho}S^{\rho+1}\to\Omega S^{\rho+ 1}\xrightarrow{\Delta}N^{\times}\Omega S^{\rho+1},\] where \(N^{\times}X:=\operatorname{Map}(C_{2},X)=X\underset{C_{2}}{\times}X\) is the norm with respect to Cartesian product (i.e. the coinduced space). On taking fixed points, we get a fiber sequence \[\Omega^{2}S^{3}\xrightarrow{}(\Omega^{\rho}S^{\rho+1})^{C_{2}}\to\Omega S^{2 }\xrightarrow{\operatorname{null}}\Omega S^{3}.\] In particular, there is an equivalence \[(\Omega^{\rho}S^{\rho+1})^{C_{2}}\simeq\Omega S^{2}\times\Omega^{2}S^{3}. \tag{3.1}\] Behrens-Wilson [1] also established an additive isomorphism \[H\underline{\mathbb{F}}_{2\centerdot\centerdot\centerdot}\Omega^{\rho}S^{\rho+1} \cong H\underline{\mathbb{F}}_{2\centerdot\centerdot}\otimes E[t_{0},t_{1}, \cdots]\otimes P[e_{1},e_{2},\cdots] \tag{3.2}\] where \[|t_{i}| =2^{i}\rho-\sigma\] \[|e_{i}| =(2^{i}-1)\rho.\] We define a weight on the monomials in \(H\underline{\mathbb{F}}_{2\centerdot\centerdot\centerdot}(\Omega^{\rho}S^{ \rho+1})\) by \[\operatorname{wt}(t_{j})=\operatorname{wt}(e_{j})=2^{j},\qquad \operatorname{wt}(ab)=\operatorname{wt}(a)+\operatorname{wt}(b).\] and recall the space \(\Omega^{\rho}S^{\rho+1}\) admits an increasing filtration by spaces \[F_{n}\Omega^{\rho}S^{\rho+1}\simeq\coprod_{0\leqslant k<n}C_{k}(\rho)\underset {\Sigma_{k}}{\times}(S^{1})^{\times k}/\sim,\] where the relation is defined in equation (2.1). This filtration is such that \(H\underline{\mathbb{F}}_{2\centerdot\centerdot}(F_{n}\Omega^{\rho}S^{\rho+1})\) is the span of monomials of weight \(\leqslant n\). 
**Remark 3.3**.: To verify the claim that \(H\underline{\mathbb{F}}_{2\centerdot\centerdot}(F_{n}\Omega^{\rho}S^{\rho+1})\) is the span of monomials of weight \(\leqslant n\), observe that there is a \(C_{2}\)-equivariant Snaith splitting [1, Chapter VII Theorem 5.7] and inclusions \[\Sigma_{+}^{\infty}\Omega^{\rho}S^{\rho+1}\simeq\bigvee\mathcal{E }_{n}^{+}\underset{\Sigma_{n}}{\times}(S^{1})^{\wedge n}\] \[\big{\uparrow}\] \[\Sigma_{+}^{\infty}\mathcal{E}_{n}\underset{\Sigma_{n}}{\times}(S ^{1})^{\times n}\] where \(\mathcal{E}_{\bullet}\) is the little \(2\)-disks operad in the \(C_{2}\)-regular representation \(\rho\). We will consider the cases where \(n=2,3,4\). The remaining cases are similar. Note that \(\mathcal{E}_{n}\simeq C_{n}(\rho)\). The \(\rho\) loops space structure means we have a diagram Using the notation and Dyer-Lashof operations defined in [1], the map \[H\underline{\mathbb{F}}_{2\centerdot\centerdot}S^{1}\underset{\Sigma_{2}}{ \times}\Omega^{\rho}S^{\rho+1}\times\Omega^{\rho}S^{\rho+1}\to H\underline{ \mathbb{F}}_{2\centerdot\centerdot}\Omega^{\rho}S^{\rho+1}\] sends \[\iota\otimes t_{0}\otimes t_{0} \mapsto Q^{\rho}t_{0}=t_{1}\] \[*\otimes t_{0}\otimes t_{0} \mapsto n(x_{1})=e_{1}\] \[*\otimes*\otimes t_{0} \mapsto t_{0}.\] For degree reasons, we pick up no additional generators when \(n=3.\) To complete the statement for \(n=4,\) consider the diagram Recall \(\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle\) denotes the homotopy fiber of the map \(\Omega^{\rho}S^{\rho+1}\xrightarrow{r}S^{1},\) the \(\rho\)-loops of the unit map \(S^{\rho+1}\to K(\underline{\mathbb{Z}},\rho+1),\) so \[\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle\to\Omega^{\rho}S^{\rho+1} \xrightarrow{r}S^{1}\] is a fiber sequence. The fibration splits if there is an epimorphism \[\pi_{1}^{C_{2}}\Omega^{\rho}S^{\rho+1}\to\pi_{1}^{C_{2}}S^{1}.\] **Lemma 3.4**.: _There is an epimorphism \(\pi_{1}^{C_{2}}\Omega^{\rho}S^{\rho+1}\to\pi_{1}^{C_{2}}S^{1}.\) Hence, \(\Omega^{\rho}S^{\rho+1}\simeq S^{1}\times\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle\) and \(H\underline{\mathbb{E}}_{2\star}(\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle) \subseteq H\underline{\mathbb{E}}_{2\star}(\Omega^{\rho}S^{\rho+1})\) is the span of monomials of weight divisible by 2._ Proof.: Stably, \(\underline{\pi}_{1}(\Omega^{\rho}S^{\rho+1})\cong\underline{\pi}_{\varphi+1}S ^{\rho+1}\) is the Mackey functor The induced map \(\underline{\pi}_{1}(r):\underline{\pi}_{1}^{s}\Omega^{\rho}S^{\rho+1}\to \underline{\pi}_{1}K(\mathbb{Z},1)\) must be an epimorphism since the diagram of Mackey functors commutes. Evaluation at the \(C_{2}/C_{2}\) level gives an epimorphism \[\pi_{1}^{C_{2}}\Omega^{\rho}S^{\rho+1}\to\pi_{1}^{C_{2}}S^{1}.\] ### Thom spectra of stable spherical fibrations We describe a Lewis-May Thom spectrum functor for \(p\)-complete stable spherical fibrations \(\mu:X\to BF_{p}\) (see [1, SS3.4], for instance, for a \(p\)-local description). In the language of structured ring spectra, \(BF_{p}\) can be identified with \(BGL_{1}(\mathbb{S}_{p}),\) the classifying space of the units of the \(p\)-complete sphere spectrum \(\mathbb{S}_{p},\) and a spectrum can be viewed as an \(\mathbb{S}_{p}\)-module. This language appears in Lemma 3.5 and Theorem 4.1. 
Let \(V\) be a finite dimensional real \(C_{2}\)-representation and \(F_{p}(V)\) be the topological monoid of basepoint-preserving \(C_{2}\)-equivariant homotopy equivalences of \(S_{p}^{V}\), the \(p\)-completion of the one-point compactification of \(V.\) There is an associated quasifibration \[EF_{p}(V)\to BF_{p}(V)\] with fiber \(S_{p}^{V}.\) Write \(BF_{p}\) for the colimit of the spaces \(BF_{p}(V)\) where the colimit is taken over a diagram with one object for each finite dimensional real \(C_{2}\)-representation and one arrow \(V\to W\) if and only if \(V\subset W.\) Given a map \(f:X\to BF_{p}\), let \(X(V)\) be the closed subset \(f^{-1}BF_{p}(V)\) and \[T(f)(V):=f^{*}EF_{p}(V)/X\] be the Thom space of the induced map \(f_{V}:X(V)\to BF(V).\) Here \(f^{*}EF_{p}(V)\) is the pullback of \(EF_{p}(V)\) and \(X\) is viewed as a subspace via the induced section. This defines an object \(T(f)\) in the category of \(C_{2}\)-orthogonal prespectra. Composition with the pectralification functor gives a Thom spectrum functor \[T_{\mathbb{S}_{p}}:\mathcal{U}/BF_{p}\rightarrow\mathbb{S}_{p}\] defined on the category of \(\mathcal{U}\) of \(C_{2}\)-spaces over the classifying space \(BF_{p}\) for \(p\)-complete stable spherical fibrations with values in \(p\)-complete spectra \(\mathbb{S}_{p}.\) ### A Thom spectrum model for \(\mathbf{H}\underline{\mathbb{E}}_{2}\) Consider the \(\rho\)-loops of the unit map \(S^{\rho+1}\to K(\mathbb{Z},\rho+1).\) Let \(\mu\) denote composition with the adjoint to \(-1\in\pi_{0}^{C_{2}}(S_{2}^{0}):\) \[\mu:\Omega^{\rho}S^{\rho+1}\rightarrow\Omega^{\rho}K(\underline{\mathbb{Z}}, \rho+1)\to BGL_{1}(S_{2}^{0}).\] **Lemma 3.5**.: _The Thom class \((\Omega^{\rho}S^{\rho+1})^{\mu}\to H\underline{\mathbb{E}}_{2}\) is an equivalence of \(C_{2}\)-spectra._ This follows from [11, Proof of Theorem A]. ## 4. A Thom spectrum model for \(H\underline{\mathbb{Z}}_{2}\) Arguing as in Hahn-Wilson [14, Theorem 9.1], which uses arguments of Antolin-Camarena-Barthel [1, SS 5.2], we prove **Theorem 4.1**.: _There is an equivalence of \(C_{2}\)-spectra_ \[(\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle_{2})^{\mu}\to H\underline{ \mathbb{Z}}_{2}.\] Proof.: We first show the Thom class \[(\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle)^{\mu}\to H\underline{ \mathbb{Z}}_{2}. \tag{4.1}\] is an equivalance of \(C_{2}\)-spectra. 
Decomposing \(S^{1}\) into a \(0\)-cell and a \(1\)-cell and trivializing the fiber on each cell produces a decomposition of the Thom spectrum \((\Omega^{\rho}S^{\rho+1})^{\mu}\) as a cofiber \[(\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle)^{\mu}\xrightarrow{x}(\Omega^{ \rho}S^{\rho+1}\langle\rho+1\rangle)^{\mu}\rightarrow(\Omega^{\rho}S^{\rho+1} )^{\mu}\simeq H\underline{\mathbb{E}}_{2}.\] Each of these Thom spectra come from bundles classified by \(\mathbb{A}_{2}\)-maps, which is enough to ensure the map \(x\) induces a map \[\underline{\pi}_{*}(\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle)^{\mu}\to \underline{\pi}_{*}(\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle)^{\mu}\] of modules over \(\underline{\pi}_{0}(\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle)^{\mu}.\) In particular, on homotopy the map corresponds to multiplication by some element \(x\in\underline{\pi}_{0}(\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle)^{\mu}.\) Taking loops of \(\mu:\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle\to BGL_{1}(S_{2}^{0})\), we obtain a map \[\Omega\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle\to GL_{1}(S_{2}^{0}).\] By the universal property of \(GL_{1}(S_{2}^{0})\), we equivalently have an \(E_{1}\)-ring map \[f:\Sigma_{+}^{\infty}\Omega\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle\to S_{2 }^{0}.\] Similarly, there is also a map \(\epsilon:\Sigma_{+}^{\infty}\Omega\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle \to S_{2}^{0}\) coming from the trivial map \(\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle\to GL_{1}(S_{2}^{0})\). The construction of Thom spectra as bar constructions [1, Definition 4.1] then implies \(\underline{\pi}_{0}(\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle)^{\mu}\cong \operatorname{coker}\underline{\pi}_{0}(f-\epsilon)\). Thus \(\underline{\pi}_{0}(\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle)^{\mu}\) is \[\begin{array}{ccccc}\underline{\pi}_{0}(S_{2}^{0})&=&\begin{array}{c}1\\ \end{array}&\begin{array}{c}t\\ \end{array}&\begin{array}{c}\underline{\mathbb{Z}}_{2}[t]/(t^{2})\\ \end{array}&\begin{array}{c}t\\ \end{array}\\ &\begin{array}{c}\underline{\mathbb{Z}}_{2}\\ \end{array}&\begin{array}{c}t\\ \end{array}\\ &\begin{array}{c}\underline{\mathbb{Z}}_{2}\\ \end{array}&\begin{array}{c}1\\ \end{array}\\ \end{array}\end{array}\] modulo classes in the image of \(f-\epsilon.\) This also fits into a short exact sequence \[\underline{\pi}_{0}\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle)^{\mu}\xrightarrow {x}\underline{\pi}_{0}\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle)^{\mu}\to \underline{\mathbb{E}}_{2}.\] From the \(C_{2}/e\) spot we deduce \(x=2\) and from the short exact sequence and Mackey functor structure deduce that \[\underline{\pi}_{0}(\Omega^{\rho}S^{\rho}+1\langle\rho+1\rangle)^{\mu}= \underline{\mathbb{Z}}_{2}.\] We have checked that the Thom class (4.1) induces an isomorphism on \(\underline{\pi}_{0}\). To show that it is an equivalence of \(C_{2}\)-spectra, it remains to check that this map induces isomorphism in \(\underline{\pi}_{V}\) for \(V\neq 0\), that is, \(\underline{\pi}_{V}((\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle)^{\mu})\) is trivial. The underlying level follows from the classical nonequivariant result. The genuine fixed point level follows from Nakayama's lemma once one argues that the genuine fixed point spectra have finitely generated homotopy groups in each degree. The isotropy separation reduces us to the corresponding statement on geometric fixed points. 
By the Thom isomorphism, \[H_{*}[\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle)^{\mu}]^{\Phi C_{2}}\cong H _{*}((\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle)^{C_{2}}).\] Recall from Lemma 3.4 that \(\Omega^{\rho}S^{\rho+1}\simeq S^{1}\times\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle\), and from Equation (3.1) that \(\Omega^{2}S^{3}\times\Omega S^{2}\simeq(\Omega^{\rho}S^{\rho+1})^{C_{2}}\). It follows that \[\Omega^{2}S^{3}\times\Omega S^{2}\simeq(\Omega^{\rho}S^{\rho+1})^{C_{2}}\simeq (S^{1}\times\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle)^{C_{2}}\simeq S^{1} \times(\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle)^{C_{2}}.\] Thus \(H_{*}(\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle)^{C_{2}}\subseteq H_{*}(\Omega^ {2}S^{3}\times\Omega S^{2})\) has finitely generated homology groups in each degree. Hence, it also has finitely generated homotopy groups in each degree and using Nakayama's lemma finishes the proof of the genuine fixed level. The completion map \[\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle\to\Omega^{\rho}S^{\rho+1}\langle \rho+1\rangle_{2}\] induces an isomorphism on homology and using this one can show that the map \[(\Omega^{\rho}S^{\rho+1}\langle\rho+1\rangle)^{\mu}\to(\Omega^{\rho}S^{\rho+1} \langle\rho+1\rangle_{2})^{\mu}\] is a \(2\)-complete equivalence. On the other hand, the left hand side is automatically \(2\)-complete, being an (equivariant) homotopy colimit of \(2\)-complete spectra by construction. ## 5. A product fibration Recall \(X_{2}\) denotes the Bousfield localization of \(X\) with respect to \(H\underline{\mathbb{E}}_{2}\), we have set \[\mathcal{F}_{n}=(F_{n}\Omega^{\rho}S^{\rho+1})_{2},\] and there are product maps \[\mathcal{F}_{m}\times\mathcal{F}_{n}\stackrel{{\mu}}{{\to}} \mathcal{F}_{m+n}.\] Further \(A_{n}\) is defined by the homotopy fiber sequence \[A_{n}\to\mathcal{F}_{2n+1}\to S_{2}^{1} \tag{5.1}\] where the second map is the localization of the composite \[F_{2n+1}\Omega^{\rho}S^{\rho+1}\to\Omega^{\rho}S^{\rho+1}\to S^{1}.\] We prove the \(C_{2}\)-equivariant analogue of [12, Theorem 1.3]. **Theorem 5.1**.: _The fiber sequence (2.2) is equivalent to a product fibration. Indeed, there is a \(C_{2}\)-equivariant map \(A_{n}\stackrel{{\phi}}{{\to}}\mathcal{F}_{2n}\) and a commutative diagram of fibrations_ _which is an equivalence on total spaces and on fibers._ Our proof of Theorem 5.1 is a \(C_{2}\)-equivariant analogue of [12, SS2. Proof of Theorem 1.3]. Following [12], we first show that the inclusion \[F_{m-1}\Omega^{\rho}S^{\rho+1}\to F_{m}\Omega^{\rho}S^{\rho+1}\] may be considered as a \(C_{2}\)-equivariant inclusion into a mapping cone. For a pointed \(C_{2}\times\Sigma_{m}\)-space \(X,\) we define \[M_{m}(X)=C_{m}(\rho)\underset{\Sigma_{m}}{\times}X/C_{m}(\rho)\underset{\Sigma_{ m}}{\times}*.\] Let \(I\) denote the unit interval, \(I^{m}\) the \(m\)-dimensional unit interval, and \(\partial I^{m}\) the boundary of \(I^{m},\) all with trivial \(C_{2}\)-action. Note that \(I/\partial I\simeq S^{1}\) and \(\partial I^{m}\simeq S^{m-1}.\) **Lemma 5.2**.: _Let \(F_{m}=F_{m}\Omega^{\rho}S^{\rho+1}.\) There is a cofibration sequence_ \[M_{m}(\partial I^{m})\overset{c}{\to}F_{m-1}\to F_{m}.\] Proof.: Let \(T^{m}(S^{1})\) denote the wedge with trivial \(C_{2}\)-action consisting of points in the \(m\)-fold Cartesian product having at least one component the basepoint. 
Viewing \(I^{m}\) as the cone of the natural map \(\partial I^{m}\to T^{m}(I/\partial I)\) yields a \(C_{2}\times\Sigma_{m}\)-equivariant cofibration \[\partial I^{m}\to T^{m}(I/\partial I)\hookrightarrow(I/\partial I)^{\times m},\] with trivial \(C_{2}\)-action and hence a cofibration \[M_{m}(\partial I^{m})\overset{k}{\to}M_{m}(T^{m}(I/\partial I))\to M_{m}((I/\partial I)^{\times m}). \tag{5.3}\] Recall from [10] that \[F_{m}=\left(\bigcup_{k\leqslant m}M_{k}((I/\partial I)^{\times k})\right)/\sim.\] The map \(c\) of Lemma 5.2 is the composite \[M_{m}(\partial I^{m})\to M_{m}(T^{m}(I/\partial I))\to F_{m-1},\] where the second map uses the equivalence relation from (2.1) to ignore the basepoint in at least one component. The required homeomorphism from the mapping cone of \(c\) to \(F_{m}\) is a quotient of the homeomorphism in (5.3) from the mapping cone of \(k\) to \(M_{m}((I/\partial I)^{\times m}).\) We now return to the proof of Theorem 5.1. Assume by induction that the theorem has been proved for \(n-1\), so \(\mathcal{F}_{2(n-1)+1}\simeq S^{1}_{2}\times A_{n-1}.\) Note that \(\mathcal{F}_{2(n-1)+1}\to\mathcal{F}_{2n-1}\) is an equivalence. Localizing Lemma 5.2 yields a map \[M_{2n}(\partial I^{2n})_{2}\to\mathcal{F}_{2n-1}\simeq S^{1}_{2}\times A_{n-1}. \tag{5.4}\] **Lemma 5.5**.: _The cohomology \(H^{1}_{C_{2}}(M_{2n}(\partial I^{2n})_{2};\underline{\mathbb{Z}}_{p})=0,\) so the map (5.4) is of the form \(*\times h\), for some map \(h:M_{2n}(\partial I^{2n})_{2}\to A_{n-1}.\)_ Proof.: We will show \(H^{1}(M_{2n}(\partial I^{2n})^{C_{2}};\mathbb{Z})=0.\) Since \(H^{1}(M_{2n}(\partial I^{2n})^{e};\mathbb{Z})\) is observed to be zero in [11], this will imply \(H^{1}_{C_{2}}(M_{2n}(\partial I^{2n});\underline{\mathbb{Z}})=0\) by [12, Lemma 2.8], and upon localizing, \(H^{1}_{C_{2}}(M_{2n}(\partial I^{2n})_{2};\underline{\mathbb{Z}}_{p})=0.\) To compute \(H^{1}(M_{2n}(\partial I^{2n})^{C_{2}};\mathbb{Z}),\) we first identify \(M_{2n}(\partial I^{2n})^{C_{2}}.\) Since \(M_{2n}(\partial I^{2n})\) is a quotient space, it can be constructed as the colimit \[C_{2n}(\rho)\times_{\Sigma_{2n}}*\hookrightarrow C_{2n}(\rho)\times_{\Sigma_{2n}}S^{2n-1}\to M_{2n}(S^{2n-1}).\] Further, since taking fixed points commutes with filtered colimits for finite groups, there is an equivalence \[(C_{2n}(\rho)\times_{\Sigma_{2n}}*)^{C_{2}}\quad\hookrightarrow\quad(C_{2n}(\rho)\times_{\Sigma_{2n}}S^{2n-1})^{C_{2}}\quad\rightarrow\quad M_{2n}(S^{2n-1})^{C_{2}}\] [...] fibrations extending the induction and completing the proof of Theorem 5.1. ## 6. Proof of Main Theorem Now we can use Theorems 4.1 and 5.1 to define \(B_{1}(n)\), a \(C_{2}\)-equivariant analogue of the \(n\)th integral Brown-Gitler spectrum at the prime 2, as a Thom spectrum. Let \(\tilde{A}_{n}\) denote the homotopy fiber of the composite \[F_{2n+1}\Omega^{\rho}S^{\rho+1}\to\Omega^{\rho}S^{\rho+1}\to S^{1}\] so the \(H\underline{\mathbb{F}}_{2}\)-localization of \(\tilde{A}_{n}\) is \(A_{n}.\) Using the \(H\underline{\mathbb{F}}_{2}\)-localization of the commutative diagram (6.1) we define \(B_{1}(n)\) to be the Thom spectrum \((A_{n})^{\mu}.\) Taking the Thom spectrum of the composites \[A_{m}\times A_{n}\xrightarrow{\phi_{m}\times\phi_{n}}\mathcal{F}_{2m}\times\mathcal{F}_{2n}\to\mathcal{F}_{2m+2n}\to\mathcal{F}_{2n+2m+1}\to A_{m+n},\] where the last map splits the equivalence of Theorem 5.1, yields pairings \[B_{1}(m)\wedge B_{1}(n)\to B_{1}(m+n).\] Theorem 5.1 implies the Thom map \((i_{n})^{\mu}:B_{1}(n)\to H\underline{\mathbb{F}}_{2}\) induces a monomorphism in homology. Recall \[H\underline{\mathbb{F}}_{2\star}H\underline{\mathbb{F}}_{2}\cong H\underline{\mathbb{F}}_{2\star}[\bar{\xi}_{1},\bar{\xi}_{2},\bar{\xi}_{3},\cdots,c(\tau_{1}),c(\tau_{2}),\cdots]/(c(\tau_{i}^{2})=ac(\tau_{i+1})+uc(\xi_{i+1}))\] where \(|c(\tau_{j})|=2^{j}\rho-\sigma,\) \(|\bar{\xi}_{i}|=(2^{i}-1)\rho,\) and \(c\) denotes the antiautomorphism of the dual Steenrod algebra \(\mathcal{A}\cong\pi_{\star}H\underline{\mathbb{F}}_{2}\wedge H\underline{\mathbb{F}}_{2}\) [Orm11]. Also recall that \[\operatorname{wt}(\bar{\tau}_{j})=\operatorname{wt}(\bar{\xi}_{j})=2^{j},\qquad\operatorname{wt}(xy)=\operatorname{wt}(x)+\operatorname{wt}(y),\] and that all monomials have weight divisible by 2. Then our \(C_{2}\)-equivariant analogue of [Coh+88, Theorem 1.5(i), (ii)] follows immediately from Theorem 5.1: **Theorem 6.1**.: _For \(n>0,\) there is an \(H\underline{\mathbb{F}}_{2}\)-complete spectrum \(B_{1}(n)\) and a map_ \[B_{1}(n)\xrightarrow{g}H\underline{\mathbb{Z}}_{2}\] _such that_ 1. \(g_{\star}\) _sends_ \(H\underline{\mathbb{F}}_{2\star}(B_{1}(n))\) _isomorphically onto the span of monomials of weight_ \(\leqslant 2n;\) 2.
_there are pairings_ \[B_{1}(m)\wedge B_{1}(n)\to B_{1}(m+n)\] _whose homology homomorphism is compatible with the multiplication in_ \(H\underline{\mathbb{F}}_{2\star}(H\underline{\mathbb{Z}}_{2}).\) **Remark 6.2**.: It is not currently known if a \(C_{2}\)-equivariant analogue of [Coh+88, Theorem 1.5(iii)] holds or if there should be some other criterion determining (integral) Brown-Gitler spectra in the \(C_{2}\)-equivariant case. In the non-equivariant case, [Coh+88, Theorem 1.5(iii)] states that for any CW-complex \(X,\) \[g_{\ast}:B_{1}(n)_{i}(X)\to H_{i}(X;\mathbb{Z}_{2})\] is surjective if \(i\leqslant 2p(n+1)-1.\) This condition originated in the geometry of immersions of manifolds and would be interesting to study further in the \(C_{2}\)-equivariant setting.
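For concreteness, and assuming (as an illustrative reading of the conventions recalled before Theorem 6.1) that the antiautomorphism \(c\) preserves weight, the weight filtration of Theorem 6.1 can be unwound by a direct computation from the stated rules: \[\operatorname{wt}\big(\bar{\xi}_{1}^{2}\,c(\tau_{1})\big)=2+2+2=6,\qquad\operatorname{wt}\big(\bar{\xi}_{2}\,c(\tau_{2})\big)=4+4=8,\] so the first monomial lies in the image of \(g_{\star}\) for \(B_{1}(n)\) as soon as \(2n\geqslant 6\), while the second requires \(2n\geqslant 8\); both weights are even, consistent with the requirement that all monomials have weight divisible by 2.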
2305.12761
Enhancing Cross-lingual Natural Language Inference by Soft Prompting with Multilingual Verbalizer
Cross-lingual natural language inference is a fundamental problem in cross-lingual language understanding. Many recent works have used prompt learning to address the lack of annotated parallel corpora in XNLI. However, these methods adopt discrete prompting by simply translating the templates to the target language and need external expert knowledge to design the templates. Besides, discrete prompts of human-designed template words are not trainable vectors and can not be migrated to target languages in the inference stage flexibly. In this paper, we propose a novel Soft prompt learning framework with the Multilingual Verbalizer (SoftMV) for XNLI. SoftMV first constructs cloze-style question with soft prompts for the input sample. Then we leverage bilingual dictionaries to generate an augmented multilingual question for the original question. SoftMV adopts a multilingual verbalizer to align the representations of original and augmented multilingual questions into the same semantic space with consistency regularization. Experimental results on XNLI demonstrate that SoftMV can achieve state-of-the-art performance and significantly outperform the previous methods under the few-shot and full-shot cross-lingual transfer settings.
Shuang Li, Xuming Hu, Aiwei Liu, Yawen Yang, Fukun Ma, Philip S. Yu, Lijie Wen
2023-05-22T06:31:29Z
http://arxiv.org/abs/2305.12761v1
# Enhancing Cross-lingual Natural Language Inference by Soft Prompting with Multilingual Verbalizer ###### Abstract Cross-lingual natural language inference is a fundamental problem in cross-lingual language understanding. Many recent works have used prompt learning to address the lack of annotated parallel corpora in XNLI. However, these methods adopt discrete prompting by simply translating the templates to the target language and need external expert knowledge to design the templates. Besides, discrete prompts of human-designed template words are not trainable vectors and can not be migrated to target languages in the inference stage flexibly. In this paper, we propose a novel **Soft** prompt learning framework with the **M**ultilingual **V**erbalizer (SoftMV) for XNLI. SoftMV first constructs cloze-style question with soft prompts for the input sample. Then we leverage bilingual dictionaries to generate an augmented multilingual question for the original question. SoftMV adopts a multilingual verbalizer to align the representations of original and augmented multilingual questions into the same semantic space with consistency regularization. Experimental results on XNLI demonstrate that SoftMV can achieve state-of-the-art performance and significantly outperform the previous methods under the few-shot and full-shot cross-lingual transfer settings1. Footnote 1: The source code will be available at [https://github.com/THU-BPM/SoftMV](https://github.com/THU-BPM/SoftMV). ## 1 Introduction Multilingual NLP systems have gained more attention due to the increasing demand for multilingual services. Cross-lingual language understanding (XLU) plays a crucial role in multilingual systems, in which cross-lingual natural language inference (XNLI) is a fundamental and challenging task Conneau et al. (2018); MacCartney and Manning (2008); Li et al. (2023, 2022). NLI is a fundamental problem in NLU that could help with tasks like semantic parsing Liu et al. (2022); Lin et al. (2022), and relation extraction Liu et al. (2022); Hu et al. (2020, 2021). In XNLI settings, the model is trained on the source language with annotated data to reason the relationship between a pair of sentences (namely premise and hypothesis) and evaluated on the target language without parallel corpora. Pre-trained multilingual language models, such as mBERT Devlin et al. (2019), XLM Conneau and Lample (2019), and XLM-R Conneau et al. (2020), have demonstrated promising performance in cross-lingual transfer learning. These language models learn a shared multilingual embedding space to represent words in parallel sentences. However, these models are trained on a large number of parallel corpora, which are not available in many low-resource languages. The major challenge of XNLI is the lack of annotated data for low-resource languages. To address this problem, some works explored using prompt learning Brown et al. (2020); Schick and Schutze (2021); Shin et al. (2020) when adapting pre-trained language models to downstream tasks in cross-lingual scenarios. Prompt learning reformulates the text classification problem into a masked language modeling (MLM) problem by constructing cloze-style questions with a special token <MASK>. The model is trained to predict the masked word in the cloze-style questions. As shown in Table 1, prompt learning can \begin{table} \begin{tabular}{l l} \hline \hline Type & Prompt Templates \\ \hline DP & Prompt. Question: Hypothesis? Answer: \textless{}MASK. \\ SP & Prompt. Hypothesis? 
 <\(v_{1}\)>...<\(v_{n}\)> <MASK>. \\ \hline \hline \end{tabular} \end{table} Table 1: Examples of prompt templates for XNLI. be divided into three types: Discrete Prompts (DP), Soft Prompts (SP), and Mixed Prompts (MP). Zhao and Schutze (2021) investigated the effectiveness of prompt learning in multilingual tasks by simply applying soft, discrete, and mixed prompting with a uniform template in English. Qi et al. (2022) proposed a discrete prompt learning framework that constructs an augmented sample by randomly sampling a template in another language. By comparing the augmented samples and the original samples in the English template, the model can effectively perceive the correspondence between different languages. However, discrete prompts of human-designed template words require extensive external expert knowledge and are not flexible enough to adapt to different languages. Therefore, the model can't perform well when transferred from high-resource to low-resource languages.
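To make the distinction concrete, the following minimal Python sketch renders the three prompt types for one (premise, hypothesis) pair. The template wording follows the descriptions above; the helper name, the number of soft tokens, and the example sentences are illustrative assumptions rather than part of any released implementation.

```python
# Illustrative only: render the three prompt types (DP, SP, MP) discussed above.
def build_prompt(premise: str, hypothesis: str, kind: str, n_soft: int = 4) -> str:
    soft = " ".join(f"<v{i}>" for i in range(1, n_soft + 1))  # trainable placeholders
    if kind == "DP":  # discrete prompt: fixed, human-designed template words
        return f"{premise} Question: {hypothesis}? Answer: <MASK>."
    if kind == "SP":  # soft prompt: no template words, only trainable vectors
        return f"{premise} {hypothesis}? {soft} <MASK>."
    if kind == "MP":  # mixed prompt (assumed form): template words plus soft tokens
        return f"{premise} Question: {hypothesis}? Answer: {soft} <MASK>."
    raise ValueError(f"unknown prompt type: {kind}")

print(build_prompt("Two men on bicycles competing in a race.",
                   "Some people are racing", "SP"))
# -> Two men on bicycles competing in a race. Some people are racing? <v1> <v2> <v3> <v4> <MASK>.
```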
In this paper, we propose a novel **Soft** prompt learning framework with the **M**ultilingual **V**erbalizer (SoftMV) for XNLI. First, we construct cloze-style questions for the input samples with soft prompts which consist of trainable vectors. Second, we apply the code-switched substitution strategy Qin et al. (2021) to generate multilingual questions which can be regarded as cross-lingual views for the English questions. Compared with discrete prompts, soft prompts perform prompting directly in the embedding space of the model and can be easily adapted to any language without human-designed templates. Both the original and augmented questions are fed into a pre-trained cross-lingual base model. The classification probability distribution is calculated by predicting the masked token with the multilingual verbalizer to reduce the gap between different languages. Finally, the two probability distributions are regularized by the Kullback-Leibler divergence (KLD) loss Kullback and Leibler (1951) to align the representations of original and augmented multilingual questions into the same space. The entire model is trained with a combined objective of the cross-entropy term for classification accuracy and the KLD term for representation consistency. The well-trained soft prompt vectors will be frozen in the inference stage. Experimental results on the XNLI benchmark show that SoftMV outperforms the baseline models by a significant margin under both the few-shot and full-shot settings. Our contributions can be summarized as follows: * We propose a novel **Soft** prompt learning framework with a **M**ultilingual **V**erbalizer (SoftMV) for XNLI. SoftMV leverages bilingual dictionaries to generate augmented multilingual code-switched questions for original questions constructed with soft prompts. * We adopt the multilingual verbalizer to align the representations of original and augmented questions into the same semantic space with consistency regularization. * We conduct extensive experiments on XNLI and demonstrate that SoftMV can significantly outperform the baseline methods under the few-shot and full-shot cross-lingual transfer settings. ## 2 Related Work Early methods for cross-lingual natural language inference are mainly neural networks, such as Conneau et al. (2018) and Artetxe and Schwenk (2019). which encode sentences from different languages into the same embedding space via parallel corpora Hermann and Blunsom (2014). In recent years, large pre-trained cross-lingual language models have demonstrated promising performance. Devlin et al. (2019) extend the basic language model BERT to multilingual scenarios by pre-trained with multilingual corpora. Conneau and Lample (2019) propose a cross-lingual language model (XLM) which enhances BERT with the translation language modeling (TLM) objective. XLM-R Conneau et al. (2020) is an improvement of XLM by training with more languages and more epochs. Although these methods do not rely on parallel corpora, they still have limitations because fine-tuning needs annotation efforts which are prohibitively expensive for low-resource languages. To tackle this problem, some data augmentation methods have been proposed for XNLI. Ahmad et al. (2021) propose to augment mBERT with universal language syntax using an auxiliary objective for cross-lingual transfer. Dong et al. (2021) adopt Reorder Augmentation and Semantic Augmentation to synthesize controllable and much less noisy data for XNLI. Bari et al. 
(2021) improve cross-lingual generalization by unsupervised sample selection and data augmentation from the unlabeled training examples in the target language. Zheng et al. (2021) propose a cross-lingual fine-tuning method to better utilize four types of data augmentations based on consistency regularization. However, these methods do not perform well under the few-shot settings. Recently, prompt learning (Brown et al., 2020; Shin et al., 2020; Lester et al., 2021; Vu et al., 2022; Li and Liang, 2021; Qin and Eisner, 2021; Liu et al., 2022c) has shown promising results in many NLP tasks under the few-shot setting. The key idea of prompt learning for XNLI is reformulating the text classification problem into a masked language modeling problem by constructing cloze-style questions. Su et al. (2022) propose a novel prompt-based transfer learning approach, which first learns a prompt on one or more source tasks and then uses it to initialize the prompt for a target task. Wu and Shi (2022) adopt separate soft prompts to learn embeddings enriched with domain knowledge. Schick and Schutze (2021) explore discrete prompt learning to NLI with manually defined templates. Zhao and Schutze (2021) demonstrate that prompt learning outperforms fine-tuning for few-shot XNLI by simply applying soft, discrete, and mixed prompting with a uniform template in English. Qi et al. (2022) proposed a discrete prompt learning framework that constructs an augmented sample by randomly sampling a template in another language. However, discrete prompts of human-designed template words require extensive external expert knowledge and are not flexible enough to adapt to different languages. In our work, we adopt trainable soft prompts to capture correspondence between different languages by comparing the augmented multilingual and original questions. ## 3 Framework The proposed SoftMV framework is illustrated in Figure 1. The training process of SoftMV is formalized in Algorithm 1. For every training triple (premise, hypothesis, label) in English, SoftMV first constructs a cloze-style question with soft prompts initialized from the vocabulary. Then, we apply the code-switched substitution strategy to generate multilingual questions which can be regarded as cross-lingual views for the English questions. Both the original and augmented questions are fed into a pre-trained cross-lingual model to calculate the answer distributions of the mask token with a multilingual verbalizer. SoftMV is trained by minimizing the cross-entropy loss for classification accuracy and the Kullback-Leibler divergence (KLD) loss for representation consistency. Finally, the well-trained soft prompt vectors are frozen in the inference stage. ### Soft Prompting Each instance in batch \(\mathcal{I}\) in XNLI dataset is denoted as \((P_{i},H_{i},Y_{i})_{i\in\mathcal{I}}\), where \(P_{i}=\{w_{j}^{P}\}_{j=1}^{m}\) denotes the word sequence of premise, \(H_{i}=\{w_{j}^{H}\}_{j=1}^{n}\) denotes the word sequence of hypothesis, and \(Y_{i}\in\mathcal{Y}\) denotes the class label. SoftMV first constructs a cloze-style question with soft prompts as illustrated in Table 1. The question template is expressed as "<s>Premise.</s> <s>Hypothesis? <\(v_{1}\)>...<\(v_{n}\)> <MASK></s>", where <s> and </s> are special tokens to separate sentences, <MASK> is the mask token, and \(v_{i}\) is associated with a trainable vector (in the PLM's first embedding layer). 
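As a rough sketch of how such a template could be realized in practice, one could register the placeholder tokens with the XLM-R tokenizer and assemble the cloze-style question as below; the exact token handling in the authors' implementation may differ, and all names here are assumptions.

```python
# Sketch (assumed, not the authors' released code): register <v_i> placeholders and
# build the cloze question "<s>Premise.</s> <s>Hypothesis? <v_1>...<v_n> <MASK></s>".
from transformers import AutoTokenizer

n_soft = 4
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
soft_tokens = [f"<v{i}>" for i in range(1, n_soft + 1)]
tokenizer.add_special_tokens({"additional_special_tokens": soft_tokens})
# The model's input-embedding matrix must later be resized to cover the new tokens,
# e.g. model.resize_token_embeddings(len(tokenizer)).

def cloze_question(premise: str, hypothesis: str) -> str:
    # XLM-R's own <mask> token plays the role of <MASK>; <s>/</s> are added by the tokenizer.
    return f"{premise} </s> {hypothesis}? {' '.join(soft_tokens)} {tokenizer.mask_token}"

encoding = tokenizer(
    cloze_question("Two men on bicycles competing in a race.", "Some people are racing"),
    return_tensors="pt",
)
```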
Soft prompts are tuned in the continuous space and initialized with the average value of embeddings of the PLM's multilingual vocabulary. In cross-lingual transfer settings, the well-trained soft prompts learned on the source language are reused directly for the target languages, so no language-specific templates need to be designed. ``` 0: the number of epochs \(E\) and the training set \(\mathbb{D}=\{(P_{i},H_{i},Y_{i})\}_{i=1}^{M}\). 1: Reform \(\mathbb{D}\) to a set of cloze-style questions \(\mathbb{Q}=\{(Q_{i},Y_{i})\}_{i=1}^{M}\) with soft prompts for each \((P_{i},H_{i})\) as illustrated in Figure 1. 2: Extend the set \(\mathbb{Q}=\{(Q_{i},Q_{i}^{a},Y_{i})\}_{i=1}^{M}\) by generating augmented multilingual questions with the code-switched strategy. 3: Divide \(\mathbb{Q}\) into a set of batches \(\mathbb{B}\). 4: for epoch \(e=1\) to \(E\) do 5: Shuffle \(\mathbb{B}\). 6: for each batch \(\{(Q_{i},Q_{i}^{a},Y_{i})\}_{1\leq i\leq N}\) in \(\mathbb{B}\) do 7: Compute total loss \(\mathcal{L}\) by Eq. 7. 8: Update the parameters \(\theta\). 9: endfor 10: endfor ``` **Algorithm 1** The training process of SoftMV. To generate the augmented multilingual question, we apply the code-switched substitution strategy and replace words of the original English question with their translations in randomly chosen languages using bilingual dictionaries. For example, given the sentence "Two men on bicycles competing in a race." in English, we can generate a multilingual code-switched sample "Two Manner(DE) on Bicyclettes(FR) competing in a yaris(TR)." which can be regarded as the cross-lingual view of the same meaning across different languages. The original and augmented cloze-style questions are fed into a pre-trained cross-lingual model to obtain the contextualized representation of the mask token, denoted as \(h^{o}_{\text{mask}}\) and \(h^{a}_{\text{mask}}\). Let \(l\) denote the size of the vocabulary and \(d\) the dimension of the representation of the mask token; the answer probability distribution of the original question is calculated by: \[y^{o}=softmax(\mathbf{W}h^{o}_{\text{mask}}), \tag{1}\] where \(\mathbf{W}\in\mathbb{R}^{l\times d}\) contains the trainable parameters of the pre-trained MLM layer. The answer probability distribution \(y^{a}\) of the augmented question is calculated in the same way.
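Continuing the tokenizer sketch above, a minimal and purely illustrative way to obtain the answer distribution of Eq. (1) is to reuse the pre-trained masked-LM head of XLM-R as the matrix \(\mathbf{W}\) and read off the softmax at the mask position:

```python
# Sketch of Eq. (1): y = softmax(W h_mask), reusing the pre-trained MLM head as W.
# Assumes `tokenizer` and `encoding` from the earlier sketch; illustrative only.
import torch
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")
model.resize_token_embeddings(len(tokenizer))  # account for the added <v_i> tokens

def answer_distribution(enc) -> torch.Tensor:
    logits = model(**enc).logits                                # (1, seq_len, vocab_size)
    mask_pos = (enc["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
    return torch.softmax(logits[0, mask_pos], dim=-1)           # distribution over the vocabulary

y_o = answer_distribution(encoding)  # original English question
# y_a is obtained in exactly the same way from the code-switched question.
```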
### Multilingual Verbalizer After calculating the answer probability distribution of the mask token, we use the verbalizer to calculate the classification probability distribution. The verbalizer \(\mathcal{M}:\mathcal{Y}\rightarrow\mathcal{V}\) is a function that maps NLI labels to indices of answer words in the given vocabulary. The model is trained to predict masked words that correspond to classification labels, as determined by the verbalizer. Concretely, the verbalizer of English is defined as {"Entailment" \(\rightarrow\) "yes"; "Contradiction" \(\rightarrow\) "no"; "Neutral" \(\rightarrow\) "maybe"} according to Schick and Schutze (2021). Without parallel corpora in cross-lingual scenarios, there is a gap in the classification space between the original and multilingual representations. Using the English verbalizer for all languages might hinder the model's ability to capture semantic representations for multilingual inputs. Thus we use a multilingual verbalizer to learn a consistent classification probability distribution across different languages. The multilingual verbalizer comprises a set of verbalizers for different languages, denoted as \(\{\mathcal{M}_{l},l\in\mathcal{L}\}\), where \(\mathcal{L}\) is the set of languages and \(l\) is a specific language. The non-English verbalizers are translated from English using bilingual dictionaries. Specifically, the verbalizer of Turkish is defined as {"Entailment" \(\rightarrow\) "Evet"; "Contradiction" \(\rightarrow\) "hiçbir"; "Neutral" \(\rightarrow\) "belki"}. Figure 1: The framework of SoftMV. The left part is the original questions. The right part is the augmented multilingual questions. The model is trained with a combined objective of the cross-entropy losses and the KLD loss. ### Training Objective In the training stage, given a batch \(\mathcal{I}\) of \(N\) triples denoted as \((X^{o}_{i},X^{a}_{i},Y_{i})_{1\leq i\leq N}\), the cross-entropy losses for the original question \(X^{o}_{i}\) and the augmented question \(X^{a}_{i}\) are respectively calculated by: \[\ell_{i}^{o}=-\frac{1}{|\mathcal{L}|}\sum_{l\in\mathcal{L}}\sum_{j}I(j=\mathcal{M}_{l}(Y_{i}))\log y_{i,j}^{o}, \tag{2}\] \[\ell_{i}^{a}=-\frac{1}{|\mathcal{L}|}\sum_{l\in\mathcal{L}}\sum_{j}I(j=\mathcal{M}_{l}(Y_{i}))\log y_{i,j}^{a}, \tag{3}\] where the index \(j\) runs over the vocabulary, \(y_{i,j}^{o}\) (resp. \(y_{i,j}^{a}\)) denotes the \(j\)-th element of the answer probability distribution \(y^{o}\) for the original question \(X_{i}^{o}\) (resp. for the input \(X_{i}^{a}\)), and \(I(C)\) is the indicator function that returns 1 if \(C\) is true or 0 otherwise. The cross-entropy losses of the original and augmented questions on the batch \(\mathcal{I}\) are calculated by: \[\mathcal{L}_{O}=\frac{1}{N}\sum_{i=1}^{N}\ell_{i}^{o}, \tag{4}\] \[\mathcal{L}_{A}=\frac{1}{N}\sum_{i=1}^{N}\ell_{i}^{a}. \tag{5}\] However, for the same premise and hypothesis, the answer probability distribution of the augmented multilingual question created by the code-switched strategy may deviate from that of the original question due to the misalignment of representations in the multilingual semantic space. Such a deviation may cause the model to learn the wrong probability distribution when the model is evaluated on target languages. To alleviate this problem, we propose a consistency regularization to constrain the answer probability distribution.
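The sketch below shows how the multilingual verbalizer and the per-language cross-entropy of Eqs. (2)-(5) could be computed for one example before the consistency term is added. Only the English and Turkish verbalizers quoted above are included, and the function names are illustrative assumptions rather than the released SoftMV code.

```python
# Sketch (assumptions, not the released code): multilingual verbalizer + Eq. (2).
import torch

VERBALIZERS = {
    "en": {"Entailment": "yes",  "Contradiction": "no",     "Neutral": "maybe"},
    "tr": {"Entailment": "Evet", "Contradiction": "hiçbir", "Neutral": "belki"},
    # ... one entry per language in L, translated with bilingual dictionaries
}

def prompt_ce_loss(y: torch.Tensor, label: str, tokenizer) -> torch.Tensor:
    """Average over languages of -log y[answer-word id], as in Eq. (2)."""
    loss = torch.tensor(0.0)
    for verbalizer in VERBALIZERS.values():
        word = verbalizer[label]
        word_id = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(word)[0])
        loss = loss - torch.log(y[word_id])
    return loss / len(VERBALIZERS)

# The batch losses L_O and L_A of Eqs. (4)-(5) are means of these per-example terms;
# the KLD consistency term described next is then added to form the objective of Eq. (7).
```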
In particular, we adopt the Kullback-Leibler divergence (KLD) to encourage the answer probability distribution of the augmented question to be close to that of the original question. The consistency loss is defined as: \[\mathcal{L}_{KLD}=\frac{1}{N}\sum_{i=1}^{N}(\mathrm{KL}(y_{i}^{o}||y_{i}^{a}) +\mathrm{KL}(y_{i}^{a}||y_{i}^{o})), \tag{6}\] The cross-entropy loss encourages the model to learn correct predictions for the augmented inputs, while the KLD loss enforces consistency between the original and augmented representations in the same multilingual semantic space. Using these loss terms together ensures that the model not only performs well on the original inputs but also generalizes to the augmented inputs, resulting in a more robust model that effectively handles cross-lingual tasks. The overall objective in SoftMV is a tuned linear combination of the cross-entropy losses and KLD loss, defined as: \[\mathcal{L}=\lambda_{O}\mathcal{L}_{O}+\lambda_{A}\mathcal{L}_{A}+\lambda_{ KLD}\mathcal{L}_{KLD}, \tag{7}\] where \(\lambda_{*}\) are tuning parameters for each loss term. ## 4 Experiment Setup ### Benchmark Dataset We conducted experiments on the large-scale multilingual benchmark dataset of XNLI Conneau et al. (2018), which extends the MultiNLI Williams et al. (2018) benchmark (in English) to 15 languages2 through translation and comes with manually annotated development sets and test sets. For each language, the training set comprises 393K annotated sentence pairs, whereas the development set and the test set comprise 2.5K and 5K annotated sentence pairs, respectively. Footnote 2: The languages are English (EN), French (FR), Spanish (ES), German (DE), Greek (EL), Bulgarian (BG), Russian (RU), Turkish (TR), Arabic (AR), Vietnamese (VI), Thai (TH), Chinese (ZH), Hindi (HI), Swahili (SW), and Urdu (UR) We evaluate SoftMV and other baseline models under the few-shot and full-shot cross-lingual settings, where the models are only trained on English and evaluated on other languages. For the few-shot setting, the training and validation data are sampled by Zhao and Schutze (2021) with \(k\in\{1,2,4,8,16,32,64,128,256\}\) shots per class from the English training data in XNLI. We report classification accuracy as the evaluation metric. ### Implementation Details We implement SoftMV using the pre-trained XLM-RoBERTa model Conneau et al. (2020) based on PyTorch Paszke et al. (2019) and the Huggingface framework Wolf et al. (2020). XLM-R is a widely used multilingual model and the baseline (PCT) we compare with only report the results using XLM-R. We train our model for 70 epochs with a batch size of 24 using the AdamW optimizer. The hyper-parameter \(\alpha\) is set to 0.3 for combining objectives. The maximum sequence length is set to 256. All the experiments are conducted 5 times with different random seeds ({1, 2, 3, 4, 5}) and we report the average scores. The trained soft prompt vectors will be frozen in the inference stage. Appendix A shows the hyperparameters and computing devices used under different settings in detail. ### Baseline Models We compared SoftMV with the following cross-lingual language models: (1) mBERT Devlin et al. (2019) is a BERT model pre-trained on Wikipedia with 102 languages; (2) XLM Conneau and Lample (2019) is pre-trained for two objectives (MLM and TLM) on Wikipedia with 100 languages; (3) XLM-R Conneau et al. (2020) extends XLM with larger corpora and more epochs; (4) The work Dong et al. 
(2021) proposes an adversarial data augmentation scheme based on XLM-R; (5) UXLA Bari et al. (2021) enhances XLM-R with data augmentation and unsupervised sample selection; (6) The work Zhao and Schutze (2021) explores three prompt-learning methods for few-shot XNLI, including DP, SP, and MP; (7) PCT Qi et al. (2022) is a discrete prompt learning framework with cross-lingual templates. ## 5 Experiment Results ### Main Results We conducted experiments on the XNLI dataset under the cross-lingual transfer setting, where models are trained on the English dataset and then directly evaluated on the test set of all languages. The settings can be further divided into two sub-settings: the few-shot setting using a fixed number of training samples per class, and the full-shot setting using the whole training set. **Few-shot results** Table 2 reports the results for comparing SoftMV with other models on XNLI under the few-shot setting. The results of compared models are taken from Zhao and Schutze (2021); Qi et al. (2022). PCT\({}^{\dagger}\) in the 1/2/4/8-shot experiments are reproduced by us, for not being reported before. Note that all models are based on XLM-R\({}_{\text{base}}\) and trained on the same split of data from Zhao and Schutze (2021). Results show that SoftMV significantly outperforms all baselines for all languages under all settings by 3.5% on average. As expected, all models benefit from more shots. When the \(k\) shots per class decrease, the gap between the performance of SoftMV and the state-of-the-art model (PCT) becomes larger, implying our model has a stronger ability to align contextualized representations in different languages into the same space when training data are fewer. In particular, SoftMV outperforms PCT by 4.4%, 2.8%, 4.3%, and 8.9% in the 1/2/4/8-shot experiments respectively. When the \(k\) shots per class are larger than 8, the average performance of SoftMV also outperforms PCT by an absolute gain of 2.5% on average. Furthermore, for different languages, all methods perform best on EN (English) and worst on AR (Arabic), VI (Vietnamese), UR (Urdu), and SW (Swahili). It is difficult to obtain usable corpora for these low-resource languages for XLM-R. Thus, the model has a poor learning ability for these languages. SoftMV also outperforms PCT on these low-resource languages, which demonstrates that our model is more effective in cross-lingual scenarios, especially for low-resource languages. **Full-shot results** Table 3 shows the results on XNLI under the full-shot setting. The results of compared models are taken from Qi et al. (2022). SoftMV-XLM-R\({}_{\text{base}}\) achieves 78.8% accuracy averaged by 15 target languages, significantly outperforming the basic model XLM-R\({}_{\text{base}}\) by 4.6% on average. Compared with PCT, SoftMV improves by 3.5% on average based on XLM-R\({}_{\text{base}}\). Furthermore, we can observe that the accuracy of SoftMV exceeds PCT by 0.3% on EN, but 4.6% on AR, 11.8% on SW, and 10.5% on UR. This indicates that SoftMV has better transferability across low-resource languages with well-trained soft prompt vectors. To further investigate the effectiveness, we also evaluated SoftMV with baselines based on XLM-R\({}_{\text{large}}\) model. It can be seen that SoftMV achieves 82.1% accuracy on average, significantly outperforming PCT and XLM-R\({}_{\text{large}}\) by 0.8% and 1.7%. 
Compared with the results on XLM-R\({}_{\text{base}}\), the improvements of SoftMV on XLM-R\({}_{\text{large}}\) are smaller, which indicates that SoftMV is more effective on XLM-R\({}_{\text{base}}\) which has fewer parameters and worse cross-lingual ability. The performance gains are due to the stronger ability of SoftMV to align contextualized representations in different languages into the same semantic space with consistency regularization. ### Ablation Study To better understand the contribution of each key component of SoftMV, we conduct an ablation study under the 8-shot setting with XLM-R\({}_{\text{base}}\). The results are shown in Table 4. After removing the code-switched method, the performance decreases by 1.9% on average which shows the augmented multilingual samples can help the model to understand other languages. When we remove the consistency loss, the average accuracy decreases by 2.5%. The consistency loss can help the model align the representations across different languages into the same semantic space. Removing the multilingual verbalizer leads to 1.7% accuracy drop on average. This demonstrates that the multilingual verbalizer can reduce the gap between different languages when calculating the classification probability distribution. We also replace soft prompts with discrete prompts as illustrated in Table 1, which leads to an accuracy drop of 1.3% on average. The accuracy decreases by 1.0% when using mixed prompts instead of soft prompts. The reason is that template words in mixed prompts have a bad effect on SoftMV if not specifically designed with expert knowledge. Furthermore, we use randomly initialized prompts to replace the prompts initialized from the multilingual vocabulary, which leads to 0.5% accuracy drop on average. ### Analysis of Code-switched Method To further investigate the code-switched method, we conduct experiments using a single language to create augmented multilingual samples. Figure 2 shows the results of SoftMV with 10 different seeds under the 8-shot setting for 15 languages on average. We can observe that SoftMV performs worst with an accuracy of 42.1% when using AR (Arabic) to replace the words in sentences. When using TR (Turkish) to replace the words in sentences, the performance of SoftMV outperforms the results using another language. The reason is that TR is different from EN, while not too rare like low-resource languages such as UR (Urdu) and AR. Thus the model can better align contextualized representations in different languages into the same semantic space. When randomly selecting languages for the words of each sentence, SoftMV performs best with a lower standard deviation. Therefore, we apply a random strategy for the code-switched method in our experiments. ### Analysis of Soft Prompts We also conducted experiments to show how the length of soft prompts impacts performance. The \begin{table} \begin{tabular}{l|c c c c c c c c c c c c c c|c|c} \hline \hline Models & EN & FR & ES & DE & EL & BG & RU & TR & AR & VI & TH & ZH & HI & SW & UR & AVG. 
\\ \hline Original & **47.5** & **46.7** & **47.0** & **46.4** & **47.5** & **46.5** & **46.3** & **43.7** & **46.5** & **45.8** & **45.1** & **42.5** & **43.2** & **42.1** & **42.8** & **45.3** \\ w/o code-switched & 46.8 & 45.4 & 44.9 & 45.2 & 45.7 & 45.4 & 45.0 & 41.4 & 44.8 & 44.2 & 42.7 & 38.5 & 40.4 & 38.9 & 41.1 & 43.4 \\ w/o consistency loss & 45.3 & 44.3 & 44.9 & 43.6 & 44.8 & 43.6 & 43.5 & 40.7 & 44.3 & 43.7 & 43.0 & 39.8 & 40.2 & 39.9 & 40.7 & 42.8 \\ w/o multilingual verbalizer & 44.8 & 44.7 & 44.5 & 43.7 & 45.0 & 44.8 & 44.8 & 43.2 & 43.0 & 43.6 & 43.1 & 42.0 & 42.9 & 41.6 & 42.4 & 43.6 \\ using discrete prompts & 46.0 & 45.4 & 46.0 & 45.1 & 45.4 & 45.4 & 45.5 & 42.2 & 44.6 & 44.7 & 44.2 & 40.8 & 42.2 & 41.4 & 41.6 & 44.0 \\ using mixed prompts & 46.2 & 45.8 & 46.1 & 45.6 & 45.7 & 45.1 & 45.8 & 42.3 & 44.7 & 44.9 & 44.6 & 41.0 & 42.5 & 42.0 & 41.7 & 44.3 \\ using randomly initialized prompts & 47.6 & 46.6 & 46.4 & 45.8 & 46.7 & 45.8 & 44.8 & 43.0 & 46.1 & 45.7 & 44.7 & 42.6 & 42.9 & 40.3 & 42.6 & 44.8 \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation study results for SoftMV under the 8-shot setting in accuracy(%). “AVG.” is the average accuracy for 15 languages. Figure 3: Evaluation results of different lengths of soft prompts under the 8-shot setting for 15 languages on average. \begin{table} \begin{tabular}{l|c c c c c c c c c c c c c|c} \hline \hline Models & EN & FR & ES & DE & EL & BG & RU & TR & AR & VI & TH & ZH & HI & SW & UR & AVG. \\ \hline mBERT & 73.7 & 70.4 & 70.7 & 68.7 & 69.1 & 70.4 & 67.8 & 66.3 & 66.8 & 66.5 & 64.4 & 68.3 & 64.2 & 61.8 & 59.3 & 67.2 \\ XLM & 83.2 & 76.7 & 77.7 & 74.0 & 72.7 & 74.1 & 72.7 & 68.7 & 68.6 & 72.9 & 68.9 & 72.5 & 65.6 & 58.2 & 62.4 & 70.7 \\ XLM-R\({}_{\text{base}}\) & 84.6 & 78.2 & 79.2 & 77.0 & 75.9 & 77.5 & 75.5 & 72.9 & 72.1 & 74.8 & 71.6 & 73.7 & 69.8 & 64.7 & 65.1 & 74.2 \\ Dong et al. (2021) & 80.8 & 75.8 & 77.3 & 74.5 & 74.9 & 76.3 & 74.9 & 71.4 & 70.0 & 74.5 & 71.6 & 73.6 & 68.5 & 64.8 & 65.7 & 73.0 \\ DP-XLM-R\({}_{\text{base}}\) & 83.9 & 78.1 & 78.5 & 76.1 & 75.7 & 77.1 & 75.3 & 73.2 & 71.6 & 74.7 & 70.9 & 73.4 & 70.2 & 63.6 & 65.5 & 73.9 \\ SP-XLM-R\({}_{\text{base}}\) & 84.7 & 78.3 & 78.5 & 75.3 & 76.3 & 75.7 & 73.3 & 70.3 & 74.0 & 70.6 & 74.1 & 70.2 & 62.8 & 64.9 & 73.7 \\ MP-XLM-R\({}_{\text{base}}\) & 84.2 & 78.4 & 78.8 & 76.9 & 75.3 & 76.5 & 75.7 & 72.7 & 71.2 & 75.2 & 70.8 & 72.8 & 70.7 & 61.5 & 66.0 & 73.8 \\ PCT-XLM-R\({}_{\text{base}}\) & 84.9 & 79.4 & 79.7 & 77.7 & 76.6 & 78.9 & 76.9 & 74.0 & 72.9 & 76.0 & 72.0 & 74.9 & 71.7 & 65.9 & 67.3 & 75.3 \\ SoftMV-XLM-R\({}_{\text{base}}\) & **85.2** & **80.8** & **79.9** & **78.7** & **84.1** & **81.3** & **79.5** & **76.0** & **77.5** & **78.8** & **77.0** & **76.0** & **72.0** & **77.7** & **77.8** & **78.8** \\ \hline XLM-R\({}_{\text{large}}\) & 88.9 & 83.6 & 84.8 & 83.1 & 82.4 & 83.7 & 80.7 & 79.2 & 79.0 & 80.4 & 77.8 & 79.8 & 76.8 & 72.7 & 73.3 & 80.4 \\ UXL & - & - & 85.7 & 84.2 & - & - & - & - & 80.5 & - & - & - & - & 78.7 & 74.7 & 73.4 & - \\ PCT-XLM-R\({}_{\text{large}}\) & 88.3 & 84.2 & 85.1 & 83.7 & 83.1 & 84.4 & 81.9 & 81.2 & 80.9 & 80.7 & 78.8 & 80.3 & 78.4 & 73.6 & 75.6 & 81.3 \\ SoftMV-XLM-R\({}_{\text{large}}\) & **88.9** & **85.1** & **85.8** & **84.2** & **83.7** & **85.2** & **82.3** & **82.1** & **81.5** & **81.4** & **79.7** & **81.2** & **79.1** & **74.2** & **76.4** & **82.1** \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison results on XNLI under the full-shot cross-lingual transfer setting in accuracy(%). 
Each number is the mean performance of 5 runs. “AVG.” is the average accuracy for 15 languages. The best performance is in **bold**. Figure 2: Evaluation results of different strategies of the code-switched method under the 8-shot setting for 15 languages on average. results are illustrated in Figure 3 under the 8-shot setting. We can observe that the performance of SoftMV is very sensitive to the value of length. As the length of soft prompts increases, the performance of SoftMV first increases and then decreases. As the length of soft prompts increases, the model has the more expressive power to reduce the gaps across different languages. Therefore, the performance of the model is gradually improved. SoftMV achieves the best performance when the length of soft prompts is 4. When the length is larger than 4, the accuracy decreases sharply. The reason is that the model with longer soft prompts tends to overfit the training data under the few-shot setting. ## 6 Conclusion In this paper, we propose a novel **Soft** prompt learning framework with a **M**ultilingual **V**erbalizer (SoftMV) for XNLI. SoftMV applies the code-switched substitution strategy to generate multilingual questions for original questions constructed with soft prompts. We adopt the multilingual verbalizer to align the representations of original and augmented samples into the same semantic space with consistency regularization. Experimental results on XNLI demonstrate that SoftMV significantly outperforms the previous methods under the few-shot and full-shot cross-lingual transfer settings. The detailed analysis further confirms the effectiveness of each component in SoftMV. ## 7 Limitations SoftMV is specifically designed for cross-lingual natural language inference. We believe that some of the ideas in our paper can be used in other tasks of XLU, which remains to be further investigated by subsequent research. In addition, we conduct experiments on the XNLI dataset which consists of 15 languages. SoftMV outperforms the baseline methods under the cross-lingual transfer settings. However, the cross-lingual ability of SoftMV on other languages, especially those lacking relevant datasets, needs to be verified in future work. ## Acknowledgements The work was supported by the National Key Research and Development Program of China (No. 2019YFB1704003), the National Nature Science Foundation of China (No. 62021002), Tsinghua BNRist and Beijing Key Laboratory of Industrial Bigdata System and Application.
2302.08946
Personal autonomy and surveillance capitalism: possible future developments
The rise of social media and the increase in the computational capabilities of computers have allowed tech companies such as Facebook and Google to gather incredibly large amounts of data and to be able to extract meaningful information to use for commercial purposes. Moreover, the algorithms behind these platforms have shown the ability to influence feelings, behaviors, and opinions, representing a serious threat to the independence of their users. All of these practices have been referred to as "surveillance capitalism", a term created by Shoshana Zuboff. In this paper I focus on the threat imposed on the autonomy of human beings in the context of surveillance capitalism, providing both an analysis of the reasons why this threat exists and what consequences we could face if we take no action against such practices.
Davide Foini
2023-02-17T15:27:14Z
http://arxiv.org/abs/2302.08946v1
# Personal autonomy and surveillance capitalism: possible future developments ###### Abstract The rise of social media and the increase in the computational capabilities of computers have allowed tech companies such as Facebook and Google to gather incredibly large amounts of data and to be able to extract meaningful information to use for commercial purposes. Moreover, the algorithms behind these platforms have shown the ability to influence feelings, behaviors, and opinions, representing a serious threat to the independence of their users. All of these practices have been referred to as "surveillance capitalism", a term created by Shoshana Zuboff. In this paper I focus on the threat imposed on the autonomy of human beings in the context of surveillance capitalism, providing both an analysis of the reasons why this threat exists and what consequences we could face if we take no action against such practices. ## Introduction ### Problem presentation The last fifteen years have been characterized by the large diffusion of the internet and social media, such as Facebook and Instagram, along with the tendency of users to share their data, both consciously (through posts, photos, etc.), and unconsciously (accepting the terms of service, allowing cookies when navigating the web, etc.). All this information has become incredibly valuable when coupled with big data practices because that allows companies that hold it to exploit it, extracting new information or behavioral models from it, in order to influence and predict the users' behavior and so to capitalize on the advertisements. Episodes in which this influence has been used outside the logic of business are well-known, from trying to influence the result of an election, like the Cambridge Analytica case [1], to mass-surveillance, like revealed by Edward Snowden [2]. Therefore it is important to note that threats to our autonomy do not just undermine our integrity as individuals, but are also a serious risk to society as a whole, making it essential to discuss future developments of surveillance capitalism. ### Purpose of the paper and definitions The purpose of this paper is to focus on the autonomy of individuals and analyze how it could be affected by surveillance capitalism, but before starting the discussion I find it appropriate to introduce some concepts and their definitions. As already mentioned, my discussion revolves around _surveillance capitalism_ that was first introduced by Shoshana Zuboff, indicating the increasing ability of capitalism to modify and predict human behavior to increase revenues and control over the market, especially thanks to the new capabilities of information technology [3]. The main concept that my debate is going to revolve around is the one of _autonomy_. Among all the definitions available in the literature I decided to take as reference the definition given by Joseph Raz: "(t)he ruling idea behind the ideal of personal autonomy is that people should make their own lives."[4, p. 396]. This involves both being able to take choices regarding one's own life and also to be able to reason about them, taking into account all the personal beliefs and each own background, without being influenced by external factors. Another necessary definition is the one of _manipulation_. There are different ways to define manipulation, but in my opinion, the one that suits the most the purpose of my work is the one given by Susser et al.: "In our view manipulation is hidden influence. 
Or more fully, manipulating someone means intentionally and covertly influencing their decision-making, by targeting and exploiting their decision-making vulnerabilities."[5, p. 4]. The reason why I find this definition so appropriate is that it emphasizes the hidden property of the influence, which is also how surveillance capitalism's mechanisms work, as I am going to point out in one of the next sections. The last concept that I find essential to define is the one of _big data_. When mentioning big data and the correlated mechanisms, I will be referring to the operations of data extraction from all the possible sources, and to the operations performed on the data in order to analyze and extract patterns useful for behavior prediction and therefore manipulation. Big data is essential because it can be considered the turning point for concerns about autonomy: forms of manipulation have long been enforced through traditional media (like newspapers, radio, and lastly via television), but the amount of data gathered with big data enables a "tailored influence", taking it to another order of magnitude of effectiveness. ### Thesis I strongly believe that if no action is taken against surveillance capitalism our autonomy will be downsized in the near future. ### Paper structure Having expressed my belief, to support it the paper will be structured as follows. In the next sections, I provide some arguments that reinforce my opinion both on a theoretical level and by providing some examples. In the second part, some counterarguments will be presented and I will discredit them through simple reasoning. In the end, I will sum up everything that was said and provide a conclusion for the discussion. ## Autonomy as an obstacle to revenue The first reason why I believe my concerns about our autonomy in the future are valid is that the big tech companies that benefit from surveillance capitalism will only increase their revenues if they are more able to model, predict and influence our behavior and choices, therefore they will try to reduce our autonomy as much and as fast as possible. This trend comes from the modus operandi of companies in the classic capitalistic market, which can be defined as revenue-driven, as also expressed by Zuboff: "Just as industrial capitalism was driven to the continuous intensification of the means of production, so surveillance capitalists are now locked in a cycle of continuous intensification of the means of behavioral modification" [6, p. 9]. The following is a straightforward example: Volkswagen would jump at the opportunity to reduce the consumption of their cars in order to improve considerably the performances (and therefore the number of sales and the resulting revenue) as much as Facebook would quickly use a new algorithm that is twice as effective in making us prone to buy a new pair of shoes. Having said this, we can realize that the process of reducing our independence has been going on long before the advent of information technology, but the pace has increased exponentially. ## Social embedding The second reason why I think we will face a reduction in our autonomy is that all of the mechanisms that surveillance capitalism uses, meaning the ability to gather incredibly large amounts of data from the users, are embedded in the social tissue through social media, such as Facebook, and smart assistants, such as Amazon Alexa. 
While smart assistants are still spreading, social media today are used by almost everyone on a daily basis (58.4% of the world's total population in January 2022 according to [7]), and are used not only to keep in touch with friends and family but also to read the news and interact with politicians. I think that it is enough to ask ourselves some questions such as: "Could someone live without using social media nowadays?", "Could a politician run a campaign without using Facebook?" and "Could I go somewhere new without using Google Maps?" to realize that the answer to the question "Can we actually choose to be free from being subjects of surveillance capitalism?" is negative. A phenomenon that helps us understand how much the use of social media is already an important aspect of our social life, and can be expected to become even more important, is the so-called "fear of missing out", better known as FOMO. FOMO can be defined as "a pervasive apprehension that others might be having rewarding experiences from which one is absent" [8, p. 1], which has shown the tendency to cause the desire to stay continually connected online. Even though studies ([8], [9]) have shown how FOMO can be linked with negative effects on one's mood and life satisfaction, this phenomenon highlights the fact that surveillance capitalism mechanisms (in this particular case social media) have become so powerful that they can influence us psychologically to promote their own usage. At this point, it is fair to ask ourselves whether the choice of being subject to surveillance capitalism practices is forced upon us or not, and if so, whether it can be considered a decision taken with complete autonomy. ## Unawareness Another cause for the loss of autonomy that I think we will face due to surveillance capitalism is that most individuals are totally unaware, not just of how mechanisms such as cookies and Google advertisements work, but also of their mere existence. It is quite intuitive how this situation is very favorable for companies such as Google and Facebook: they can continue to operate without having to worry about users growing concerned about practices that undermine their autonomy, and this unawareness has also enabled them to extend the depth to which our behaviors can be predicted and steered. I think that to better understand how this characteristic of surveillance capitalism could be an important factor in future developments, it is useful to analyze a similar problem that we are facing nowadays: climate change. We have seen that even though climate change has been known as a serious danger since the last century, serious action to mitigate its effects has been taken only since the majority of people have learned what it is and have developed an awareness of it. In my opinion, it is the same situation we are facing with surveillance capitalism: as long as common knowledge is not developed, there will be no actions against it. A further reason that prevents the creation of collective consciousness is that the mechanisms of surveillance capitalism are shaped in such a way that they are hardly detectable, even for an individual who knows about their existence, as I am going to explain in the next section.
## Underlying Functioning An important feature is that the mechanisms used to influence our behavior and choices, for example, what is shown in our Facebook feed or which articles Amazon recommends to us, are fed with data that the user generates without being conscious of it (an example could be the area of the smartphone screen that is touched), and that these algorithms work without the users noticing. Information technology has already gone "into the background", meaning that besides being socially embedded, as I explained previously, we use it without noticing it: it has become a natural way of behaving for certain operations. For example, it is quite natural, when hearing about a person we don't know, to automatically look for her/his profile on social media, or to check the reviews of a restaurant on Google or Trip Advisor before going out for dinner. To better understand the level of this underlying functioning, Susser et al. described digital platforms as eyeglasses rather than magnifying glasses, meaning that our attention is focused on the information that technologies provide us (like videos, photos, or directions to follow) instead of the technology itself [10]. Following this realization, we can see how the aim of the big tech companies that profit from surveillance capitalism practices is to develop technologies that tend to become invisible even when they are actually in our hands, like smartphones and smartwatches, or inside our houses, like smart assistants. All of this is another obstacle to the creation of the collective consciousness that will be necessary to regulate surveillance capitalism. ## Limitless Reaching A further aspect that can be underestimated is that the technologies that are used to gather useful data, and through which we "look" at the world, as seen in the previous section, are everywhere around us and in all aspects of everyday life. This gives surveillance capitalism an unlimited scope of action and furthermore makes it more difficult for us to avoid being subjects of both data gathering and manipulation, giving us little or no actual space where we are completely free from being influenced. In this sense, our smartphones represent the first way of gathering data about us: we take them with us all of the time, they are able to know our location, they have access to our audio to detect if we are calling the smart assistant, and they have become the filter through which we interact with the world. The scope of surveillance capitalism does not end with everything that can be gathered from us, but it also extends to what can be derived from the data thanks to big data analyses. An example of future developments is the insurance sector: gathering data from our cars, like the way we drive and where we drive, big data analyses would be able to estimate how likely we are to get into a car accident, and therefore the insurer would be able to require a higher fee even if we always respect the traffic regulations. Other information, like our ethnic group, could be derived from data such as the type of videos watched and could be taken into account for an insurance policy. Another growing concern is that this scope, which already includes our social and private life, is also starting to involve the workplace. An example is the wristbands that Amazon patented for warehouse workers, which via haptic signals can point the wearer to the right products and keep track of the position of the workers' hands.
Allegedly, the main reason for such devices is to save labor time, but it is intuitive to see how this mechanism could be used to steer the hand movements of the employees to avoid "useless gestures" like scratching and to impose a pace of operations. Fortunately, the news ([11], [12]) of this patent attracted a lot of attention, and the bracelets are still only patented and not put into operation (as of today). So far I have defended my claim with different arguments; in the next sections, I will present possible counterarguments, illustrating why they do not discredit my opinion.

## Free influence or expensive autonomy?

A possible reason for supporting surveillance capitalism practices is that they are the reason why, nowadays, we have access to an enormous variety of services and content for free (not taking into account the cost of having an internet connection, which is negligible and has been diminishing since its beginning) or products for a very moderate cost (like smart assistants). Therefore it would seem reasonable to give up our data for such a bargain. Social media enable us to instantly connect with potentially everyone in the world, sharing messages, pictures, and videos without a fee, unlike phone calls and SMSs or MMSs. Information is also accessible everywhere, anytime, and of every type with just a "click" or a "tap", when before it was necessary to go to a library or to buy a newspaper. It is undeniable that being free represents a point in favor at first sight, but are these services actually free? First, I think it is plain to see that data has become a high-value strategic asset: the wealth of big tech companies originates from it, and therefore it is wrong to argue that those services are free; we simply give back something of value that is not money. Furthermore, as seen so far, data is just the starting point, because it is not only used for general purposes but can also be turned against us to undermine our autonomy. An example is online shopping: if we are interested in a new smartphone, not because we actually need it but maybe just out of curiosity, we search on Amazon and we instantly get a list of different items from various brands; we can then compare prices and specifications, and also read the reviews from other users. While we get all of this, Amazon keeps track of our searches and sells this information to third parties, and soon afterwards our Facebook home page will start to contain posts about smartphone sales and we will end up buying a new one. To sum up, we are not giving up mere information about ourselves (and all the data that can be inferred from it), but also pieces of ourselves, which can be our opinions, beliefs, and tastes, just to name a few. These are the reasons why everything that is promoted as free in this field truthfully has a twofold price: our data and our autonomy.

## Giving up autonomy for our own sake

Another argument against limiting surveillance capitalism practices is that the amount of data that is gathered is so huge, and can unlock a knowledge so deep about us, that it is possible to influence us to act for what is perceived as our own good. An example in this sense can be a mobile application that promotes a healthy lifestyle, encouraging exercise and a healthy diet. In this case, manipulation would be carried out with the welfare of the subject as its goal.
To understand why this reasoning is flawed, it is enough to point out that "our own good" must be defined by someone, resulting in a paternalistic model where that someone "takes the wheel" for us because they are supposedly better at it. What is most concerning is that this paradigm could be extended not just to matters related to our physical well-being, but also to all aspects of our daily lives, like a smart device that tracks the energy usage of a household and adjusts the heating and cooling systems accordingly. This would mean giving up the freedom to make the wrong decisions, such as having a habit like smoking cigarettes, or deciding to buy the car we like most rather than the one that would be best for us in terms of consumption, range, etc. It is also essential to remember that all the data and the power of prediction and manipulation are in the hands of private companies, whose goal is mere profit and not our interests. In the two examples cited before, the company owning the mobile application or the smart device would sell the data collected (our habits or data about the size of our house) to third parties, emphasizing the lack of reasons to trust such organizations with our autonomy.

## Conclusion

In this paper, I argued that if we don't take measures against surveillance capitalism, we will be subjected to a reduction of our autonomy. In the first part, I explained why I fear this is going to happen. The first reason is that such mechanisms follow capitalistic models and therefore aim at becoming ever more efficient in influencing and manipulating us. Moreover, they have a pervasive presence and we are not able to escape them. They also work without being noticed, and hardly anyone knows how they actually operate, making it very hard to create the collective consciousness that would be necessary to take action against such mechanisms. In the second part, I discussed possible reasons to be in favor of surveillance capitalism, or against restricting it. The first reason treated is the fact that it is thanks to surveillance capitalism that we have access to so many contents and services for free; but I have shown that it is intrinsically wrong to define those services as "free", since we don't pay with money but with our data, which is sold to third parties and is also used with the aim of influencing us and undermining our autonomy, therefore carrying a twofold price. The second idea was that being influenced could be for our own good, but I have pointed out that this is dangerous, since what is considered good for us would be defined by private companies, whose objective is ultimately profit and certainly not our welfare, therefore raising concerns about willingly giving them the keys to our freedom. In conclusion, it is fair to say that the current situation with regard to surveillance capitalism practices, like massive data gathering, behavioral prediction, and manipulation, already raises a lot of concerns for our autonomy, and it is expected to get worse, following the trend it has followed since its early days, if no action is taken against such techniques. Therefore, to tackle these problems it is necessary to act both on a social and on a policy level; otherwise, the consequences will soon be before our very eyes.
2306.10426
Understanding Certified Training with Interval Bound Propagation
As robustness verification methods are becoming more precise, training certifiably robust neural networks is becoming ever more relevant. To this end, certified training methods compute and then optimize an upper bound on the worst-case loss over a robustness specification. Curiously, training methods based on the imprecise interval bound propagation (IBP) consistently outperform those leveraging more precise bounding methods. Still, we lack an understanding of the mechanisms making IBP so successful. In this work, we thoroughly investigate these mechanisms by leveraging a novel metric measuring the tightness of IBP bounds. We first show theoretically that, for deep linear models, tightness decreases with width and depth at initialization, but improves with IBP training, given sufficient network width. We, then, derive sufficient and necessary conditions on weight matrices for IBP bounds to become exact and demonstrate that these impose strong regularization, explaining the empirically observed trade-off between robustness and accuracy in certified training. Our extensive experimental evaluation validates our theoretical predictions for ReLU networks, including that wider networks improve performance, yielding state-of-the-art results. Interestingly, we observe that while all IBP-based training methods lead to high tightness, this is neither sufficient nor necessary to achieve high certifiable robustness. This hints at the existence of new training methods that do not induce the strong regularization required for tight IBP bounds, leading to improved robustness and standard accuracy.
Yuhao Mao, Mark Niklas Müller, Marc Fischer, Martin Vechev
2023-06-17T21:13:30Z
http://arxiv.org/abs/2306.10426v2
# Understanding Certified Training with Interval Bound Propagation

###### Abstract

As robustness verification methods are becoming more precise, training certifiably robust neural networks is becoming ever more relevant. To this end, certified training methods compute and then optimize an upper bound on the worst-case loss over a robustness specification. Curiously, training methods based on the imprecise interval bound propagation (IBP) consistently outperform those leveraging more precise bounding methods. Still, we lack an understanding of the mechanisms making IBP so successful. In this work, we thoroughly investigate these mechanisms by leveraging a novel metric measuring the tightness of IBP bounds. We first show theoretically that, for deep linear models, tightness decreases with width and depth at initialization, but improves with IBP training, given sufficient network width. We, then, derive sufficient and necessary conditions on weight matrices for IBP bounds to become exact and demonstrate that these impose strong regularization, explaining the empirically observed trade-off between robustness and accuracy in certified training. Our extensive experimental evaluation validates our theoretical predictions for ReLU networks, including that wider networks improve performance, yielding state-of-the-art results. Interestingly, we observe that while all IBP-based training methods lead to high tightness, this is neither sufficient nor necessary to achieve high certifiable robustness. This hints at the existence of new training methods that do not induce the strong regularization required for tight IBP bounds, leading to improved robustness and standard accuracy.

## 1 Introduction

The increasing deployment of deep-learning-based systems in safety-critical domains has made their trustworthiness and especially formal robustness guarantees against adversarial examples [1; 2] an ever more important topic. As significant progress has been made on neural network certification [3; 4], the focus in the field is increasingly shifting to the development of specialized training methods that improve certifiable robustness while minimizing the accompanying reduction in standard accuracy.

**Certified training.** These certified training methods aim to compute and then optimize approximations of the network's worst-case loss over an input region defined by an adversary specification. To this end, most methods compute an over-approximation of the network's reachable set using symbolic bound propagation methods [5; 6; 7]. Surprisingly, training methods based on the least precise bounds, obtained via interval bound propagation (IBP), empirically yield the best performance [8]. Jovanovic et al. [9] investigated this surprising observation theoretically and found that more precise bounding methods induce harder optimization problems. This deeper understanding inspired a new class of unsound certified training methods [10; 11; 12], which leverage IBP bounds to compute _precise_ but not necessarily sound approximations of the worst-case loss to reduce (over)-regularization while retaining well-behaved optimization problems, thus yielding networks with higher standard and certified accuracies. However, despite identifying precise approximations of the worst-case loss as crucial for their success [10; 11], none of these methods develop a theoretical understanding of how IBP training affects IBP bound tightness and network regularization.
**This work.** We take a first step towards building a deeper understanding of the mechanisms underlying IBP training and thereby pave the way for further advances in certified training. To this end, we derive necessary and sufficient conditions on a network's weights under which IBP bounds become tight, a property we call _propagation invariance_, and prove that it implies an extreme regularization, agreeing well with the empirically observed trade-off between certifiable robustness and accuracy [10; 13]. To investigate how close real networks are to full propagation invariance, we introduce the metric _propagation tightness_ as the ratio of optimal and IBP bounds, and show how to efficiently compute it globally for deep linear networks (DLNs) and locally for ReLU networks.

This novel metric enables us to theoretically investigate the effects of model architecture, weight initialization, and training methods on IBP bound tightness for deep linear networks (DLNs). We show that (i) at initialization, tightness decreases with width (polynomially) and depth (exponentially), (ii) tightness is increased by IBP training, and (iii) sufficient width becomes crucial for trained networks.

Conducting an extensive empirical study, we confirm the predictiveness of our theoretical results for deep ReLU networks and observe that: (i) increasing network width but not depth improves state-of-the-art certified accuracy, (ii) IBP training significantly increases tightness, almost to the point of propagation invariance, (iii) unsound IBP-based training methods increase tightness to a smaller degree but yield better performance, and (iv) non-IBP-based training methods do not increase tightness, leading to higher accuracy but worse robustness. These findings suggest that while IBP-based training methods improve robustness by increasing tightness at the cost of standard accuracy, tightness is not generally necessary for certified robustness. This observation in combination with the theoretical and practical insights developed in this work promises to be a key step towards constructing novel and more effective certified training methods.

## 2 Background

Here we provide a background on adversarial and certified robustness. We consider a classifier \(\mathbf{f}\colon\mathbb{R}^{d_{\text{in}}}\mapsto\mathbb{R}^{c}\) predicting a numerical score \(\mathbf{y}:=\mathbf{f}(\mathbf{x})\) per class given an input \(\mathbf{x}\in\mathcal{X}\subseteq\mathbb{R}^{d_{\text{in}}}\).

**Adversarial Robustness** describes the property of a classifier \(\mathbf{f}\) to consistently predict the target class \(t\) for all perturbed inputs \(\mathbf{x}^{\prime}\) in an \(\ell_{p}\)-norm ball \(\mathcal{B}_{p}^{\epsilon_{p}}(\mathbf{x})\) of radius \(\epsilon_{p}\). As we focus on \(\ell_{\infty}\) perturbations in this work, we henceforth drop the subscript \(p\) for notational clarity. More formally, we define _adversarial robustness_ as:
\[\operatorname*{arg\,max}_{j}f(\mathbf{x}^{\prime})_{j}=t,\quad\forall\mathbf{x}^{\prime}\in\mathcal{B}_{p}^{\epsilon_{p}}(\mathbf{x}):=\{\mathbf{x}^{\prime}\in\mathcal{X}\mid\|\mathbf{x}-\mathbf{x}^{\prime}\|_{p}\leq\epsilon_{p}\}. \tag{1}\]

**Neural Network Certification** can be used to formally prove the robustness of a classifier \(\mathbf{f}\) for a given input region \(\mathcal{B}^{\epsilon_{p}}(\mathbf{x})\). Interval bound propagation (IBP) [7; 14] is a simple but popular such certification method.
It is based on propagating the input region \(\mathcal{B}^{\epsilon_{p}}(\mathbf{x})\) through the neural network by computing Box over-approximations (each dimension is described as an interval) of the hidden state after every layer until we reach the output space. One can then simply check whether all points in the resulting over-approximation of the network's reachable set yield the correct classification.

As an example, consider an \(L\)-layer network \(\mathbf{f}=\mathbf{h}_{L}\circ\mathbf{\sigma}\circ\mathbf{h}_{L-1}\circ\ldots\circ\mathbf{h}_{1}\), with linear layers \(\mathbf{h}_{i}\) and ReLU activation functions \(\mathbf{\sigma}\). We first over-approximate the input region \(\mathcal{B}^{\epsilon}(\mathbf{x})\) as a Box with radius \(\mathbf{\delta}^{0}:=\epsilon\) and center \(\dot{\mathbf{x}}^{0}:=\mathbf{x}\), such that the \(i^{\text{th}}\) dimension of the input satisfies \(x_{i}^{0}\in[\underline{x}_{i},\bar{x}_{i}]:=[\dot{x}_{i}^{0}-\delta_{i}^{0},\dot{x}_{i}^{0}+\delta_{i}^{0}]\). Propagating such a Box through the linear layer \(\mathbf{h}_{i}(\mathbf{x}^{i-1})=\mathbf{W}\mathbf{x}^{i-1}+\mathbf{b}=:\mathbf{x}^{i}\), we obtain the output hyperbox with centre \(\dot{\mathbf{x}}^{i}=\mathbf{W}\dot{\mathbf{x}}^{i-1}+\mathbf{b}\) and radius \(\mathbf{\delta}^{i}=|\mathbf{W}|\mathbf{\delta}^{i-1}\), where \(|\cdot|\) denotes the element-wise absolute value. To propagate a Box through the ReLU activation \(\operatorname{ReLU}(\mathbf{x}^{i-1}):=\max(0,\mathbf{x}^{i-1})\), we propagate the lower and upper bound separately, resulting in an output Box with \(\dot{\mathbf{x}}^{i}=\frac{\bar{\mathbf{x}}^{i}+\underline{\mathbf{x}}^{i}}{2}\) and \(\mathbf{\delta}^{i}=\frac{\bar{\mathbf{x}}^{i}-\underline{\mathbf{x}}^{i}}{2}\), where \(\underline{\mathbf{x}}^{i}=\operatorname{ReLU}(\dot{\mathbf{x}}^{i-1}-\mathbf{\delta}^{i-1})\) and \(\bar{\mathbf{x}}^{i}=\operatorname{ReLU}(\dot{\mathbf{x}}^{i-1}+\mathbf{\delta}^{i-1})\). We proceed this way for all layers, obtaining first lower and upper bounds on the network's output \(\mathbf{y}\) and then an upper bound \(\bar{\mathbf{y}}^{\Delta}\) on the logit difference \(y_{i}^{\Delta}:=y_{i}-y_{t}\). Showing that \(\bar{y}_{i}^{\Delta}<0,\ \forall i\neq t\) is then equivalent to proving adversarial robustness on the considered input region.

We illustrate this propagation process for a two-layer network in Figure 1. There, we show the exact propagation of the input region in blue, its optimal box approximation in green, and the IBP approximation as dashed boxes. Note how after the first linear and ReLU layer (third column), the box approximation (both optimal and IBP) already contains many points outside the reachable set, despite it being the smallest hyper-box containing the exact region. These so-called approximation errors accumulate quickly when using IBP, leading to an increasingly imprecise abstraction, as can be seen by comparing the optimal box and IBP approximation after an additional linear layer (rightmost column). To verify that this network classifies all inputs in the considered region correctly, we have to show that the upper bound of the logit difference \(y_{2}-y_{1}\) is less than \(0\). While the concrete maximum \(y_{2}-y_{1}=-0.3\) (black \(\times\)) is indeed less than \(0\), showing that the network is robust, IBP only yields a positive upper bound on \(y_{2}-y_{1}\) (red \(\times\)) and is thus too imprecise to prove it. In contrast, the optimal box yields a precise approximation of the true reachable set, sufficient to prove robustness.
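The recursion just described is short enough to sketch directly. The following minimal NumPy example (not taken from the paper's code) propagates a box through a small fully connected ReLU network; the weights, input, and perturbation radius are arbitrary placeholders.

```python
import numpy as np

def ibp_forward(weights, biases, x, eps):
    """Propagate the box [x - eps, x + eps] layer by layer (IBP).

    Returns the centre and radius of the output box.
    """
    center = x.astype(float)
    radius = np.full_like(center, eps)
    for i, (W, b) in enumerate(zip(weights, biases)):
        # Linear layer: the centre is mapped exactly, the radius uses |W|.
        center = W @ center + b
        radius = np.abs(W) @ radius
        # ReLU on all but the last layer: clamp lower/upper bounds separately.
        if i < len(weights) - 1:
            lower = np.maximum(center - radius, 0.0)
            upper = np.maximum(center + radius, 0.0)
            center, radius = (upper + lower) / 2, (upper - lower) / 2
    return center, radius

# Toy two-layer network with random placeholder weights.
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((8, 4)), rng.standard_normal((2, 8))]
bs = [np.zeros(8), np.zeros(2)]
c, r = ibp_forward(Ws, bs, x=rng.standard_normal(4), eps=0.1)
print("output lower bounds:", c - r)
print("output upper bounds:", c + r)
```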
**Training for Robustness** is required to obtain (certifiably) robust neural networks. For a data distribution \((\mathbf{x},t)\sim\mathcal{D}\), standard training optimizes the network parametrization \(\mathbf{\theta}\) to minimize the expected cross-entropy loss:
\[\mathbf{\theta}^{*}=\operatorname*{arg\,min}_{\mathbf{\theta}}\ \mathbb{E}_{(\mathbf{x},t)\sim\mathcal{D}}\big{[}\mathcal{L}_{\text{CE}}(\mathbf{f}_{\mathbf{\theta}}(\mathbf{x}),t)\big{]}. \tag{2}\]
To train for robustness, we, instead, aim to minimize the expected _worst-case loss_ for a given robustness specification, leading to a min-max optimization problem:
\[\mathbf{\theta}^{*}=\operatorname*{arg\,min}_{\mathbf{\theta}}\ \mathbb{E}_{(\mathbf{x},t)\sim\mathcal{D}}\Big{[}\max_{\mathbf{x}^{\prime}\in\mathcal{B}^{\epsilon}(\mathbf{x})}\mathcal{L}_{\text{CE}}(\mathbf{f}_{\mathbf{\theta}}(\mathbf{x}^{\prime}),t)\Big{]}. \tag{3}\]
As computing the worst-case loss by solving the inner maximization problem is generally intractable, it is commonly under- or over-approximated, yielding adversarial and certified training, respectively.

**Adversarial Training** optimizes a lower bound on the inner optimization objective in Equation (3). It first computes concrete examples approximately maximizing the loss term and then optimizes the network parameters for these examples. While networks trained this way typically exhibit good empirical robustness, they remain hard to formally certify and are sometimes vulnerable to stronger attacks [15, 16].

**Certified Training** typically optimizes an upper bound on the inner maximization objective in Equation (3). This robust cross-entropy loss \(\mathcal{L}_{\text{CE,rob}}(\mathcal{B}^{\epsilon}(\mathbf{x}),t)=\mathcal{L}_{\text{CE}}(\overline{\mathbf{y}}^{\Delta},t)\) is obtained by first computing an upper bound on the logit differences with a bound propagation method as described above and then plugging it into the standard cross-entropy loss. Surprisingly, the imprecise IBP bounds [7, 8, 14] consistently yield better performance than methods based on tighter approximations [17, 18, 19]. Jovanovic et al. [9] trace this back to the optimization problems induced by the more precise methods becoming intractable to solve. However, IBP-trained networks are heavily regularized, making them amenable to certification but severely reducing their standard accuracies. To alleviate the resulting robustness-accuracy trade-off, recent certified training methods combine IBP and adversarial training by using IBP bounds only for regularization (IBP-R [12]), by only propagating small, adversarially selected regions (SABR [10]), or by using IBP bounds only for the first layers and PGD bounds for the remainder of the network (TAPS [11]). In light of this surprising dominance of IBP-based training methods, understanding the regularization it induces and its effect on tightness promises to be a key step towards developing novel and more effective certified training methods.

Figure 1: Comparison of exact, optimal box, and IBP propagation through a one-layer network. We show the concrete points maximizing the logit difference as a black \(\times\) and the corresponding relaxation as a red \(\times\).
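Continuing the previous sketch (and reusing its `ibp_forward`, `Ws`, and `bs`), the robust cross-entropy loss can be approximated by evaluating the standard cross-entropy on a worst-case logit vector read off the IBP output box. This is a simplified, hypothetical variant: a careful implementation would bound the logit differences \(y_{i}-y_{t}\) by folding them into the last linear layer, which is tighter.

```python
import numpy as np

def ibp_robust_loss(weights, biases, x, eps, target):
    """Cross-entropy on a worst-case logit vector derived from the IBP output box."""
    center, radius = ibp_forward(weights, biases, x, eps)
    lower, upper = center - radius, center + radius
    worst = upper.copy()           # every wrong class at its upper bound ...
    worst[target] = lower[target]  # ... the target class at its lower bound
    shifted = worst - worst.max()  # numerically stable log-softmax
    return -(shifted[target] - np.log(np.exp(shifted).sum()))

print("robust loss:", ibp_robust_loss(Ws, bs, x=np.ones(4), eps=0.05, target=0))
```

Since softmax is shift-invariant, this is equivalent to evaluating the cross-entropy on the boxed upper bounds of the logit differences, as in \(\mathcal{L}_{\text{CE,rob}}\) above, only coarser.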
## 3 Understanding IBP Training

In this section, we theoretically investigate the relationship between the box bounds obtained by layer-wise propagation, _i.e._, IBP, and optimal propagation. We illustrate both in Figure 1 and note that the latter are sufficient for exact robustness certification (see Lemma 3.1). We first formally define layer-wise (IBP) and optimal box propagation, before deriving sufficient and necessary conditions under which the resulting bounds become identical. We then show that these conditions induce strong regularization, motivating us to introduce propagation tightness \(\tau\) as a relaxed measure of their precision, that can be efficiently computed globally for deep linear (DLN) and locally for ReLU networks. Based on these results, we first investigate how tightness depends on network architecture at initialization, before showing that it improves with IBP training. Finally, we demonstrate that even linear dimensionality reduction is inherently imprecise for both optimal and IBP propagation, making sufficient network width key for tight box bounds. We defer all proofs to App. B.

**Setting.** We focus our theoretical analysis on deep linear networks (DLNs), _i.e._, \(\mathbf{f}(x)=\Pi_{i=1}^{L}\mathbf{W}^{(i)}\mathbf{x}\), popular for theoretical discussion of neural networks [20; 21; 22]. While they are linear functions, they perfectly describe ReLU networks for infinitesimal perturbation magnitudes, retaining their layer-wise structure and joint non-convexity in the weights of different layers, making them a popular analysis tool [23].

### Layer-wise and Optimal Box Propagation

We define the optimal hyper-box approximation \(\mathrm{Box}^{*}(\mathbf{f},\mathcal{B}^{\mathbf{\epsilon}}(\mathbf{x}))\) as the smallest hyper-box \([\underline{\mathbf{z}},\overline{\mathbf{z}}]\) such that it contains the image \(\mathbf{f}(\mathbf{x}^{\prime})\) of all points \(\mathbf{x}^{\prime}\) in \(\mathcal{B}^{\mathbf{\epsilon}}(\mathbf{x})\), _i.e._, \(\mathbf{f}(\mathbf{x}^{\prime})\in[\underline{\mathbf{z}},\overline{\mathbf{z}}],\forall\mathbf{x}^{\prime}\in\mathcal{B}^{\mathbf{\epsilon}}(\mathbf{x})\). Similarly, we define the layer-wise box approximation as the result of applying the optimal approximation to every layer individually, in a recursive fashion \(\mathrm{Box}^{\dagger}(\mathbf{f},\mathcal{B}^{\mathbf{\epsilon}}(\mathbf{x})):=\mathrm{Box}^{*}(\mathbf{W}^{(L)},\mathrm{Box}^{*}(\cdots,\mathrm{Box}^{*}(\mathbf{W}^{(1)},\mathcal{B}^{\mathbf{\epsilon}}(\mathbf{x}))))\). We write their upper- and lower-bounds as \([\underline{\mathbf{z}}^{*},\overline{\mathbf{z}}^{*}]\) and \([\underline{\mathbf{z}}^{\dagger},\overline{\mathbf{z}}^{\dagger}]\), respectively. We note that optimal box bounds on the logit differences \(\mathbf{y}^{\Delta}:=\mathbf{y}-y_{t}\) (instead of on the logits \(\mathbf{y}\) as shown in Figure 1) are sufficient for exact robustness verification:

**Lemma 3.1**.: _Any \(\mathcal{C}^{0}\) continuous classifier \(\mathbf{f}\), computing the logit difference \(y_{i}^{\Delta}:=y_{i}-y_{t},\forall i\neq t\), is robustly correct on \(\mathcal{B}^{\mathbf{\epsilon}}(\mathbf{x})\) if and only if \(\mathrm{Box}^{*}(\mathbf{f},\mathcal{B}^{\mathbf{\epsilon}}(\mathbf{x}))\subseteq\mathbb{R}_{<0}^{c-1}\), i.e., \(\bar{y}_{i}^{\Delta}<0,\forall i\neq t\)._

For DLNs, we can efficiently compute both optimal \(\mathrm{Box}^{*}\) and layer-wise \(\mathrm{Box}^{\dagger}\) box bounds as follows:

**Theorem 3.2** (Box Propagation).: _For an \(L\)-layer DLN \(\mathbf{f}=\Pi_{k=1}^{L}\mathbf{W}^{(k)}\), we obtain the box centres \(\dot{\mathbf{z}}^{*}=\dot{\mathbf{z}}^{\dagger}=\mathbf{f}(\mathbf{x})\) and the radii_
\[\frac{\overline{\mathbf{z}}^{*}-\underline{\mathbf{z}}^{*}}{2}=\left|\Pi_{k=1}^{L}\mathbf{W}^{(k)}\right|\mathbf{\epsilon},\quad\text{and}\quad\frac{\overline{\mathbf{z}}^{\dagger}-\underline{\mathbf{z}}^{\dagger}}{2}=\left(\Pi_{k=1}^{L}\left|\mathbf{W}^{(k)}\right|\right)\mathbf{\epsilon}. \tag{4}\]

Comparing the radius computation of the optimal and layer-wise approximations, we observe that the main difference lies in where the element-wise absolute value \(|\cdot|\) of the weight matrix is taken.
For the optimal box, we first multiply all weight matrices before taking the absolute value \(|\Pi_{k=1}^{L}\mathbf{W}^{(k)}|\), thus allowing for cancellations of terms of opposite signs. For the layer-wise approximation, in contrast, we first take the absolute value of each weight matrix before multiplying them together \(\Pi_{k=1}^{L}|\mathbf{W}^{(k)}|\), thereby losing all relational information between variables. Let us now investigate under which conditions layer-wise and optimal bounds become identical.

### Propagation Invariance and IBP Bound Tightness

**Propagation Invariance.** We call a network (globally) _propagation invariant_ (PI) if the layer-wise and optimal box over-approximations are identical for every input box. Clearly, non-negative weight matrices lead to propagation invariant networks [24], as the absolute value in Theorem 3.2 loses its effect. However, non-negative weights significantly reduce network expressiveness and performance [25], raising the question of whether they are a necessary condition. Indeed, we show that they are not necessary, by deriving the following sufficient _and_ necessary condition for a two-layer DLN:

**Lemma 3.3** (Propagation Invariance).: _A two-layer DLN \(\mathbf{f}=\mathbf{W}^{(2)}\mathbf{W}^{(1)}\) is propagation invariant if and only if for every fixed \((i,j)\), we have \(\left|\sum_{k}W_{i,k}^{(2)}\cdot W_{k,j}^{(1)}\right|=\sum_{k}|W_{i,k}^{(2)}\cdot W_{k,j}^{(1)}|\), i.e., either \(W_{i,k}^{(2)}\cdot W_{k,j}^{(1)}\geq 0\) for all \(k\) or \(W_{i,k}^{(2)}\cdot W_{k,j}^{(1)}\leq 0\) for all \(k\)._

**Conditions for Propagation Invariance.** To see how strict the condition described by Lemma 3.3 is, we observe that propagation invariance requires the sign of the last element in any two-by-two block in \(\mathbf{W}^{(2)}\mathbf{W}^{(1)}\) to be determined by the signs of the other three elements:

**Theorem 3.4** (Non-Propagation Invariance).: _Assume \(\exists i,i^{\prime},j,j^{\prime}\), such that \(W_{\cdot,j}^{(1)}\), \(W_{\cdot,j^{\prime}}^{(1)}\), \(W_{i,\cdot}^{(2)}\) and \(W_{i^{\prime},\cdot}^{(2)}\), are all non-zero. If \((\mathbf{W}^{(2)}\mathbf{W}^{(1)})_{i,j}\cdot(\mathbf{W}^{(2)}\mathbf{W}^{(1)})_{i,j^{\prime}}\cdot(\mathbf{W}^{(2)}\mathbf{W}^{(1)})_{i^{\prime},j}\cdot(\mathbf{W}^{(2)}\mathbf{W}^{(1)})_{i^{\prime},j^{\prime}}<0\), then \(\mathbf{f}=\mathbf{W}^{(2)}\mathbf{W}^{(1)}\) is not propagation invariant._

To obtain a propagation invariant network with the weights \(\mathbf{W}^{(2)},\mathbf{W}^{(1)}\in\mathbb{R}^{d\times d}\), we can thus only choose \(2d-1\) (e.g., one row and one column) of the \(d^{2}\) signs freely (see Corollary A.1 in App. A). The statements of Lemma 3.3 and Theorem 3.4 naturally extend to DLNs with more than two layers \(L>2\). However, the conditions within Theorem 3.4 become increasingly complex and strict as more and more terms need to yield the same sign. Thus, we focus our analysis on \(L=2\) for clarity.

**IBP Bound Tightness.** To analyze the tightness of IBP bounds for networks that do not satisfy the strict conditions for propagation invariance, we relax it to introduce _propagation tightness_ as the ratio between the optimal and layer-wise box radius, simply referred to as _tightness_ in this paper.
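Both the exactness condition of Lemma 3.3 and the radius ratio just introduced (formalized in Definition 3.5 below) are straightforward to evaluate for a small two-layer DLN. The sketch below uses arbitrary random matrices; it is illustrative only and not the evaluation code used in the paper.

```python
import numpy as np

def is_propagation_invariant(W1, W2):
    """Lemma 3.3 check for f = W2 @ W1: for every (i, j),
    all products W2[i, k] * W1[k, j] over k must share the same sign."""
    prods = np.einsum('ik,kj->ijk', W2, W1)
    return bool(((prods >= 0).all(-1) | (prods <= 0).all(-1)).all())

def tightness_per_dim(W1, W2):
    """Ratio of optimal to layer-wise box radius per output dimension,
    for a unit input box."""
    radius_opt = np.abs(W2 @ W1).sum(axis=1)             # |W2 W1| @ 1
    radius_ibp = (np.abs(W2) @ np.abs(W1)).sum(axis=1)   # |W2| |W1| @ 1
    return radius_opt / radius_ibp

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((6, 4)), rng.standard_normal((3, 6))
print(is_propagation_invariant(W1, W2))                  # typically False
print(tightness_per_dim(W1, W2))                         # typically below 1
print(is_propagation_invariant(np.abs(W1), np.abs(W2)))  # non-negative weights: True
```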
**Definition 3.5**.: _Given a DLN \(\mathbf{f}\), we define the global propagation tightness \(\mathbf{\tau}\) as the ratio between optimal \(\operatorname{Box}^{*}(\mathbf{f},\mathcal{B}^{\epsilon}(\mathbf{x}))\) and layer-wise \(\operatorname{Box}^{\dagger}(\mathbf{f},\mathcal{B}^{\epsilon}(\mathbf{x}))\) approximation radius, i.e., \(\tau_{i}=\frac{\overline{z}_{i}^{*}-\underline{z}_{i}^{*}}{\overline{z}_{i}^{\dagger}-\underline{z}_{i}^{\dagger}}\)._

Intuitively, tightness measures how much smaller the exact dimension-wise bounds \(\operatorname{Box}^{*}\) are, compared to the layer-wise approximation \(\operatorname{Box}^{\dagger}\), thus quantifying the gap between IBP certified and true adversarial robustness. When tightness equals \(1\), the network is propagation invariant and can be certified exactly with IBP; when tightness is close to \(0\), IBP bounds become arbitrarily imprecise.

**ReLU Networks.** The nonlinearity of ReLU networks leads to locally varying tightness and makes the computation of optimal box bounds intractable. However, for infinitesimal perturbation magnitudes, the activation patterns of ReLU networks remain stable, making them locally linear. We thus introduce a local version of tightness around concrete inputs, which we will later use to confirm the applicability of our theoretical results on DLNs to ReLU networks.

**Definition 3.6**.: _For an \(L\)-layer ReLU network with weight matrices \(\mathbf{W}^{(k)}\) and activation pattern \(\mathbf{d}^{(k)}(\mathbf{x})=\mathbbm{1}_{\mathbf{z}^{(k-1)}>0}\in\{0,1\}^{d_{k}}\) (\(1\) for active and \(0\) for inactive ReLUs) depending on the input \(\mathbf{x}\), we define its local tightness as_
\[\mathbf{\tau}=\frac{\frac{d}{d\epsilon}(\overline{\mathbf{z}}^{*}-\underline{\mathbf{z}}^{*})\big{|}_{\epsilon=0}}{\frac{d}{d\epsilon}(\overline{\mathbf{z}}^{\dagger}-\underline{\mathbf{z}}^{\dagger})\big{|}_{\epsilon=0}}=\frac{\left|\Pi_{k=1}^{L}\operatorname{diag}(\mathbf{d}^{(k)})\mathbf{W}^{(k)}\right|\mathbf{1}}{\left(\Pi_{k=1}^{L}\operatorname{diag}(\mathbf{d}^{(k)})\left|\mathbf{W}^{(k)}\right|\right)\mathbf{1}}.\]

### Tightness at Initialization

We first investigate the (expected) tightness \(\tau=\frac{\mathbb{E}_{\mathcal{D}_{\mathbf{\theta}}}(\overline{z}^{*}-\underline{z}^{*})}{\mathbb{E}_{\mathcal{D}_{\mathbf{\theta}}}(\overline{z}^{\dagger}-\underline{z}^{\dagger})}\) (independent of the dimension due to symmetry) at initialization, _i.e._, w.r.t. a weight distribution \(\mathcal{D}_{\mathbf{\theta}}\). Let us consider a two-layer DLN at initialization, _i.e._, with i.i.d. weights following a zero-mean Gaussian distribution \(\mathcal{N}(0,\sigma^{2})\) with an arbitrary but fixed variance \(\sigma^{2}\) [26; 27].

**Lemma 3.7** (Initialization Tightness w.r.t. Width).: _Given a 2-layer DLN with weight matrices \(\mathbf{W}^{(1)}\in\mathbb{R}^{d_{1}\times d_{0}}\), \(\mathbf{W}^{(2)}\in\mathbb{R}^{d_{2}\times d_{1}}\) with i.i.d.
entries from \(\mathcal{N}(0,\sigma_{1}^{2})\) and \(\mathcal{N}(0,\sigma_{2}^{2})\) (together denoted as \(\mathbf{\theta}\)), we obtain the expected tightness \(\tau(d_{1})=\frac{\mathbb{E}_{\mathbf{\theta}}(\overline{z}^{*}-\underline{z}^{*})}{\mathbb{E}_{\mathbf{\theta}}(\overline{z}^{\dagger}-\underline{z}^{\dagger})}=\frac{\sqrt{\pi}\,\Gamma\left(\frac{1}{2}(d_{1}+1)\right)}{d_{1}\,\Gamma\left(\frac{1}{2}d_{1}\right)}\in\Theta(\frac{1}{\sqrt{d_{1}}})\)._

Even for just two linear layers, the tightness at initialization decreases quickly with internal width (\(\Theta(\frac{1}{\sqrt{d_{1}}})\)), e.g., by a factor of \(\tau(500)\approx 0.056\) for the penultimate layer of the popular CNN7 [7; 17]. It, further, follows directly that tightness will decrease exponentially w.r.t. network depth.

**Corollary 3.8** (Initialization Tightness w.r.t. Depth).: _The expected tightness of an \(L\)-layer DLN \(\mathbf{f}\) with minimum internal dimension \(d_{\text{min}}\) is at most \(\tau\leq\tau(d_{\text{min}})^{\lfloor\frac{L}{2}\rfloor}\) at initialization._

Note that this result is independent of the variance \(\sigma_{1}^{2},\sigma_{2}^{2}\), chosen for weight initialization. Thus, tightness at initialization can not be increased by scaling \(\sigma^{2}\), as proposed by Shi et al. [8] to achieve constant box radius over network depth.

### IBP Training Increases Tightness

As we have seen that networks are initialized with low tightness, we now investigate the effect of IBP training and show that it indeed increases tightness. To this end, we again consider a DLN with layer-wise propagation matrix \(\mathbf{W}^{\dagger}=\Pi_{i=1}^{L}|\mathbf{W}^{(i)}|\) and optimal propagation matrix \(\mathbf{W}^{\ast}=|\Pi_{i=1}^{L}\mathbf{W}^{(i)}|\), obtaining the expected risk for IBP training as \(R(\mathbf{\epsilon})=\mathbb{E}_{\mathbf{x},y}\mathcal{L}(\operatorname{Box}^{\dagger}(\mathbf{f},\mathcal{B}^{\mathbf{\epsilon}}(\mathbf{x})),y)\).

**Theorem 3.9** (IBP Training Increases Tightness).: _Assume homogenous tightness, i.e., \(\mathbf{W}^{\ast}=\tau\mathbf{W}^{\dagger}\), and \(\frac{\|\nabla_{\mathbf{\theta}}\mathbf{W}^{\ast}_{ij}\|_{2}}{\mathbf{W}^{\ast}_{ij}}\leq\frac{1}{2}\frac{\|\nabla_{\mathbf{\theta}}\mathbf{W}^{\dagger}_{ij}\|_{2}}{\mathbf{W}^{\ast}_{ij}}\) for all \(i,j\), then, the gradient difference between the IBP and standard loss is aligned with an increase in tightness, i.e., \(\langle\nabla_{\theta}(R(\mathbf{\epsilon})-R(0)),\nabla_{\theta}\tau\rangle\leq 0\) for all \(\mathbf{\epsilon}>0\)._

### Network Width and Tightness after Training

Many high-dimensional computer vision datasets were shown to possess a small intrinsic data dimensionality [28]. Thus, we study the reconstruction loss of a linear embedding into a low-dimensional subspace as a proxy for the network performance. We show in this setting, that tightness decreases with the width \(w\) of a bottleneck layer as long as it is smaller than the data-dimensionality \(d\), _i.e._, \(w\ll d\). Further, while reconstruction becomes lossless for points as soon as the width \(w\) reaches the intrinsic dimension \(k\) of the data, even optimal box propagation requires a width of at least the original data dimension \(d\) to achieve loss-less reconstruction.
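Before turning to that argument, the width dependence of Lemma 3.7 lends itself to a quick Monte-Carlo check. In the sketch below, the output width of 8 and the 200 samples per width are arbitrary choices; the closed-form prediction is the Gamma-function expression from Lemma 3.7.

```python
import numpy as np
from math import gamma, pi, sqrt

# Monte-Carlo check of Lemma 3.7: expected tightness of a random
# two-layer DLN as a function of the internal width d1.
rng = np.random.default_rng(1)
for d1 in (4, 16, 64, 256):
    opt, ibp = [], []
    for _ in range(200):
        W1 = rng.standard_normal((d1, 8))    # d0 = 8 (arbitrary)
        W2 = rng.standard_normal((8, d1))    # d2 = 8 (arbitrary)
        opt.append(np.abs(W2 @ W1).sum())            # optimal radii, unit input box
        ibp.append((np.abs(W2) @ np.abs(W1)).sum())  # layer-wise (IBP) radii
    measured = np.mean(opt) / np.mean(ibp)
    predicted = sqrt(pi) * gamma((d1 + 1) / 2) / (d1 * gamma(d1 / 2))
    print(f"d1={d1:4d}  measured {measured:.3f}  predicted {predicted:.3f}")
```

With these settings the measured ratio tracks the predicted \(\Theta(1/\sqrt{d_{1}})\) decay closely.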
Let us consider a \(k\)-dimensional data distribution, linearly embedded into a \(d\) dimensional space with \(d\gg k\), _i.e._, the data matrix \(X\) has a low-rank eigendecomposition \(\operatorname{Var}(X)=U\Lambda U^{\top}\) where only the first \(k\) eigenvalues are non-zero. We know that in this setting, the optimal reconstruction \(\hat{X}=U_{k}U_{k}^{\top}X\) of the original data is exact for rank \(k\) matrices and obtained by choosing \(U_{k}\) as the \(k\) columns of \(U\) associated with the non-zero eigenvalues. Interestingly, this is not the case even for optimal box propagation:

**Theorem 3.10** (Box Reconstruction Error).: _Consider the linear embedding and reconstruction \(\hat{\mathbf{x}}=U_{k}U_{k}^{\top}\mathbf{x}\) of a \(d\) dimensional data distribution \(\mathbf{x}\sim\mathcal{X}\) into a \(k\) dimensional space with \(d\gg k\) and eigenmatrices \(U\) drawn uniformly at random from the orthogonal group. Propagating the input box \(\mathcal{B}^{\mathbf{\epsilon}}(\mathbf{x})\) layer-wise and optimally, thus, yields \(\mathcal{B}^{\mathbf{\delta}^{\dagger}}(\hat{\mathbf{x}})\), and \(\mathcal{B}^{\mathbf{\delta}^{*}}(\hat{\mathbf{x}})\), respectively. Then, we have, (i) \(\mathbb{E}(\delta_{i}^{\dagger}/\epsilon)=ck\in\Theta(k)\) for a positive constant \(c\) depending solely on \(d\) and \(c\to\frac{2}{\pi}\approx 0.64\) for large \(d\); and (ii) \(\mathbb{E}(\delta^{\ast}_{i}/\epsilon)\to\frac{2}{\sqrt{\pi}}\frac{\Gamma(\frac{1}{2}(k+1))}{\Gamma(\frac{1}{2}k)}\in\Theta(\sqrt{k})\)._

Intuitively, Theorem 3.10 implies that while input points can be embedded into and reconstructed from a \(k\) dimensional space losslessly, box propagation will yield a box growth of \(\Theta(\sqrt{k})\) for optimal and \(\Theta(k)\) for layer-wise propagation. However, as soon as we have \(k=d\), we can choose \(U_{k}\) to be an identity matrix, thus obtaining lossless "reconstruction", even for layer-wise propagation. This highlights that sufficient network width is crucial for box propagation, even in the linear setting.

## 4 Empirical Evaluation Analysis

In this section, we conduct an extensive empirical study of IBP-based certified training, leveraging our novel tightness metric and specifically its local variant (see Definition 3.6) to gain a deeper understanding of these methods and confirm that our theoretical analysis of DLNs carries over to ReLU networks. For certification, we use MN-BaB [4], a state-of-the-art [29; 30] neural network verifier, combining the branch-and-bound paradigm [31] with multi-neuron constraints [32; 33]. We defer a detailed discussion of the experimental setup (hardware, runtimes, and hyper-parameter choices) to App. C.

### 4.1 Confirming Interactions of Network Architecture and Tightness

We first confirm the predictiveness of our theoretical results regarding the effect of network width and depth on tightness at initialization and after training for ReLU networks. In Figure 2 we visualize tightness at initialization, depending on the network width and depth for DLNs and ReLU networks. As predicted by Lemma 3.7 and Corollary 3.8, tightness decreases polynomially with width (Figure 2, left) and exponentially with depth (Figure 2, right) for DLNs. We observe that both of these trends carry over perfectly to ReLU networks.
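For reference, local tightness (Definition 3.6) only requires the activation pattern at a given input, after which the ReLU network behaves linearly. A minimal sketch for a small fully connected network with placeholder random weights is shown below; measurements on the actual convolutional models would proceed analogously.

```python
import numpy as np

def local_tightness(weights, x):
    """Local tightness (Definition 3.6) of a fully connected ReLU net at input x.

    `weights` lists W^(1), ..., W^(L); biases are omitted for brevity.
    """
    masked, z = [], x
    for k, W in enumerate(weights):
        z = W @ z
        if k < len(weights) - 1:
            d = (z > 0).astype(float)        # 1 for active, 0 for inactive ReLUs
            masked.append(np.diag(d) @ W)    # diag(d^(k)) W^(k)
            z = d * z
        else:
            masked.append(W)                 # no ReLU after the last layer
    ones = np.ones(x.shape[0])
    opt = np.abs(np.linalg.multi_dot(masked[::-1])) @ ones
    ibp = np.linalg.multi_dot([np.abs(M) for M in masked[::-1]]) @ ones
    return (opt / ibp).mean()

rng = np.random.default_rng(3)
Ws = [rng.standard_normal((32, 10)),
      rng.standard_normal((32, 32)),
      rng.standard_normal((5, 32))]
print(local_tightness(Ws, rng.standard_normal(10)))
```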
Before turning to networks trained on real datasets, we confirm our predictions on the inherent hardness of linear reconstruction in Figure 3, where we plot the ratio of recovered and original box radius for optimal and IBP propagation, given a bottleneck layer of width \(w\) and data with intrinsic dimensionality \(k=w\). As predicted by Theorem 3.10, IBP propagation leads to linear growth while optimal propagation yields sublinear growth.

Next, we study the interaction of network architecture and IBP training. To this end, we train networks with 3 to 13 layers on CIFAR-10 for \(\epsilon=2/255\), visualizing results in Figure 4 (top). To measure the regularizing effect of propagation tightness, we report IBP-certified accuracy on the training set as a measure of the goodness of fit. Generally, we would expect that increasing network depth increases capacity, thus reducing the robust training loss and increasing training set IBP-certified accuracy. However, we only observe such an increase in accuracy until a depth of 7 layers before accuracy starts to drop. We can explain this by analyzing the corresponding tightness. As expected, tightness is high for shallow networks but decreases quickly with depth, reaching a minimum for 7 layers. From there, tightness increases again, at the cost of stronger regularization, explaining the drop in accuracy. This is in line with the popularity of the 7-layer CNN7 in the certified training literature [7; 8; 10].

Continuing our study of architecture effects, we train networks with 0.5 to 16 times the width of a standard CNN7 using IBP training and visualize the resulting IBP certified train set accuracy and tightness in Figure 4 (bottom). We observe that increasing capacity via width instead of depth yields a monotone although diminishing increase in accuracy as tightness decreases gradually. The different trends for width and depth increases agree well with our theoretical results, predicting that sufficient network width is essential for trained networks (see Theorem 3.10). It can further be explained by the observation that increasing depth, at initialization, reduces tightness exponentially, while increasing width only reduces it polynomially. Intuitively, this suggests that less regularization is required to offset the tightness penalty of increasing network width rather than depth. As these experiments indicate that optimal architectures for IBP-based training methods have only moderate depth but large width, we train wider versions of the popular CNN7 using SABR [10]. Indeed, we observe that this improves upon the state-of-the-art certified accuracy in several settings, shown in Table 1. While these improvements might seem marginal, they are of similar magnitude as multiple years of progress on certified training methods.

### Certified Training Increases Tightness

To assess how different training methods affect tightness, we train a CNN3 on CIFAR-10 for a wide range of perturbation magnitudes (\(\epsilon\in[10^{-5},5\cdot 10^{-2}]\)) using IBP, PGD, and SABR training, illustrating the resulting tightness and accuracies in Figure 5. Recall that, while IBP computes and optimizes a sound over-approximation of the worst-case loss over the whole input region, SABR propagates only a small adversarially selected subregion with IBP, thus yielding an unsound but generally more precise approximation of the worst-case loss. PGD, in contrast, does not use IBP at all but rather trains with samples that approximately maximize the worst-case loss.
We observe that training with either IBP-based method increases tightness with perturbation magnitude until networks become almost propagation invariant for \(\epsilon=0.05\) with a tightness of \(\tau=0.98\). This confirms our theoretical results, showing that IBP training increases tightness with \(\epsilon\) (see Theorem 3.9). In contrast, training with PGD barely influences tightness. We observe that the regularization required to yield such high tightness comes at the cost of severely reduced standard accuracies, with drops being more pronounced the more a method increases tightness. However, while this reduced standard accuracy translates to smaller certified accuracies for very small perturbation magnitudes (\(\epsilon\leq 5\cdot 10^{-3}\)), the increased tightness improves certifiability to yield higher certified accuracies for larger perturbation magnitudes (\(\epsilon\geq 10^{-2}\)).

We further investigate this dependency between (certified) robustness and tightness by varying the subselection ratio \(\lambda\) when training with SABR. Recall that \(\lambda\) controls the size of the propagated regions for a fixed perturbation magnitude \(\epsilon\), with \(\lambda=1\) recovering IBP and \(\lambda=0\) PGD. Plotting results in Figure 6, we observe that while decreasing \(\lambda\) severely reduces tightness and thus regularization, it leads to increasing not only natural but also certified accuracies until tightness falls below \(0.5\) at \(\lambda=0.4\). This again highlights that reducing tightness while maintaining sufficient certifiability is a promising path to new certified training methods. We observe similar trends when varying the regularization level for other unsound certified training methods, discussed in App. D.1.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Dataset & \(\epsilon\) & Width & Accuracy & Certified \\ \hline \multirow{2}{*}{MNIST} & \multirow{2}{*}{0.3} & \multirow{2}{*}{4x} & **98.82** & 93.38 \\ & & & 98.48 & **93.85** \\ \hline \multirow{2}{*}{CIFAR-10} & \multirow{2}{*}{\(\frac{2}{255}\)} & \multirow{2}{*}{2x} & 79.24 & 62.84 \\ & & & **79.89** & **63.28** \\ \hline \hline \end{tabular} \end{table} Table 1: Certified and standard accuracy of SABR-trained models with the original (from Müller et al. [30]) and wider width.

Figure 5: Standard and certified accuracy and tightness for CNN3 on CIFAR-10 depending on training method and perturbation magnitude \(\epsilon\) used for training and evaluation.

Figure 6: Accuracies and tightness of a CNN7 for CIFAR-10 \(\epsilon=\frac{2}{255}\) depending on regularization strength with SABR.

We now investigate whether non-IBP-based certified training methods effect a similar increase in tightness to IBP-based methods. To this end, we consider COLT [18], which combines precise Zonotope bounds with adversarial training. However, as COLT does not scale to the popular CNN7, we compare with it on the 4-layer CNN architecture used by Balunovic and Vechev [18]. Comparing tightness and certified and standard accuracies in Table 2, we observe that the ordering of tightness and accuracy is exactly inverted, thus highlighting the large accuracy penalty associated with the strong regularization for tightness. While COLT only effects a minimal increase in tightness compared to SABR or IBP, it still yields networks an order of magnitude tighter than PGD, suggesting that slightly increased tightness might be desirable for certified robustness.
This is further corroborated by the observation that while COLT reaches the highest certified accuracies at small perturbation magnitudes, the more heavily regularizing SABR performs better at larger perturbation magnitudes.

## 5 Related Work

**Certified Training.** Sound certified training methods compute and optimize an over-approximation of the worst-case loss obtained via bound propagation methods [17; 19; 34]. A particularly efficient and scalable method is IBP [7; 14], for which Shi et al. [8] propose a custom initialization scheme, significantly shortening training schedules, and Lin et al. [24] propose a non-negativity regularization, marginally improving certified accuracies. More recent methods use unsound but more precise approximations. COLT [18] combines precise Zonotope [5] bounds with adversarial training but is severely limited in scalability. IBP-R [12] combines an IBP-based regularization with adversarial training at larger perturbation magnitudes. SABR [10] applies IBP to small but carefully selected regions in the adversary specification to reduce regularization. TAPS [11], similar to COLT, combines IBP with adversarial training. This recent dominance of IBP-based methods motivates our work to develop a deeper understanding of how IBP training affects network robustness.

**Theoretical Analysis of IBP.** The capability of IBP has been studied theoretically in the past. Baader et al. [35] first show that continuous functions can be approximated by IBP-certifiable ReLU networks up to arbitrary precision. Wang et al. [36] extend this result to more activation functions and prove that constructing such networks is strictly harder than NP-complete problems assuming coNP \(\notin\) NP. Wang et al. [37] study the convergence of IBP-training and find that it converges to a global optimum with high probability for infinite width. Mirman et al. [38] derive a negative result, showing that even optimal box bounds can fail on simple datasets. However, none of these works study the tightness of IBP bounds, _i.e._, their relationship to optimal interval approximations. Motivated by recent certified training methods identifying this approximation precision as crucial [11; 30], we bridge this gap by deriving sufficient and necessary conditions for propagation invariance, introducing the relaxed measure of propagation tightness and studying how it interacts with network architecture and IBP training, both theoretically and empirically.

## 6 Conclusion

Motivated by the recent and surprising dominance of IBP-based certified training methods, we investigated its underlying mechanisms and trade-offs. By quantifying the relationship between IBP and optimal Box bounds with our novel propagation tightness metric we were able to predict the influence of architecture choices on deep linear networks at initialization and after training. Our experimental results confirm the applicability of these theoretical results to deep ReLU networks and show that wider networks improve the performance of state-of-the-art methods, while deeper networks do not. Finally, we show that IBP-based certified training methods, in contrast to non-IBP-based methods, significantly increase propagation tightness at the cost of strong regularization. We believe that this insight and the novel metric of propagation tightness will constitute a key step towards developing novel and more effective certified training methods.
\begin{table} \begin{tabular}{c c c c c} \hline \hline Method & \(\epsilon\) & Accuracy & Tightness & Certified \\ \hline \multirow{2}{*}{PGD} & 2/255 & 81.2 & 0.001 & - \\ & 8/255 & 69.3 & 0.007 & - \\ \hline \multirow{2}{*}{COLT} & 2/255 & 78.4 & 0.009 & 60.7 \\ & 8/255 & 51.7 & 0.057 & 26.7 \\ \hline \multirow{2}{*}{SABR} & 2/255 & 75.6 & 0.182 & 57.7 \\ & 8/255 & 48.2 & 0.950 & 31.2 \\ \hline \multirow{2}{*}{IBP} & 2/255 & 63.0 & 0.803 & 51.3 \\ & 8/255 & 42.2 & 0.977 & 31.0 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of IBP- and non-IBP-based training methods.
2308.06930
Vertical-supercooling-controlled interfacial instability for a spreading liquid film
Thermal effect is essential to regulate the interfacial instabilities for diverse technology applications. Here we report the fingering instability at the propagation front for a spreading liquid film subjected to the supercooling at the vertical direction. We find the onset timescale of hydrodynamic instability is strongly correlated with that of the vertical solidification process. This correlation is further validated in a non-uniform geometry, demonstrating the capability of controlling fingering instability by structure design. We attribute the identified interfacial instability to a pronounced thermo-viscous effect, since the rapidly increased viscosity of propagation front undergoing solidification can significantly enhance the mobility contrast locally in the vicinity of the spreading front, consequently producing the instability analogous to viscous fingering. This work offers another valuable dimension by gating the vertical temperature to exploit the interfacial stabilities and steer liquid flow, consequently shedding light on the microfluidic cooling for electronics, and the advanced functional fibers and fabrics.
Li Chen, Feng Wang, Yingrui Wang, Peng Huo, Yuqi Li, Xi Gu, Man Hu, Daosheng Deng
2023-08-14T04:17:48Z
http://arxiv.org/abs/2308.06930v2
# Vertical-supercooling-controlled interfacial instability for a spreading liquid film

###### Abstract

Thermal effect is essential to regulate the interfacial instabilities for diverse technology applications. Here we report the fingering instability at the propagation front for a spreading liquid film subjected to the supercooling at the vertical direction. We find the onset timescale of hydrodynamic instability is strongly correlated with that of the vertical solidification process. This correlation is further validated in a non-uniform geometry, demonstrating the capability of controlling fingering instability by structure design. Moreover, based on the experimental observations, we propose a physical mechanism by considering thermal Marangoni effect at the spreading front, and the predicted wavelength from the linear stability analysis agrees with experiments excellently. This work offers another valuable dimension by gating the vertical temperature to exploit the interfacial stabilities and steer liquid flow, consequently shedding light on the microfluidic cooling for electronics, and the advanced functional fibers and fabrics.

Interfacial instabilities and the correlated intriguing patterns are not only ubiquitous in nature, such as the snowflakes and patterns of bacterial colonies [1; 2], but also essential for technology applications, such as the coating processing, microfluidics, and Lab-on-a-Chip [3; 4; 5]. Particularly, thermal effect plays an indispensable role for the interfacial instabilities [6; 7]. For example, during the unidirectional freezing, Mullins-Sekerka instability occurs at the moving planar liquid-solid interface [8]; and under a unidirectional temperature gradient, the thermocapillary instability appears [9]. At the microscopic scale, the heat transfer becomes even more pronounced for interfacial stabilities, significantly influencing the fluid dynamics and the subsequently produced patterns [10; 11]. In spite of these extensive studies, heat transfer and temperature gradient are mainly confined within the liquid film.

Recently, by surveying an impacted droplet on a cold substrate [12], diverse patterns have been clearly revealed, such as the crack pattern changing from a 2D fragmentation to a hierarchical fracture [13], the self-peeling of impacting droplets [14], and morphology evolution from conical tips to toroidal shapes [15]. Fingering growth emerges in a binary droplet, which freezes from the outside prior to impacting on the cold surface [16]. At a microscopic level, the nucleation process and the associated solidification or crystallization during the droplet spreading and cooling have been revealed to be responsible for the eventual morphology and the arrest of the contact line [17; 18; 19; 20]. However, the freezing of impacted droplets and the correlated spreading process are extremely complicated, because of the inherent complex interplay of the rapid impacting hydrodynamics, the non-uniform liquid thickness, the transient heat transfer, and the intricate solidification.

In this Letter, different from the aforementioned frozen droplets, we explore the spreading of a liquid film with uniform thickness by injecting a viscous jet at an elevated temperature into a Hele-Shaw cell while being subjected to the supercooling along the walls. This setup allows us to thoroughly investigate the effect of vertical heat transfer, as another degree of freedom to precisely control interfacial instability, for the first time to the best of our knowledge.
We observe fingering instability at the spreading front, and identify that the onset of the instability is primarily determined by the solidification process in the vertical direction. We propose a physical mechanism for this instability, and the linear stability analysis agrees with experiments excellently. Moreover, we demonstrate that this vertical-supercooling-induced fingering instability can be further engineered simply through the structural design of a non-uniform geometry.

_Fingering instability_ As shown in Fig. 1a, paraffin wax is melted into a viscous state at an elevated temperature (\(T_{l}=95\)\({}^{\circ}\)C) above its melting point (\(T_{m}=69\)\({}^{\circ}\)C). Subsequently, this viscous liquid is injected into a Hele-Shaw cell consisting of two glass plates with a gap thickness \(h=400\)\(\mu\)m through a central inlet (\(2r_{in}=2.5\) mm). The flow rate of injection is varied by a syringe pump (\(Q=0.25-4\) ml/s), and the cooling process is controlled or gated by the wall temperature of the Hele-Shaw cell (\(T_{w}\)).

Figure 1: Spatiotemporal evolution of fingering instability for an injected liquid at an elevated temperature into a Hele-Shaw cell subjected to the side-wall supercooling. (a) Schematic diagram of the experimental setup; (b, c) Snapshot for a stable interface at a higher \(T_{w}\approx T_{m}\) (\(T_{w}=67\)\({}^{\circ}\)C) and fingering instability at a lower \(T_{w}<T_{m}\) (\(T_{w}=25\)\({}^{\circ}\)C) for a paraffin wax film in a glass cell (\(Q=4\) ml/s, \(h=400\)\(\mu\)m); (d) Temporal evolution of the azimuthal undulations along the radius \(r(\theta)\) for the interface in row (c).

At a higher \(T_{w}=67\)\({}^{\circ}\)C (\(T_{w}\approx T_{m}\)), when the injected liquid spreads within the Hele-Shaw cell, the moving front is stable, and the circular symmetry of the interface is preserved (Fig. 1b). However, at a lower \(T_{w}=25\)\({}^{\circ}\)C (\(T_{w}<T_{m}\)) (Fig. 1c), the injected viscous liquid accordingly undergoes the supercooling process, and an interfacial front instability is identified with the pronounced fingering patterns along the radial direction (Supplementary Movie 1). As shown by the spatiotemporal snapshots, initially the liquid film spreads outward uniformly (\(t=305,430\) ms), and the propagation front forms a nearly perfect circular interface. Once its radius is beyond a critical value (\(r>r_{c}\)), small notches appear at the moving front and signify the onset of the fingering instability (\(t=555\) ms). As the fluid is continuously injected, the front progressively propagates radially, and the fingers extend further outward (\(t=680,805\) ms). This fingering instability is quantitatively characterized by the temporal evolution of the azimuthal undulations along the interface [\(r(\theta)\)], as shown by Fig. 1d. Note that the notches at the interface are pinning solidification points during the subsequent growing process, and the wavelength \(\lambda\) between the nearest pinning solidification points is associated with the circumferential length and the number of the observed fingers (\(N_{f}\)), \(\lambda_{exp}\sim 2\pi r_{c}/N_{f}\).

_Onset of the fingering instability_ A quantitative understanding of the onset of the instability is obtained by checking the critical radius (\(r_{c}\)) for various physical parameters (such as \(Q,h\)) (Supplementary Movie 2, Supplementary Note 3).
(Fig. 2a) indicates the turning point for the onset of the instability (a critical time \(t_{c}\) or radius \(r_{c}\)), after which the fingering instability further develops (the length of typical fingers is denoted by \(r_{f}\)). Prior to the instability, mass conservation, \(Q=2\pi rh\,\mathrm{d}r/\mathrm{d}t\), gives the time dependence of the radius, \[r\sim(Qt/\pi h)^{1/2}\sim t^{1/2}. \tag{1}\] As shown in Fig. 2b, the experimental data collapse onto a master curve with a \(1/2\) scaling law, \(r/r_{c}\sim(t/t_{c})^{1/2}\). Because of the geometric confinement in the Hele-Shaw cell, the injected viscous liquid at a high temperature (\(T_{l}\)) is subjected to conductive cooling in the vertical direction by the bottom and top plates at a low temperature (\(T_{w}\)), which likely causes the aforementioned instability. By balancing the diffusive heat flux with the solidification rate [13; 15], the timescale of the solidification process in the vertical direction is obtained, \[t_{s}=\rho\mathcal{L}(h/2)^{2}/2\kappa\Delta T\sim h^{2}, \tag{2}\] where \(\Delta T=T_{m}-T_{w}\) is the supercooling and \(\mathcal{L}\), \(\rho\), \(\kappa\) are the latent heat, density and thermal conductivity (see Table 1 for the material parameters). Indeed, the experimental data for the onset time \(t_{c}\) agree excellently with the theoretical solidification timescale \(t_{s}\) in the vertical direction (Fig. 2c), \[t_{c}=t_{s}. \tag{3}\] Additionally, the scaling law holds for various gap thicknesses as well (\(t_{c}=t_{s}\sim h^{2}\)). Further, from Eqs. (1)-(3), the critical radius \(r_{c}\) is obtained, \[r_{c}\sim(Qh)^{1/2}(\rho\mathcal{L}/8\pi\kappa\Delta T)^{1/2}\sim Re^{1/2}, \tag{4}\] where \(Re=\rho v_{in}h/\mu\) is the Reynolds number (\(v_{in}=Q/\pi r_{in}^{2}\), with \(\mu\) the viscosity). This \(r_{c}\sim Re^{1/2}\) scaling is consistent with the experimental observation for paraffin wax (Fig. 2d). For other materials with different physical and thermal properties (Table 1 in the Supplementary Material and Supplementary Note 4), the \(1/2\) scaling law still holds (Fig. 2d) for eicosane (C\({}_{20}\)H\({}_{42}\)), which has thermal properties similar to those of the paraffin wax. But for materials with markedly different thermal properties, such as the liquid-metal Bi-Sn alloy with a larger thermal conductivity and smaller latent heat, \(r_{c}\) becomes small enough to be comparable with \(r_{in}\), and hence the instability appears immediately after the injection of the liquid metal.
Figure 2: Scaling analysis for the onset of instability. (a) Temporal evolution of \(r\) for different flow rates \(Q\); (b) prior to the onset of instability, the spreading radius follows a \(1/2\) power law in time, \(r/r_{c}\sim(t/t_{c})^{1/2}\); (c) at the onset of the instability, the experimental data for \(t_{c}\) agree excellently with the theory of Eqs. (2) and (3), \(t_{c}=t_{s}\sim h^{2}\); (d) critical radius \(r_{c}\sim(Re)^{1/2}\).
_Geometry with a nonuniform gap_ Since the timescale of heat transfer in the vertical direction determines the onset of the instability [\(t_{c}\sim h^{2}\) in Eq. (2)], we further demonstrate directly the effect of the thickness on the resulting instability in a nonuniform geometry. In this wedge-shaped Hele-Shaw cell (Fig. 3a) [21], the constant depth gradient is extremely small, of order \(O(10^{-3})\) [22]. When the molten paraffin wax is injected into this cell at \(T_{w}=25\)\({}^{\circ}\)C and \(Q=4\) ml/s, the high-speed images show the pattern evolution (Fig. 3b and Supplementary Movie 3), and the interfacial profile \(r(t,\theta)\) depends on time and on the azimuthal angle (Fig. 3c). Initially the spreading profile remains nearly circular (\(t=160\) ms). Afterwards, in the left-side region with the narrower gap, the propagation front is more susceptible to the instability because of the rapid cooling, and the fingering instability becomes pronounced beyond a critical time (\(t_{c}\sim 344\) ms). In contrast, in the right-side region with the wider gap, the interface still remains stable and smooth (\(t=560\) ms). In this way, an asymmetric pattern is formed, _i.e._, stability persists at the wider gap while the instability is triggered at the narrower gap (\(t=720\) ms). This geometric effect on the pattern stability is further analyzed by a simple scaling argument. By combining Eq. (2) with the local gap height at the interface, described by \(h(r,\theta)=h_{in}+r(t,\theta)\cos\theta\sin\varphi\) [\(h_{in}=(h_{n}+h_{w})/2\)], the local solidification timescale at the interface is obtained, \[t_{s}(r,\theta)=\rho\mathcal{L}h^{2}/8\kappa\Delta T\sim h(r,\theta)^{2}. \tag{5}\] We establish a stability diagram for the interface at the propagation front by defining the dimensionless time \(\tilde{t}(\theta)=t/t_{s}\) (Fig. 3d). The interface is stable for \(\tilde{t}<2\), while the fingering instability sets in for \(\tilde{t}>2\). The experimental data agree reasonably well with this scaling analysis, which is built on the vertical heat transfer through the solidification timescale during the supercooling process. _Temperature visualization_ In order to unravel the role of the supercooling process in this fingering instability, the spatiotemporal evolution of the temperature of the viscous liquid is directly recorded by a thermal camera (Fig. 4a, Supplementary Movie 4, Supplementary Note 2). Fig. 4b presents the temperature distribution along the radial direction of the marked finger (the black arrow in the snapshot at 296 ms). Adjacent to the inlet (\(r\approx r_{in}\)), the injected liquid is immediately cooled down by contact with the sidewall; in the spreading region (\(r_{in}<r<r_{f}\)), the temperature decreases slightly with the radius. In the vicinity of the propagation front (\(r\approx r_{f}\)), the temperature drops dramatically, generating a pronounced temperature gradient (\(\mathrm{d}T/\mathrm{d}r\)) of around tens of K/mm (Fig. 4c). The surface-tension gradient arising from this temperature gradient may play an essential role in the emergence of the front instability. Balancing this thermal Marangoni force against the viscous force, \(\mathrm{d}\gamma/\mathrm{d}r=(\mathrm{d}\gamma/\mathrm{d}T)(\mathrm{d}T/\mathrm{d}r)\sim\mu v_{\text{M}}/h\), yields a characteristic Marangoni velocity of \(v_{\text{M}}\approx(\mathrm{d}\gamma/\mathrm{d}r)h/\mu\). From the physical parameters of paraffin wax [23; 24; 25], with \(|\mathrm{d}\gamma/\mathrm{d}T|\approx 0.01\text{ mN}/(\text{m}\cdot\text{K})\) in a linear approximation and \(\mu=1.8\) mPa\(\cdot\)s, a value of \(v_{\text{M}}\approx 20\text{ mm}/\text{s}\) is estimated theoretically.
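To make these scaling estimates concrete, the short Python sketch below evaluates Eqs. (2) and (4) and the Marangoni velocity \(v_{\text{M}}\) for the quoted experimental settings. The material constants (\(\rho\), \(\mathcal{L}\), \(\kappa\)) and the front temperature gradient used here are illustrative assumptions only, not the values of Table 1 in the Supplementary Material.

```python
# Back-of-envelope evaluation of Eqs. (2) and (4) and the Marangoni velocity.
# The material properties (rho_w, L_heat, kappa) and the temperature gradient
# dTdr below are assumed, typical paraffin-wax values for illustration only;
# they are not taken from Table 1 of the paper.

import math

h  = 400e-6        # gap thickness [m]
Q  = 4e-6          # injection rate, 4 ml/s [m^3/s]
dT = 69.0 - 25.0   # supercooling T_m - T_w [K]

rho_w  = 900.0     # assumed density [kg/m^3]
L_heat = 2.0e5     # assumed latent heat [J/kg]
kappa  = 0.2       # assumed thermal conductivity [W/(m K)]
mu     = 1.8e-3    # viscosity, 1.8 mPa s [Pa s]
dgdT   = 1.0e-5    # |d(gamma)/dT|, 0.01 mN/(m K) [N/(m K)]
dTdr   = 1.0e4     # assumed front temperature gradient, ~10 K/mm [K/m]

t_s = rho_w * L_heat * (h / 2) ** 2 / (2 * kappa * dT)                 # Eq. (2)
r_c = math.sqrt(Q * h * rho_w * L_heat / (8 * math.pi * kappa * dT))   # Eq. (4)
v_M = dgdT * dTdr * h / mu                    # thermal Marangoni velocity

print(f"t_s ~ {t_s*1e3:.0f} ms,  r_c ~ {r_c*1e3:.0f} mm,  v_M ~ {v_M*1e3:.0f} mm/s")
```

With these assumed values the sketch returns timescales of a few hundred milliseconds and a Marangoni velocity of order tens of mm/s, consistent with the orders of magnitude reported above.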
From mass conservation, the velocity of the propagation front is \(v_{f}=Q/2\pi rh\sim r^{-1}\), and the experimental data agree with the \(-1\) power law as expected (Fig. 4d). However, once \(v_{f}\) drops to \(v_{\text{M}}\), the instability is indeed triggered, and this is reproducible for various \(Q\) and \(h\) in experiments. _Physical mechanism_ Based on this viewpoint, we propose a physical mechanism for the observed radial fingering instability that takes the thermal Marangoni force into account. As sketched in Fig. 5a, during stage I, the onset of the instability (\(t<t_{c}\)), \(v_{f}\) gradually slows down, the thermal Marangoni force overcomes the viscous force, and the instability of the liquid-air interface is triggered. Concurrently, the protrusions associated with the instability lead to the formation of pinning solidification points under the local supercooling, and the spacing between the pinning solidification points is correlated with the instability wavelength (\(\lambda_{exp}\)) explored hereafter. Subsequently, during stage II, the finger growth (\(t>t_{c}\)), as flow is continuously injected into the cell, the liquid spreads further outward past the pinning solidification points. In this fashion, the fingers keep extending radially. In order to validate the proposed mechanism, numerical simulation of the incompressible Navier-Stokes equations is performed (Supplementary Note 5 and Supplementary Movie 5). In stage I (Fig. 5b), the liquid-gas interface gradually undergoes the circumferential instability driven by the gradient of surface tension. Indeed, the Marangoni-induced flow during the onset of the instability is visible in the magnified view of the velocity fields. In stage II (Fig. 5c), the fingers grow continuously and extend radially between the adjacent pinning solidification points formed during the onset of the instability.
Figure 3: Geometry effect on the fingering instability. (a) Schematic diagram of the non-uniform gap thickness with a small constant gradient (\(\varphi\ll 1\)); (b) interface susceptible to the instability on the narrower side, while maintaining stability on the wider side; (c) \(r\) at the interface as a function of azimuth at different times; (d) emergence of the fingering instability once \(\tilde{t}>2\) (shaded in green), consistent with the experimental observation in (b). Orange dots in (b-d) mark the region of interfacial instability.
Figure 4: Spatiotemporal evolution of temperature. (a) Thermal images of the viscous liquid of paraffin wax at \(T_{w}=24\) \({}^{\circ}\)C, \(Q=0.25\) ml/s, \(h=270\)\(\mu\)m; (b) temperature profile along the radial direction of the black arrow in the last snapshot of (a); (c) the pronounced temperature gradient in the vicinity of the propagation front; (d) for various experiments, the onset of the fingering instability occurs once the front velocity \(v_{f}\) drops to \(v_{\text{M}}\).
_Linear stability analysis_ In order to quantitatively characterize the underlying mechanism, we perform a linear stability analysis highlighting the thermal Marangoni effect, based on depth-averaged 2D governing equations (Supplementary Note 7) [26; 21].
The Marangoni flow induced by the surface-tension gradient must be expressed explicitly in the dimensionless boundary condition at the liquid-air interface, _i.e._, the tangential surface stresses are balanced by the viscous stresses associated with the fluid motion [26]. The linear stability analysis then yields the dispersion relation for the instability growth rate (\(\sigma\)) as a function of the dimensionless wavenumber (\(k=2\pi r_{in}/\lambda\), with \(r_{in}\) the characteristic length scale), \[\frac{k}{\left(k+\sqrt{k^{2}+\chi^{2}}\right)^{2}}\cdot\frac{1}{\sqrt{k^{2}+\sigma Pe}}=-\frac{2}{Ma\tilde{T}^{g\sigma}}-\frac{2}{f_{p}RePe}. \tag{6}\] Here \(Ma=-(d\gamma/dT)\cdot[r_{in}(T_{l}-T_{n})/\alpha\mu]\) is the Marangoni number, \(Pe=r_{in}U_{0}/\alpha\) is the Peclet number (\(\alpha\) is the thermal diffusivity), \(\tilde{T}^{g\sigma}\) is the temperature gradient across the interface, and \(\chi\) and \(f_{p}\) are constant coefficients (Supplementary Note 7). Here the radial temperature gradient is obtained from the experimental observation in Fig. 4c and is used in the Marangoni number and the linear stability analysis (Supplementary Note 6). Qualitatively, since the left-hand side of equation (6) is positive and the last term \(-2/f_{p}RePe\) on the right-hand side is negative, the dispersion relation can only hold if \(\tilde{T}^{g\sigma}<0\), _i.e._, if the temperature decreases along the direction of front propagation at the interface; otherwise the interface is stable. Correspondingly, a critical magnitude of the temperature gradient (\(\tilde{T}^{g\sigma}_{cr}\)) is required for the instability; setting the left-hand side of equation (6) to zero gives \(|\tilde{T}^{g\sigma}_{cr}|=f_{p}RePe/Ma\). Quantitatively, by numerically solving equation (6), a phase diagram for the interfacial stability is obtained, in which the growth rate \(\sigma(k,\tilde{T}^{g\sigma})\) depends on the wavenumber and on the temperature gradient at the interface (Fig. 5d). A blue line of marginal stability (\(\sigma=0\)) divides the diagram into two regimes: the top-left regime in grey, where the interface is stable (\(\sigma<0\)), and the bottom-right regime, where the interface is unstable (the colorbar gives the value of \(\sigma>0\)). The fastest growth rate, \(\sigma_{fast}=\max\,[\sigma(k,\tilde{T}^{g\sigma})]\), is indicated by a red line. Specifically, a dispersion relation \(\sigma(k)\) (Fig. 5e) is shown for a typical temperature gradient, such as \(|\tilde{T}^{g\sigma}|=32\) K/mm in our experiments; the wavenumbers \(k_{fast}\) and \(k_{cr}\) correspond to the growth rates \(\sigma_{fast}\) and \(\sigma=0\), respectively. Additionally, based on the Stefan model for the interfacial temperature [27; 28; 15] (Supplementary Note 8) and the scaling law for \(r_{c}\) in Eq. (4), the scaling of the interfacial temperature gradient at the onset of instability is obtained, \[\left.\tilde{T}^{g\sigma}\right|_{r=r_{c}}=\left.\frac{\mathrm{d}T}{\mathrm{d}r}\right|_{r=r_{c}}\sim r_{c}^{-1}\sim(Qh)^{-1/2}. \tag{7}\] By combining Eq. (7) with the temperature gradient measured at \(Q=0.25\) ml/s (Fig. 4c), the relationship between the temperature gradient and the flow rate, \(\left.\tilde{T}^{g\sigma}\right|_{r=r_{c}}(Q)\), is obtained.
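As a rough numerical illustration of how Eq. (6) produces the curves sketched in Fig. 5d-e, the following Python snippet evaluates the growth rate over a range of wavenumbers. Because \(\sigma\) enters Eq. (6) only through \(\sqrt{k^{2}+\sigma Pe}\), the relation can be rearranged explicitly for \(\sigma\); all dimensionless groups below are placeholder values, chosen only so that an unstable band exists, and they are not the values used for the experiments.

```python
# Exploratory evaluation of the dispersion relation, Eq. (6).  Rearranged:
#   sigma(k) = ((A(k)/R)^2 - k^2) / Pe,   A(k) = k / (k + sqrt(k^2 + chi^2))^2,
#   R = -2/(Ma*Tg) - 2/(f_p*Re*Pe),   which requires R > 0 and Tg < 0.
# All parameter values below are illustrative assumptions, not fitted values.

import numpy as np

chi, f_p = 1.0, 1.0            # assumed constant coefficients
Ma, Pe, Re = 50.0, 10.0, 1.0   # assumed Marangoni, Peclet, Reynolds numbers
Tg = -0.1                      # assumed dimensionless interfacial temperature gradient

R = -2.0 / (Ma * Tg) - 2.0 / (f_p * Re * Pe)
assert R > 0, "no real growth rate: interface predicted stable for these values"

k = np.linspace(1e-3, 2.0, 4000)
A = k / (k + np.sqrt(k**2 + chi**2))**2
sigma = ((A / R)**2 - k**2) / Pe

k_fast = k[np.argmax(sigma)]                     # fastest-growing mode
unstable = k[sigma > 0]
k_cr = unstable.max() if unstable.size else 0.0  # marginal stability, sigma = 0

print(f"k_fast ~ {k_fast:.2f}, sigma_fast ~ {sigma.max():.3f}, k_cr ~ {k_cr:.2f}")
# the corresponding wavelengths follow from lambda = 2*pi*r_in / k
```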
Then, from the dispersion relation of Eq. (6) together with the calculated \(\left.\tilde{T}^{g\sigma}\right|_{r=r_{c}}(Q)\), the linear stability analysis further predicts the wavelengths \(\lambda_{cr}\) and \(\lambda_{fast}\), corresponding to \(\sigma=0\) and \(\sigma_{fast}\) (Fig. 5f). Indeed, as shown in Fig. 5f, the observed wavelength \(\lambda_{exp}\) is greater than \(\lambda_{cr}\), as expected. Furthermore, \(\lambda_{exp}\) is quantitatively comparable with that of the fastest-growing mode, falling into the range (\(\lambda_{fast},3\lambda_{fast}\)). Considering how simplified the model is for such complicated coupled processes of hydrodynamics, heat transfer and phase change, this remarkable agreement between theory and experiment clearly validates the proposed mechanism based on the thermal Marangoni instability, which is controlled by the cooling in the vertical direction. After the onset of the instability in stage I, the subsequent growth of the fingers during stage II can be so complicated (owing to solidification, heat transfer, thermal expansion and other factors) that the interface may be deformed into irregular morphologies; this is beyond the scope of this work.
Figure 5: Fingering instability mechanism and linear stability analysis. (a) Schematic of the onset of instability (stage I) and subsequently the fingers growing between the pinning solidification points (stage II); (b, c) numerical simulation showing snapshots of stage I (b) and stage II (c); (d) phase diagram of the growth rate \(\sigma\) as a function of \(k\) and \(\mathrm{d}T/\mathrm{d}r\), with blue and red curves for \(\sigma=0\) and \(\sigma_{fast}\); (e) a typical dispersion relation, with \(k_{fast}\) and \(k_{cr}\) corresponding to \(\sigma_{fast}\) and \(\sigma=0\); (f) wavelength \(\lambda\) as a function of the injection flow rate \(Q\): lines for the model, squares for experiments, and error bars from five reproducible experiments.
_Outlook_ This work presents a strategy of vertical supercooling combined with structure design to control and steer liquid flow and the associated instabilities, with potential technological implications such as microfluidic cooling for semiconductor devices [29]. In particular, this supercooling of the molten jet and liquid film is inherent to the state-of-the-art thermal drawing process, which involves heating followed by a cooling stage, and it consequently sheds light on the diverse fluid instabilities that can be exploited to fabricate versatile functional structures for advanced functional fibres and fabrics [30; 31; 32; 33]. D. D. is grateful to Prof. Howard Stone and Prof. Martin Bazant for their insightful and supportive comments on this work. We also acknowledge Prof. K. L. Chong for providing reference [26] for the linear stability analysis. This work is supported by funding from the National Program in China and startup funding from Fudan University.
2305.09427
The distribution of the maximum protection number in simply generated trees
The protection number of a vertex $v$ in a tree is the length of the shortest path from $v$ to any leaf contained in the maximal subtree where $v$ is the root. In this paper, we determine the distribution of the maximum protection number of a vertex in simply generated trees, thereby refining a recent result of Devroye, Goh and Zhao. Two different cases can be observed: if the given family of trees allows vertices of outdegree $1$, then the maximum protection number is on average logarithmic in the tree size, with a discrete double-exponential limiting distribution. If no such vertices are allowed, the maximum protection number is doubly logarithmic in the tree size and concentrated on at most two values. These results are obtained by studying the singular behaviour of the generating functions of trees with bounded protection number. While a general distributional result by Prodinger and Wagner can be used in the first case, we prove a variant of that result in the second case.
Clemens Heuberger, Sarah J. Selkirk, Stephan Wagner
2023-05-16T13:34:46Z
http://arxiv.org/abs/2305.09427v1
# The distribution of the maximum protection number in simply generated trees ###### Abstract. The protection number of a vertex \(v\) in a tree is the length of the shortest path from \(v\) to any leaf contained in the maximal subtree where \(v\) is the root. In this paper, we determine the distribution of the maximum protection number of a vertex in simply generated trees, thereby refining a recent result of Devroye, Goh and Zhao. Two different cases can be observed: if the given family of trees allows vertices of outdegree \(1\), then the maximum protection number is on average logarithmic in the tree size, with a discrete double-exponential limiting distribution. If no such vertices are allowed, the maximum protection number is doubly logarithmic in the tree size and concentrated on at most two values. These results are obtained by studying the singular behaviour of the generating functions of trees with bounded protection number. While a general distributional result by Prodinger and Wagner can be used in the first case, we prove a variant of that result in the second case. Key words and phrases:Protection number, simply generated trees, generating functions 2010 Mathematics Subject Classification: 05C05; 05A15, 05A16, 05C80 The research of C. Heuberger and S. J. Selkirk is partially supported by the Austrian Science Fund (FWF): P 28466-N35, _Analytic Combinatorics: Digits, Automata and Trees_ and Austrian Science Fund (FWF): DOC 78. S. Wagner is supported by the Knut and Alice Wallenberg Foundation, grant KAW 2017.0112, and the Swedish research council (VR), grant 2022-04030. Next, a random tree is constructed by starting with a root that produces offspring according to the given distribution. In each subsequent step, all vertices of the current generation also produce offspring according to the same distribution, all independent of each other and independent of all previous generations. The process stops if none of the vertices of a generation have children. If the weights in the construction of a simply generated family are taken to be the corresponding probabilities of the offspring distribution, then one verifies easily that the distribution of a random \(n\)-vertex tree from that family (with probabilities proportional to the weights) is the same as that of the Bienayme-Galton-Watson process, conditioned on the event that the final tree has \(n\) vertices. Conversely, even if the weight sequence of a simply generated family does not represent a probability measure, it is often possible to determine an equivalent probability measure that produces the same random tree distribution. For example, random plane trees correspond to a geometric distribution while random rooted labelled trees correspond to a Poisson distribution. We refer to [7] and [16] for more background on simply generated trees and Bienayme-Galton-Watson trees. ### Protection numbers in trees Protection numbers in trees measure the distance to the nearest leaf successor. Formally, this can be expressed as follows. _Definition_ (Protection number).: The _protection number of a vertex \(v\)_ is the length of the shortest path from \(v\) to any leaf contained in the maximal subtree where \(v\) is the root. Alternatively, the protection number can be defined recursively: a leaf has protection number \(0\), the parent of a leaf has protection number \(1\), and generally the protection number of an interior vertex is the minimum of the protection numbers of its children plus \(1\). 
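The recursive description above translates directly into code. The following minimal Python sketch computes protection numbers for a rooted tree encoded as nested tuples of subtrees; the encoding and the example tree are chosen purely for illustration (the example is not the tree of Figure 1).

```python
# Minimal sketch of the recursive definition: a rooted tree is encoded as a
# nested tuple of its subtrees (a leaf is the empty tuple).  The example tree
# below is an arbitrary small plane tree, not the tree shown in Figure 1.

def protection_number(tree):
    """Length of the shortest path from the root of `tree` to a leaf below it."""
    if not tree:                                   # a leaf
        return 0
    return 1 + min(protection_number(child) for child in tree)

def max_protection_number(tree):
    """Largest protection number over all vertices of `tree`."""
    return max([protection_number(tree)] +
               [max_protection_number(child) for child in tree])

example = ((), ((), ()), (((), ()),))   # a root with three subtrees
print(protection_number(example))       # 1, since the root has a leaf child
print(max_protection_number(example))   # 2, attained at the root's third child
```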
In this paper, we will be particularly interested in the _maximum protection number_ of a tree, which is the largest protection number among all vertices. Figure 1 shows an example of a tree along with the protection numbers of all its vertices.
Figure 1. A plane tree with \(15\) vertices and the protection number of each vertex indicated. The maximum protection number of this tree is \(2\).
The study of protection numbers in trees began with Cheon and Shapiro [4], who considered the average number of vertices with protection number at least \(2\) (called \(2\)-protected vertices) in ordered trees. Several other authors contributed to knowledge in this direction by studying the number of \(2\)-protected vertices in various types of trees: \(k\)-ary trees [20]; digital search trees [8]; binary search trees [19]; ternary search trees [14]; tries and suffix trees; recursive trees [18]; and general simply generated trees, from which some previously known cases were also obtained [6]. Generalising the concept of a \(2\)-protected vertex, \(k\)-protected vertices (vertices with protection number at least \(k\)) have also become a recent topic of interest. Devroye and Janson [6] proved convergence of the probability that a random vertex in a random simply generated tree has protection number \(k\). Copenhaver gave a closed formula for the number of \(k\)-protected vertices in all unlabelled rooted plane trees on \(n\) vertices along with expected values, and these results were extended by Heuberger and Prodinger [13]. A study of \(k\)-protected vertices in binary search trees was done by Bona [2] and Bona and Pittel [3]. Holmgren and Janson [15] proved general limit theorems for fringe subtrees and related tree functionals, applications of which include a normal limit law for the number of \(k\)-protected vertices in binary search trees and random recursive trees. Moreover, the protection number of the root has also been studied for several families of trees. In [13], Heuberger and Prodinger derived the probability that the root of a plane tree is \(k\)-protected, while the probability distribution of the protection number of the root of recursive trees was determined by Golebiewski and Klimczak in [12]. The protection number of the root in simply generated trees, Polya trees, and unlabelled non-plane binary trees was studied by Gittenberger, Golebiewski, Larcher, and Sulkowska in [11], where they also obtained results relating to the protection number of a randomly chosen vertex. Very recently, Devroye, Goh and Zhao [5] studied the maximum protection number in Bienayme-Galton-Watson trees, referring to it as the leaf-height. Specifically, they showed the following: if \(X_{n}\) is the maximum protection number in a Bienayme-Galton-Watson tree conditioned on having \(n\) vertices, then \(\frac{X_{n}}{\log n}\) converges in probability to a constant if there is a positive probability that a vertex has exactly one child. If this is not the case, then \(\frac{X_{n}}{\log\log n}\) converges in probability to a constant. Our aim in this paper is to refine the result of Devroye, Goh and Zhao by providing the full limiting distribution of the maximum protection number. For our analytic approach, the framework of simply generated trees is more natural than the probabilistic setting of Bienayme-Galton-Watson trees, though as mentioned earlier the two are largely equivalent.
### Statement of results As was already observed by Devroye, Goh and Zhao in [5], there are two fundamentally different cases to be considered, depending on whether or not vertices of outdegree \(1\) are allowed (have nonzero weight) in the given family of simply generated trees. If such vertices can occur, then we find that the maximum protection number of a random tree with \(n\) vertices is on average of order \(\log n\), with a discrete double-exponential distribution in the limit. On the other hand, if there are no vertices of outdegree \(1\), then the maximum protection number is on average of order \(\log\log n\). There is an intuitive explanation for this phenomenon. If outdegree \(1\) is allowed, it becomes easy to create vertices with high protection number: if the subtree rooted at a vertex is an \((h+1)\)-vertex path, then this vertex has protection number \(h\). On the other hand, if outdegree \(1\) is forbidden, then the smallest possible subtree rooted at a vertex of protection number \(h\) is a complete binary tree with \(2^{h+1}-1\) vertices. An illustration of the two cases is given in Figure 2. In the case where vertices of outdegree \(1\) can occur, the limiting distribution turns out to be a discrete double-exponential distribution that also occurs in many other combinatorial examples, and for which general results are available--see Section 2.2. These results are adapted in Section 5.2 to the case where there are no vertices of outdegree \(1\). In the following results, we make a common technical assumption, stating formally that there is a positive real number \(\tau\), less than the radius of convergence of \(\Phi\), such that \(\Phi(\tau)=\tau\Phi^{\prime}(\tau)\) (see Section 2.1 for further details). This is equivalent to the offspring distribution of the associated Bienayme-Galton-Watson process having a finite exponential moment, which is the case for all the examples mentioned earlier (plane trees, binary trees, pruned binary trees, labelled trees). This assumption is crucial for the analytic techniques that we are using, which are based on an asymptotic analysis of generating functions. However, it is quite likely that our main results remain valid under somewhat milder conditions. **Theorem 1**.: _Given a family of simply generated trees with \(w_{1}=\Phi^{\prime}(0)\neq 0\), the proportion of trees of size \(n\) whose maximum protection number is at most \(h\) is asymptotically given by_ \[\exp\left(-\kappa nd^{-h}\right)(1+o(1))\] _as \(n\to\infty\) and \(h=\log_{d}(n)+O(1)\), where \(\kappa\) (given in (55)) and \(d=(\rho\Phi^{\prime}(0))^{-1}>1\) are positive constants. Moreover, the expected value of the maximum protection number in trees with \(n\) vertices is_ \[\log_{d}(n)+\log_{d}(\kappa)+\frac{\gamma}{\log(d)}+\frac{1}{2}+\psi_{d}(\log _{d}(\kappa n))+o(1),\] _where \(\gamma\) denotes the Euler-Mascheroni constant and \(\psi_{d}\) is the \(1\)-periodic function that is defined by the Fourier series_ \[\psi_{d}(x)=-\frac{1}{\log(d)}\sum_{k\neq 0}\Gamma\Bigl{(}-\frac{2k\pi i}{\log(d )}\Bigr{)}e^{2k\pi ix}. \tag{2}\] In the case where vertices of outdegree \(1\) are excluded, we show that the maximum protection number is strongly concentrated. In fact, with high probability it only takes on one of at most two different values (depending on the size of the tree). The precise result can be stated as follows. 
**Theorem 2**.: _Given a family of simply generated trees with \(w_{1}=\Phi^{\prime}(0)=0\), set \(r=\min\{i\in\mathbb{N}\colon i\geq 2\text{ and }w_{i}\neq 0\}\) and \(D=\gcd\{i\in\mathbb{N}\colon w_{i}\neq 0\}\). The proportion of trees of size \(n\) whose maximum protection number is at most \(h\) is asymptotically given by_ \[\exp\left(-\kappa nd^{-r^{h}}(1+o(1))+o(1)\right)\] _as \(n\to\infty\), \(n\equiv 1\pmod{D}\), and \(h=\log_{r}\left(\log_{d}(n)\right)+O(1)\), where \(\kappa=\frac{w_{r}\lambda_{1}^{r}}{\Phi(\tau)}\) and \(d=\mu^{-r}>1\) are positive constants with \(\lambda_{1}\) and \(\mu\) defined in (61) and (62) respectively (see Lemma 5.5). Moreover, there is a sequence of positive integers \(h_{n}\) such that the maximum protection number of a tree with \(n\) vertices is \(h_{n}\) or \(h_{n}+1\) with high probability (i.e., probability tending to \(1\) as \(n\to\infty\)) where \(n\equiv 1\pmod{D}\)._ _Specifically, with \(m_{n}=\log_{r}\log_{d}\left(n\right)\) and \(\{m_{n}\}\) denoting its fractional part, one can set_ \[h_{n}=\begin{cases}\lfloor m_{n}\rfloor&\{m_{n}\}\leq\frac{1}{2},\\ \lceil m_{n}\rceil&\{m_{n}\}>\frac{1}{2}.\end{cases}\] _If we restrict to those values of \(n\) for which \(\{m_{n}\}\in[\epsilon,1-\epsilon]\), where \(\epsilon>0\) is fixed, then with high probability \(X_{n}\) is equal to \(\lceil m_{n}\rceil\)._
Figure 2. Smallest examples where a tree may (A) or may not (B) have exactly one child and the root has protection number \(4\).
Note that in the setting of Theorem 2, it is easy to see that there are no trees of size \(n\) if \(n\not\equiv 1\pmod{D}\). In the setting of Theorem 1, we have \(\gcd\{i\in\mathbb{N}\colon w_{i}\neq 0\}=1\) because \(w_{1}\neq 0\). Theorem 1 is illustrated in Figure 3, while Theorem 2 is illustrated in Figure 4. The proof of Theorem 1 relies on a general distributional result provided in [22], see Theorem 2.1. For the proof of Theorem 2, however, we will need a variant for doubly-exponential convergence of the dominant singularities. The statement and proof are similar to the original, and we expect that this variant will be useful in other contexts, too.
Figure 3. The asymptotic cumulative distribution function plotted against calculated values for plane, binary, and Cayley trees.
**Theorem 3**.: _Let \(Y_{h}(x)=\sum_{n\geq 0}y_{h,n}x^{n}\) (\(h\geq 0\)) be a sequence of generating functions with nonnegative coefficients such that \(y_{h,n}\) is nondecreasing in \(h\) and_ \[\lim_{h\to\infty}Y_{h}(x)=Y(x)=\sum_{n\geq 0}y_{n}x^{n},\] _and let \(X_{n}\) denote the sequence of random variables with support \(\mathbb{N}_{0}=\{0,1,2,\ldots\}\) defined by_ \[\mathbb{P}(X_{n}\leq h)=\frac{y_{h,n}}{y_{n}}.\] _Assume that each generating function \(Y_{h}\) has a singularity at \(\rho_{h}\in\mathbb{R}\) such that_ 1. \(\rho_{h}=\rho(1+\kappa\zeta^{r^{h}}+o(\zeta^{r^{h}}))\) _as_ \(h\to\infty\) _for some constants_ \(\rho>0\)_,_ \(\kappa>0\)_,_ \(\zeta\in(0,1)\)_, and_ \(r>1\)_._ 2.
\(Y_{h}(x)\) _can be continued analytically to the domain_ \[\{x\in\mathbb{C}:|x|\leq(1+\delta)|\rho_{h}|,|\mathrm{Arg}(x/\rho_{h}-1)|>\phi\}\] _for some fixed_ \(\delta>0\) _and_ \(\phi\in(0,\pi/2)\)_, and_ \[Y_{h}(x)=U_{h}(x)+A_{h}(1-x/\rho_{h})^{\alpha}+o((1-x/\rho_{h})^{\alpha})\] _holds within this domain, uniformly in_ \(h\)_, where_ \(U_{h}(x)\) _is analytic and uniformly bounded in_ \(h\) _within the aforementioned region,_ \(\alpha\in\mathbb{R}\setminus\mathbb{N}_{0}\)_, and_ \(A_{h}\) _is a constant dependent on_ \(h\) _such that_ \(\lim_{h\to\infty}A_{h}=A\neq 0\)_. Finally,_ \[Y(x)=U(x)+A(1-x/\rho)^{\alpha}+o((1-x/\rho)^{\alpha})\] _in the region_ \[\{x\in\mathbb{C}:|x|\leq(1+\delta)|\rho|,|\mathrm{Arg}(x/\rho-1)|>\phi\}\] _for a function_ \(U(x)\) _that is analytic within this region._ _Then the asymptotic formula_ \[\mathbb{P}(X_{n}\leq h)=\frac{y_{h,n}}{y_{n}}=\exp\left(-\kappa n\zeta^{r^{h}}(1+o(1))+o(1)\right)\] _holds as \(n\to\infty\) and \(h=\log_{r}\left(\log_{d}\left(n\right)\right)+O(1)\), where \(d=\zeta^{-1}\)._
Figure 4. The asymptotic cumulative distribution function plotted against calculated values for complete binary and Riordan trees [17].
In the next theorem, we show that the consequences of this distributional result are quite drastic. **Theorem 4**.: _Assume the conditions of Theorem 3. There is a sequence of nonnegative integers \(h_{n}\) such that \(X_{n}\) is equal to \(h_{n}\) or \(h_{n}+1\) with high probability. Specifically, with \(m_{n}=\log_{r}\log_{d}\left(n\right)\) and \(\{m_{n}\}\) denoting its fractional part, one can set_ \[h_{n}=\begin{cases}\lfloor m_{n}\rfloor&\{m_{n}\}\leq\frac{1}{2},\\ \lceil m_{n}\rceil&\{m_{n}\}>\frac{1}{2}.\end{cases}\] _If we restrict to those values of \(n\) for which \(\{m_{n}\}\in[\epsilon,1-\epsilon]\), where \(\epsilon>0\) is fixed, then with high probability \(X_{n}\) is equal to \(\lceil m_{n}\rceil\)._ ## 2. Preliminaries ### Basic facts about simply generated trees For our purposes, we will make the following typical technical assumptions: first, we assume without loss of generality that \(w_{0}=1\) or equivalently \(\Phi(0)=1\). In other words, leaves have an associated weight of \(1\), which can be achieved by means of a normalising factor if necessary. Moreover, to avoid trivial cases in which the only possible trees are paths, we assume that \(w_{j}>0\) for at least one \(j\geq 2\). Finally, we assume that there is a positive real number \(\tau\), less than the radius of convergence of \(\Phi\), such that \(\Phi(\tau)=\tau\Phi^{\prime}(\tau)\). As mentioned earlier, this is equivalent to the offspring distribution having exponential moments. It is well known (see e.g. [7, Section 3.1.4]) that if such a \(\tau\) exists, it is unique, and the radius of convergence \(\rho\) of \(Y\) can be expressed as \[\rho=\tau/\Phi(\tau)=1/\Phi^{\prime}(\tau), \tag{3}\] which is equivalent to \(\rho\) and \(\tau\) satisfying the simultaneous equations \(y=x\Phi(y)\) and \(1=x\Phi^{\prime}(y)\) (which essentially mean that the implicit function theorem fails at the point \((\rho,\tau)\)). Moreover, \(Y\) has a square root singularity at \(\rho\) with \(\tau=Y(\rho)\), with a singular expansion of the form \[Y(x)=\tau+a\Big{(}1-\frac{x}{\rho}\Big{)}^{1/2}+b\Big{(}1-\frac{x}{\rho}\Big{)}+c\Big{(}1-\frac{x}{\rho}\Big{)}^{3/2}+O\Big{(}(\rho-x)^{2}\Big{)}. \tag{4}\] The coefficients \(a,b,c\) can be expressed in terms of \(\Phi\) and \(\tau\).
In particular, we have \[a=-\Big{(}\frac{2\Phi(\tau)}{\Phi^{\prime\prime}(\tau)}\Big{)}^{1/2}.\] In fact, there is a full Newton-Puiseux expansion in powers of \((1-x/\rho)^{1/2}\). If the weight sequence is _aperiodic_, i. e., \(\gcd\{j\colon w_{j}\neq 0\}=1\), then \(\rho\) is the only singularity on the circle of convergence of \(Y\), and for sufficiently small \(\varepsilon>0\) there are no solutions to the simultaneous equations \(y=x\Phi(y)\) and \(1=x\Phi^{\prime}(y)\) with \(|x|\leq\rho+\varepsilon\) and \(|y|\leq\tau+\varepsilon\) other than \((x,y)=(\rho,\tau)\). Otherwise, if this \(\gcd\) is equal to \(D\), there are \(D\) singularities at \(\rho e^{2\pi ik/D}\) (\(k\in\{0,1,\ldots,D-1\}\)), all with the same singular behaviour. In the following, we assume for technical simplicity that the weight sequence is indeed aperiodic, but the proofs are readily adapted to the periodic setting, see Remarks 3.17 and 5.9. By means of singularity analysis [9, Chapter VI], the singular expansion (4) yields an asymptotic formula for the coefficients of \(Y\): we have \[y_{n}=[x^{n}]Y(x)\sim\frac{-a}{2\sqrt{\pi}}n^{-3/2}\rho^{-n}.\] If the weight sequence corresponds to a probability distribution, then \(y_{n}\) is the probability that an _unconditioned_ Bienayme-Galton-Watson tree has exactly \(n\) vertices when the process ends. For other classes such as plane trees or binary trees, \(y_{n}\) represents the number of \(n\)-vertex trees in the respective class. ### A general distributional result The discrete double-exponential distribution in Theorem 1 has been observed in many other combinatorial instances, for example the longest run of zeros in a random 0-1-string, the longest horizontal segment in Motzkin paths or the maximum outdegree in plane trees. This can often be traced back to the behaviour of the singularities of associated generating functions. The following general result, taken from [22], will be a key tool for us. **Theorem 2.1** (see [22, Theorem 1]).: _Let \(Y_{h}(x)=\sum_{n\geq 0}y_{h,n}x^{n}\)\((h\geq 0)\) be a sequence of generating functions with nonnegative coefficients such that \(y_{h,n}\) is nondecreasing in \(h\) and_ \[\lim_{h\to\infty}Y_{h}(x)=Y(x)=\sum_{n\geq 0}y_{n}x^{n},\] _and let \(X_{n}\) denote the sequence of random variables with support \(\mathbb{N}_{0}=\{0,1,2,\ldots\}\) defined by_ \[\mathbb{P}(X_{n}\leq h)=\frac{y_{h,n}}{y_{n}}. \tag{5}\] _Assume, moreover, that each generating function \(Y_{h}\) has a singularity \(\rho_{h}\in\mathbb{R}\), such that_ 1. \(\rho_{h}=\rho(1+\kappa\zeta^{h}+o(\zeta^{h}))\) _as_ \(h\to\infty\) _for some constants_ \(\rho>0\)_,_ \(\kappa>0\) _and_ \(\zeta\in(0,1)\)_._ 2. \(Y_{h}(x)\) _can be continued analytically to the domain_ (6) \[\{x\in\mathbb{C}:|x|\leq(1+\delta)|\rho_{h}|,|\mathrm{Arg}(x/\rho_{h}-1)|>\phi\}\] _for some fixed_ \(\delta>0\) _and_ \(\phi\in(0,\pi/2)\)_, and_ \[Y_{h}(x)=U_{h}(x)+A_{h}(1-x/\rho_{h})^{\alpha}+o((1-x/\rho_{h})^{\alpha})\] _holds within this domain, uniformly in_ \(h\)_, where_ \(U_{h}(x)\) _is analytic and uniformly bounded in_ \(h\) _within the aforementioned region,_ \(\alpha\in\mathbb{R}\setminus\mathbb{N}_{0}\)_, and_ \(A_{h}\) _is a constant depending on_ \(h\) _such that_ \(\lim_{h\to\infty}A_{h}=A\neq 0\)_.
Finally,_ \[Y(x)=U(x)+A(1-x/\rho)^{\alpha}+o((1-x/\rho)^{\alpha})\] _in the region_ \[\{x\in\mathbb{C}:|x|\leq(1+\delta)|\rho|,|\mathrm{Arg}(x/\rho-1)|>\phi\}\] _for a function_ \(U(x)\) _that is analytic within this region._ _Then the asymptotic formula_ \[\mathbb{P}(X_{n}\leq h)=\frac{y_{h,n}}{y_{n}}=\exp{(-\kappa n\zeta^{h})(1+o(1))}\] _holds as \(n\to\infty\) and \(h=\log_{d}(n)+O(1)\), where \(d=\zeta^{-1}\). Hence the shifted random variable \(X_{n}-\log_{d}(n)\) converges weakly to a limiting distribution if \(n\) runs through a subset of the positive integers such that the fractional part \(\{\log_{d}(n)\}\) of \(\log_{d}(n)\) converges._ As we will see, the conditions of this theorem hold for the random variable \(X_{n}\) given by the maximum protection number of a random \(n\)-vertex tree from a simply generated family that satisfies our technical assumptions. Under slightly stronger assumptions, which also hold in our case, one has the following theorem on the expected value of the random variable \(X_{n}\). **Theorem 2.2** (see [22, Theorem 2]).: _In the setting of Theorem 2.1, assume additionally that_ 1. _There exists a constant_ \(K\) _such that_ \(y_{h,n}=y_{n}\) _for_ \(h>Kn\)_,_ 2. \(\sum_{h\geq 0}|A-A_{h}|<\infty\)_,_ 3. _the asymptotic expansions of_ \(Y_{h}\) _and_ \(Y\) _around their singularities are given by_ \[Y_{h}(x)=U_{h}(x)+A_{h}(1-x/\rho_{h})^{\alpha}+B_{h}(1-x/\rho_{h})^{\alpha+1}+ o((1-x/\rho_{h})^{\alpha+1}),\] _uniformly in_ \(h\)_, and_ \[Y(x)=U(x)+A(1-x/\rho_{h})^{\alpha}+B(1-x/\rho_{h})^{\alpha+1}+o((1-x/\rho_{h})^ {\alpha+1}),\] _respectively, such that_ \(\lim_{h\to\infty}B_{h}=B\)_._ _Then the mean of \(X_{n}\) satisfies_ \[\mathbb{E}(X_{n})=\log_{d}(n)+\log_{d}(\kappa)+\frac{\gamma}{\log(d)}+\frac{1 }{2}+\psi_{d}(\log_{d}(\kappa n))+o(1),\] _where \(\gamma\) denotes the Euler-Mascheroni constant and \(\psi_{d}\) is given by (2)._ ### A system of functional equations As a first step of our analysis, we consider a number of auxiliary generating functions and derive a system of functional equations that is satisfied by these generating functions. The family of simply generated trees and the associated weight generating function \(\Phi\) are regarded fixed throughout. Let \(h\) be a positive integer and \(k\) an integer with \(0\leq k\leq h\). Consider trees with the following two properties: P1. No vertex has a protection number greater than \(h\). P2. The root is at least \(k\)-protected (but also at most \(h\)-protected). Let \(Y_{h,k}(x)\) be the associated generating function, where \(x\) marks the number of vertices. Note in particular that when \(k=0\), we obtain the generating function for trees where the maximum protection number is at most \(h\). Hence we can express the probability that the maximum protection number of a random \(n\)-vertex tree (from our simply generated family) is at most \(h\) as the quotient \[\frac{[x^{n}]Y_{h,0}(x)}{[x^{n}]Y(x)}.\] This is precisely the form of (5), and indeed our general strategy will be to show that the generating functions \(Y_{h,0}\) satisfy the technical conditions of Theorem 2.1. Compared to the examples given in [22], this will be a rather lengthy technical task. However, we believe that the general method, in which a sequence of functional equations is shown to converge uniformly in a suitable region, is also potentially applicable to other instances and therefore interesting in its own right. 
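As a brute-force sanity check of this quotient (purely illustrative, and independent of the functional equations derived next), the following Python sketch enumerates all plane trees (\(\Phi(y)=1/(1-y)\)) with a small number of vertices and tabulates the empirical distribution of the maximum protection number.

```python
# Brute-force check of the quotient [x^n] Y_{h,0}(x) / [x^n] Y(x) for plane
# trees: enumerate every plane tree with n vertices (a tree is encoded as the
# tuple of its root's subtrees), compute its maximum protection number, and
# tabulate the distribution.  Only intended for small n.

from functools import lru_cache
from collections import Counter

@lru_cache(maxsize=None)
def forests(m):
    """All ordered forests of plane trees with m vertices in total."""
    if m == 0:
        return ((),)                          # the empty forest
    out = []
    for k in range(1, m + 1):                 # size of the first tree
        for t in trees(k):
            for f in forests(m - k):
                out.append((t,) + f)
    return tuple(out)

@lru_cache(maxsize=None)
def trees(n):
    """All plane trees with n vertices (= forests on the n-1 non-root vertices)."""
    return forests(n - 1)

def protection(t):
    return 0 if not t else 1 + min(protection(c) for c in t)

def max_protection(t):
    return max([protection(t)] + [max_protection(c) for c in t])

n = 9
dist = Counter(max_protection(t) for t in trees(n))
total = sum(dist.values())                    # = Catalan(n-1) = [x^n] Y(x)
for h in sorted(dist):
    cum = sum(v for key, v in dist.items() if key <= h)
    print(f"P(X_{n} <= {h}) = {cum}/{total} = {cum/total:.3f}")
```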
Let us now derive a system of functional equations, using the standard decomposition of a rooted tree into the root and its branches. Clearly, if a tree has property P1, then this must also be the case for all its branches. Moreover, property P2 is satisfied for \(k>0\) if and only if the root of each of the branches is at least \((k-1)\)-protected, but not all of them are \(h\)-protected (as this would make the root \((h+1)\)-protected). Thus, for \(1\leq k\leq h\), we have \[Y_{h,k}(x)=x\Phi(Y_{h,k-1}(x))-x\Phi(Y_{h,h}(x)). \tag{7}\] Note that the only case in which the root is only \(0\)-protected is when the root is the only vertex. Hence we have \[Y_{h,0}(x)=Y_{h,1}(x)+x. \tag{8}\] The analytic properties of the system of functional equations given by (7) and (8) will be studied in the following section, culminating in Proposition 3.16, which shows that Theorem 2.1 is indeed applicable to our problem. ## 3. Analysis of the functional equations ### Contractions and implicit equations This section is devoted to a detailed analysis of the generating functions \(Y_{h,k}\) that satisfy the system of equations given by (7) and (8). The first step will be to reduce it to a single implicit equation satisfied by \(Y_{h,1}\) that is then shown to converge to the functional equation (1) in a sense that will be made precise. This is then used to infer information on the region of analyticity of \(Y_{h,1}\) as well as its behaviour around the dominant singularity, which is also shown to converge to the dominant singularity of \(Y\). This information is collected in Proposition 3.16 at the end of the section. In the following, we will prove various statements for sufficiently small \(\varepsilon>0\). In several, but finitely many, steps it might be necessary to decrease \(\varepsilon\); we tacitly assume that \(\varepsilon\) is always small enough to ensure validity of all statements up to the given point. In order to avoid ambiguities, we will always assume that \(\varepsilon<1\). Let us remark that \(\varepsilon\) and other constants as well as all implied \(O\)-constants that occur in this section depend on the specific simply generated family of trees (in particular the weight generating function \(\Phi\) and therefore \(\rho\) and \(\tau\)), but nothing else. Recall that \(\rho\) is the dominant singularity of the generating function \(Y\) of our simply generated family of trees. Moreover, \(\tau=Y(\rho)\) is characterised by the equation \(\tau\Phi^{\prime}(\tau)=\Phi(\tau)\) (see (3)) and satisfies \(\tau=\rho\Phi(\tau)\). Since \(\Phi\) is increasing and \(\Phi(0)=1\), we also have \(\tau=\rho\Phi(\tau)>\rho\Phi(0)=\rho\). Let us write \(D_{\delta}(w):=\{z\in\mathbb{C}\,:\,|z-w|<\delta\}\) for open disks. For \(\varepsilon>0\), we define \[\Xi_{\varepsilon}^{(1)} \coloneqq D_{\rho+\varepsilon}(0),\] \[\Xi_{\varepsilon}^{(2)} \coloneqq D_{\tau-\rho+\varepsilon}(0),\] \[\Xi_{\varepsilon}^{(3)} \coloneqq D_{\varepsilon}(0).\] For \(1\leq j<k\leq 3\), we set \(\Xi_{\varepsilon}^{(j,k)}\coloneqq\Xi_{\varepsilon}^{(j)}\times\Xi_{ \varepsilon}^{(k)}\), and we also set \(\Xi_{\varepsilon}\coloneqq\Xi_{\varepsilon}^{(1,2,3)}\coloneqq\Xi_{ \varepsilon}^{(1)}\times\Xi_{\varepsilon}^{(2)}\times\Xi_{\varepsilon}^{(3)}\). As \(\tau\) is less than the radius of convergence of \(\Phi\) by our assumptions, we may choose \(\varepsilon>0\) sufficiently small such that \(\tau+2\varepsilon\) is still smaller than the radius of convergence of \(\Phi\). 
Consider the function defined by \(f_{x,z}(y)=x(\Phi(y)-\Phi(z))\). We can rewrite the functional equation (7) in terms of this function as \[Y_{h,k}(x)=f_{x,Y_{h,h}(x)}(Y_{h,k-1}(x)) \tag{9}\] for \(1\leq k\leq h\). For \(j\geq 0\), we denote the \(j\)th iterate of \(f_{x,z}\) by \(f_{x,z}^{(j)}\), i. e., \(f_{x,z}^{(0)}(y)=y\) and \(f_{x,z}^{(j+1)}(y)=f_{x,z}^{(j)}(f_{x,z}(y))\) for \(j\geq 0\). Iterating (9) then yields \[Y_{h,k}(x)=f_{x,Y_{h,h}(x)}(Y_{h,k-1}(x))=\cdots=f_{x,Y_{h,h}(x)}^{(k-1)}(Y_{h,1}(x))\] for \(1\leq k\leq h\) and therefore \[Y_{h,h}(x)=f_{x,Y_{h,h}(x)}^{(h-1)}(Y_{h,1}(x)). \tag{10}\] Plugging (8) into (7) for \(k=1\) yields \[Y_{h,1}(x)=x\big{(}\Phi(Y_{h,1}(x)+x)-\Phi(Y_{h,h}(x))\big{)}. \tag{11}\] This means that (10) and (11) are a system of two functional equations for \(Y_{h,1}(x)\) and \(Y_{h,h}(x)\). We intend to solve (10) for \(Y_{h,h}(x)\) and then plug the solution into (11). As a first step towards this goal, we show that \(f_{x,z}\) represents a contraction on a suitable region. **Lemma 3.1**.: _For sufficiently small \(\varepsilon>0\), we have \(|f_{x,z}(y)|<\tau-\rho\) for all \((x,y,z)\in\Xi_{\varepsilon}\)._ Proof.: By the triangle inequality, definition of \(\Xi_{\varepsilon}\), non-negativity of the coefficients of \(\Phi\), and \(\Phi(0)=1\), we have \[|f_{x,z}(y)| =|x\big{(}(\Phi(y)-1)-(\Phi(z)-1)\big{)}|\] \[\leq(\rho+\varepsilon)(|\Phi(y)-1|+|\Phi(z)-1|)\] \[\leq(\rho+\varepsilon)((\Phi(|y|)-1)+(\Phi(|z|)-1))\] \[\leq(\rho+\varepsilon)(\Phi(\tau-\rho+\varepsilon)-1+\Phi( \varepsilon)-1).\] For \(\varepsilon\to 0\), the upper bound converges to \(\rho\Phi(\tau-\rho)-\rho\) because we are assuming that \(\Phi(0)=1\). As \(\rho\Phi(\tau-\rho)-\rho<\rho\Phi(\tau)-\rho=\tau-\rho\) by (3), the assertion of the lemma holds for sufficiently small \(\varepsilon>0\). **Lemma 3.2**.: _For sufficiently small \(\varepsilon>0\) and \((x,y,z)\in\Xi_{\varepsilon}\), we have \(|f^{\prime}_{x,z}(y)|=|x\Phi^{\prime}(y)|\leq\lambda\) for some constant \(\lambda<1\)._ Proof.: For any triple \((x,y,z)\in\Xi_{\varepsilon}\), \[|f^{\prime}_{x,z}(y)|=|x\Phi^{\prime}(y)|\leq(\rho+\varepsilon)\Phi^{\prime}( \tau-\rho+\varepsilon).\] For \(\varepsilon\to 0\), the upper bound converges to \(\rho\Phi^{\prime}(\tau-\rho)\), which is less than \(\rho\Phi^{\prime}(\tau)=1\) (by (3)). For the remainder of this section, \(\lambda\) will be defined as in Lemma 3.2. **Lemma 3.3**.: _For sufficiently small \(\varepsilon>0\) and \((x,z)\in\Xi_{\varepsilon}^{(1,3)}\), \(f_{x,z}\) maps \(\Xi_{\varepsilon}^{(2)}\) to itself and is a contraction with Lipschitz constant \(\lambda\)._ Proof.: The fact that \(f_{x,z}\) maps \(\Xi_{\varepsilon}^{(2)}\) to itself for sufficiently small \(\varepsilon>0\) is a direct consequence of Lemma 3.1. Making use of Lemma 3.2, the contraction property now follows by a standard argument: For \(y_{1}\), \(y_{2}\in\Xi_{\varepsilon}^{(2)}\), we have \[|f_{x,z}(y_{2})-f_{x,z}(y_{1})|\leq\int_{[y_{1},y_{2}]}|f^{\prime}_{x,z}(y)| \,|dy|\leq\lambda|y_{2}-y_{1}|.\qed\] For sufficiently small \(\varepsilon\) and \((x,z)\in\Xi_{\varepsilon}^{(1,3)}\), Banach's fixed point theorem together with Lemma 3.3 implies that \(f_{x,z}\) has a unique fixed point in \(\Xi_{\varepsilon}^{(2)}\). This fixed point will be denoted by \(g(x,z)\), i. e., \[g(x,z)=f_{x,z}(g(x,z))=x(\Phi(g(x,z))-\Phi(z)). 
\tag{12}\] If we plug in \(0\) for \(z\), we see that (12) holds for \(g(x,0)=0\), so uniqueness of the fixed point implies that \[g(x,0)=0 \tag{13}\] for \(x\in\Xi_{\varepsilon}^{(1)}\). **Lemma 3.4**.: _For sufficiently small \(\varepsilon>0\), \(g\colon\Xi_{\varepsilon}^{(1,3)}\to\Xi_{\varepsilon}^{(2)}\) is an analytic function, and \(\frac{\partial}{\partial z}g(x,z)\) is bounded._ Proof.: Note that using Lemma 3.2, we have that \(|\frac{\partial}{\partial y}(y-f_{x,z}(y))|=|1-f^{\prime}_{x,z}(y)|\geq 1-|f^{ \prime}_{x,z}(y)|\geq 1-\lambda\) is bounded away from zero for sufficiently small \(\varepsilon>0\) and \((x,y,z)\in\Xi_{\varepsilon}\). Thus the analytic implicit function theorem shows that \(g\) as defined by (12) is analytic and has bounded partial derivative \(\frac{\partial}{\partial z}g(x,z)\) on \(\Xi_{\varepsilon}^{(1,3)}\) for sufficiently small \(\varepsilon>0\). We now intend to solve (10) for \(Y_{h,h}(x)\). Therefore, we consider the equation \[z=f^{(h-1)}_{x,z}(y) \tag{14}\] and attempt to solve it for \(z\). For large \(h\), \(f^{(h-1)}_{x,z}(y)\) will be close to the fixed point \(g(x,z)\) of \(f_{x,z}\) by the Banach fixed point theorem. Therefore, we define \(\Lambda_{h}\) as the difference between the two: \(\Lambda_{h}(x,y,z)\coloneqq f^{(h-1)}_{x,z}(y)-g(x,z)\). So (14) can be rewritten as \[z=g(x,z)+\Lambda_{h}(x,y,z). \tag{15}\] We first establish bounds on \(\Lambda_{h}\). **Lemma 3.5**.: _For sufficiently small \(\varepsilon>0\),_ \[\Lambda_{h}(x,y,z) =O(\lambda^{h})\text{ and } \tag{17}\] \[\frac{\partial}{\partial z}\Lambda_{h}(x,y,z) =O(\lambda^{h}) \tag{16}\] _hold uniformly for \((x,y,z)\in\Xi_{\varepsilon}\)._ Proof.: Since \(g\) is defined as the fixed point of \(f_{x,z}\) and \(f_{x,z}\) is a contraction with Lipschitz constant \(\lambda\), we have \[|\Lambda_{h}(x,y,z)|=|f^{(h-1)}_{x,z}(y)-f^{(h-1)}_{x,z}(g(x,z))|\leq\lambda^ {h-1}|y-g(x,z)|=O(\lambda^{h})\] for \((x,y,z)\in\Xi_{\varepsilon}\), so we have shown (16). For \((x,y,z)\in\Xi_{\varepsilon/3}\), Cauchy's integral formula yields \[\frac{\partial}{\partial z}\Lambda_{h}(x,y,z)=\frac{1}{2\pi i}\oint_{|\zeta-z |=\varepsilon/3}\frac{\Lambda_{h}(x,y,\zeta)}{(\zeta-z)^{2}}\,d\zeta.\] By (16), we can bound the integral by \(O(\lambda^{h})\). Thus replacing \(\varepsilon\) by \(\varepsilon/3\) yields (17). In order to apply the analytic implicit function theorem to the implicit equation (10) for \(Y_{h,h}\), we will need to show that the derivative of the difference of the two sides of (15) with respect to \(z\) is nonzero. The derivative of the second summand on the right-hand side of (15) is small by (17), so we first consider the remaining part of the equation. **Lemma 3.6**.: _There is a \(\delta>0\) such that for sufficiently small \(\varepsilon>0\), we have_ \[\left|\frac{\partial}{\partial z}(z-g(x,z))\right|>\delta \tag{18}\] _for \((x,z)\in\Xi_{\varepsilon}^{(1,3)}\)._ Proof.: To compute \(\frac{\partial}{\partial z}g(x,z)\), we differentiate (12) with respect to \(z\) and obtain \[\frac{\partial}{\partial z}g(x,z)=x\Phi^{\prime}(g(x,z))\frac{\partial}{ \partial z}g(x,z)-x\Phi^{\prime}(z),\] which leads to \[\frac{\partial}{\partial z}g(x,z)=-\frac{x\Phi^{\prime}(z)}{1-x\Phi^{\prime}(g(x,z ))}.\] Note that the denominator is nonzero for \((x,z)\in\Xi_{\varepsilon}^{(1,3)}\) by Lemma 3.2. 
We obtain \[\left|\frac{\partial}{\partial z}(z-g(x,z))\right|=\left|\frac{1+x(\Phi^{ \prime}(z)-\Phi^{\prime}(g(x,z)))}{1-x\Phi^{\prime}(g(x,z))}\right|\geq\frac{ 1-(\rho+\varepsilon)|\Phi^{\prime}(z)-\Phi^{\prime}(g(x,z))|}{1+(\rho+ \varepsilon)|\Phi^{\prime}(g(x,z))|}. \tag{19}\] By Lemma 3.4, \(\frac{\partial g(x,z)}{\partial z}\) is analytic and bounded for \((x,z)\in\Xi_{\varepsilon}^{(1,3)}\), and by (13), it follows that \[g(x,z)=g(x,z)-g(x,0)=\int_{[0,z]}\frac{\partial g(x,\zeta)}{\partial\zeta}\,d \zeta=O(|z|)=O(\varepsilon)\] for \(\varepsilon\to 0\), uniformly in \(x\). Therefore, we have \[\Phi^{\prime}(z)-\Phi^{\prime}(g(x,z))=(\Phi^{\prime}(z)-\Phi^{\prime}(0))-( \Phi^{\prime}(g(x,z))-\Phi^{\prime}(0))=O(\varepsilon)\] and \(|\Phi^{\prime}(g(x,z))|=\Phi^{\prime}(0)+O(\varepsilon)\) for \(\varepsilon\to 0\). So (19) yields \[\left|\frac{\partial}{\partial z}(z-g(x,z))\right|\geq\frac{1-(\rho+ \varepsilon)O(\varepsilon)}{1+(\rho+\varepsilon)(\Phi^{\prime}(0)+O( \varepsilon))}=\frac{1}{1+\rho\Phi^{\prime}(0)}+O(\varepsilon)\] for \(\varepsilon\to 0\). Setting \(\delta\coloneqq\frac{1}{2}\frac{1}{1+\rho\Phi^{\prime}(0)}\) and choosing \(\varepsilon\) small enough yields the result. We need bounds for \(z\) such that we remain in the region where our previous results hold. In fact, (13) shows that \(z=0\) would be a solution when the summand \(\Lambda_{h}\) (which is \(O(\lambda^{h})\)) is removed from the implicit equation, so we expect that the summand \(\Lambda_{h}\) does not perturb \(z\) too much. This is shown in the following lemma. **Lemma 3.7**.: _Let \(\varepsilon>0\) be sufficiently small and \((x,y,z)\in\Xi_{\varepsilon}\) such that (15) holds. Then_ \[z=O(\lambda^{h}). \tag{20}\] Proof.: In view of (15) and (16), we have \[g(x,z)-z=O(\lambda^{h}). \tag{21}\] By definition, \(g(x,z)\in\Xi_{\varepsilon}^{(2)}\). The implicit equation (12) for \(g(x,z)\) and (21) imply \[g(x,z)=x(\Phi(g(x,z))-\Phi(z))=x\int_{[z,g(x,z)]}\Phi^{\prime}(\zeta)\,d\zeta= O(|g(x,z)-z|)=O(\lambda^{h}).\] Inserting this into (21) leads to (20). **Lemma 3.8**.: _There exists an \(\varepsilon>0\) such that for sufficiently large \(h\), there is a unique analytic function \(q_{h}\colon\Xi_{\varepsilon}^{(1,2)}\to\mathbb{C}\) such that_ \[q_{h}(x,y)=f_{x,q_{h}(x,y)}^{(h-1)}(y) \tag{22}\] _and \(q_{h}(x,0)=0\) for \((x,y)\in\Xi_{\varepsilon}^{(1,2)}\); furthermore, \(q_{h}(x,y)=O(\lambda^{h})\) holds uniformly in \(x\) and \(y\)._ Proof.: We choose \(h\) sufficiently large such that (17) implies \[\left|\frac{\partial}{\partial z}\Lambda_{h}(x,y,z)\right|\leq\frac{\delta}{2} \tag{23}\] for \((x,y,z)\in\Xi_{\varepsilon}\), where \(\delta\) is taken as in Lemma 3.6, and such that (20) implies \[|z|\leq\frac{\varepsilon}{2} \tag{24}\] for all \((x,y,z)\in\Xi_{\varepsilon}\) for which (15) holds. By definition of \(f\), we have \(f_{x,0}(0)=0\) and therefore \(f_{x,0}^{(h-1)}(0)=0\) for every \(x\in\Xi_{\varepsilon}^{(1)}\), so \(z=0\) is a solution of (14) for \(y=0\). By (18) and (23), we have \[\frac{\partial}{\partial z}(f_{x,z}^{(h-1)}(y)-z)\neq 0 \tag{25}\] for \((x,y,z)\in\Xi_{\varepsilon}\). The analytic implicit function theorem thus implies that, for every \(x\in\Xi_{\varepsilon}^{(1)}\), there is an analytic function \(q_{h}\) defined in a neighbourhood of \((x,0)\) such that (22) holds there and such that \(q_{h}(x,0)=0\). Next we show that this extends to the whole region \(\Xi_{\varepsilon}^{(1,2)}\). 
For \(x_{0}\in\Xi_{\varepsilon}^{(1)}\), let \(r(x_{0})\) be the supremum of all \(r<\tau-\rho+\varepsilon\) for which there is an analytic extension of \(y\mapsto q_{h}(x_{0},y)\) from the open disk \(D_{r}(0)\) to \(\Xi_{\varepsilon}^{(3)}\). Suppose for contradiction that \(r(x_{0})<\tau-\rho+\varepsilon\). Consider a point \(y_{0}\) with \(|y_{0}|=r(x_{0})\), and take a sequence \(y_{n}\to y_{0}\) such that \(|y_{n}|<r(x_{0})\). Note that \(|q_{h}(x_{0},y_{n})|\leq\frac{\varepsilon}{2}\) by (24). Without loss of generality, we can assume that \(q_{h}(x_{0},y_{n})\) converges to some \(q_{0}\) with \(|q_{0}|\leq\frac{\varepsilon}{2}\) as \(n\to\infty\) (by compactness). By continuity, we have \(q_{0}=f_{x_{0},q_{0}}^{(h-1)}(y_{0})\). Since \((x_{0},y_{0},q_{0})\in\Xi_{\varepsilon}\), we can still use the analytic implicit function theorem together with (25) to conclude that there is a neighbourhood of \((x_{0},y_{0},q_{0})\) where the equation \(f_{x,z}^{(h-1)}(y)=z\) has exactly one solution \(z\) for every \(x\) and \(y\), and an analytic function \(\tilde{q}_{h}(x,y)\) such that \(\tilde{q}_{h}(x,y)=f_{x,\tilde{q}_{h}(x,y)}^{(h-1)}(y)\) and \(\tilde{q}_{h}(x_{0},y_{0})=q_{0}\). We assume the neighbourhood to be chosen small enough such that \(\tilde{q}_{h}(x,y)\in\Xi_{\varepsilon}^{(3)}\) for all \((x,y)\) in the neighbourhood. For large enough \(n\), this neighbourhood contains \((x_{0},y_{n},q_{h}(x_{0},y_{n}))\), so we must have \(q_{h}(x_{0},y_{n})=\tilde{q}_{h}(x_{0},y_{n})\) for all those \(n\). This implies that \(\tilde{q}_{h}\) is an analytic continuation of \(q_{h}\) in a neighbourhood of \((x_{0},y_{0})\) with values in \(\Xi_{\varepsilon}^{(3)}\). Since \(y_{0}\) was arbitrary, we have reached the desired contradiction. So we conclude that there is indeed such an analytic function \(q_{h}\) defined on all of \(\Xi_{\varepsilon}^{(1,2)}\), with values in \(\Xi_{\varepsilon}^{(3)}\). The fact that \(q_{h}(x,y)=O(\lambda^{h})\) finally follows from Lemma 3.7. ### Location of the dominant singularity Let us summarise what has been proven so far. By (10) and Lemma 3.8, for sufficiently large \(h\) we can express \(Y_{h,h}\) in terms of \(Y_{h,1}\) as \[Y_{h,h}(x)=q_{h}(x,Y_{h,1}(x))\] at least in a neighbourhood of \(0\), which we can plug into (11) to get \[Y_{h,1}(x)=x\big{(}\Phi(Y_{h,1}(x)+x)-\Phi(q_{h}(x,Y_{h,1}(x)))\big{)}.\] Setting \[F_{h}(x,y)=x(\Phi(y+x)-\Phi(q_{h}(x,y))),\] this can be rewritten as \[Y_{h,1}(x)=F_{h}(x,Y_{h,1}(x)).\] The function \(F_{h}\) is analytic on \(\Xi_{\varepsilon}^{(1,2)}\) by Lemma 3.8 and the fact that \(\Phi\) is analytic for these arguments. Note also that \[\lim_{h\to\infty}F_{h}(x,y)=x\big{(}\Phi(y+x)-1\big{)}=:F_{\infty}(x,y)\] pointwise for \((x,y)\in\Xi_{\varepsilon}^{(1,2)}\). By the estimate on \(q_{h}\) in Lemma 3.8, we also have \[F_{h}(x,y)=F_{\infty}(x,y)+O(\lambda^{h}), \tag{26}\] uniformly for \((x,y)\in\Xi_{\varepsilon}^{(1,2)}\). Using the same argument as in Lemma 3.5, we can also assume (redefining \(\varepsilon\) if necessary) that \[\frac{\partial}{\partial y}F_{h}(x,y)=\frac{\partial}{\partial y}F_{\infty}(x,y )+O(\lambda^{h}) \tag{27}\] and analogous estimates for any finite number of partial derivatives hold as well. Having reduced the original system of equations to a single equation for \(Y_{h,1}(x)\), we now deduce properties of its dominant singularity. 
Since \(Y_{h,1}(x)\) has a power series with nonnegative coefficients, by Pringsheim's theorem it must have a dominant positive real singularity that we denote by \(\rho_{h}\). Since the coefficients of \(Y_{h,1}(x)\) are bounded above by those of \(Y(x)\), we also know that \(\rho_{h}\geq\rho\). **Lemma 3.9**.: _For every sufficiently large \(h\), \(\rho_{h}\leq\rho+\lambda^{h/2}\). Moreover, \(\eta_{h,1}:=Y_{h,1}(\rho_{h})=\tau-\rho+O(\lambda^{h/2})\)._ Proof.: Note first that \(Y_{h,1}(x)\) is an increasing function of \(x\) for positive real \(x<\rho_{h}\). Let \(\tilde{\rho}=\min(\rho_{h},\rho+\frac{\varepsilon}{2})\). Suppose first that \(\lim_{x\to\tilde{\rho}^{-}}Y_{h,1}(x)\geq\tau-\rho+\frac{\varepsilon}{2}\). If \(h\) is large enough, this implies together with (27) that \[\lim_{x\to\tilde{\rho}^{-}}\frac{\partial F_{h}}{\partial y}(x,Y_ {h,1}(x)) =\lim_{x\to\tilde{\rho}^{-}}\frac{\partial F_{\infty}}{\partial y }(x,Y_{h,1}(x))+O(\lambda^{h})\] \[\geq\frac{\partial F_{\infty}}{\partial y}\Big{(}\rho,\tau-\rho+ \frac{\varepsilon}{2}\Big{)}+O(\lambda^{h})\] \[=\rho\Phi^{\prime}\Big{(}\tau+\frac{\varepsilon}{2}\Big{)}+O( \lambda^{h})>\rho\Phi^{\prime}(\tau)=1.\] On the other hand, we also have \[\frac{\partial F_{h}}{\partial y}(\rho/2,Y_{h,1}(\rho/2)) =\frac{\partial F_{\infty}}{\partial y}(\rho/2,Y_{h,1}(\rho/2))+ O(\lambda^{h})\] \[\leq\frac{\partial F_{\infty}}{\partial y}(\rho/2,Y(\rho/2)-\rho/ 2)+O(\lambda^{h})\] \[<\rho\Phi^{\prime}(\tau)=1,\] so by continuity there must exist some \(x_{0}\in(\rho/2,\tilde{\rho})\) such that \[\frac{\partial F_{h}}{\partial y}(x_{0},Y_{h,1}(x_{0}))=1.\] Moreover, if \(h\) is large enough we have \[\frac{\partial^{2}F_{h}}{\partial y^{2}}(x_{0},Y_{h,1}(x_{0}))=\frac{\partial ^{2}F_{\infty}}{\partial y^{2}}(x_{0},Y_{h,1}(x_{0}))+O(\lambda^{h})>0\] as \(x_{0}\) and thus also \(Y_{h,1}(x_{0})\) are bounded below by positive constants, and analogously \(\frac{\partial F_{h}}{\partial x}(x_{0},Y_{h,1}(x_{0}))>0\). But this would mean that \(Y_{h,1}\) has a square root singularity at \(x_{0}<\rho_{h}\) (compare the discussion in Section 3.3 later), and we reach a contradiction. Hence we can assume that \[\lim_{x\to\tilde{\rho}^{-}}Y_{h,1}(x)<\tau-\rho+\frac{\varepsilon}{2}. \tag{28}\] Assume next that \(\rho_{h}>\rho+\lambda^{h/2}\). Now for \(x_{1}=\rho+\lambda^{h/2}<\tilde{\rho}\) (the inequality holds if \(h\) is large enough to make \(\lambda^{h/2}<\frac{\varepsilon}{2}\)), \(u_{1}=Y_{h,1}(x_{1})+x_{1}\) satisfies \[u_{1}=x_{1}\Phi(u_{1})+O(\lambda^{h}), \tag{29}\] since \(F_{h}(x,y)=F_{\infty}(x,y)+O(\lambda^{h})=x(\Phi(y+x)-1)+O(\lambda^{h})\). Note here that \(u_{1}\leq\tau+\frac{\varepsilon}{2}+\lambda^{h/2}\) by (28), thus \(u_{1}\) is in the region of analyticity of \(\Phi\) (again assuming \(h\) to be large enough). However, since \(u\leq\rho\Phi(u)\) for all positive real \(u\) for which \(\Phi(u)\) is well-defined (the line \(u\mapsto\frac{u}{\rho}\) is a tangent to the graph of the convex function \(\Phi\) at \(\tau\)), for sufficiently large \(h\) the right-hand side in (29) is necessarily greater than the left, and we reach another contradiction. So it follows that \(\rho_{h}\leq\rho+\lambda^{h/2}\), and in particular \(\tilde{\rho}=\rho_{h}<\rho+\frac{\varepsilon}{2}\) if \(h\) is large enough. Since we know that \(\lim_{x\to\tilde{\rho}^{-}}Y_{h,1}(x)<\tau-\rho+\frac{\varepsilon}{2}\), we also have \(\eta_{h,1}:=Y_{h,1}(\rho_{h})<\tau-\rho+\frac{\varepsilon}{2}\). 
We conclude that \((\rho_{h},\eta_{h,1})\in\Xi_{\varepsilon}^{(1,2)}\), i.e., \((\rho_{h},\eta_{h,1})\) lies within the region of analyticity of \(F_{h}\). So the singularity at \(\rho_{h}\) must be due to the implicit function theorem failing at this point: \[\eta_{h,1}=F_{h}(\rho_{h},\eta_{h,1})\text{ and }1=\frac{\partial F_{h}}{ \partial y}(\rho_{h},\eta_{h,1}).\] The second equation in particular gives us \[\rho_{h}\Phi^{\prime}(\eta_{h,1}+\rho_{h})=1+O(\lambda^{h})\] by (27). Since \(\Phi^{\prime}\) is increasing for positive real arguments and we know that \(\rho\Phi^{\prime}(\tau)=1\) and \(\rho_{h}=\rho+O(\lambda^{h/2})\), we can conclude from this that \(\eta_{h,1}=\tau-\rho+O(\lambda^{h/2})\). As we have established that \(\eta_{h,1}\to\tau-\rho\) as \(h\to\infty\), we will use the abbreviation \(\eta_{1}:=\tau-\rho\) in the following. This will later be generalised to \(\eta_{h,k}:=Y_{h,k}(\rho_{h})\to\eta_{k}\), see Sections 4 and 5. For our next step, we need a multidimensional generalisation of Rouche's theorem: **Theorem 3.10** (see [1, p.20, Theorem 2.5]).: _Let \(\Omega\) be a bounded domain in \(\mathbb{C}^{n}\) whose boundary \(\partial\Omega\) is piecewise smooth. Suppose that \(u,v:\overline{\Omega}\to\mathbb{C}^{n}\) are analytic functions, and that the boundary of \(\Omega\) does not contain any zeros of \(u\). Moreover, assume that for every \(z\in\partial\Omega\), there is at least one coordinate \(j\) for which \(|u_{j}(z)|>|v_{j}(z)|\) holds. Then \(u\) and \(u+v\) have the same number of zeros in \(\Omega\)._ **Lemma 3.11**.: _If \(\varepsilon\) is chosen sufficiently small and \(h\) sufficiently large, then the pair \((\rho_{h},\eta_{h,1})\) is the only solution to the simultaneous equations \(F_{h}(x,y)=y\) and \(\frac{\partial}{\partial y}F_{h}(x,y)=1\) with \((x,y)\in\Xi_{\varepsilon}^{(1,2)}\)._ Proof.: Note that \((\rho,\eta_{1})\) is a solution to the simultaneous equations \(F_{\infty}(x,y)=x(\Phi(x+y)-1)=y\) and \(\frac{\partial}{\partial y}F_{\infty}(x,y)=x\Phi^{\prime}(x+y)=1\), and that there is no other solution with \(|x|\leq\rho+\varepsilon\) and \(|y|\leq\eta_{1}+\varepsilon\) if \(\varepsilon\) is chosen sufficiently small by our assumptions on the function \(\Phi\) (see Section 2.1). We take \(\Omega=\Xi_{\varepsilon}^{(1,2)}\) in Theorem 3.10 and set \[u(x,y)=\Big{(}F_{\infty}(x,y)-y,\frac{\partial}{\partial y}F_{\infty}(x,y)-1 \Big{)}.\] Moreover, take \[v(x,y)=\Big{(}F_{h}(x,y)-F_{\infty}(x,y),\frac{\partial}{\partial y}F_{h}(x,y )-\frac{\partial}{\partial y}F_{\infty}(x,y)\Big{)}.\] Note that both coordinates of \(v\) are \(O(\lambda^{h})\) by (26) and (27). Since the boundary \(\partial\Omega\) contains no zeros of \(u\), if we choose \(h\) sufficiently large, then the conditions of Theorem 3.10 are satisfied. Consequently, \(u\) and \(u+v\) have the same number of zeros in \(\Omega\), namely \(1\). Solutions to the simultaneous equations \(F_{h}(x,y)=y\) and \(\frac{\partial}{\partial y}F_{h}(x,y)=1\) are precisely zeros of \(u+v\), so this completes the proof. At this point, it already follows from general principles (see the discussion in [9, Chapter VII.4]) that for every sufficiently large \(h\), \(Y_{h,1}\) has a dominant square root singularity at \(\rho_{h}\), and is otherwise analytic in a domain of the form (6). As we will need uniformity of the asymptotic expansion and a uniform bound for the domain of analyticity, we will make this more precise in the following section. 
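To see what the characteristic system looks like in a concrete case, one can solve the limiting equations \(F_{\infty}(x,y)=y\) and \(\frac{\partial}{\partial y}F_{\infty}(x,y)=1\) numerically. Substituting \(u=x+y\), the system reduces to \(u\Phi^{\prime}(u)=\Phi(u)\) together with \(x=1/\Phi^{\prime}(u)\), so it suffices to locate the zero of \(H(u)=u\Phi^{\prime}(u)-\Phi(u)\). The sketch below does this by bisection under the same illustrative assumption \(\Phi(t)=1/(1-t)\) as before (an assumption of the example, not of the text) and recovers \(\tau=1/2\), \(\rho=1/4\) and \(\eta_{1}=\tau-\rho=1/4\).

```python
# Numerical sketch (illustrative assumption: plane trees, Phi(t) = 1/(1-t)).

def Phi(t):
    return 1.0 / (1.0 - t)

def dPhi(t):
    return 1.0 / (1.0 - t) ** 2

def H(u):
    # H(u) = u * Phi'(u) - Phi(u); its unique positive zero is tau
    return u * dPhi(u) - Phi(u)

def bisect(f, lo, hi, steps=100):
    # simple bisection, assuming f(lo) < 0 < f(hi)
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

tau = bisect(H, 1e-9, 1.0 - 1e-9)    # expected: 1/2
rho = 1.0 / dPhi(tau)                # expected: 1/4
eta1 = tau - rho                     # expected: 1/4
print(tau, rho, eta1)
```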
### Asymptotic expansion and area of analyticity

**Lemma 3.12**.: _Let \(\varepsilon>0\) be such that all previous lemmata hold. There exist \(\delta_{1},\delta_{2}>0\), some positive number \(h_{0}\), and analytic functions \(R_{h}\) on \(D_{\delta_{1}}(\rho_{h})\times D_{\delta_{2}}(\eta_{h,1})\) and \(S_{h}\) on \(D_{\delta_{2}}(\eta_{h,1})\) for \(h\geq h_{0}\) such that \(\delta_{2}<\varepsilon\), \(D_{\delta_{1}}(\rho_{h})\times D_{\delta_{2}}(\eta_{h,1})\subseteq\Xi_{\varepsilon}^{(1,2)}\) and_ \[F_{h}(x,y)-y=(x-\rho_{h})R_{h}(x,y)+(y-\eta_{h,1})^{2}S_{h}(y) \tag{30}\] _holds for \((x,y)\in D_{\delta_{1}}(\rho_{h})\times D_{\delta_{2}}(\eta_{h,1})\) and \(h\geq h_{0}\) and such that \(|R_{h}|\) is bounded from above and below by positive constants on \(D_{\delta_{1}}(\rho_{h})\times D_{\delta_{2}}(\eta_{h,1})\) for \(h\geq h_{0}\) (uniformly in \(h\)) and \(|S_{h}|\) is bounded from above and below by positive constants on \(D_{\delta_{2}}(\eta_{h,1})\) for \(h\geq h_{0}\) (uniformly in \(h\))._

_Furthermore, the sequences \(R_{h}\) and \(S_{h}\) converge uniformly to some analytic functions \(R\) and \(S\), respectively. The same holds for their partial derivatives._

Proof.: Recall that we can approximate partial derivatives of \(F_{h}\) by those of \(F_{\infty}\) with an exponential error bound (as in (27)), giving us \[\frac{\partial}{\partial x}F_{h}(x,y) =\frac{\partial}{\partial x}F_{\infty}(x,y)+O(\lambda^{h})\] \[=\frac{\partial F_{\infty}}{\partial x}(\rho,\eta_{1})+O(\lambda^{h})+O(x-\rho)+O(y-\eta_{1})\] \[=\Phi(\tau)+O(\lambda^{h})+O(x-\rho)+O(y-\eta_{1}),\] as well as \[\frac{\partial^{2}}{\partial y^{2}}F_{h}(x,y) =\frac{\partial^{2}}{\partial y^{2}}F_{\infty}(x,y)+O(\lambda^{h})\] \[=\frac{\partial^{2}F_{\infty}}{\partial y^{2}}(\rho,\eta_{1})+O(\lambda^{h})+O(x-\rho)+O(y-\eta_{1})\] \[=\rho\Phi^{\prime\prime}(\tau)+O(\lambda^{h})+O(x-\rho)+O(y-\eta_{1})\] for \((x,y)\) in a neighbourhood of \((\rho,\eta_{1})\) contained in \(\Xi_{\varepsilon}^{(1,2)}\) and \(h\to\infty\). Using Lemma 3.9, we choose \(\delta_{1}>0\) and \(\delta_{2}>0\) small enough and \(h_{0}\) large enough such that \(|x-\rho_{h}|\leq\delta_{1}\), \(|y-\eta_{h,1}|\leq\delta_{2}\), and \(h\geq h_{0}\) imply that \[\Big{|}\frac{\partial}{\partial x}F_{h}(x,y)-\Phi(\tau)\Big{|}\leq\frac{1}{2}\Phi(\tau) \tag{31}\] and \[\Big{|}\frac{\partial^{2}}{\partial y^{2}}F_{h}(x,y)-\rho\Phi^{\prime\prime}(\tau)\Big{|}\leq\frac{1}{2}\rho\Phi^{\prime\prime}(\tau), \tag{32}\] and such that \(\overline{D_{\delta_{1}}(\rho_{h})\times D_{\delta_{2}}(\eta_{h,1})}\subseteq\Xi_{\varepsilon}^{(1,2)}\). By Lemma 3.11, we have \[F_{h}(\rho_{h},\eta_{h,1})=\eta_{h,1}, \tag{33}\] \[\frac{\partial F_{h}}{\partial y}(\rho_{h},\eta_{h,1})=1. \tag{34}\] We now define \[S_{h}(y)\coloneqq\frac{F_{h}(\rho_{h},y)-y}{\left(y-\eta_{h,1}\right)^{2}}\] for \(y\in\overline{D_{\delta_{2}}(\eta_{h,1})}\setminus\{\eta_{h,1}\}\). By (33) and (34), \(S_{h}\) has a removable singularity at \(\eta_{h,1}\). Therefore it is analytic on \(D_{\delta_{2}}(\eta_{h,1})\).
By (33), we have \[F_{h}(\rho_{h},y)-y =(F_{h}(\rho_{h},y)-y)-(F_{h}(\rho_{h},\eta_{h,1})-\eta_{h,1})\] \[=\int_{\eta_{h,1}}^{y}\Big{(}\frac{\partial}{\partial w}F_{h}( \rho_{h},w)-1\Big{)}\,dw.\] By (34), this can be rewritten as \[F_{h}(\rho_{h},y)-y =\int_{\eta_{h,1}}^{y}\Big{(}\Big{(}\frac{\partial F_{h}}{ \partial y}(\rho_{h},w)-1\Big{)}-\Big{(}\frac{\partial F_{h}}{\partial y}( \rho_{h},\eta_{h,1})-1\Big{)}\Big{)}\,dw\] \[=\int_{\eta_{h,1}}^{y}\int_{\eta_{h,1}}^{w}\frac{\partial^{2}F_{ h}}{\partial y^{2}}(\rho_{h},v)\,dv\,dw\] \[=\int_{\eta_{h,1}}^{y}\int_{\eta_{h,1}}^{w}\rho\Phi^{\prime\prime }(\tau)\,dv\,dw+\int_{\eta_{h,1}}^{y}\int_{\eta_{h,1}}^{w}\Big{(}\frac{ \partial^{2}F_{h}}{\partial y^{2}}(\rho_{h},v)-\rho\Phi^{\prime\prime}(\tau) \Big{)}\,dv\,dw\] \[=\frac{1}{2}\rho\Phi^{\prime\prime}(\tau)(y-\eta_{h,1})^{2}+\int_ {\eta_{h,1}}^{y}\int_{\eta_{h,1}}^{w}\Big{(}\frac{\partial^{2}F_{h}}{\partial y ^{2}}(\rho_{h},v)-\rho\Phi^{\prime\prime}(\tau)\Big{)}\,dv\,dw.\] Rearranging and using the definition of \(S_{h}(y)\) as well as (32) yields \[\Big{|}S_{h}(y)-\frac{1}{2}\rho\Phi^{\prime\prime}(\tau)\Big{|}\leq\frac{1}{4 }\rho\Phi^{\prime\prime}(\tau)\] for all \(y\in\overline{D_{\delta_{2}}(\eta_{h,1})}\) and \(h\geq h_{0}\). Thus \(|S_{h}(y)|\) is bounded from below and above by positive constants for every such \(y\) and \(h\). We now define \(R_{h}(x,y)\) such that (30) holds, which is equivalent to \[R_{h}(x,y)\coloneqq\frac{F_{h}(x,y)-F_{h}(\rho_{h},y)}{x-\rho_{h}}\] for \(x\in\overline{D_{\delta_{1}}(\rho_{h})}\setminus\{\rho_{h}\}\) and \(y\in\overline{D_{\delta_{2}}(\eta_{h,1})}\). We have \[F_{h}(\rho_{h},y)-F_{h}(x,y) =\int_{x}^{\rho_{h}}\frac{\partial F_{h}}{\partial x}(w,y)\,dw\] \[=\Phi(\tau)(\rho_{h}-x)+\int_{x}^{\rho_{h}}\Big{(}\frac{\partial F _{h}}{\partial x}(w,y)-\Phi(\tau)\Big{)}\,dw.\] Rearranging and using the definition of \(R_{h}(x,y)\) yields \[|R_{h}(x,y)-\Phi(\tau)|\leq\frac{1}{2}\Phi(\tau)\] by (31) for \(x\in\overline{D_{\delta_{1}}(\rho_{h})}\setminus\{\rho_{h}\}\) and \(y\in\overline{D_{\delta_{2}}(\eta_{h,1})}\) and \(h\geq h_{0}\). In other words, \(|R_{h}(x,y)|\) is bounded from below and above by positive constants for these \((x,y)\) and \(h\). To prove analyticity of \(R_{h}\), we use Cauchy's formula to rewrite it as \[R_{h}(x,y)=\frac{1}{2\pi i}\oint_{|\zeta-\rho_{h}|=\delta_{1}}\frac{F_{h}(\zeta,y) -F_{h}(\rho_{h},y)}{\zeta-\rho_{h}}\,\frac{d\zeta}{\zeta-x}\] for \(x\neq\rho_{h}\) (note that the integrand has a removable singularity at \(\zeta=\rho_{h}\) in this case). The integral is also defined for \(x=\rho_{h}\) and clearly defines an analytic function on \(D_{\delta_{1}}(\rho_{h})\times D_{\delta_{2}}(\eta_{h,1})\) whose absolute value is bounded from above and below by a constant. To see uniform convergence of \(R_{h}\), we use Cauchy's formula once more and get \[R_{h}(x,y)=\frac{1}{(2\pi i)^{2}}\oint_{|\zeta-\rho_{h}|=\delta_{1}}\oint_{| \eta-\eta_{h,1}|=\delta_{2}}\frac{F_{h}(\zeta,\eta)-F_{h}(\rho_{h},\eta)}{ \zeta-\rho_{h}}\,\frac{d\eta}{\eta-y}\,\frac{d\zeta}{\zeta-x} \tag{35}\] for \(x\in D_{\delta_{1}}(\rho_{h})\) and \(y\in D_{\delta_{2}}(\eta_{h,1})\). Without loss of generality, \(h_{0}\) is large enough such that \(|\rho_{h}-\rho|<\delta_{1}/4\) and \(|\eta_{h,1}-\eta_{1}|<\delta_{2}/4\). 
By Cauchy's theorem, we can change the contour of integration such that (35) implies \[R_{h}(x,y)=\frac{1}{(2\pi i)^{2}}\oint_{|\zeta-\rho|=\delta_{1}/2}\oint_{|\eta-\eta_{1}|=\delta_{2}/2}\frac{F_{h}(\zeta,\eta)-F_{h}(\rho_{h},\eta)}{\zeta-\rho_{h}}\,\frac{d\eta}{\eta-y}\,\frac{d\zeta}{\zeta-x}\] for \(x\in D_{\delta_{1}/4}(\rho)\) and \(y\in D_{\delta_{2}/4}(\eta_{1})\), as the deformation is happening within the region of analyticity of the integrand. Using (26) and the fact that the denominator of the integrand is bounded away from zero shows that \[R_{h}(x,y)=\frac{1}{(2\pi i)^{2}}\oint_{|\zeta-\rho|=\delta_{1}/2}\oint_{|\eta-\eta_{1}|=\delta_{2}/2}\frac{F_{\infty}(\zeta,\eta)-F_{\infty}(\rho_{h},\eta)}{\zeta-\rho_{h}}\,\frac{d\eta}{\eta-y}\,\frac{d\zeta}{\zeta-x}+O(\lambda^{h})\] for \(x\in D_{\delta_{1}/4}(\rho)\) and \(y\in D_{\delta_{2}/4}(\eta_{1})\). By Lemma 3.9, replacing the remaining occurrences of \(\rho_{h}\) by \(\rho\) induces another error term of \(O(\lambda^{h/2})\), so that we get \[R_{h}(x,y)=R(x,y)+O(\lambda^{h/2})\] with \[R(x,y)\coloneqq\frac{1}{(2\pi i)^{2}}\oint_{|\zeta-\rho|=\delta_{1}/2}\oint_{|\eta-\eta_{1}|=\delta_{2}/2}\frac{F_{\infty}(\zeta,\eta)-F_{\infty}(\rho,\eta)}{\zeta-\rho}\,\frac{d\eta}{\eta-y}\,\frac{d\zeta}{\zeta-x}\] for \(x\in D_{\delta_{1}/4}(\rho)\) and \(y\in D_{\delta_{2}/4}(\eta_{1})\). Of course, the \(O\) constants do not depend on \(x\) and \(y\); therefore, we have uniform convergence. Analogously, we get \[S_{h}(y)=\frac{1}{2\pi i}\oint_{|\eta-\eta_{h,1}|=\delta_{2}}\frac{F_{h}(\rho_{h},\eta)-\eta}{(\eta-\eta_{h,1})^{2}}\,\frac{d\eta}{\eta-y} \tag{36}\] \[=S(y)+O(\lambda^{h/2}) \tag{37}\] with \[S(y)\coloneqq\frac{1}{2\pi i}\oint_{|\eta-\eta_{1}|=\delta_{2}/2}\frac{F_{\infty}(\rho,\eta)-\eta}{(\eta-\eta_{1})^{2}}\,\frac{d\eta}{\eta-y},\] for \(y\in D_{\delta_{2}/4}(\eta_{1})\). Analogous results hold for partial derivatives. We replace \(\delta_{1}\) by \(\delta_{1}/4\) and \(\delta_{2}\) by \(\delta_{2}/4\) to get the result as stated in the lemma.

**Lemma 3.13**.: _The constants \(\delta_{1}\), \(\delta_{2}\) and \(h_{0}\) in Lemma 3.12 can be chosen such that whenever \(y=F_{h}(x,y)\) for some \((x,y)\in D_{\delta_{1}}(\rho_{h})\times D_{\delta_{2}}(\eta_{h,1})\) and some \(h\geq h_{0}\), we have \(|y-\eta_{h,1}|<\delta_{2}/2\)._

Proof.: We first choose \(\delta_{1}\) and \(\delta_{2}\) as in Lemma 3.12. Then \(y=F_{h}(x,y)\) and Lemma 3.12 imply that \[|y-\eta_{h,1}|=\sqrt{|x-\rho_{h}|\Big{|}\frac{R_{h}(x,y)}{S_{h}(y)}\Big{|}}.\] The fraction on the right-hand side is bounded by some absolute constant according to Lemma 3.12. So by decreasing \(\delta_{1}\) if necessary, the right-hand side is at most \(\delta_{2}/2\).

**Lemma 3.14**.: _Let \(\varepsilon>0\) be such that the previous lemmata hold. There exists \(\delta_{0}>0\) such that, for all sufficiently large \(h\), the asymptotic formula_ \[Y_{h,1}(x)=\eta_{h,1}+a_{h}\Big{(}1-\frac{x}{\rho_{h}}\Big{)}^{1/2}+b_{h}\Big{(}1-\frac{x}{\rho_{h}}\Big{)}+c_{h}\Big{(}1-\frac{x}{\rho_{h}}\Big{)}^{3/2}+O\Big{(}(\rho_{h}-x)^{2}\Big{)} \tag{38}\] _holds for \(x\in D_{\delta_{0}}(\rho_{h})\) with \(|\mathrm{Arg}(x-\rho_{h})|\geq\pi/4\) and certain sequences \(a_{h}\), \(b_{h}\) and \(c_{h}\). The \(O\)-constant is independent of \(h\), and \(a_{h}\), \(b_{h}\), \(c_{h}\) converge to the coefficients \(a\), \(b\), \(c\) in (4) at an exponential rate as \(h\to\infty\).
Additionally, \(|Y_{h,1}(x)-\eta_{1}|<\varepsilon/2\) for all these \(x\)._ Proof.: By (30), the function \(Y_{h,1}\) is determined by the implicit equation \[0=F_{h}(x,Y_{h,1}(x))-Y_{h,1}(x)=(x-\rho_{h})R_{h}(x,Y_{h,1}(x))+(Y_{h,1}(x)- \eta_{h,1})^{2}S_{h}(Y_{h,1}(x)). \tag{39}\] For \(r>0\), set \(C(r)\coloneqq\{x\in D_{r}(\rho_{h})\colon|\mathrm{Arg}(x-\rho_{h})|\geq\pi/4\}\) and \(\widetilde{C}(r)\coloneqq\{x\in\mathbb{C}\colon|x-\rho_{h}|=r\text{ and }|\mathrm{Arg}(x-\rho_{h})|\geq\pi/4\}\). Choose \(\delta_{1}\), \(\delta_{2}\), \(h_{0}\) as in Lemma 3.13. For some \(h\geq h_{0}\), let \(r_{h}\) be the supremum of all \(r\leq\delta_{1}\) such that \(Y_{h,1}\) can be continued analytically to \(C(r)\) with values in \(D_{\delta_{2}/2}(\eta_{h,1})\). We claim that \(r_{h}=\delta_{1}\). Suppose for contradiction that \(r_{h}<\delta_{1}\) and let \(x_{\infty}\in\widetilde{C}(r_{h})\). Choose a sequence of elements \(x_{n}\in C(r_{h})\) converging to \(x_{\infty}\) for \(n\to\infty\) and set \(y_{n}\coloneqq Y_{h,1}(x_{n})\) for all \(n\). By assumption, we have \(|y_{n}-\eta_{h,1}|\leq\delta_{2}/2\). By replacing the sequence \(x_{n}\) by a subsequence if necessary, we may assume that the sequence \(y_{n}\) is convergent to some limit \(y_{\infty}\). Note that \(|y_{\infty}-\eta_{h,1}|\leq\delta_{2}/2\). By continuity of \(F_{h}\), we also have \(y_{\infty}=F_{h}(x_{\infty},y_{\infty})\). As \((x_{\infty},y_{\infty})\in\Xi_{\varepsilon}^{(1,2)}\) with \(x_{\infty}\neq\rho_{h}\), Lemma 3.11 and the analytic implicit function theorem imply that \(Y_{h,1}\) can be continued analytically in a suitable open neighbourhood of \(x_{\infty}\). This neighbourhood can be chosen small enough such that the inequality \(|Y_{h,1}(x)-\eta_{h,1}|\leq\delta_{2}\) holds for all \(x\) in this neighbourhood. However, Lemma 3.13 implies that we then actually have \(|Y_{h,1}(x)-\eta_{h,1}|\leq\delta_{2}/2\) for all such \(x\). The set of these open neighbourhoods associated with all \(x_{\infty}\in\widetilde{C}(r_{h})\) covers the compact set \(\widetilde{C}(r_{h})\), so a finite subset of these open neighbourhoods can be selected. Thus we find an analytic continuation of \(Y_{h,1}\) to \(C(\widetilde{r}_{h})\) for some \(\widetilde{r}_{h}\in(r_{h},\delta_{1})\) with values still in \(D_{\delta_{2}/2}(\eta_{h,1})\), which is a contradiction to the choice of \(r_{h}\). Thus we have \(r_{h}=\delta_{1}\). In particular, choosing \(h\) large enough that \(|\eta_{h,1}-\eta_{1}|<(\varepsilon-\delta_{2})/2\) gives \(|Y_{h,1}(x)-\eta_{1}|\leq|Y_{h,1}(x)-\eta_{h,1}|+|\eta_{h,1}-\eta_{1}|<\delta_{ 2}/2+(\varepsilon-\delta_{2})/2=\varepsilon/2\) for all \(x\in C(\delta_{1})\). Rearranging (39) yields \[(\eta_{h,1}-Y_{h,1}(x))^{2}=\Big{(}\rho_{h}\frac{R_{h}(x,Y_{h,1}(x))}{S_{h}(Y_{ h,1}(x))}\Big{)}\Big{(}1-\frac{x}{\rho_{h}}\Big{)}. \tag{40}\] We know from Lemma 3.12 that \(R_{h}\) is bounded above and \(S_{h}\) is bounded below on \(D_{\delta_{1}}(\rho_{h})\times D_{\delta_{2}}(\eta_{h,1})\) and \(D_{\delta_{2}}(\eta_{h,1})\), respectively. Therefore, the absolute value of the first factor on the right-hand side of (40) is bounded above and below by positive constants for \(x\in D_{\delta_{1}}(\rho_{h})\). For \(x<\rho_{h}\), we have that the factor \((1-x/\rho_{h})\) is trivially positive and that \(\eta_{h,1}>Y_{h,1}(x)\) because \(Y_{h,1}\) is strictly increasing on \((0,\rho_{h})\), so the first factor on the right-hand side of (40) must be positive. 
Thus we may take the principal value of the square root to rewrite (40) as \[\eta_{h,1}-Y_{h,1}(x)=\sqrt{\rho_{h}\frac{R_{h}(x,Y_{h,1}(x))}{S_{h}(Y_{h,1}(x)) }}\Big{(}1-\frac{x}{\rho_{h}}\Big{)}^{1/2} \tag{41}\] for \(x\in C(\delta_{1})\). The above considerations also show that the radicand in (41) remains positive in the limit \(x\to\rho_{h}^{-}\) (i.e., as \(x\) approaches \(\rho_{h}\) from the left) and then for \(h\to\infty\). As we just observed that the first factor on the right-hand side of (41) is bounded, (41) implies \[Y_{h,1}(x)-\eta_{h,1}=O\big{(}(x-\rho_{h})^{1/2}\big{)}, \tag{42}\] with an \(O\)-constant that is independent of \(h\). We can now iterate this argument: using Taylor expansion along with the fact that partial derivatives of \(R_{h}\) and \(S_{h}\) are uniformly bounded above while \(S_{h}\) is also uniformly bounded below, we obtain \[\frac{R_{h}(x,Y_{h,1}(x))}{S_{h}(Y_{h,1}(x))} =\frac{R_{h}(\rho_{h},\eta_{h,1})+O(x-\rho_{h})+O(Y_{h,1}(x)-\eta _{h,1})}{S_{h}(\eta_{h,1})+O(Y_{h,1}(x)-\eta_{h,1})}\] \[=\frac{R_{h}(\rho_{h},\eta_{h,1})}{S_{h}(\eta_{h,1})}+O\big{(}(x- \rho_{h})^{1/2}\big{)}.\] Plugging this into (41) yields \[\eta_{h,1}-Y_{h,1}(x)=\sqrt{\rho_{h}\frac{R_{h}(\rho_{h},\eta_{h,1})}{S_{h}( \eta_{h,1})}}\Big{(}1-\frac{x}{\rho_{h}}\Big{)}^{1/2}+O(x-\rho_{h}),\] still with an \(O\)-constant that is independent of \(h\). This can be continued arbitrarily often to obtain further terms of the expansion and an improved error term (for our purposes, it is enough to stop at \(O((x-\rho_{h})^{2})\)). Indeed it is well known (cf. [9, Lemma VII.3]) that an implicit equation of the form (39) has a solution as a power series in \((1-x/\rho_{h})^{1/2}\). In particular, (38) follows with an error term that is uniform in \(h\). The coefficients \(a_{h},b_{h},c_{h}\) can be expressed in terms of \(R_{h}\), \(S_{h}\) and their partial derivatives evaluated at \((\rho_{h},\eta_{h,1})\): specifically, \[a_{h} =-\sqrt{\rho_{h}\frac{R_{h}(\rho_{h},\eta_{h,1})}{S_{h}(\eta_{h, 1})}},\] \[b_{h} =\frac{\rho_{h}S_{h}(\eta_{h,1})\frac{\partial R_{h}}{\partial y }(\rho_{h},\eta_{h,1})-\rho_{h}S_{h}^{\prime}(\eta_{h,1})R_{h}(\rho_{h},\eta_{ h,1})}{2S_{h}(\eta_{h,1})^{2}},\] \[c_{h} =\frac{\rho_{h}^{3/2}N}{8\sqrt{R_{h}(\rho_{h},\eta_{h,1})S_{h}( \eta_{h,1})^{7}}},\] where the numerator \(N\) is a polynomial in \(R_{h}(\rho_{h},\eta_{h,1})\), \(S_{h}(\eta_{h,1})\) and their derivatives. By Lemma 3.12, \(R_{h}\) and \(S_{h}\) as well as their partial derivatives converge uniformly to \(R\) and \(S\) as well as their partial derivatives, respectively, with an error bound of \(O(\lambda^{h/2})\). We also know that \(\rho_{h}\) and \(\eta_{h,1}\) converge exponentially to \(\rho\) and \(\eta_{1}\), respectively, see Lemma 3.9. This means that first replacing all occurrences of \(R_{h}\) and \(S_{h}\) by \(R\) and \(S\), respectively, and then replacing all occurrences of \(\rho_{h}\) and \(\eta_{h,1}\) by \(\rho\) and \(\eta_{1}\), respectively, shows that \(a_{h}=a+O(\lambda^{h/2})\), \(b_{h}=b+O(\lambda^{h/2})\), and \(c_{h}=c+O(\lambda^{h/2})\) where \(a\), \(b\), and \(c\) are the results of these replacements. Taking the limit for \(h\to\infty\) in (30) shows that \(R\) and \(S\) and therefore \(a\), \(b\), and \(c\) play the same role with respect to \(F_{\infty}\) as \(R_{h}\), \(S_{h}\), \(a_{h}\), \(b_{h}\), and \(c_{h}\) play with respect to \(F_{h}\), which implies that \(a\), \(b\), and \(c\) are indeed the constants from (4). 
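As a numerical sanity check on the leading coefficient, note that in the limit \(h\to\infty\) the formula for \(a_{h}\) becomes \(a=-\sqrt{\rho R(\rho,\eta_{1})/S(\eta_{1})}\); from the proof of Lemma 3.12 one expects \(R(\rho,\eta_{1})=\frac{\partial F_{\infty}}{\partial x}(\rho,\eta_{1})=\Phi(\tau)\) and \(S(\eta_{1})=\frac{1}{2}\rho\Phi^{\prime\prime}(\tau)\), which would give \(a=-\sqrt{2\Phi(\tau)/\Phi^{\prime\prime}(\tau)}\). The sketch below compares this value with a direct estimate of the square-root coefficient of \(Y(x)-x\) near \(\rho\), again under the illustrative plane-tree assumption \(\Phi(t)=1/(1-t)\) (not prescribed by the text); both come out as \(-1/2\).

```python
# Numerical sketch (illustrative assumption: plane trees, Phi(t) = 1/(1-t)).
import math

tau, rho, eta1 = 0.5, 0.25, 0.25          # values for the assumed Phi
Phi_tau = 1.0 / (1.0 - tau)               # Phi(tau)   = 2
ddPhi_tau = 2.0 / (1.0 - tau) ** 3        # Phi''(tau) = 16

a_formula = -math.sqrt(2.0 * Phi_tau / ddPhi_tau)   # expected: -1/2

def Y(x):
    # closed form for the assumed example: Y(x) = (1 - sqrt(1 - 4x)) / 2
    return (1.0 - math.sqrt(1.0 - 4.0 * x)) / 2.0

eps = 1e-8
x = rho * (1.0 - eps)
# direct estimate of the square-root coefficient of Y(x) - x near rho
a_numeric = ((Y(x) - x) - eta1) / math.sqrt(eps)
print(a_formula, a_numeric)
```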
Having dealt with the behaviour around the singularity, it remains to prove a uniform bound on \(Y_{h,1}\) in a domain of the form (6) for fixed \(\delta\). **Lemma 3.15**.: _Let \(\varepsilon>0\) be such that all previous lemmata hold. There exist \(\delta>0\) and a positive integer \(h_{0}\) such that \(Y_{h,1}(x)\) has an analytic continuation to the domain_ \[\{x\in\mathbb{C}:|x|\leq(1+\delta)|\rho_{h}|,|\mathrm{Arg}(x/\rho_{h}-1)|>\pi/4\}\] _for all \(h\geq h_{0}\), and has the uniform upper bound_ \[|Y_{h,1}(x)|\leq\tau-\rho+\frac{\varepsilon}{2}=\eta_{1}+\frac{\varepsilon}{2}\] _for all \(h\geq h_{0}\) and all \(x\)._ Proof.: Let us define \(r_{h}=\sup\mathcal{R}_{h}\), where \[\mathcal{R}_{h}=\Big{\{}r\,:\,Y_{h,1}\text{ extends analytically to }D_{r}(0)\backslash D_{\delta_{0}}(\rho_{h})\text{ and satisfies }|Y_{h,1}(x)|<\eta_{1}+\frac{\varepsilon}{2}\text{ there} \Big{\}},\] with \(\delta_{0}\) as in the previous lemma. Note that trivially, \(r_{h}\geq\rho\). If \(\liminf_{h\to\infty}r_{h}>\rho\), we are done: in this case, there is some \(\delta>0\) such that \(Y_{h,1}\) extends analytically to \(D_{\rho(1+\delta)}(0)\setminus D_{\delta_{0}}(\rho_{h})\) and satisfies \(|Y_{h,1}(x)|<\eta_{1}+\frac{\varepsilon}{2}\) there. As the previous lemma covers \(D_{\delta_{0}}(\rho_{h})\), this already completes the proof. So let us assume that \(\liminf_{h\to\infty}r_{h}=\rho\) and derive a contradiction. The assumption implies that there is an increasing sequence of positive integers \(h_{j}\) such that \(\lim_{j\to\infty}r_{h_{j}}=\rho\). Without loss of generality, we may assume that \(r_{h_{j}}\leq\rho+\frac{\varepsilon}{2}\) for all \(j\). Pick (for each sufficiently large \(j\)) a point \(x_{h_{j}}\) with \(|x_{h_{j}}|=r_{h_{j}}\) and \(|Y_{h_{j},1}(x_{h_{j}})|=\eta_{1}+\frac{\varepsilon}{2}\). If this were not possible, we could analytically continue \(Y_{h_{j},1}\) at every point \(x\) with \(|x|=r_{h_{j}}\) and \(x\notin D_{\delta_{0}}(\rho_{h_{j}})\) to a disk where \(Y_{h_{j},1}\) is still bounded by \(\eta_{1}+\frac{\varepsilon}{2}\). This analytic continuation is possible, since by Lemma 3.11 the pair \((\rho_{h_{j}},\eta_{h_{j},1})\) is the only solution to the simultaneous equations \(F_{h_{j}}(x,y)=y\) and \(\frac{\partial}{\partial y}F_{h_{j}}(x,y)=1\) with \((x,y)\in\Xi_{\varepsilon}^{(1,2)}\), so the analytic implicit function theorem becomes applicable (compare e.g. the analytic continuation of \(q_{h}\) in Lemma 3.8). By compactness, this would allow us to extend \(Y_{h_{j},1}\) to \(D_{r}(0)\setminus D_{\delta_{0}}(\rho_{h_{j}})\) for some \(r>r_{h_{j}}\) while still maintaining the inequality \(|Y_{h_{j},1}(x)|<\eta_{1}+\frac{\varepsilon}{2}\), contradicting the choice of \(r_{h_{j}}\). Without loss of generality (choosing a subsequence if necessary), we can assume that \(x_{h_{j}}\) and \(Y_{h_{j},1}(x_{h_{j}})\) have limits \(x_{\infty}\) and \(y_{\infty}\), respectively. By construction, \(|x_{\infty}|=\rho\) and \(|y_{\infty}|=\eta_{1}+\frac{\varepsilon}{2}\). Since \(x_{h_{j}}\notin D_{\delta_{0}}(\rho)\) for all \(j\), \(\mathrm{Arg}\,x_{h_{j}}\) is bounded away from \(0\). Thus we can find \(\alpha>0\) such that \(|\mathrm{Arg}\,x_{h_{j}}|\geq 2\alpha\) for all \(j\). 
Define the region \(A\) by \[A=\Big{\{}z\in\mathbb{C}\,:\,|z|<\frac{1}{2}\text{ or }(|z|<1\text{ and }| \mathrm{Arg}\,z|<\alpha)\Big{\}}.\] Note that \(x_{h_{j}}A\) avoids the part of the real axis that includes \(\rho_{h_{j}}\) (see Figure 5), so the function \(Y_{h_{j},1}(x)\) is analytic in this region for all \(j\) by construction since \((x,Y_{h_{j},1}(x))\in\Xi_{\varepsilon}^{(1,2)}\) whenever \(x\in x_{h_{j}}A\). So we have a sequence of functions \(W_{j}(z):=Y_{h_{j},1}(x_{h_{j}}z)\) that are all analytic on \(A\) and are uniformly bounded above by \(\eta_{1}+\frac{\varepsilon}{2}\) by our choice of \(x_{h_{j}}\). By Montel's theorem, there is a subsequence of these functions (without loss of generality the sequence itself) that converges locally uniformly and thus to an analytic function \(W_{\infty}\) on \(A\). This function needs to satisfy the following: * \(W_{\infty}(0)=0\), since \(W_{j}(0)=0\) for all \(j\), * \(W_{\infty}(z)=F_{\infty}(x_{\infty}z,W_{\infty}(z))=x_{\infty}z(\Phi(x_{\infty}z+ W_{\infty}(z))-1)\) for \(z\in A\), since we have the uniform estimate \[W_{j}(z)=Y_{h_{j},1}(x_{h_{j}}z)=F_{h_{j}}(x_{h_{j}}z,Y_{h_{j},1}(x_{h_{j}}z))= F_{\infty}(x_{h_{j}}z,Y_{h_{j},1}(x_{h_{j}}z))+O(\lambda^{h}).\] This is also equivalent to \[x_{\infty}z+W_{\infty}(z)=x_{\infty}z\Phi(x_{\infty}z+W_{\infty}(z)).\] These two properties imply that \(W_{\infty}(z)=Y(x_{\infty}z)-x_{\infty}z\), since \(Y\) is the unique function that is analytic at \(0\) and satisfies the implicit equation \(Y(x)=x\Phi(Y(x))\). Implicit differentiation of \(Y_{h_{j},1}(x)=F_{h_{j}}(x,Y_{h_{j},1}(x))\) for \(x\in x_{h_{j}}A\) yields \[Y^{\prime}_{h_{j},1}(x)=\frac{\frac{\partial F_{h_{j}}}{\partial x}(x,Y_{h_{j },1}(x))}{1-\frac{\partial F_{h_{j}}}{\partial y}(x,Y_{h_{j},1}(x))}=\frac{ \frac{\partial F_{\infty}}{\partial x}(x,Y_{h_{j},1}(x))+O(\lambda^{h_{j}})}{ 1-\frac{\partial F_{\infty}}{\partial y}(x,Y_{h_{j},1}(x))+O(\lambda^{h_{j}})}. \tag{43}\] Note that the numerator is uniformly bounded. Moreover, we recall again that the only solution to the simultaneous equations \(F_{\infty}(x,y)=x(\Phi(y+x)-1)=y\) and \(\frac{\partial}{\partial y}F_{\infty}(x,y)=x\Phi^{\prime}(y+x)=1\) with \(|x|\leq\rho+\varepsilon\) and \(|x+y|\leq\tau+\varepsilon\) is \((x,y)=(\rho,\tau-\rho)=(\rho,\eta_{1})\) by our assumptions on \(\Phi\). By construction, there is a constant \(\varepsilon_{A}>0\) such that \(|x-\rho|\geq\varepsilon_{A}\) whenever \(x\in x_{h_{j}}A\) for some \(j\). The map \((x,y)\mapsto\|(F_{\infty}(x,y)-y,\frac{\partial}{\partial y}F_{\infty}(x,y)-1)\|\) is continuous on the compact set \[\mathcal{K}\coloneqq\{(x,y):|x|\leq\rho+\varepsilon\text{ and }|x+y|\leq\tau+\varepsilon \text{ and }|x-\rho|\geq\varepsilon_{A}\}\] and has no zero there (using the Euclidean norm on \(\mathbb{C}^{2}\)). Therefore, it attains a minimum \(\delta_{A}>0\) on \(\mathcal{K}\). Now for \(x\in x_{h_{j}}A\), \(|x|\leq\rho+\varepsilon\) holds by assumption, as does \(|x+Y_{h_{j},1}(x)|\leq\tau+\varepsilon\). Moreover, \(|x-\rho|\geq\varepsilon_{A}\). Thus we can conclude that \((x,Y_{h_{j},1}(x))\in\mathcal{K}\) and therefore \(\|(F_{\infty}(x,Y_{h_{j},1}(x))-Y_{h_{j},1}(x),\frac{\partial F_{\infty}}{ \partial y}(x,Y_{h_{j},1}(x))-1)\|\geq\delta_{A}\) for all such \(x\). 
Since \[F_{\infty}(x,Y_{h_{j},1}(x))-Y_{h_{j},1}(x)=F_{h_{j}}(x,Y_{h_{j},1}(x))-Y_{h_{j},1}(x)+O(\lambda^{h_{j}})=O(\lambda^{h_{j}}),\] this means that \(|1-\frac{\partial F_{\infty}}{\partial y}(x,Y_{h_{j},1}(x))|\geq\delta_{A}-O(\lambda^{h_{j}})\), so that the denominator in (43) is bounded below by a positive constant for sufficiently large \(j\). So we can conclude that \(Y^{\prime}_{h_{j},1}(x)\) is uniformly bounded by a constant for \(x\in x_{h_{j}}A\), implying that \(W^{\prime}_{j}(z)\) is uniformly bounded (for all \(z\in A\) and all sufficiently large \(j\)) by a constant that is independent of \(j\).

Figure 5. Illustration of the domain \(x_{h_{j}}A\).

Therefore, \(W_{j}(z)\) is a uniformly equicontinuous sequence of functions on \(\overline{A}\), the closure of \(A\). By the Arzelà-Ascoli theorem, this implies that \(W_{j}(z)\to W_{\infty}(z)=Y(x_{\infty}z)-x_{\infty}z\) holds even for all \(z\in\overline{A}\), not only on \(A\). In particular, \(y_{\infty}=W_{\infty}(1)=Y(x_{\infty})-x_{\infty}\). Here, we have \(|x_{\infty}|\leq\rho\) and \(|y_{\infty}|=\eta_{1}+\frac{\varepsilon}{2}\) by assumption. However, \[|Y(x)-x|\leq|Y(\rho)-\rho|=\eta_{1}\] holds for all \(|x|\leq\rho\) by the triangle inequality, so we finally reach a contradiction.

We conclude this section with a summary of the results proven so far. The following proposition follows by combining the last two lemmata.

**Proposition 3.16**.: _There exists a constant \(\delta>0\) such that \(Y_{h,1}(x)\) can be continued analytically to the domain_ \[\{x\in\mathbb{C}:|x|\leq(1+\delta)|\rho_{h}|,|\mathrm{Arg}(x/\rho_{h}-1)|>\pi/4\}\] _for every sufficiently large \(h\). Moreover, \(Y_{h,1}(x)\) is then uniformly bounded on this domain by a constant that is independent of \(h\), and the following singular expansion holds near the singularity:_ \[Y_{h,1}(x)=\eta_{h,1}+a_{h}\Big{(}1-\frac{x}{\rho_{h}}\Big{)}^{1/2}+b_{h}\Big{(}1-\frac{x}{\rho_{h}}\Big{)}+c_{h}\Big{(}1-\frac{x}{\rho_{h}}\Big{)}^{3/2}+O\Big{(}(\rho_{h}-x)^{2}\Big{)},\] _where the \(O\)-constant is independent of \(h\) and \(a_{h},b_{h},c_{h}\) converge at an exponential rate to \(a,b,c\) respectively as \(h\to\infty\)._

_Remark 3.17_.: Let \(D=\gcd\{i\in\mathbb{N}\colon w_{i}\neq 0\}\) be the _period_ of \(\Phi\). The purpose of this remark is to indicate how the results so far have to be adapted for the case \(D>1\). If \(D>1\), then for all trees of our simply generated family of trees, the number \(n\) of vertices will be congruent to \(1\) modulo \(D\) because all outdegrees are multiples of \(D\). Trivially, the same is true for all trees with maximum protection number \(h\). By [9, Remark VI.17], both \(Y\) and \(Y_{h,1}\) have \(D\) conjugate singularities on their circles of convergence. Therefore, it is enough to study the positive singularity at the radius of convergence. Up to Theorem 3.10, no changes are required. In Lemma 3.11, there are exactly \(D\) solutions instead of exactly one solution to the simultaneous equations. Lemmata 3.12, 3.13, and 3.14 analyse the behaviour of \(Y_{h,1}\) around the dominant positive singularity and remain valid without any change. In the proof of Lemma 3.15, we need to exclude balls around the conjugate singularities. Proposition 3.16 must also be changed to exclude the conjugate singularities.

## 4. The exponential case: \(w_{1}\neq 0\)
### Asymptotics of the singularities

Proposition 3.16 that concluded the previous section shows that condition (2) of Theorem 2.1 is satisfied (with \(\alpha=\frac{1}{2}\)) by the generating functions \(Y_{h,1}\) (and thus also \(Y_{h,0}\), since \(Y_{h,0}(x)=Y_{h,1}(x)+x\)). It remains to study the behaviour of the singularity \(\rho_{h}\) of \(Y_{h,0}\) and \(Y_{h,1}\) to make the theorem applicable. As it turns out, condition (1) of Theorem 2.1 holds precisely if vertices of outdegree \(1\) are allowed in our simply generated family of trees. In terms of the weight generating function \(\Phi\), this can be expressed as \(w_{1}=\Phi^{\prime}(0)\neq 0\). Starting with Lemma 4.3, we will assume that this holds. The case where vertices of outdegree \(1\) cannot occur (equivalently, \(w_{1}=\Phi^{\prime}(0)=0\)) is covered in Section 5.

Let us define the auxiliary quantities \(\eta_{h,k}:=Y_{h,k}(\rho_{h})\) for all \(0\leq k\leq h\). We know that these must exist and be finite for all sufficiently large \(h\). Since the coefficients of \(Y_{h,k}\) are nonincreasing in \(k\) in view of the combinatorial interpretation, we must have \[\eta_{h,0}\geq\eta_{h,1}\geq\cdots\geq\eta_{h,h}. \tag{44}\] Note also that the following system of equations holds: \[\eta_{h,0} =\eta_{h,1}+\rho_{h}, \tag{45}\] \[\eta_{h,k} =\rho_{h}\Phi(\eta_{h,k-1})-\rho_{h}\Phi(\eta_{h,h})\qquad\text{ for }1\leq k\leq h, \tag{46}\] in view of (8) and (7), respectively. Since \(Y_{h,1}\) is singular at \(\rho_{h}\) by assumption, the Jacobian determinant of the system that determines \(Y_{h,0},Y_{h,1},\ldots,Y_{h,h}\) needs to vanish (as there would otherwise be an analytic continuation by the analytic implicit function theorem). This determinant is given by \[\begin{vmatrix}1&-1&0&\cdots&0&0\\ -\rho_{h}\Phi^{\prime}(\eta_{h,0})&1&0&\cdots&0&\rho_{h}\Phi^{\prime}(\eta_{h,h})\\ 0&-\rho_{h}\Phi^{\prime}(\eta_{h,1})&1&\cdots&0&\rho_{h}\Phi^{\prime}(\eta_{h,h})\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&-\rho_{h}\Phi^{\prime}(\eta_{h,h-1})&1+\rho_{h}\Phi^{\prime}(\eta_{h,h})\end{vmatrix}.\] Using column expansion with respect to the last column to obtain the determinant, we find that this simplifies to \[\prod_{j=1}^{h}\big{(}\rho_{h}\Phi^{\prime}(\eta_{h,j})\big{)}+\big{(}1-\rho_{h}\Phi^{\prime}(\eta_{h,0})\big{)}\Big{(}1+\sum_{k=2}^{h}\prod_{j=k}^{h}\big{(}\rho_{h}\Phi^{\prime}(\eta_{h,j})\big{)}\Big{)}=0. \tag{47}\] We will now use (45), (46), and (47) to determine an asymptotic formula for \(\rho_{h}\). Throughout this section, \(B_{i}\)'s will always be positive constants with \(B_{i}<1\) that depend on the specific family of simply generated trees, but nothing else.

**Lemma 4.1**.: _There exist positive constants \(C\) and \(B_{1}\) with \(B_{1}<1\) such that \(\eta_{h,k}\leq CB_{1}^{k}\) for all sufficiently large \(h\) and all \(k\) with \(0\leq k\leq h\)._

Proof.: Since we already know that \(\eta_{h,1}\) converges to \(\tau-\rho\) and that \(\rho_{h}\) converges to \(\rho\), \(\eta_{h,0}\) converges to \(\tau\) by (45). By the monotonicity property (44), all \(\eta_{h,k}\) must therefore be bounded by a single constant \(M\) for sufficiently large \(h\). Since \(\eta_{h,1}\) converges to \(\tau-\rho\), we must have that \(\rho_{h}\Phi^{\prime}(\eta_{h,1})\) converges to \(\rho\Phi^{\prime}(\tau-\rho)\). Therefore, \(\rho_{h}\Phi^{\prime}(\eta_{h,1})\leq\rho\Phi^{\prime}(\tau-\rho/2)\) for sufficiently large \(h\).
It follows that \(\rho_{h}\Phi^{\prime}(\eta_{h,1})\leq\rho\Phi^{\prime}(\tau-\rho/2)<\rho\Phi^{ \prime}(\tau)=1\). For all \(1\leq j\leq h\), we now have \[\eta_{h,j}=\rho_{h}\Phi(\eta_{h,j-1})-\rho_{h}\Phi(\eta_{h,h})\leq\rho_{h} \Phi^{\prime}(\eta_{h,j-1})(\eta_{h,j-1}-\eta_{h,h})\leq\rho_{h}\Phi^{\prime} (\eta_{h,1})\eta_{h,j-1}.\] Thus by induction \[\eta_{h,k}\leq\eta_{h,1}\big{(}\rho_{h}\Phi^{\prime}(\eta_{h,1})\big{)}^{k-1} \leq M\big{(}\rho\Phi^{\prime}(\tau-\rho/2)\big{)}^{k-1}.\] This proves the desired inequality for sufficiently large \(h\) and \(1\leq k\leq h\) with \(B_{1}=\rho\Phi^{\prime}(\tau-\rho/2)<1\), and we are done. With this bound, we will be able to refine the estimates for the system of equations, leading to better estimates for \(\rho_{h}\) and \(\eta_{h,0}\). Recall from Lemma 3.9 that \(\rho_{h}\) and \(\eta_{h,1}\) converge to their respective limits \(\rho\) and \(\tau-\rho=\eta_{1}\) (at least) exponentially fast. Since \(\eta_{h,0}=\eta_{h,1}+\rho_{h}\) by (45), this also applies to \(\eta_{h,0}\). We show that an analogous statement also holds for \(\eta_{h,k}\) with arbitrary \(k\). In view of (46), it is natural to expect that \(\eta_{h,k}\to\eta_{k}\), where \(\eta_{k}\) is defined recursively as follows: \(\eta_{0}=\tau\) and, for \(k>0\), \(\eta_{k}=\rho\Phi(\eta_{k-1})-\rho\), which also coincides with our earlier definition of \(\eta_{1}=\tau-\rho\). This is proven in the following lemma. **Lemma 4.2**.: _For a suitable constant \(B_{2}<1\) and sufficiently large \(h\), we have \(\rho_{h}=\rho+O(B_{2}^{h})\) and \(\eta_{h,k}=\eta_{k}+O(B_{2}^{h})\) for all \(k\) with \(0\leq k\leq h\), uniformly in \(k\)._ Proof.: For a suitable choice of \(B_{2}\), the estimate for \(\rho_{h}\) has been established by Lemma 3.9, as has the estimate for \(\eta_{h,k}\) in the cases where \(k=0\) and \(k=1\). Set \(\delta_{h,k}=\eta_{h,k}-\eta_{k}\). Since \(\eta_{h,h}\leq CB_{1}^{h}\) by Lemma 4.1, we have \(\Phi(\eta_{h,h})=\Phi(0)+O(B_{1}^{h})=1+O(B_{1}^{h})\). Without loss of generality, suppose that \(B_{2}\geq B_{1}\). Then, using (46), we obtain \[\eta_{h,k} =\rho_{h}\Phi(\eta_{h,k-1})-\rho_{h}\Phi(\eta_{h,h})\] \[=(\rho+O(B_{2}^{h}))\Phi(\eta_{k-1}+\delta_{h,k-1})-(\rho+O(B_{2} ^{h}))(1+O(B_{1}^{h}))\] \[=\rho(\Phi(\eta_{k-1})+\Phi^{\prime}(\xi_{h,k-1})\delta_{h,k-1})- \rho+O(B_{2}^{h})\] \[=\eta_{k}+\rho\Phi^{\prime}(\xi_{h,k-1})\delta_{h,k-1}+O(B_{2}^{h})\] where \(\xi_{h,k-1}\) is between \(\eta_{k-1}\) and \(\eta_{h,k-1}\) (by the mean value theorem) and the \(O\)-constant is independent of \(k\). Let \(M\) be this \(O\)-constant. We already know (compare the proof of Lemma 4.1) that \(\eta_{h,k-1}\leq\eta_{h,1}\leq\tau-\rho/2\) for every \(k\geq 2\) if \(h\) is sufficiently large. Likewise, it is easy to see that \(\eta_{k}\) is decreasing in \(k\), hence \(\eta_{k-1}\leq\eta_{1}=\tau-\rho\). Thus, \(\xi_{h,k-1}\leq\tau-\rho/2\) and \(\rho\Phi^{\prime}(\xi_{h,k-1})\leq\rho\Phi^{\prime}(\tau-\rho/2)=B_{1}<1\). So we have, for every \(k>1\), \[|\delta_{h,k}|=|\eta_{h,k}-\eta_{k}|\leq B_{1}|\delta_{h,k-1}|+MB_{2}^{h}.\] Iterating this inequality yields \[|\delta_{h,k}|\leq B_{1}^{k-1}|\delta_{h,1}|+(1+B_{1}+\cdots+B_{1}^{k-2})MB_{2 }^{h}\leq|\delta_{h,1}|+\frac{MB_{2}^{h}}{1-B_{1}},\] and the desired statement follows. From Lemma 4.1 and the fact that \(\eta_{h,k}\to\eta_{k}\), we trivially obtain \(\eta_{k}\leq C\cdot B_{1}^{k}\), with the same constants \(B_{1}\) and \(C\) as in Lemma 4.1. 
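The recursion \(\eta_{0}=\tau\), \(\eta_{k}=\rho\Phi(\eta_{k-1})-\rho\) is easy to explore numerically before it is analysed further. The sketch below iterates it under the illustrative assumption \(\Phi(t)=1/(1-t)\) (plane trees, for which \(w_{1}\neq 0\), \(\rho=1/4\) and \(\tau=1/2\); the choice is an assumption of the example, not of the text). The printed ratios \(\eta_{k}/\eta_{k-1}\) approach \(\rho\Phi^{\prime}(0)\), and the rescaled values \(\eta_{k}/(\rho\Phi^{\prime}(0))^{k}\) stabilise, anticipating the sharper statements proved next.

```python
# Numerical sketch (illustrative assumption: plane trees, Phi(t) = 1/(1-t)).

def Phi(t):
    return 1.0 / (1.0 - t)

rho, tau = 0.25, 0.5
zeta = rho * 1.0          # rho * Phi'(0), and Phi'(0) = 1 for this example

eta = tau                 # eta_0 = tau
for k in range(1, 16):
    prev, eta = eta, rho * Phi(eta) - rho      # eta_k = rho * Phi(eta_{k-1}) - rho
    print(k, eta, eta / prev, eta / zeta ** k)
```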
In fact, we can be more precise, and this is demonstrated in the lemma that follows. Since the expression \(\rho\Phi^{\prime}(0)\) occurs frequently in the following, we set \(\zeta:=\rho\Phi^{\prime}(0)\). Recall that we assume \(\Phi^{\prime}(0)\neq 0\) until the end of this section. **Lemma 4.3**.: _The limit \(\lambda_{1}:=\lim_{k\to\infty}\zeta^{-k}\eta_{k}\) exists. Moreover, we have_ \[\eta_{k}=\lambda_{1}\zeta^{k}(1+O(B_{1}^{k})),\] _with \(B_{1}\) as in Lemma 4.1._ Proof.: Recall that we defined the sequence \((\eta_{k})_{k\geq 0}\) by \(\eta_{0}=\tau\) and \(\eta_{k}=\rho\Phi(\eta_{k-1})-\rho\) for \(k\geq 1\). Using Taylor expansion, we obtain \[\eta_{k}=\rho\Phi^{\prime}(0)\eta_{k-1}(1+O(\eta_{k-1}))=\zeta\eta_{k-1}(1+O( \eta_{k-1})).\] Since we already know that \(\eta_{k-1}\leq C\cdot B_{1}^{k-1}\), this implies that \[\eta_{k}=\zeta\eta_{k-1}(1+O(B_{1}^{k})).\] Now it follows that the infinite product \[\lambda_{1}=\eta_{0}\prod_{j\geq 1}\frac{\eta_{j}}{\zeta\eta_{j-1}}=\lim_{k\to \infty}\eta_{0}\prod_{j=1}^{k}\frac{\eta_{j}}{\zeta\eta_{j-1}}=\lim_{k\to\infty }\zeta^{-k}\eta_{k}\] converges. The error bound follows from noting that \[\zeta^{-k}\eta_{k}=\lambda_{1}\prod_{j\geq k+1}\frac{\zeta\eta_{j-1}}{\eta_{j}} =\lambda_{1}\prod_{j\geq k+1}(1+O(B_{1}^{j})).\qed\] Next, we consider the expression in (47) and determine the asymptotic behaviour of its parts. **Lemma 4.4**.: _For large enough \(h\) and a fixed constant \(B_{3}<1\), we have_ \[1+\sum_{k=2}^{h}\prod_{j=k}^{h}(\rho_{h}\Phi^{\prime}(\eta_{h,j}))=\frac{1}{1- \zeta}+O(B_{3}^{h})\] _and_ \[\prod_{j=1}^{h}\rho_{h}\Phi^{\prime}(\eta_{h,j})=\lambda_{2}\zeta^{h}(1+O(B_{ 3}^{h})),\] _where \(\lambda_{2}:=\prod_{j\geq 1}\frac{\Phi^{\prime}(\eta_{j})}{\Phi^{\prime}(0)}\)._ Proof.: Note that \[\prod_{j=k}^{h}(\rho_{h}\Phi^{\prime}(\eta_{h,j}))=\rho_{h}^{h-k+1}\prod_{j=k} ^{h}\Phi^{\prime}(\eta_{h,j}).\] In view of Lemma 4.2, we have \(\rho_{h}^{h-k+1}=\rho^{h-k+1}(1+O(B_{2}^{h}))^{h-k+1}=\rho^{h-k+1}(1+O(hB_{2}^ {h}))\), uniformly in \(k\). Moreover, Lemma 4.1 yields \(\Phi^{\prime}(\eta_{h,j})=\Phi^{\prime}(0)+O(\eta_{h,j})=\Phi^{\prime}(0)+O(B_ {1}^{j})\), uniformly in \(h\). Thus \[\prod_{j=k}^{h}\Phi^{\prime}(\eta_{h,j})=\Phi^{\prime}(0)^{h-k+1}\prod_{j=k}^{ h}(1+O(B_{1}^{j}))=\Phi^{\prime}(0)^{h-k+1}(1+O(B_{1}^{k})).\] Hence the expression simplifies to \[1+\sum_{k=2}^{h}\prod_{j=k}^{h}(\rho_{h}\Phi^{\prime}(\eta_{h,j}))=1+(1+O(hB_{ 2}^{h}))\sum_{k=2}^{h}\zeta^{h-k+1}(1+O(B_{1}^{k})).\] Since \(\zeta<1\) and \(B_{1}<1\), we can simply evaluate the geometric series, and the expression further simplifies to \[1+\sum_{k=2}^{h}\zeta^{h-k+1}+O(B_{3}^{h})=\frac{1-\zeta^{h}}{1-\zeta}+O(B_{3} ^{h})=\frac{1}{1-\zeta}+O(B_{3}^{h})\] for an appropriately chosen \(B_{3}<1\). This proves the first statement. For the second statement, we also use Lemma 4.2, along with the monotonicity of \(\Phi^{\prime}\) and the assumption that \(\Phi^{\prime}(0)\neq 0\) which implies that \(\Phi^{\prime}(\eta_{j})\) is bounded away from \(0\). This yields \[\prod_{j=1}^{h}\rho_{h}\Phi^{\prime}(\eta_{h,j})=\prod_{j=1}^{h}(\rho+O(B_{2}^{h} ))(\Phi^{\prime}(\eta_{j})+O(B_{2}^{h}))=\rho^{h}(1+O(hB_{2}^{h}))\prod_{j=1}^{h }\Phi^{\prime}(\eta_{j}).\] Since \(\Phi^{\prime}(\eta_{j})=\Phi^{\prime}(0)+O(\zeta^{j})\) (by Lemma 4.3), the product that defines \(\lambda_{2}\) converges. 
So we can rewrite the product term as \[\prod_{j=1}^{h}\Phi^{\prime}(\eta_{j})=\Phi^{\prime}(0)^{h}\prod_{j=1}^{h} \frac{\Phi^{\prime}(\eta_{j})}{\Phi^{\prime}(0)}=\lambda_{2}\Phi^{\prime}(0)^ {h}\prod_{j\geq h+1}\frac{\Phi^{\prime}(0)}{\Phi^{\prime}(\eta_{j})},\] and thus, using again the estimate \(\Phi^{\prime}(\eta_{j})=\Phi^{\prime}(0)+O(\zeta^{j})\) on the remaining product, \[\prod_{j=1}^{h}\rho_{h}\Phi^{\prime}(\eta_{h,j})=\lambda_{2}\zeta^{h}(1+O(hB_ {2}^{h}))(1+O(\zeta^{h})).\] This proves the desired formula for a suitable choice of \(B_{3}\). **Corollary 4.5**.: _For sufficiently large \(h\), we have that_ \[\rho_{h}\Phi^{\prime}(\eta_{h,0})=1+\lambda_{2}(1-\zeta)\zeta^{h}(1+O(B_{3}^{h })), \tag{48}\] _where \(\lambda_{2}\) and \(B_{3}\) are as in Lemma 4.4._ Proof.: Taking the asymptotic formulas from the statement of Lemma 4.4 and applying them to (47) we obtain the formula after solving for \(\rho_{h}\Phi^{\prime}(\eta_{h,0})\). In the proof of Lemma 4.2 we used the bound \(\eta_{h,h}=O(B_{1}^{h})\) (obtained from Lemma 4.1). In order to refine the process, we need a more precise estimate. **Lemma 4.6**.: _For sufficiently large \(h\) and a fixed constant \(B_{4}<1\), we have that_ \[\eta_{h,h}=\lambda_{1}(1-\zeta)\zeta^{h}(1+O(B_{4}^{h})), \tag{49}\] _where \(\lambda_{1}\) is as defined in Lemma 4.3._ Proof.: Pick some \(\alpha\in(0,1)\) in such a way that \(\zeta^{\alpha}>B_{2}\), with \(B_{2}\) as in Lemma 4.2, and set \(m=\lfloor\alpha h\rfloor\). From Lemma 4.3, we know that \(\eta_{m}=\Theta(\zeta^{\alpha h})\). By Lemma 4.2, \(\eta_{h,m}=\eta_{m}+O(B_{2}^{h})\), so by our choice of \(\alpha\) there is some \(B_{4}<1\) such that \(\eta_{h,m}=\eta_{m}(1+O(B_{4}^{h}))\) for sufficiently large \(h\). Next, recall from (46) that \[\eta_{h,k}=\rho_{h}\big{(}\Phi(\eta_{h,k-1})-\Phi(\eta_{h,h})\big{)}.\] By the mean value theorem, there is some \(\xi_{h,k}\in(\eta_{h,h},\eta_{h,k-1})\) such that \[\eta_{h,k}=\rho_{h}(\eta_{h,k-1}-\eta_{h,h})\Phi^{\prime}(\xi_{h,k})=\rho_{h}( \eta_{h,k-1}-\eta_{h,h})(\Phi^{\prime}(0)+O(\eta_{h,k-1})).\] Assume now that \(k\geq m\), so that \(\eta_{h,k-1}=O(B_{1}^{\alpha h})\) by Lemma 4.1. Moreover, \(\rho_{h}=\rho+O(B_{2}^{h})\) by Lemma 4.2. So with \(B=\max(B_{2},B_{1}^{\alpha})\), it follows that \[\eta_{h,k}=\zeta(\eta_{h,k-1}-\eta_{h,h})(1+O(B^{h})),\] uniformly for all \(k\geq m\). Rewrite this as \[\eta_{h,k-1}=\eta_{h,h}+\frac{\eta_{h,k}}{\zeta}(1+O(B^{h})).\] Iterate this \(h-m\) times to obtain \[\eta_{h,m} =\sum_{j=0}^{h-m}\frac{\eta_{h,h}}{\zeta^{j}}(1+O(B^{h}))^{j}\] \[=\eta_{h,h}\zeta^{-(h-m)}\frac{1-\zeta^{h-m+1}}{1-\zeta}(1+O(hB^{h} )).\] Now recall that \(\eta_{h,m}=\eta_{m}(1+O(B^{h}_{4}))\), and that \(\eta_{m}=\lambda_{1}\zeta^{m}(1+O(B^{\alpha h}_{1}))\) by Lemma 4.3. Plugging all this in and solving for \(\eta_{h,h}\), we obtain (49), provided that \(B_{4}\) was also chosen to be greater than \(B\) and \(\zeta^{1-\alpha}\). Now we can make use of this asymptotic formula for \(\eta_{h,h}\) in order to obtain a refined estimate for \(\eta_{h,0}\). 
**Proposition 4.7**.: _For a fixed constant \(B_{5}<1\), we have that_ \[\eta_{h,0}=\tau+\frac{(1-\zeta)(\Phi(\tau)\lambda_{2}-\Phi^{\prime}(0) \lambda_{1})}{\tau\Phi^{\prime\prime}(\tau)}\zeta^{h}+O((\zeta B_{5})^{h}) \tag{50}\] _and_ \[\rho_{h}=\rho+\frac{\lambda_{1}(1-\zeta)}{\Phi(\tau)}\zeta^{h+1}+O((\zeta B_{ 5})^{h}), \tag{51}\] _where \(\lambda_{1}\) and \(\lambda_{2}\) are as in Lemma 4.3 and Lemma 4.4 respectively._ Proof.: From (45) and (46) with \(k=1\), we have \[\eta_{h,0}=\rho_{h}\big{(}\Phi(\eta_{h,0})-\Phi(\eta_{h,h})+1\big{)}. \tag{52}\] By means of Taylor expansion and Lemma 4.6, we get \[\eta_{h,0}=\rho_{h}\big{(}\Phi(\eta_{h,0})-\Phi^{\prime}(0)\eta_{h,h}+O(\eta_{ h,h}^{2})\big{)}.\] We multiply this by (48) and divide through by \(\rho_{h}\) to obtain \[\eta_{h,0}\Phi^{\prime}(\eta_{h,0})=\big{(}\Phi(\eta_{h,0})-\Phi^{\prime}(0) \eta_{h,h}+O(\eta_{h,h}^{2})\big{)}\big{(}1+\lambda_{2}(1-\zeta)\zeta^{h}(1+O (B^{h}_{3}))\big{)} \tag{53}\] or, with \(H(x)=x\Phi^{\prime}(x)-\Phi(x)\), \[H(\eta_{h,0})=\big{(}-\Phi^{\prime}(0)\eta_{h,h}+O(\eta_{h,h}^{2})\big{)}(1+O( \zeta^{h}))+\Phi(\eta_{h,0})\lambda_{2}(1-\zeta)\zeta^{h}(1+O(B^{h}_{3})).\] We plug in the asymptotic formula for \(\eta_{h,h}\) from Lemma 4.6 and also note that \(\Phi(\eta_{h,0})=\Phi(\tau+O(B^{h}_{2}))=\Phi(\tau)+O(B^{h}_{2})\) by Lemma 4.2. This gives us \[H(\eta_{h,0})=(\Phi(\tau)\lambda_{2}-\Phi^{\prime}(0)\lambda_{1})(1-\zeta) \zeta^{h}+O((\zeta B_{5})^{h}), \tag{54}\] where \(B_{5}=\max(\zeta,B_{2},B_{3},B_{4})\). Now note that the function \(H\) is increasing (on the positive real numbers within the radius of convergence of \(\Phi\)) with derivative \(H^{\prime}(x)=x\Phi^{\prime\prime}(x)\) and a unique zero at \(\tau\). So by inverting (54), we finally end up with \[\eta_{h,0}=\tau+\frac{1}{H^{\prime}(\tau)}(\Phi(\tau)\lambda_{2}-\Phi^{\prime }(0)\lambda_{1})(1-\zeta)\zeta^{h}+O((\zeta B_{5})^{h}),\] completing the proof of the first formula. Now we return to (48), which gives us \[\rho_{h}=\frac{1+\lambda_{2}(1-\zeta)\zeta^{h}(1+O(B^{h}_{3}))}{\Phi^{\prime}( \eta_{h,0})}=\frac{1+\lambda_{2}(1-\zeta)\zeta^{h}(1+O(B^{h}_{3}))}{\Phi^{ \prime}(\tau)+\Phi^{\prime\prime}(\tau)(\eta_{h,0}-\tau)+O((\eta_{h,0}-\tau) ^{2})}.\] Plugging in (50) and simplifying by means of the identities \(\rho\Phi(\tau)=\tau\) and \(\rho\Phi^{\prime}(\tau)=1\) now yields (51). ### Proof of Theorem 1 We are now finally ready to apply Theorem 2.1 and Theorem 2.2. The generating functions \(Y_{h}(z):=Y_{h,0}(z)=Y_{h,1}(z)+z\) were defined precisely in such a way that \(y_{h,n}=[z^{n}]Y_{h}(z)\) is the number of \(n\)-vertex trees for which the maximum protection number is less than or equal to \(h\). Thus the random variable \(X_{n}\) in Theorem 2.1 becomes the maximum protection number of a random \(n\)-vertex tree. Condition (2) of Theorem 2.1 is satisfied in view of Proposition 3.16. Condition (1) holds by Proposition 4.7 with \(\zeta=\rho\Phi^{\prime}(0)\) and \[\kappa=\frac{\lambda_{1}(1-\zeta)\zeta}{\rho\Phi(\tau)}=\frac{\lambda_{1}(1- \zeta)\zeta}{\tau}, \tag{55}\] where \(\lambda_{1}\) is as defined in Lemma 4.3 and we recall the definition of \(\zeta\) as \(\rho\Phi^{\prime}(0)\). This already proves the first part of Theorem 1. We can also apply Theorem 2.2: Note that the maximum protection number of a tree with size \(n\) is no greater than \(n-1\), thus \(y_{h,n}=y_{n}\) for \(h\geq n-1\), and an appropriate choice of constant for Condition (1) in Theorem 2.2 would be \(K=1\). 
Conditions (2) and (3) are still covered by Proposition 3.16. Hence Theorem 2.2 applies, and the second part of Theorem 1 follows. ## 5. The double-exponential case: \(w_{1}=0\) ### Asymptotics of the singularities In Section 4.1, it was crucial in most of our asymptotic estimates that \(w_{1}=\Phi^{\prime}(0)\neq 0\). In this section we assume that \(w_{1}=\Phi^{\prime}(0)=0\) and define \(r\) to be the smallest positive outdegree with nonzero weight: \[r=\min\{i\in\mathbb{N}:i\geq 2\text{ and }w_{i}\neq 0\}=\min\{i\in\mathbb{N}:i\geq 2 \text{ and }\Phi^{(i)}(0)\neq 0\}.\] Our goal will be to determine the asymptotic behaviour of \(\rho_{h}\) in this case, based again on the system of equations that is given by (45), (46) and (47). Once again, \(B_{i}\)'s will always denote positive constants with \(B_{i}<1\) (different from those in the previous section, but for simplicity we restart the count at \(B_{1}\)) that depend on the specific family of simply generated trees, but nothing else. No part of the proof of Lemma 4.1 depends on \(\Phi^{\prime}(0)\neq 0\) and thus it also holds in the case which we are currently working in, so we already have an exponential bound on \(\eta_{h,k}\). However, this bound is loose if \(\Phi^{\prime}(0)=0\), and so we determine a tighter bound. **Lemma 5.1**.: _There exist positive constants \(C\) and \(B_{1}\) with \(B_{1}<1\) such that \(\eta_{h,k}\leq CB_{1}^{r^{k}}\) for all sufficiently large \(h\) and all \(k\) with \(0\leq k\leq h\)._ Proof.: From (46), we have that \(\eta_{h,k}=\rho_{h}\Phi(\eta_{h,k-1})-\rho_{h}\Phi(\eta_{h,h})\). Using the Taylor expansion about \(0\), this gives, for some \(\xi_{h,k-1}\in(0,\eta_{h,k-1})\), \[\eta_{h,k} =\rho_{h}\Big{(}\Phi(0)+\frac{\Phi^{(r)}(\xi_{h,k-1})}{r!}\eta_{h,k-1}^{r}\Big{)}-\rho_{h}\Phi(\eta_{h,h})\] \[\leq\rho_{h}\frac{\Phi^{(r)}(\xi_{h,k-1})}{r!}\eta_{h,k-1}^{r} \leq\rho_{h}\frac{\Phi^{(r)}(\eta_{h,1})}{r!}\eta_{h,k-1}^{r}.\] There is a constant \(M\) such that \(\rho_{h}\frac{\Phi^{(r)}(\eta_{h,1})}{r!}\leq M\) for all sufficiently large \(h\), since we already know that \(\rho_{h}\) and \(\eta_{h,1}\) converge. So for sufficiently large \(h\), we have \(\eta_{h,k}\leq M\eta_{h,k-1}^{r}\) for all \(k>1\). Iterating this inequality yields \[\eta_{h,k}\leq M^{\frac{r^{k-\ell}-1}{r-1}}\eta_{h,\ell}^{r^{k-\ell}}\] for \(0\leq\ell\leq k\). In view of the exponential bound on \(\eta_{h,\ell}\) provided by Lemma 4.1, we can choose \(\ell\) so large that \(M^{1/(r-1)}\eta_{h,\ell}\leq\frac{1}{2}\) for all sufficiently large \(h\). This proves the desired bound for \(k\geq\ell\) with \(B_{1}=2^{-r^{-\ell}}\) and a suitable choice of \(C\) (for \(k<\ell\), it is implied by the exponential bound). Our next step is an analogue of Lemma 4.4. **Lemma 5.2**.: _For large enough \(h\) and the same constant \(B_{1}<1\) as in the previous lemma, we have_ \[1+\sum_{k=2}^{h}\prod_{j=k}^{h}(\rho_{h}\Phi^{\prime}(\eta_{h,j}))=1+O(B_{1}^{ r^{h}})\] _and_ \[\prod_{j=1}^{h}\rho_{h}\Phi^{\prime}(\eta_{h,j})=O(B_{1}^{r^{h}}).\] Proof.: We already know that \(\rho_{h}\Phi^{\prime}(\eta_{h,1})\) converges to \(\rho\Phi^{\prime}(\tau-\rho)<1\), so for sufficiently large \(h\) and some \(q<1\), we have \(\rho_{h}\Phi^{\prime}(\eta_{h,j})\leq\rho_{h}\Phi^{\prime}(\eta_{h,1})\leq q\) for all \(j\geq 1\). 
It follows that \[\sum_{k=2}^{h}\prod_{j=k}^{h}(\rho_{h}\Phi^{\prime}(\eta_{h,j}))\leq\sum_{k=2 }^{h}q^{h-k}\rho_{h}\Phi^{\prime}(\eta_{h,h})\leq\frac{1}{1-q}\rho_{h}\Phi^{ \prime}(\eta_{h,h})\] and \[\prod_{j=1}^{h}\rho_{h}\Phi^{\prime}(\eta_{h,j})\leq q^{h-1}\rho_{h}\Phi^{ \prime}(\eta_{h,h}).\] Now both statements follow from the fact that \(\Phi^{\prime}(\eta_{h,h})=\Phi^{\prime}(0)+O(\eta_{h,h})=O(\eta_{h,h})\) and the previous lemma. Taking the results from Lemma 5.2 and applying them to (47), we find that \[\rho_{h}\Phi^{\prime}(\eta_{h,0})=1+O\big{(}B_{1}^{r^{h}}\big{)}. \tag{56}\] Additionally note that using Lemma 5.1 and Taylor expansion, we have that \[\Phi(\eta_{h,h})=1+O(B_{1}^{r^{h}}).\] Now recall that (45) and (46) yield (see (52)) \[\eta_{h,0}=\rho_{h}\big{(}\Phi(\eta_{h,0})-\Phi(\eta_{h,h})+1\big{)}, \tag{57}\] which now becomes \[\eta_{h,0}=\rho_{h}\Phi(\eta_{h,0})+O\big{(}B_{1}^{r^{h}}\big{)}. \tag{58}\] Taking advantage of the expressions in (56) and (58), we can now prove doubly exponential convergence of \(\rho_{h}\) and \(\eta_{h,0}\) (using the approach of Proposition 4.7). **Lemma 5.3**.: _For large enough \(h\), it holds that_ \[\rho_{h}=\rho+O\big{(}B_{1}^{r^{h}}\big{)}\qquad\text{and}\qquad\eta_{h,0}=\tau+O \big{(}B_{1}^{r^{h}}\big{)}.\] _and thus also \(\eta_{h,1}=\eta_{h,0}-\rho_{h}=\eta_{1}+O(B_{1}^{r^{h}})\)._ Proof.: Multiplying (56) and (58) and dividing by \(\rho_{h}\) yields \[\eta_{h,0}\Phi^{\prime}(\eta_{h,0})=\Phi(\eta_{h,0})+O\big{(}B_{1}^{r^{h}} \big{)}.\] As in the proof of Proposition 4.7, we observe that the function \(H(x)=x\Phi^{\prime}(x)-\Phi(x)\) is increasing (on the positive real numbers within the radius of convergence of \(\Phi\)) with derivative \(H^{\prime}(x)=x\Phi^{\prime\prime}(x)\) and a unique zero at \(\tau\). So it follows from this equation that \(\eta_{h,0}=\tau+O\big{(}B_{1}^{r^{h}}\big{)}\). Using this estimate for \(\eta_{h,0}\) in (56) it follows that \(\rho_{h}=\rho+O\big{(}B_{1}^{r^{h}}\big{)}\). As in the previous section, we will approximate \(\eta_{h,k}\) by \(\eta_{k}\), defined recursively by \(\eta_{0}=\tau\) and \(\eta_{k}=\rho(\Phi(\eta_{k-1})-1)\). As it turns out, this approximation is even more precise in the current case. **Lemma 5.4**.: _For a fixed constant \(B_{2}<1\) and sufficiently large \(h\), we have that_ \[\eta_{h,k}=\eta_{k}(1+O(B_{2}^{r^{h}})),\] _uniformly for all \(0\leq k\leq h\)._ Proof.: Recall that, by (46), \(\eta_{h,k}=\rho_{h}\Phi(\eta_{h,k-1})-\rho_{h}\Phi(\eta_{h,h})\). By Taylor expansion, we find that \[\eta_{h,k}=\rho_{h}\Phi(\eta_{h,k-1})-\rho_{h}+O(\eta_{h,h}^{r}).\] Since \(\eta_{h,k}\geq\eta_{h,h}\), we have \(\eta_{h,k}-O(\eta_{h,h}^{r})=\eta_{h,k}(1-O(\eta_{h,h}^{r-1}))\). Now we use the estimates \(\eta_{h,h}=O(B_{1}^{r^{h}})\) from Lemma 5.1 and \(\rho_{h}=\rho+O(B_{1}^{r^{h}})\) from Lemma 5.3 to obtain \[\eta_{h,k}=\rho(\Phi(\eta_{h,k-1})-1)\big{(}1+O(B_{1}^{r^{h}})\big{)}.\] We compare this to \[\eta_{k}=\rho(\Phi(\eta_{k-1})-1).\] Taking the logarithm in both these equations and subtracting yields \[\log\frac{\eta_{h,k}}{\eta_{k}}=\log(\Phi(\eta_{h,k-1})-1)-\log(\Phi(\eta_{k- 1})-1)+O(B_{1}^{r^{h}}). \tag{59}\] For large enough \(h\), we can assume that \(\eta_{h,1}\leq\tau\) and thus \(\eta_{h,k}\leq\tau\) for all \(k\geq 1\). The auxiliary function \[\Psi_{1}(u)=\log\big{(}\Phi(e^{u})-1\big{)}\] is continuously differentiable on \((-\infty,\log(\tau)]\). 
Since \(\lim_{u\to-\infty}\Psi_{1}^{\prime}(u)=r\), as one easily verifies, \(|\Psi_{1}^{\prime}(u)|\) must be bounded by some constant \(K\) for all \(u\) in this interval, thus \(|\Psi_{1}(u+v)-\Psi_{1}(u)|\leq K|v|\) whenever \(u,u+v\leq\log(\tau)\). We apply this with \(u+v=\log\eta_{h,k-1}\) and \(u=\log\eta_{k-1}\) to obtain \[\Big{|}\log(\Phi(\eta_{h,k-1})-1)-\log(\Phi(\eta_{k-1})-1)\Big{|}\leq K\Big{|} \log\frac{\eta_{h,k-1}}{\eta_{k-1}}\Big{|}.\] Plugging this into (59) yields \[\Big{|}\log\frac{\eta_{h,k}}{\eta_{k}}\Big{|}\leq K\Big{|}\log\frac{\eta_{h,k- 1}}{\eta_{k-1}}\Big{|}+O(B_{1}^{r^{h}}). \tag{60}\] We already know that \(\big{|}\log\frac{\eta_{h,0}}{\eta_{0}}\big{|}=O(B_{1}^{r^{h}})\) and \(\big{|}\log\frac{\eta_{h,1}}{\eta_{1}}\big{|}=O(B_{1}^{r^{h}})\) in view of Lemma 5.3. Iterating (60) gives us \[\Big{|}\log\frac{\eta_{h,k}}{\eta_{k}}\Big{|}=O\big{(}(1+K+K^{2}+\cdots+K^{k}) B_{1}^{r^{h}}),\] which implies the statement for any \(B_{2}>B_{1}\). The next lemma parallels Lemma 4.3. **Lemma 5.5**.: _There exist positive constants \(\lambda_{1}\) and \(\mu<1\) such that_ \[\eta_{k}=\lambda_{1}\mu^{r^{k}}\big{(}1+O(B_{1}^{r^{k}})\big{)},\] _with the same constant \(B_{1}\) as in Lemma 5.1._ Proof.: Note that Lemma 5.1 trivially implies that \(\eta_{k}=O(B_{1}^{r^{k}})\). From the recursion \[\eta_{k}=\rho(\Phi(\eta_{k-1})-1),\] we obtain, by the properties of \(\Phi\), \[\eta_{k}=\frac{\rho\Phi^{(r)}(0)\eta_{k-1}^{r}}{r!}(1+O(\eta_{k-1})).\] Set \[\lambda_{1}=\Big{(}\frac{\rho\Phi^{(r)}(0)}{r!}\Big{)}^{-1/(r-1)}=(\rho w_{r} )^{-1/(r-1)} \tag{61}\] and divide both sides by \(\lambda_{1}\) to obtain \[\frac{\eta_{k}}{\lambda_{1}}=\Big{(}\frac{\eta_{k-1}}{\lambda_{1}}\Big{)}^{r }(1+O(\eta_{k-1})).\] Let us write \(e^{\theta_{k-1}}\) for the final factor, where \(\theta_{k-1}=O(\eta_{k-1})\). Taking the logarithm yields \[\log\frac{\eta_{k}}{\lambda_{1}}=r\log\frac{\eta_{k-1}}{\lambda_{1}}+\theta_{ k-1}.\] We iterate this recursion \(k\) times to obtain \[\log\frac{\eta_{k}}{\lambda_{1}} =r^{k}\log\frac{\eta_{0}}{\lambda_{1}}+\sum_{j=0}^{k-1}r^{k-1-j} \theta_{j}\] \[=r^{k}\Big{(}\log\frac{\eta_{0}}{\lambda_{1}}+\sum_{j=0}^{\infty }r^{-1-j}\theta_{j}\Big{)}-\sum_{j=k}^{\infty}r^{k-1-j}\theta_{j}.\] The infinite series converge in view of the estimate \(\theta_{j}=O(\eta_{j})=O(B_{1}^{r^{j}})\) that we get from Lemma 5.1. Moreover, we have \(\sum_{j=k}^{\infty}r^{k-1-j}\theta_{j}=O(B_{1}^{r^{k}})\) by the same bound. The result follows upon taking the exponential on both sides and multiplying by \(\lambda_{1}\), setting \[\mu:=\exp\Big{(}\log\frac{\eta_{0}}{\lambda_{1}}+\sum_{j=0}^{\infty}r^{-1-j} \theta_{j}\Big{)}=\frac{\eta_{0}}{\lambda_{1}}\prod_{j=0}^{\infty}e^{\theta_{j }/r^{j+1}}. \tag{62}\] Note that \(\mu<1\) because we already know that \(\eta_{k}=O(B_{1}^{r^{k}})\). In order to further analyse the behaviour of the product \(\prod_{j=1}^{h}(\rho_{h}\Phi^{\prime}(\eta_{h,j}))\) in (47), we need one more short lemma. **Lemma 5.6**.: _For sufficiently large \(h\), we have that_ \[\Phi^{\prime}(\eta_{h,k})=\Phi^{\prime}(\eta_{k})\big{(}1+O(B_{2}^{r^{h}})\big{)},\] _uniformly for all \(1\leq k\leq h\), with the same constant \(B_{2}\) as in Lemma 5.4._ Proof.: Again, we can assume that \(h\) is so large that \(\eta_{h,k}\leq\tau\) for all \(k\geq 1\). The auxiliary function \(\Psi_{2}(u)=\log(\Phi^{\prime}(e^{u}))\) is continuously differentiable on \((-\infty,\log\tau]\) and satisfies \(\lim_{u\to-\infty}\Psi^{\prime}_{2}(u)=r-1\). 
Thus its derivative is also bounded, and the same argument as in Lemma 5.4 shows that \[\Big{|}\log\frac{\Phi^{\prime}(\eta_{h,k})}{\Phi^{\prime}(\eta_{k})}\Big{|} \leq K\Big{|}\log\frac{\eta_{h,k}}{\eta_{k}}\Big{|}\] for some positive constant \(K\). Now the statement follows from Lemma 5.4. **Lemma 5.7**.: _There exist positive constants \(\lambda_{2},\lambda_{3}\) and \(B_{3}<1\) such that, for large enough \(h\),_ \[\prod_{j=1}^{h}(\rho_{h}\Phi^{\prime}(\eta_{h,j}))=\lambda_{2}\lambda_{3}^{h} \mu^{r^{h+1}}\big{(}1+O(B_{3}^{r^{h}})\big{)}, \tag{63}\] _with \(\mu\) as in Lemma 5.5._ Proof.: First, observe that \[\prod_{j=1}^{h}(\rho_{h}\Phi^{\prime}(\eta_{h,j}))=\Big{(}\rho\big{(}1+O(B_{1} ^{r^{h}})\big{)}\Big{)}^{h}\prod_{j=1}^{h}\Big{(}\Phi^{\prime}(\eta_{j})\big{(} 1+O(B_{2}^{r^{h}})\big{)}=\rho^{h}\Big{(}\prod_{j=1}^{h}\Phi^{\prime}(\eta_{j} )\Big{)}\big{(}1+O(hB_{2}^{r^{h}})\big{)}\] in view of Lemma 5.3 and Lemma 5.6 (recall that \(B_{2}>B_{1}\)). Next, Taylor expansion combined with Lemma 5.5 gives us \[\Phi^{\prime}(\eta_{k})=\frac{\Phi^{(r)}(0)}{(r-1)!}\eta_{k}^{r-1}(1+O(\eta_{ k}))=\frac{\Phi^{(r)}(0)\lambda_{1}^{r-1}}{(r-1)!}\mu^{(r-1)r^{k}}\big{(}1+O(B_{1} ^{r^{k}})\big{)}.\] Set \(\lambda_{3}:=\rho\frac{\Phi^{(r)}(0)\lambda_{1}^{r-1}}{(r-1)!}=rw_{r}\rho \lambda_{1}^{r-1}\), so that \[\Phi^{\prime}(\eta_{k})=\frac{\lambda_{3}}{\rho}\mu^{(r-1)r^{k}}\big{(}1+O(B_ {1}^{r^{k}})\big{)}.\] It follows that the infinite product \[\Pi:=\prod_{j=1}^{\infty}\frac{\rho\Phi^{\prime}(\eta_{j})}{\lambda_{3}\mu^{(r -1)r^{j}}}\] converges, and that \[\prod_{j=1}^{h}\frac{\rho\Phi^{\prime}(\eta_{j})}{\lambda_{3}\mu^{(r-1)r^{j}} }=\Pi\big{(}1+O(B_{1}^{r^{h}})\big{)}.\] Consequently, \[\prod_{j=1}^{h}\Phi^{\prime}(\eta_{j})=\Pi\Big{(}\frac{\lambda_{3}}{\rho} \Big{)}^{h}\mu^{r^{h+1}-r}\big{(}1+O(B_{1}^{r^{h}})\big{)}.\] Putting everything together, the statement of the lemma follows with \(\lambda_{2}=\Pi\mu^{-r}\) and a suitable choice of \(B_{3}>B_{2}\) With this estimate for the product term in the determinant of the Jacobian (47), and the estimate for the sum term from Lemma 5.2, we can now obtain a better asymptotic formula for \(\rho_{h}\Phi^{\prime}(\eta_{h,0})\) than that which was obtained in (56). For large enough \(h\), we have that \[\rho_{h}\Phi^{\prime}(\eta_{h,0})=1+\lambda_{2}\lambda_{3}^{h}\mu^{r^{h+1}} \big{(}1+O(B_{3}^{r^{h}})\big{)}. \tag{64}\] For the error term, recall that \(B_{1}<B_{3}\). Moreover, combining Lemmata 5.4 and 5.5 leads to \[\eta_{h,h}=\lambda_{1}\mu^{r^{h}}\big{(}1+O(B_{2}^{r^{h}})\big{)}\] since \(B_{2}\) was chosen to be greater than \(B_{1}\), which we can apply to (57): \[\eta_{h,0} =\rho_{h}\big{(}\Phi(\eta_{h,0})-\Phi(\eta_{h,h})+1\big{)}\] \[=\rho_{h}\Big{(}\Phi(\eta_{h,0})-\frac{\Phi^{(r)}(0)}{r!}\eta_{h, h}^{r}+O(\eta_{h,h}^{r+1})\Big{)} \tag{65}\] \[=\rho_{h}\Big{(}\Phi(\eta_{h,0})-w_{r}\lambda_{1}^{r}\mu^{r^{h+1 }}\big{(}1+O(B_{3}^{r^{h}})\big{)}\Big{)}\] since \(B_{3}\) was chosen to be greater than \(B_{1}\) (and thus also \(\mu\)) and \(B_{2}\). As we did earlier to obtain Lemma 5.3, we multiply the two equations (64) and (65) and divide by \(\rho_{h}\) to find that \[\eta_{h,0}\Phi^{\prime}(\eta_{h,0})=\Big{(}\Phi(\eta_{h,0})-w_{r}\lambda_{1}^ {r}\mu^{r^{h+1}}\big{(}1+O(B_{3}^{r^{h}})\big{)}\Big{)}\Big{(}1+\lambda_{2} \lambda_{3}^{h}\mu^{r^{h+1}}\big{(}1+O(B_{3}^{r^{h}})\big{)}\Big{)}.\] From this, the following result follows now in exactly the same way as Proposition 4.7 follows from (53). 
**Proposition 5.8**.: _For large enough \(h\) and a fixed constant \(B_{4}<1\), we have that_ \[\eta_{h,0}=\tau+\frac{\Phi(\tau)\lambda_{2}\lambda_{3}^{h}-w_{r}\lambda_{1}^ {r}}{\tau\Phi^{\prime\prime}(\tau)}\mu^{r^{h+1}}+O(\mu^{r^{h+1}}B_{4}^{r^{h}})\] _and_ \[\rho_{h}=\rho\Big{(}1+\frac{w_{r}\lambda_{1}^{r}}{\Phi(\tau)}\mu^{r^{h+1}}+O( \mu^{r^{h+1}}B_{4}^{r^{h}})\Big{)}.\] ### An adapted general scheme and the proof of Theorem 2 In this final section, we will first prove Theorems 3 and 4. Then, we will be able to put all pieces together and prove Theorem 2. Proof of Theorem 3.: We apply singularity analysis, and use the uniformity condition to obtain \[y_{h,n}=\frac{A_{h}}{\Gamma(-\alpha)}n^{-\alpha-1}\rho_{h}^{-n}(1+o(1))\] uniformly in \(h\) as \(n\to\infty\) as well as \[y_{n}=\frac{A}{\Gamma(-\alpha)}n^{-\alpha-1}\rho^{-n}(1+o(1)).\] Since in addition \(A_{h}\to A\) and \(\rho_{h}=\rho(1+\kappa\zeta^{r^{h}}+o(\zeta^{r^{h}}))\), it holds that \[\frac{y_{h,n}}{y_{n}} =\Big{(}\frac{\rho_{h}}{\rho}\Big{)}^{-n}(1+o(1))=\exp\big{(}- \kappa n\zeta^{r^{h}}+o(n\zeta^{r^{h}})\big{)}(1+o(1))\] \[=\exp\big{(}-\kappa n\zeta^{r^{h}}(1+o(1))+o(1)\big{)}.\] Proof of Theorem 4.: Fix \(\epsilon>0\). If \(h\geq m_{n}+\epsilon=\log_{r}\log_{d}(n)+\epsilon\), then Theorem 3 gives us \[\mathbb{P}(X_{n}\leq h)\geq\exp\left(-\kappa n^{1-r^{\epsilon}}(1+o(1))+o(1) \right)=1-o(1),\] thus \(X_{n}\leq h\) with high probability. If \(\{m_{n}\}\leq 1-\epsilon\), then this is the case for \(h=\lceil m_{n}\rceil\), otherwise for \(h=\lceil m_{n}\rceil+1\). Similarly, if \(h\leq m_{n}-\epsilon=\log_{r}\log_{d}(n)-\epsilon\), then Theorem 3 gives us \[\mathbb{P}(X_{n}\leq h)\leq\exp\left(-\kappa n^{1-r^{-\epsilon}}(1+o(1))+o(1) \right)=o(1),\] thus \(X_{n}>h\) with high probability. If \(\{m_{n}\}\geq\epsilon\), then this is the case for \(h=\lfloor m_{n}\rfloor\), otherwise for \(h=\lfloor m_{n}\rfloor-1\). The statement now follows by combining the two parts. _Remark 5.9_.: As in Remark 3.17, we indicate the changes which are necessary for the case that the period \(D\) of \(\Phi\) is greater than \(1\). Theorem 3 only depends on singularity analysis. It is well known (see [9, Remark VI.17]) that singularity analysis simply introduces a factor \(D\) in this situation, and as this factor \(D\) cancels because it occurs both in the asymptotic expansions of \(Y_{h}\) as well as \(Y\), this theorem remains valid for \(n\equiv 1\pmod{D}\). Theorem 2 is now an immediate consequence of Theorem 3 and Theorem 4. In analogy to the proof of Theorem 1, the analytic conditions on the generating functions are provided by Proposition 3.16. The condition on the asymptotic behaviour of \(\rho_{h}\) is given by Proposition 5.8 (with \(\zeta=\mu^{r}\)). Thus the proof of Theorem 2 is complete.
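For a concrete illustration of the doubly exponential behaviour established in this section (this worked example is added purely for orientation and is not needed for any of the proofs), consider the weight sequence of full binary trees, \(w_{0}=w_{2}=1\) and \(w_{i}=0\) otherwise, so that \(\Phi(t)=1+t^{2}\), \(w_{1}=0\) and \(r=2\). The equation \(\tau\Phi^{\prime}(\tau)=\Phi(\tau)\) gives \(\tau=1\), and \(\rho\Phi^{\prime}(\tau)=1\) gives \(\rho=\frac{1}{2}\), while (61) yields \(\lambda_{1}=(\rho w_{2})^{-1}=2\). Since \(\Phi(\eta)-1=\eta^{2}\) exactly, the recursion \(\eta_{k}=\rho(\Phi(\eta_{k-1})-1)\) with \(\eta_{0}=\tau=1\) becomes \(\eta_{k}=\frac{1}{2}\eta_{k-1}^{2}\), so all the correction terms \(\theta_{j}\) in (62) vanish and
\[\eta_{k}=2\cdot 2^{-2^{k}},\qquad\text{i.e.}\quad\mu=\tfrac{1}{2}\]
in Lemma 5.5. Proposition 5.8 then predicts \(\rho_{h}=\frac{1}{2}\big(1+2\cdot 2^{-2^{h+1}}+o(2^{-2^{h+1}})\big)\), a doubly exponential rate of convergence of \(\rho_{h}\) to \(\rho\).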
2303.13929
Autonomous Blimp Control via H-infinity Robust Deep Residual Reinforcement Learning
Due to their superior energy efficiency, blimps may replace quadcopters for long-duration aerial tasks. However, designing a controller for blimps to handle complex dynamics, modeling errors, and disturbances remains an unsolved challenge. One recent work combines reinforcement learning (RL) and a PID controller to address this challenge and demonstrates its effectiveness in real-world experiments. In the current work, we build on that using an H-infinity robust controller to expand the stability margin and improve the RL agent's performance. Empirical analysis of different mixing methods reveals that the resulting H-infinity-RL controller outperforms the prior PID-RL combination and can handle more complex tasks involving intensive thrust vectoring. We provide our code as open-source at https://github.com/robot-perception-group/robust_deep_residual_blimp.
Yang Zuo, Yu Tang Liu, Aamir Ahmad
2023-03-24T11:36:30Z
http://arxiv.org/abs/2303.13929v1
# Autonomous Blimp Control via H\({}_{\infty}\) Robust Deep Residual Reinforcement Learning ###### Abstract Due to their superior energy efficiency, blimps may replace quadcopters for long-duration aerial tasks. However, designing a controller for blimps to handle complex dynamics, modeling errors, and disturbances remains an unsolved challenge. One recent work combines reinforcement learning (RL) and a PID controller to address this challenge and demonstrates its effectiveness in real-world experiments. In the current work, we build on that using an \(H_{\infty}\) robust controller to expand the stability margin and improve the RL agent's performance. Empirical analysis of different mixing methods reveals that the resulting H\({}_{\infty}\)-RL controller outperforms the prior PID-RL combination and can handle more complex tasks involving intensive thrust vectoring. We provide our code as open-source at [https://github.com/robot-perception-group/robust_deep_residual_blimp](https://github.com/robot-perception-group/robust_deep_residual_blimp). ## I Introduction Unmanned aerial vehicles (UAVs) like multirotors and fixed wings are increasingly being used for visual tracking tasks such as aerial cinematography, wildlife monitoring[1], and precision farming[2]. However, while multirotors have limitations such as short battery life and small payload, fixed-wings must constantly move to stay airborne. We propose using autonomous blimps, which are more energy-efficient and have a higher payload for long-duration, small-region hovering tasks. Blimp control, however, presents challenges in the context of modeling uncertainties and wind disturbances. Prior work used a deep residual reinforcement learning (DRRL) framework[4, 3] to address this with a model-free proportional-integral-derivative (PID) base controller and an RL agent [5]. During training, the RL agent's action can be considered as an extra disturbance to the base controller, so the robustness of the base controller defines the permitted exploratory actions. In the current work, we replace the PID base controller with a robust model-based \(H_{\infty}\) controller to expand the stability margin. The \(H_{\infty}\) robust design framework generates a controller that makes decisions based on the worst-case scenario, which offers the most significant safety bound at the cost of control performance. This gives the RL agent a larger exploration bandwidth and more potential performance growth. The model-based approach also allows deriving the worst-case bound that considers the total amount of model uncertainty and disturbance from both the environment and the RL agent. We show in the simulated environment that the DRRL agent, consisting of the \(H_{\infty}\) robust control and a proximal policy optimization (PPO) agent[6], outperforms the previous PID-PPO combination in performance and robustness and can even handle more challenging tasks. We also improve the DRRL framework by a variable mixing factor such that the controller can grant the RL agent a variable amount of control authority. We designed the base controller's thrust vectoring to enhance the final performance further, allowing the RL agent to access a more significant state and action space for better exploration. ## II Related Work Research on reliable robotic platforms for aerial tracking tasks has led to the exploration of blimp and vision-based control, with most using PID-based control [10, 8, 7, 11, 9]. 
However, PID controllers are often a suboptimal solution for non-linear control problems like a blimp. Alternative solutions from model-based control frameworks, such as optimal control [12, 13], adaptive control [14], or robust control [15] have been sought, but these have not yielded reliable controllers for real-world experiments due to model uncertainty and output disturbance.

Fig. 1: Top: the simulated blimp with the proposed \(H_{\infty}\)-PPO controller in the challenging coil trajectory. Bottom: descent trajectory of our \(H_{\infty}\)-PPO versus the prior PID-PPO [5] controller. Our controller is more robust against disturbance and improves altitude control by utilizing thrust vectoring, while the PID-PPO controller can only rely on the elevators for altitude control. As a result, it can deviate nearly 15 meters from the desired path.

In recent years, RL with Gaussian Process-based models has been used for low-dimensional tasks [17, 18, 16]. In contrast, Deep RL (DRL) with large capacity models achieved a 3D path-following task in simulation [19] and in the first real-world experiment with the data-driven approach [5]. This work extends the prior DRRL agent [5] by replacing PID with a robust \(H_{\infty}\) controller to improve safety and performance growth.

## III Methodology

In this section, we first introduce the simulators and formulate the task in the reinforcement learning framework (Sec. III-B). Different from [5], we introduce the \(H_{\infty}\) controller as our base control (Sec. III-C). Lastly, we introduce our robust \(H_{\infty}\)-based deep residual reinforcement learning controller (Sec. III-E) as shown in Fig. 2, where the mixer block represents the following equation,

\[a_{mixed}=(1-q)a+qu \tag{1}\]

The second difference from the previous work [5], which applies a fixed value of \(q\), is that we sample it randomly from a distribution. The variable \(q\) allows the controller to decide how much authority can be granted to the RL agent, depending on the situation. For example, when the wind disturbance is prominent, the controller can increase \(q\) for more intervention and safety. In the experiments section, we demonstrate that reducing the amount of intervention \(q\) from the base control improves the final performance (Sec. IV-C). Therefore, our goal is to design a robust controller that guarantees control stability during both the learning and testing phases while requiring a minimum amount of intervention.

### _Markov Decision Process (MDP)_

We first formulate the RL problem as an MDP, represented as a tuple \(\mathcal{M}=(\mathcal{S},\mathcal{A},R,\mathcal{P},\gamma,\rho)\), where \(\mathcal{S}\) and \(\mathcal{A}\) are the state and action spaces, respectively. At any time step \(t\), the RL agent samples an action from its control policy based on the observed environmental state, \(a_{t}\sim\pi(\cdot|s_{t})\). Then the environment returns the next state and a reward based on the underlying transition dynamics \(s_{t+1}\sim\mathcal{P}(\cdot|s_{t},a_{t})\) and a reward function \(r_{t}=R(s_{t},a_{t})\), which defines the desired behavior and can be viewed as a task description. Given the discount factor \(\gamma\in[0,1)\) and initial state distribution \(s_{0}\sim\rho(\cdot)\), the goal of the RL agent is to find a control policy such that the total amount of discounted reward is maximized, i.e.,

\[\pi^{*}=\operatorname*{arg\,max}_{\pi}\mathbb{E}_{\rho}[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})\,|\,a_{t}\sim\pi(\cdot|s_{t}),s_{t+1}\sim\mathcal{P}(\cdot|s_{t},a_{t})] \tag{2}\]
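To make the residual mixing in (1) and the MDP interaction concrete, here is a minimal, hypothetical Python sketch; it is not the released implementation, and the `env`, `policy`, and `controller` objects are placeholders.

```python
import numpy as np

def mixed_step(env, policy, controller, rng=np.random.default_rng()):
    """One interaction step with residual action mixing as in Eq. (1).

    `env`, `policy`, and `controller` are placeholder objects with
    observe()/act()/step() interfaces; they are not from the paper's code.
    """
    s = env.observe()                 # scaled state s_t
    q = rng.uniform(0.0, 1.0)         # mixing factor q_t, resampled every step
    a = policy.act(np.append(s, q))   # RL action a_t (the policy also observes q)
    u = controller.act(s)             # base-control action u_t
    a_mixed = (1.0 - q) * a + q * u   # Eq. (1): residual mixing
    s_next, r = env.step(a_mixed)     # environment transition and reward
    return s_next, r
```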
### _Task Formulation_

We train and test our mixed \(H_{\infty}\)-RL controller (Fig. 2) to perform navigation tasks in the simulated environments. The agent's goal is to control the vehicle to a desired position. We first introduce two environments: a simplified toy environment, _TurtleSim_, and the blimp simulator[11].

Fig. 2: Our robust deep residual reinforcement learning framework. Every time step, the mixer gathers the action command from the policy \(a_{t}\) and the controller \(u_{t}\) and then mixes them based on the mixing factor \(q_{t}\) evaluated by the controller.

#### III-B1 Turtle Control Task

Due to its similarity to the blimp control problem, we introduce this task for the ablation study. As shown in Fig. 3(a), the agent observes the state at every time step and controls the robot turtle to a stationary target position. Both the robot and target positions spawn randomly in every new episode. We formulate the problem as an MDP with the following state and action spaces,

* state space: \(s_{t}=(s_{\theta},s_{l},u_{v},u_{\omega},q)_{t}\in[0,1]\),
* action space: \(a_{t}=(a_{v},a_{\omega})_{t}\in[-1,1]\),

where all states are in the range \([0,1]\), the scaled states \((s_{\theta},s_{l})\) are the relative yaw angle \(\theta\) and relative distance \(l\), augmented with the mixing factor \(q\in[0,1]\), and the base control commands \((u_{v},u_{\omega})\) correspond to the thrust and yaw velocity commands and share the same command channels with the agent's actions \((a_{v},a_{\omega})\). The navigation task can then be formulated by the reward function,

\[r_{t}=\begin{bmatrix}w_{success}&w_{track}\end{bmatrix}\begin{bmatrix}r_{success,t}\\ r_{track,t}\end{bmatrix} \tag{3}\]
\[r_{success,t}=1\ \text{if}\ |l_{t}|\leq\epsilon\ \text{else}\ 0, \tag{4}\]
\[r_{track,t}=-|l_{t}|, \tag{5}\]

where by default \(w_{success}=500\), \(w_{track}=0.1\), and \(\epsilon=0.1\). The environment resets itself when the success reward is obtained.

#### III-B2 Blimp Control Task

Similar to the turtle control task, the goal of the RL agent is to navigate the robotic blimp (Fig. 3(b)) to a virtual position target. The state and action spaces are specified as follows,

* state space: \(s_{t}=(s_{z},s_{l},s_{\theta},u_{\zeta},u_{\eta},u_{\epsilon},u_{\delta},q)_{t}\)
* action space: \(a_{t}=(a_{\zeta},a_{\eta},a_{\epsilon},a_{\delta})_{t}\)

where all states are scaled into the range \([0,1]\), the scaled states \((s_{z},s_{l},s_{\theta})\) are the relative altitude \(z\), relative distance \(l\), and relative yaw angle \(\theta\), augmented with the mixing factor \(q\in[0,1]\), and the base control commands \((u_{\zeta},u_{\eta},u_{\epsilon},u_{\delta})\) correspond to the control of rudder deflection, elevator deflection, the servo thrust angle, and the thrust magnitude. The actions \((a_{\zeta},a_{\eta},a_{\epsilon},a_{\delta})\) correspond to the same command channels. Note that the action dynamics are coupled; for example, one can ascend by an elevator when moving forward or directly thrusting upward through the thrust vector. In this context, there are two major differences from the prior work[5]. The first is the usage of reverse thrust. Descending a blimp is challenging since the blimp's heading velocity is usually slow, and, consequently, the altitude descent velocity from the elevator is also slow.
This can cause significant altitude tracking errors. Therefore, even though reverse thrusting is generally less efficient, it helps the blimp descend much faster when it lacks heading velocity. The second difference is that we trigger the next target waypoint only when the total distance to the target, rather than the planar distance, is less than a threshold of 5 meters. This requires much more efficient altitude control and poses a greater challenge for control allocation, as there are diverse ways to achieve it, e.g., elevator or thrust vector. We demonstrate in the experiments that the RL agents fail to find any viable control policy without efficiently using thrust vectoring. The following reward function formulates the navigation task,

\[r_{t}=\begin{bmatrix}w_{success}&w_{track}&w_{penalty}\end{bmatrix}\begin{bmatrix}r_{success,t}\\ r_{track,t}\\ r_{penalty,t}\end{bmatrix} \tag{6}\]
\[r_{success,t}=1\ \text{if}\ |l_{t}|\leq\epsilon\ \text{else}\ 0, \tag{7}\]
\[r_{track,t}=-w_{z}|z_{t}|-w_{l}|l_{t}|-w_{\theta}|\theta_{t}|, \tag{8}\]
\[r_{penalty,t}=\Delta(a,u), \tag{9}\]

where the default values of the task weights are \((w_{success},w_{track},w_{penalty})=(500,1,10)\), the tracking reward weights are \((w_{z},w_{l},w_{\theta})=(2,5,2)\), and \(\epsilon=5\)[m]. The term \(\Delta(a,u)\) penalizes the action when it deviates too much from the base control, to encourage synergy between the agent and the controller. In practice, we found that without this ad-hoc penalty, RL agents fail to find any viable control policy. At each time step, we initialize \(\Delta(a,u)=0\), and then accumulate it if any of the following conditions are triggered,

* \(\Delta(a,u)+=-0.5\), if \(a_{\epsilon}u_{\epsilon}<0\) and \(|a_{\epsilon}-u_{\epsilon}|>0.4\).
* \(\Delta(a,u)+=-0.5\), if \(a_{\eta}u_{\eta}>0\) and \(|a_{\eta}-u_{\eta}|>0.4\).
* \(\Delta(a,u)+=-0.5\), if \(a_{\epsilon}a_{\delta}>0\).
* \(\Delta(a,u)+=1\), if \(a_{\epsilon}u_{\epsilon}>0\).
* \(\Delta(a,u)+=-0.5\), if \(u_{\eta}=-1\), \(u_{\epsilon}=0.5\) and \(a_{\epsilon}>0.7\).

To avoid misuse of the reverse thrust, the third condition penalizes the agent when it commands the thrust vector to tilt backward, \(a_{\epsilon}>0\), while the thrust \(a_{\delta}\) is positive, and vice versa. Similarly, the last condition penalizes the case where the controller commands the thrust vector to tilt backward at its maximum, \(u_{\epsilon}=0.5\), but the RL agent tilts the thrust vector even more, which leads to inefficient reverse thrusting. Lastly, all other conditions penalize action commands that differ too much between the agent and the controller.
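As a small illustration of how the shaped reward in (6)-(8) is assembled from the stated default weights, the following hypothetical sketch (not the authors' code) evaluates it for two example states; the \(\Delta(a,u)\) penalty of (9) is passed in as zero here, and the unit and scaling conventions are assumptions.

```python
# Hypothetical sketch of the blimp navigation reward, Eqs. (6)-(8).
W_SUCCESS, W_TRACK, W_PENALTY = 500.0, 1.0, 10.0   # task weights from the text
W_Z, W_L, W_THETA = 2.0, 5.0, 2.0                  # tracking reward weights
EPS = 5.0                                          # success threshold (assumed in meters)

def blimp_reward(z, l, theta, penalty=0.0):
    """z, l, theta: relative altitude, distance and yaw angle to the waypoint."""
    r_success = 1.0 if abs(l) <= EPS else 0.0                        # Eq. (7)
    r_track = -(W_Z * abs(z) + W_L * abs(l) + W_THETA * abs(theta))  # Eq. (8)
    return W_SUCCESS * r_success + W_TRACK * r_track + W_PENALTY * penalty

print(blimp_reward(z=1.0, l=20.0, theta=0.3))  # far from the waypoint: -102.6
print(blimp_reward(z=0.5, l=3.0, theta=0.1))   # within the success radius: 483.8
```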
### \(H_{\infty}\) Robust Control

This framework can be illustrated by the feedback control loop in Fig. 4, where K is the controller, G is our robot, and the state is directly observed from the sensor without any estimator. Given the tracking signal \(w\) and the feedback observation \(y_{m}\), the goal is to design the controller such that the tracking error \(e=w-y_{m}\) is reduced over time subject to the input and output disturbances \(du\), \(dy\). Note that in the hybrid control of Fig. 2, the agent action can be viewed as part of \(du\). The weighting filters have the following forms,

\[W_{T}(s)=\frac{1}{T_{min}}\cdot\frac{s+\omega_{zt}}{s+\omega_{nt}}, \tag{10a}\]
\[W_{KS}(s)=\frac{1}{KS_{min}}\cdot\frac{s+\omega_{zks}}{s+\omega_{nks}}, \tag{10b}\]
\[W_{S}(s)=\frac{1}{S_{max}}\cdot\frac{s+\omega_{zs}}{s+\omega_{ns}}, \tag{10c}\]

where \(\omega\) is the cut-off frequency of each filter.

Fig. 4: \(H_{\infty}\) control framework. The goal is to design the controller \(K\) such that the robot G can be stabilized and the input and output disturbances (\(du\) and \(dy\)) can be rejected.

Fig. 3: Simulation Environments.

The weight filter parameters used in our experiments are presented in Table. II. The controller K can then be solved for by satisfying the following constraint,

\[\left\|\begin{matrix}W_{S}\cdot S\\ W_{KS}\cdot KS\\ W_{T}\cdot T\end{matrix}\right\|_{\infty}\leq 1 \tag{11}\]

In practice, we design the weight filters manually and then solve for the controller K in MATLAB [20]. Let \(DU\) and \(U\) be the Laplace transforms of the unknown input disturbance \(du\) and the control command \(u\); then the system response can be formulated as follows,

\[G\cdot(I+\Delta\cdot W_{T})\cdot U=G\cdot(U+DU), \tag{12}\]

by treating the input disturbance as part of the model uncertainty, where \(I\) is the identity matrix and \(\Delta=\Delta(s)\) is the uncertainty matrix with \(\left\|\Delta\right\|_{\infty}\leq 1\). After factoring out \(G\), we can derive the following relation using the sub-multiplicative property of the norm,

\[\left\|DU\right\|_{2}=\left\|\Delta\cdot W_{T}\cdot U\right\|_{2} \tag{13}\]
\[\leq\left\|\Delta\right\|_{2}\cdot\left\|W_{T}\right\|_{2}\cdot\left\|U\right\|_{2} \tag{14}\]
\[\leq\left\|W_{T}\right\|_{2}\cdot\left\|U\right\|_{2} \tag{15}\]

Since the matrix \(W_{T}\) is diagonal and consists of identical entries, we can consider the \(i\)-th row of (15):

\[\left\|DU_{i}\right\|_{2}\leq\left\|W_{T,i}\right\|_{2}\cdot\left\|U_{i}\right\|_{2}. \tag{16}\]

Note that (16) is sufficient but not necessary for (15), which means that (16) is a stricter condition for determining the upper bound of \(\left\|DU\right\|_{2}\). By _Parseval's theorem_, we have two identities for (16):

\[\left\|DU_{i}(j\omega)\right\|_{2}=\left\|du_{i}(t)\right\|_{2}, \tag{17a}\]
\[\left\|U_{i}(j\omega)\right\|_{2}=\left\|u_{i}(t)\right\|_{2}. \tag{17b}\]

Finally, we can derive a conservative upper bound for the plant input disturbance from (16) and (17), i.e., \(\left\|du_{i}\right\|_{2}\leq\left\|W_{T,i}\right\|_{2}\left\|u_{i}\right\|_{2}\), such that the \(H_{\infty}\)-controller stabilizes the plant G. We derive the theoretical upper bound for \(\left\|du_{i}\right\|_{2}\) in both simulators as shown in Table. I, assuming that the controller commands a step input \(u_{i}=1\) when \(t\geq 0\). Now consider the mixed command in (1). As long as the following relation is satisfied, the process will remain stable.
Or assuming _worst-case_ scenario when we have an adversarial agent, i.e., \(\mathbb{E}\left[a\right]=-u\), then when \(\mathbb{E}\left[q\right]\geq 0.08\) for TurtleSim or \(\mathbb{E}\left[q\right]\geq 0\) for the blimp simulator, the process will remain stable. However, because our plant model is likely imperfect and considering other disturbance and noise that is not modeled, the allowed maximum input disturbance will be less than the estimation. In practice, our conservative choice of \(\mathbb{E}\left[q\right]=0.5\) for the TurtleSim and \(\mathbb{E}\left[q\right]=0.3\) for the blimp simulator seem to work well. ### _Robust Hybrid \(H_{\infty}\)-RL in TurtleSim_ Given the controller command \(u=[u_{\upsilon},u_{\omega}]^{T}\), reference \(w=[0,0]^{T}\), the state vector \(x=[l,\theta]^{T}\), and plant output \(y=[l,\theta]^{T}\), the kinematics of the turtle can be represented by the following state space model, \[A_{turtle} =\begin{bmatrix}0&0\\ 0&0\end{bmatrix},B_{turtle}=\begin{bmatrix}-1\\ 1\end{bmatrix}, \tag{20}\] \[C_{turtle} =\begin{bmatrix}1&0\\ 0&1\end{bmatrix},D_{turtle}=\begin{bmatrix}0\\ 0\end{bmatrix}. \tag{21}\] Recall that \(l\) denotes the relative distance between the turtle and its target, and \(\theta\) represents the heading angle difference to the target. The kinematic model of the turtle, i.e., the plant \(G\), including the disturbances \(du\) and \(dy\), sensor measurement, and the weighting filters \(W\), yields the augmented plant \(P\) (Fig. 5). The augmented plant \(P\) is stabilized by a controller \(K\), which is obtained by applying the \(H_{\infty}\) design method given the constraint (11) and Table. II. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Parameters** & _turtle_ & _blimp, yaw_ & _blimp, as_ & _blimp, ds_ \\ \hline \(T_{max}\) & \(\sqrt{5}\) & \(\sqrt{2.2}\) & \(\sqrt{2}\) & \(\sqrt{2}\) \\ \hline \(T_{min}\) & \(0.01\) & \(0.01\) & \(0.01\) & \(0.01\) \\ \hline \(KS_{max}\) & \(10\) & \(0.8\) & \(0.5\) & \(0.5\) \\ \hline \(KS_{min}\) & \(0.01\) & \(0.01\) & \(0.01\) & \(0.01\) \\ \hline \(S_{max}\) & \(2\sqrt{2}\) & \(\sqrt{2}\) & \(\sqrt{2}\) & \(\sqrt{2}\) \\ \hline \(S_{min}\) & \(0.001\) & \(0.01\) & \(0.01\) & \(0.01\) \\ \hline \(\omega_{ns}\) & \(0.00625\sqrt{2}\) & \(0.001\sqrt{2}\) & \(0.001\sqrt{2}\) & \(0.001\sqrt{2}\) \\ \hline \(\omega_{zt}\) & \(25\) & \(0.2\) & \(0.2\) & \(0.2\) \\ \hline \(\omega_{nt}\) & \(2500\sqrt{5}\) & \(20\sqrt{2.2}\) & \(20\sqrt{2}\) & \(20\sqrt{2}\) \\ \hline \(\omega_{zks}\) & \(200\) & \(0.8\) & \(0.5\) & \(0.5\) \\ \hline \(\omega_{nks}\) & \(2\cdot 10^{5}\) & \(64\) & \(25\) & \(25\) \\ \hline \(\omega_{ss}\) & \(25\) & \(0.2\) & \(0.2\) & \(0.2\) \\ \hline \end{tabular} \end{table} TABLE II: \(H_{\infty}\) control weighting filter parameters. Note that column ”turtle” denotes the \(H_{\infty}\)-controller in _TurtleSim_, and ”blimp, yaw/as/ds” denote the yaw, ascend, and descend motion, respectively, in the blimp simulator. Fig. 5: \(H_{\infty}\) controller compact form \begin{table} \begin{tabular}{|c||c|c|c|c|} \hline **Simulator** & \(\left\|W_{T,t}\right\|_{2}(j\omega)\) & \(\left\|u_{i}\right\|_{2}\) & Maximum \(\left\|du_{i}\right\|_{2}\) & \(\omega\left\lceil rad/s\right\rceil\) \\ \hline _TurtleSim_ & \(1.84\) & \(\sqrt{t}\) & \(1.84\sqrt{t}\) & \(100\) \\ \hline Blimp & \(33.34\) & \(\sqrt{t}\) & \(33.34\sqrt{t}\) & \(10\) \\ \hline \end{tabular} \end{table} TABLE I: Maximum allowed input disturbance. 
Depending on the sampling frequency \(\omega\), we increase \(\left\|W_{T}\right\|_{2}\) of the blimp simulator for more robustness and less of TurtleSim for more performance. Symbol \(t\) denotes the time duration the controller sending step input \(u_{i}=1\). Now consider the hybrid scenario, and the control command has the following form, \[a_{mixed}=\left[(1-q)a_{v}+qu_{v}\quad(1-q)a_{\omega}+qu_{\omega}\right]^{T} \tag{22}\] where \(a\sim\pi(\cdot|s)\). The mixing factor is sampled randomly from the uniform random distribution at each time step, i.e., \(q\sim\mathcal{U}(0,1)\), which satisfies the constraint in (19). Note that it is important for the agent to observe the entire interval of \(q\in[0,1]\) during training so that the agent will be able to generalize to any arbitrary \(q\) in the testing phase. ### _Robust Hybrid \(H_{\infty}\)-RL in the Blimp Simulator_ Since one compact _MIMO (multiple-input and multiple-output)_-controller using the \(H_{\infty}\)-method is hard to derive, we split the entire blimp dynamic into two linear sub-systems: the yaw motion and the others. The yaw dynamic is modeled as \((A,B,C,D)_{yaw}=(0,-20,1,0)\), with the state \(x_{yaw}=\theta\), \(y_{yaw}=\theta\), \(u_{yaw}=u_{\zeta}\) and \(w_{yaw}=\theta_{ref}\), where \(\theta\) is the heading angle of the blimp w.r.t the world frame. Furthermore, we model the time delay by the second-order _Pade approximation_ with a dead time \(T=0.65\), \[e^{-Ts}\approx\frac{2-Ts}{2+Ts} \tag{23}\] The rest of the dynamics is required for the velocity and altitude control. Since the thrusting angle introduces non-linearity, we linearize at two trim points and design two \(H_{\infty}\)-controllers for the ascending and descending motions, respectively. In both modes, we have the same states, plant outputs, and control commands, i.e., \(x_{\text{as}/ds}=[-l,z]^{T}\), \(y_{\text{as}/ds}=[l,z]^{T}\), \(u_{\text{as}/ds}=[u_{\eta},u_{\epsilon},u_{\delta}]^{T}\), and \(w_{\text{as}/ds}=[0,0]^{T}\). Recall that \(l\) denotes the relative distance, and \(z\) denotes the relative altitude. Then we have the ascending dynamics, \[A_{\text{ascend}}=\begin{bmatrix}0&0\\ 0&0\end{bmatrix},B_{\text{ascend}}=\begin{bmatrix}0&-5&9.8\\ -0.4&-0.77&0\end{bmatrix} \tag{24}\] \[C_{\text{ascend}}=\begin{bmatrix}-1&0\\ 0&1\end{bmatrix},D_{\text{ascend}}=\begin{bmatrix}0&0&0\\ 0&0&0\end{bmatrix} \tag{25}\] and descending dynamics, \[A_{\text{descend}}=\begin{bmatrix}0&0\\ 0&0\end{bmatrix},B_{\text{descend}}=\begin{bmatrix}0&5&-9.8\\ -0.4&0.77&0\end{bmatrix}, \tag{26}\] \[C_{\text{descend}}=\begin{bmatrix}-1&0\\ 0&1\end{bmatrix},D_{\text{descend}}=\begin{bmatrix}0&0&0\\ 0&0&0\end{bmatrix}. \tag{27}\] As we are applying two linear controllers to two highly nonlinear dynamics, we further restrict the controller commands by heuristics to assure that the blimp works near the linearization points, i.e., \(u_{\epsilon}\in[-1,-0.5]\) and \(u_{\delta}\in[0.4,0.6]\) in ascending mode while \(u_{\epsilon}\in[0.5,1]\) and \(u_{\delta}\in[-0.6,-0.4]\) in descending mode. Now, we obtain in total three \(H_{\infty}\) controllers by applying the \(H_{\infty}\) design method via (11) and the weighting filter (Table. II) for their linearized dynamics. The altitude controller switches the mode at zero relative altitudes, i.e., \(z=0\), while the yaw controller remains independent of the altitude control. 
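Before assembling the hybrid agent, it may help to relate the bound (19) to the numbers reported in Table I. The following back-of-the-envelope check is a hypothetical sketch, not part of the paper's code; it assumes a unit step command, so that \(\left\|u_{i}\right\|_{2}=\sqrt{t}\) and, for an adversarial agent with \(a=-u\), \(\left\|u_{i}-a_{i}\right\|_{2}=2\sqrt{t}\).

```python
# Hypothetical check of the mixing-factor bound (19) with the Table I values
# (||W_T,i||_2 = 1.84 for TurtleSim and 33.34 for the blimp simulator).
def q_lower_bound(w_t_norm, ratio):
    """ratio = ||u_i - a_i||_2 / ||u_i||_2; bound of Eq. (19), clipped at 0."""
    return max(0.0, 1.0 - w_t_norm / ratio)

for name, w_t in [("TurtleSim", 1.84), ("blimp", 33.34)]:
    worst = q_lower_bound(w_t, 2.0)    # adversarial agent, a = -u
    average = q_lower_bound(w_t, 1.0)  # zero-mean agent, ||u - a|| ~ ||u||
    print(name, worst, average)
# TurtleSim: worst-case bound q >= 0.08, matching the value quoted in the text;
# blimp: both bounds are vacuous (q >= 0), so any mixing distribution is admissible.
```

As discussed after (19), the conservative choices of \(\mathbb{E}[q]\) used in practice are larger than these bounds because the plant model is imperfect and other disturbances are not modeled.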
Finally, our hybrid DRRL agent has the action in the format,

\[a_{mixed}=(1-q)\begin{bmatrix}a_{\zeta}\\ a_{\eta}\\ a_{\epsilon}\\ a_{\delta}\end{bmatrix}+q\begin{bmatrix}u_{\zeta}\\ u_{\eta}\\ u_{\epsilon}\\ u_{\delta}\end{bmatrix} \tag{28}\]

During training, we sample \(q\) from different distributions, while in the testing phase, the controller can provide \(q\) based on the constraint (19) or apply a constant \(q\), as we did in this work. We have experimented with different \(q\) distributions and empirically found that the mixing factor with any distribution works well as long as it covers the full range \(q\in[0,1]\). More details are given in the experiments section (Sec. IV-D).

### _Training the Robust DRRL Agent_

Our networks (Table. III) follow the actor-critic architecture, which requires two function approximators, e.g., deep neural networks, for the value estimation \(V_{\theta}(s,a)\) and the policy distribution \(\pi_{\phi}(\cdot|s)\). A PPO agent (proximal policy optimization, [6]), with the hyper-parameters in Table. IV, is employed to optimize our networks' parameters. The hybrid agent collects data by interacting with \(N_{env}\) parallelized environments with a randomized mixing factor \(q\). A waypoint is sampled randomly in every episode, and it is triggered when the robot is within a certain distance, e.g., 1[m] for TurtleSim and 5[m] for the blimp simulator. The environment resets when the waypoint is triggered. We inject noise into the observations to increase the agent's robustness and randomize the wind disturbance and buoyancy in every episode. The transition data are stored in a buffer of size \(N_{epoch}\). When the buffer is full, the agent starts learning by querying \(L/N_{batch}\) mini-batches and updating \(N_{update}\) times for each of them, or until the KL threshold \(D_{KL}\) is reached. Note that the base control can be considered part of the environment in our hybrid control scenario. The agent's goal is to optimize the total amount of reward considering the base control's decision.

## IV Experiments and Results

The experiments aim to understand whether increasing the robustness of the base control can enable more performance growth and generate a robust and performant controller.

### _Experiment Setup_

We perform all experiments on a single computer (AMD Ryzen Threadripper 3960X, 24x 3.8GHz, NVIDIA GeForce RTX2080 Ti, 11GB). The PPO agent and the base controllers, i.e., \(H_{\infty}\) and PIDs, are implemented based on PyTorch [22] to facilitate vectorized computing. Both TurtleSim and the blimp simulator are implemented based on ROS, and the latter is integrated with the Gazebo SITL simulation.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \cline{2-10} \multicolumn{1}{c|}{} & \multicolumn{6}{c|}{Value Network} & \multicolumn{6}{c|}{Policy Network} \\ \hline Simulator & \(o\) & \(L\) & \(F_{1}\) & \(F_{2}\) & \(v\) & \(o\) & \(L\) & \(F_{1}\) & \(F_{2}\) & \(\mu\) \\ \hline TurtleSim & 5 & 64 & 64 & 64 & 1 & 5 & 24 & 24 & 24 & 2 \\ \hline Blimp & 8 & 196 & 196 & 196 & 1 & 8 & 64 & 64 & 64 & 4 \\ \hline \end{tabular} \end{table} TABLE III: Network architecture. Notation \(L\) denotes the LSTM layer, F is the fully connected layer, and the numbers indicate the layer's size. Following the suggestion of a recent work [21], we choose _tanh_ as our activation function and initialize the last layer with small weights (e.g., 0.01) to improve the exploration.
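For readers who want to reproduce the architecture in Table III, the following is a hypothetical PyTorch sketch of the blimp policy network (an LSTM of width 64, two fully connected layers with tanh, and a small-weight output layer as suggested by [21]). The exact layer wiring and the tanh squashing of the output are assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Sketch of the blimp policy network from Table III (sizes 8-64-64-64-4)."""
    def __init__(self, obs_dim=8, hidden=64, act_dim=4):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)   # layer L
        self.fc = nn.Sequential(                                  # layers F1, F2
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.mu = nn.Linear(hidden, act_dim)                      # action mean
        nn.init.uniform_(self.mu.weight, -0.01, 0.01)             # small last-layer init
        nn.init.zeros_(self.mu.bias)

    def forward(self, obs_seq, hidden_state=None):
        out, hidden_state = self.lstm(obs_seq, hidden_state)
        mu = torch.tanh(self.mu(self.fc(out)))   # squash to [-1, 1] (assumption)
        return mu, hidden_state

# Example: a batch of 2 sequences of length 10 with 8-dimensional observations.
mu, _ = PolicyNet()(torch.randn(2, 10, 8))
print(mu.shape)   # torch.Size([2, 10, 4])
```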
We compare our robust \(H_{\infty}\)-RL controller to the previous PID-RL baseline in TurtleSim (Sec. III-B1) and the blimp simulator (Sec. III-B2). 1. \(H_{\infty}\)-PPO agent: our proposed approach. 2. PID-PPO agent: the previous approach [5]. We re-implemented it with a randomized mixing factor \(q\) and a servo control so that it has the chance to challenge our more difficult waypoint following task. Note that when the mixing factor is \(q=1\) we recover the base control, and \(q=0\) corresponds to the pure PPO agent. In the training phase, \(q\) is sampled randomly from different distributions, while it is fixed in the testing phase.

### _Performance Metric_

Because the total reward can be misleading (it only reflects an agent's performance in tracking one specific waypoint, and each agent's training reward can differ), we introduce a metric, \(b_{score}\), to compare the relative performance between the controllers and to replace the reward for our path-following tasks.

\[b_{score}(\pi_{i})=100\cdot\left(1-\frac{T_{\pi_{i}}}{\sum_{\pi_{j}\in\pi}T_{\pi_{j}}}-\frac{E_{\pi_{i}}}{\sum_{\pi_{j}\in\pi}E_{\pi_{j}}}\right), \tag{29}\]

where \(T_{\pi_{i}}\) is the average amount of time each controller needs to complete the task and \(E_{\pi_{i}}\) is the average control effort of each controller. The more time or energy a controller consumes to complete the task, the worse its score relative to the other controllers.

### _TurtleSim_

We sample a random position target in every new episode to test the proposed hybrid agent. The environment resets when the robot reaches the target position. The experiment terminates when the agent successfully controls the robot to the target position 100 times. The goal of each agent is to trigger the terminal condition with a minimum amount of time and energy in every episode. Table. V shows the result of our experiment in TurtleSim, where the energy penalty for each control policy is defined as \(E_{\pi}=0.25\cdot|\bar{a}_{mix,v}|+0.75\cdot|\bar{a}_{mix,\omega}|\), and the bar denotes the average over the whole trajectory. Each experiment is conducted with ten random seeds. Note that because TurtleSim does not have any dynamics, we applied the disturbance \(du\) to the output action during training to simulate wind, formulated as time-dependent noise,

\[a_{mixed,t}\gets a_{mixed,t}+\delta_{t} \tag{30}\]
\[\delta_{t}=\delta_{t-1}+[n_{v}\quad n_{\omega}]^{\top} \tag{31}\]

where \(\delta_{t}\) is initialized as zero and bounded in \([-1,1]\), and both noise components are sampled from a standard Gaussian, \(n_{v},n_{\omega}\sim\mathcal{N}(0,1)\). This noise is amplified five times in the testing phase, increasing the bound to \([-5,5]\). Table. V indicates that the PPO agent alone has the best performance and energy efficiency, followed by the PID and then the \(H_{\infty}\) controller. Therefore, with less controller intervention \(q\), the hybrid agent achieves better performance. Unsurprisingly, the PID controller performs better than the \(H_{\infty}\) controller, which trades both performance and energy efficiency for more robustness against noise and disturbance. A base controller with more robustness allows training with a lower mixing factor \(q\), but since TurtleSim is relatively simple and allows both base controllers to train and test with arbitrary \(q\), the advantage of \(H_{\infty}\) is not well reflected in this experiment.

### _Blimp Simulator_

The training method of the hybrid agent is introduced in Sec. III-F for both the PID-PPO baseline and our \(H_{\infty}\)-PPO agent.
We conduct an ablation study on the effect of the different sampling distributions for the mixing factor \(q\) during training. The testing phase is conducted in the coil trajectory, represented by a sequence of waypoints with three different wind disturbance and buoyancy levels. Each evaluation finishes when the agent completes the designated trajectory five times. The waypoints are triggered when the robot is within 10 meters, and the following waypoints will become active. The coil trajectory consists of 15 waypoints with a 50 meters radius. Each waypoint is placed 45 degrees counter-clockwise from the previous one with 3 meters increase in altitude. The coil trajectory poses a great challenge. Due to the shorter planar distance, the controllers must constantly slow down the blimp to prevent overshooting the waypoints, which can incur significant altitude loss. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Controller & \(q\) & \(|\bar{a}_{v}|\) & \(|\bar{a}_{\omega}|\) & \(T\) & \(E\) & \(b_{score}\) \\ \hline PPO & 0 & 5.58 & 4.72 & 9916 & 4.93 & 70.63 \\ \hline \(H_{\infty}\)-PPO & 0.5 & 6.25 & 5.56 & 11709 & 5.58 & 66.05 \\ \hline _PID_-PPO & 0 & 5.58 & 8.49 & 10055 & 5.19 & 69.66 \\ \hline \(H_{\infty}\)-PPO & 1 & 7.18 & 6.85 & 11765 & 6.93 & 61.91 \\ \hline _PID_-PPO & 1 & 7.32 & 5.69 & 12281 & 6.10 & 63.65 \\ \hline \end{tabular} \end{table} TABLE V: Testing in _TurtleSim_. Each row displays the average value of 10 experiments with different seeds. \begin{table} \begin{tabular}{|l|c|c|} \hline **Parameters** & _TurtleSim_ & _Blimp_ \\ \hline Time Steps per Environment \(N_{step}\) & \(50000\) & 86400 \\ \hline Parallelization \(N_{Senv}\) & 8 & 7 \\ \hline Episode Length \(L\) & 2000 & 2400 \\ \hline Loop Rate \([Hz]\) & 100 & 10 \\ \hline Epoch Length \(N_{epoch}\) & 1000 & 1920 \\ \hline Mini-batch Size \(N_{batch}\) & 100 & 128 \\ \hline Update per Epoch \(N_{update}\) & 20 & 20 \\ \hline \hline Initial Policy Learning Rate \(\alpha_{0}\) & \(5\cdot 10^{-5}\) & \(5\cdot 10^{-5}\) \\ \hline Initial Value Learning Rate \(\beta_{0}\) & \(1\cdot 10^{-4}\) & \(1\cdot 10^{-4}\) \\ \hline KL Threshold \(D_{KL}\) & \(\infty\) & 0.03 \\ \hline Discount Factor \(\gamma\) & 0.999 & 0.99 \\ \hline GAE Smoothing \(\lambda\) & 0.95 & 0.9 \\ \hline Gradient Optimizer & \(Adam\) & \(Adam\) \\ \hline \end{tabular} \end{table} TABLE IV: PPO hyper-parameters. The learning rate is scheduled by multiplying a constant of less than one every episode until it drops to a minimum of le5. Since deviating far from the track compromises the blimp's safety, the position tracking error L is introduced to the \(b_{score}\) for the blimp navigation task, i.e., \[b_{score}(\pi_{i}) \tag{32}\] \[=100\cdot\left(1-\frac{3T_{\pi_{i}}}{\sum_{\pi_{j}\in\pi}T_{\pi_{j} }}-\frac{E_{\pi_{i}}}{\sum_{\pi_{j}\in\pi}E_{\pi_{j}}}-\frac{L_{\pi_{i}}}{\sum_ {\pi_{j}\in\pi}L_{\pi_{j}}}\right),\] where \(T_{\pi_{i}}\) and \(E_{\pi_{i}}\) are the time and energy penalty, and the \(L_{\pi_{i}}\) is the average distance loss computed by the norm of the relative position. The robustness test for the agents is displayed in Table VI. The \(H_{\infty}\)-PPO outperforms the PID-PPO combination with a significantly higher success rate regardless of the wind or buoyancy condition. Even when the controller has zero intervention \(q=0\), the PPO agent trained with \(H_{\infty}\) base control performs much better. And the \(H_{\infty}\)-PPO performs the best when \(q=0.5\) without failing to any condition. 
This shows that our robust residual RL framework can generate a robust, high-performance controller. The explanations can be found in the table as well. First, base control robustness is vital to the final performance of the PPO agent. When \(q=1\), although \(H_{\infty}\) performs worse than PID, its robustness against disturbance secures its success rate. During training, PID can become unstable due to input disturbance from the RL agent, further jeopardizing the PPO training stability while \(H_{\infty}\) is less affected. As a result, the PPO trained with PID performs even worse than PID. The wind and buoyancy test reflect the robustness of the controller. The PID-PPO failure rate increases significantly outside the nominal condition in which PID is tuned. The effect of wind can be visualized in Fig.6. The wind barely influences \(H_{\infty}\)-PPO controller while PID-PPO controller is blown away and loses yaw control when descending. Second, the PID controller has relatively poor altitude control even after installing thrust vectoring. Since thrust vectoring introduces high nonlinearity to the system, the PID controller does not benefit much from it. The poor performance in altitude control is reflected in the buoyancy test. We conducted an ablation study about the effect of wind disturbance and \(q\) distribution during training. Table. VII displays the effect of incorporating wind disturbance during training. Regardless of the mixing factor \(q\), the \(b_{score}\) always decreases when training with the wind. With \(q=0\), the performance drops significantly, implying that the wind negatively impacts the agent more than the controller. As a result, we suggest training without any disturbance to encourage aggressive behavior since, under the supervision of the robust controller, the agent no longer needs to behave conservatively. The effect of mixing factor \(q\) during training is displayed in Table.VIII. We found the training success rate increases when \(q\) becomes larger since the base control is critical in improving the training stability. The distribution with a higher average \(q\) is preferred during training as it increases the training success rate. The distribution type is not essential as long as it covers the entire range \(q\in[0,1]\). Lastly, as mentioned, a lower average \(q\) is desired for improving the control performance in the testing phase. ## V Conclusions In this work, we have introduced a \(H_{\infty}\)-PPO hybrid agent for the blimp control task. We first improve the altitude control efficiency by incorporating the thrust vectoring into the base control and enabling the usage of reverse thrusting. Then, we applied the variable mixing factor, which allows the controller to balance robustness and performance based on the situation. A theoretical lower bound for the mixing factor is derived to guarantee stability. 
Lastly, we test in the blimp simulator that our robust hybrid agent can outperform the prior PID-PPO combination and demonstrate greater robustness against wind disturbance and buoyancy changes.

\begin{table} \begin{tabular}{|l||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline controller & **q** & \(a_{\zeta}\) & \(a_{\eta}\) & \(a_{\epsilon}\) & \(a_{\delta}\) & \(T\) & \(L\) & \(E\) & \(b_{score}\) & fail & **wind** & \(T\) & \(L\) & \(E\) & \(b_{score}\) & fail \\ \hline PID-PPO & **0** & 0.83 & 0.90 & 1 & 1 & 3414 & 43.90 & 0.8705 & 84.38 & 7 & **0** & 2996 & 37.74 & 0.7252 & 86.50 & 4 & **0.93** & 2420 & 37.89 & 0.4988 & 88.95 & 8 \\ \hline PID-PPO & **0** & 0.56 & 0.79 & 0.99 & 0.79 & 4294 & 66.09 & 0.6943 & 81.24 & 5 & **0.5** & 4255 & 70.88 & 0.6048 & 81.38 & 5 & **1** & 2633 & 55.99 & 0.6284 & 86.64 & 3 \\ \hline PID-PPO & **1** & 0.47 & 0.55 & 0.94 & 0.57 & 2615 & 38.42 & 0.5048 & 88.36 & 4 & **1** & 2464 & 33.55 & 0.4927 & 89.15 & 8 & **1.07** & 5416 & 43.98 & 0.7562 & 39.73 & 6 \\ \hline \(H_{\infty}\)-PPO & **0** & 0.52 & 0.88 & 0.57 & 1 & 2646 & 35.97 & 0.8641 & 87.00 & 0 & **0** & 2464 & 35.12 & 0.6813 & 88.28 & 0 & **0.93** & 3442 & 35.69 & 0.6875 & 85.60 & 0 \\ \hline \(H_{\infty}\)-PPO & **0** & 0.54 & 0.46 & 0.63 & 0.61 & 0.76 & 2484 & 36.10 & 0.6665 & 88.23 & 0 & **0.5** & 2685 & 38.07 & 0.6830 & 87.49 & 0 & **1** & 2510 & 36.88 & 0.6720 & 88.08 & 0 \\ \hline \(H_{\infty}\)-PPO & **1** & 0.47 & 0.43 & 0.71 & 0.55 & 3456 & 37.10 & 0.5075 & 86.19 & 0 & **1** & 3438 & 35.98 & 0.6739 & 85.65 & 0 & **1.07** & 2633 & 36.60 & 0.6788 & 87.74 & 0 \\ \hline \end{tabular} \end{table} TABLE VI: Robustness test. Each experiment is conducted nine times with the coil trajectory. Failed trials are excluded from computing the \(b_{score}\) and marked as _fail_. The energy penalty is defined as \(E_{\pi_{i}}=0.15\cdot|u_{\zeta}|+0.05\cdot|u_{\eta}|+0.1\cdot(1-|u_{\epsilon}|)+0.7\cdot|u_{\delta}|\), which penalizes mainly thrusting. The unit for wind is [m/s] and for buoyancy is [%]. The maximum speed of the simulated blimp is 2 [m/s].

Fig. 6: Snippet of the coil trajectories (green curves) and waypoints (red dots). The first and second columns correspond to the \(H_{\infty}\)-PPO and PID-PPO controllers. Rows correspond to the wind velocities 0, 0.5, and 1 [m/s]. The buoyancy is in nominal condition while the mixing factor \(q=0.5\).
2305.06714
Factorization systems and double categories
We show that factorization systems, both strict and orthogonal, can be equivalently described as double categories satisfying certain properties. This provides conceptual reasons for why the category of sets and partial maps or the category of small categories and cofunctors admit orthogonal factorization systems. The theory also gives an explicit description of various lax morphism classifiers and explains why they admit strict factorization systems.
Miloslav Štěpán
2023-05-11T10:47:03Z
http://arxiv.org/abs/2305.06714v2
# Factorization systems and double categories ###### Abstract. We show that factorization systems, both strict and orthogonal, can be equivalently described as double categories satisfying certain properties. This provides conceptual reasons for why the category of sets and partial maps or the category of small categories and cofunctors admit orthogonal factorization systems. The theory also gives an explicit description of various lax morphism classifiers and explains why they admit strict factorization systems. This work was supported by the Operational Programme Research, Development and Education "Project Internal Grant Agency of Masaryk University" (No. CZ.02.2.69\(\backslash\)0.0\(\backslash\)0.0\(\backslash\)19_073\(\backslash\)0016943). The author also acknowledges the support of the Grant Agency of the Czech Republic under the grant 22-02964S.

## 1. Introduction

A double category consists of objects, vertical morphisms, horizontal morphisms and squares just like the one pictured below: In a general double category you cannot compose vertical morphisms with horizontal ones, but if you could, you might interpret the above square \(\alpha\) as telling us that the morphism \(v\circ g\) (horizontal followed by vertical) can be factored as \(h\circ u\) (vertical followed by horizontal) - this is reminiscent of ordinary factorization systems on a category. Taking this philosophy to heart, we assign to a double category \(X\) a certain _category of corners_ \(\operatorname{Cnr}(X)\) (a concept introduced by Mark Weber in [19]), in which composition of vertical and horizontal morphisms is possible, and for which squares in \(X\) turn into commutative squares in \(\operatorname{Cnr}(X)\). Regarding the double category as a diagram \(X:\Delta^{op}\to\operatorname{Cat}\), producing \(\operatorname{Cnr}(X)\) amounts to taking the _codescent object_ of \(X\), a \(2\)-categorical colimit that is an analogue of ordinary coequalizers. Because the usage of codescent objects is related to the classification of pseudo/lax morphisms between strict algebras for a \(2\)-monad (see [16]), the category of corners construction gives us an explicit description of lax morphism classifiers for a certain class of \(2\)-monads. The paper is organized as follows: * In Section 2 we recall the basic notions of double category theory and define a slight generalization of crossed double categories of [19]. We also describe the category of corners construction for this class of double categories and mention some examples. * In Section 3 we establish two equivalences: the first is the equivalence between strict factorization systems and double categories for which every top right corner can be uniquely filled into a square: \[\begin{CD}a@>{g}>{}>b\\ @V{}V{c}V\\ \end{CD}\] The second is the equivalence between orthogonal factorization systems and a special kind of crossed double categories. * In Section 4 we recall codescent objects and use the category of corners construction to give explicit descriptions for various (co)lax morphism classifiers. **Prerequisites**: We assume basic familiarity with double categories and factorization systems. For Section 4 we further assume that the reader is familiar with lax/colax morphisms between strict \(T\)-algebras for a \(2\)-monad, as defined in [2, Page 1.2]. **Acknowledgements**: I want to thank my Ph.D. supervisor John Bourke for his guidance and careful readings of all the drafts of this paper. ## 2.
Double categories ### Basic notions **Notation 2.1**.: A _double category_\(X\) is an internal category in \(\operatorname{Cat}\). In particular it consists of the following diagram in \(\operatorname{Cat}\): We call objects of \(X_{0}\)_the objects_ of \(X\), morphisms of \(X_{0}\) the _vertical morphisms_, objects of \(X_{1}\) the _horizontal morphisms_, the morphisms of \(X_{1}\)_squares_: We refer to the composition in the category \(X_{1}\) as _vertical square composition_, the _horizontal_ composition of horizontal morphisms as well as squares is given by the functor \(d_{1}:X_{2}\to X_{1}\). Similarly, an identity morphism in \(X_{1}\) will be called a _vertical identity square_, and a _horizontal identity square_ on a vertical morphism \(u\) will be given by \(s(u)\). By an _identity square_ in \(X\) we mean a square that is either a vertical or a horizontal identity. Objects and horizontal morphisms form a category that we'll denote by \(h(X)\). Double categories together with _double functors_ form a category that we'll denote by Dbl. Here we recall some definitions from [13] that will make an appearance in the paper: **Definition 2.2** (Duals and transposes).: Any double category \(X\) has its _transpose_\(X^{T}\) obtained by switching vertical and horizontal morphisms and compositions. Any double category \(X\) has its _vertical dual_ which we denote by \(X^{v}\). It is defined by putting: \[(X^{v})_{i}=X^{op}_{i}\text{ for }i\in\{0,1,2\}.\] Similarly, domain and codomain functors for \(X^{v}\) are obtained by applying \((-)^{op}\) on those for \(X\). There is also a notion of a _horizontal dual_\(X^{h}\), which is a diagram obtained from \(X\) by switching \(d_{0}\)'s and \(d_{1}\)'s. **Definition 2.3**.: A double category \(X\) is _flat_ if any square \(\alpha\) is uniquely determined by its boundary. **Definition 2.4**.: A double category \(X\) is (strictly) _horizontally invariant_ if for any two invertible horizontal morphisms \(g,h\) and every vertical morphism \(u\) there exists a unique square filling the picture1: Footnote 1: This definition differs from the one in [13] in that we require the filler square to be unique. We say that \(X\) is (strictly) _vertically invariant_ if its transpose \(X^{T}\) is horizontally invariant. A double category that is both horizontally and vertically invariant will be called (strictly) _invariant_. **Example 2.5**.: Let \(\mathcal{C}\) be a category. There is a double category \(\mathrm{Sq}(\mathcal{C})\) such that: * objects are the objects of \(\mathcal{C}\), * vertical morphisms are those of \(\mathcal{C}\), * horizontal morphisms are those of \(\mathcal{C}\), * squares are commutative squares in \(\mathcal{C}\). **Example 2.6**.: There is a sub-double category \(\mathrm{PbSq}(\mathcal{C})\subseteq\mathrm{Sq}(\mathcal{C})\) with the same objects and morphisms, whose squares are the pullback squares in \(\mathcal{C}\). Another example is a sub-double category \(\mathrm{MPbSq}(\mathcal{C})\subseteq\mathrm{PbSq}(\mathcal{C})\) with the same objects and horizontal morphisms whose vertical morphisms are monomorphisms in the category \(\mathcal{C}\). **Example 2.7**.: Let \(\mathcal{E}\) be a category with pullbacks. There is a double category \(\mathrm{BOFib}(\mathcal{E})\) such that: * objects are internal categories2 in \(\mathcal{E}\), Footnote 2: As defined in [3, 8.1] for instance. 
* vertical morphisms are internal functors that are _discrete opfibrations_, that is, internal functors \(F:A\to B\) such that the following square is a pullback in \(\mathcal{E}\): * horizontal morphisms are internal functors \(F:A\to B\) that are _bijections on objects_, i.e. the object part morphism \(F_{0}:A_{0}\to B_{0}\) is an isomorphism in \(\mathcal{E}\), * a square in \(\mathrm{BOFib}(\mathcal{E})\) is a pullback square in \(\mathrm{Cat}(\mathcal{E})\). Note that all of the above examples are flat and invariant. ### Crossed double categories Crossed double categories (a generalization of _crossed simplicial groups_ of [11]) were introduced by Mark Weber in [19] to calculate various internal algebra classifiers. For instance, if \(S\) is the free symmetric strict monoidal category \(2\)-monad on \(\mathrm{Cat}\), the bar construction (also called a _resolution_, see Definition 4.6) \(\mathrm{Res}(*)\) of a terminal \(S\)-algebra \(*\) has the structure of a crossed double category. In that paper, any crossed double category can be turned into a category "in the best possible way" - this is called the _category of corners_ construction. In the case of \(\mathrm{Res}(*)\) the category of corners construction produces the _free symmetric strict monoidal category containing a commutative monoid_, which happens to be the category \(\mathrm{FinSet}\) of finite ordinals and all functions between them. In this paper we consider a slight generalization of crossed double categories, obtained by dropping the "splitness" assumption on the opfibration that appears in the definition. This allows us to consider a bigger class of examples - the ones for which there is no canonical choice of "opcartesian lifts". We then present an analogue of the category of corners construction for this wider class of double categories and prove some of its key properties. All of this is in preparation for Section 3 where we show that under some conditions the category of corners admits a strict or an orthogonal factorization system. **Definition 2.8**.: A double category \(X\) is said to be _crossed_ if \(d_{0}:X_{1}\to X_{0}\) is an opfibration and \(d_{1}:X_{2}\to X_{1},s:X_{0}\to X_{1}\) are morphisms of opfibrations3: Footnote 3: Note that the map \(d_{0}^{2}=d_{0}\circ d_{0}:X_{2}\to X_{0}\) is an opfibration since it is a composite of an opfibration and a pullback of an opfibration. In elementary terms, this is to say the following: A square \(\kappa\) is said to be _opcartesian_ (by which we mean \(d_{0}\)-opcartesian if regarded as a morphism in \(X_{1}\)) when given any square \(\alpha\) and a vertical morphism \(v\) (as in the picture below), there exists a unique square \(\beta\) so that the following equality of squares holds: (1) To say that \(d_{0}:X_{1}\to X_{0}\) is an opfibration is to say that any tuple \((g,f)\) of a "composable" pair of a horizontal and a vertical morphisms (as pictured below) can be filled to an opcartesian square: (2) Such tuples will be referred to as (top-right) _corners_. Finally, to say that \(d_{1}:X_{2}\to X_{1}\) and \(s:X_{0}\to X_{1}\) are morphisms of opfibrations is to say that the opcartesian squares are closed under horizontal composition and that every horizontal identity square is opcartesian. We will denote by Crossed the full subcategory of Dbl spanned by crossed double categories. 
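In the main examples below (Example 2.12, and Example 2.23 taken in the vertical dual), the (op)cartesian square filling a corner is computed by an ordinary pullback, and the uniqueness clause in (1) is essentially the universal property of that pullback. The following is a minimal illustrative sketch, not taken from the paper, assuming maps between finite sets are encoded as Python dicts; the helper names `pullback` and `mediating_map` are ad hoc.

```python
# Minimal sketch: the pullback of two maps of finite sets, which is what fills
# corners with (op)cartesian squares in the square/span examples below.
# A map f: A -> C is encoded as a dict {a: f(a) for a in A}.

def pullback(f, g):
    """Pullback of f: A -> C and g: B -> C, with its two projections."""
    P = [(a, b) for a in f for b in g if f[a] == g[b]]
    p1 = {ab: ab[0] for ab in P}  # projection P -> A
    p2 = {ab: ab[1] for ab in P}  # projection P -> B
    return P, p1, p2

def mediating_map(f, g, p, q):
    """Given p: X -> A and q: X -> B with f.p = g.q, the unique map X -> P."""
    assert all(f[p[x]] == g[q[x]] for x in p), "square does not commute"
    return {x: (p[x], q[x]) for x in p}

# Tiny example: two maps into the two-element set {0, 1}.
f = {"a1": 0, "a2": 1}
g = {"b1": 0, "b2": 0}
P, p1, p2 = pullback(f, g)            # P = [("a1", "b1"), ("a1", "b2")]
h = mediating_map(f, g, {"x": "a1"}, {"x": "b2"})
assert p1[h["x"]] == "a1" and p2[h["x"]] == "b2"
```

The uniqueness of `mediating_map` (any map into `P` commuting with both projections must send `x` to `(p[x], q[x])`) is the concrete content that the abstract (op)cartesian condition packages for a general double category.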
**Remark 2.9** (Split version).: Crossed double categories studied by Mark Weber ([19]) are defined as in Definition 2.8, except it is required that \(d_{0}:X_{1}\to X_{0}\) is a **split** opfibration and the maps \(d_{1},s\) are morphisms of split opfibrations. This amounts to, for every top-right corner \((g,f)\), having **a choice** of opcartesian square \(\kappa_{g,f}\) filling the corner, and requiring that identity squares are chosen opcartesian, and moreover vertical and horizontal composition of chosen opcartesian squares is chosen opcartesian. In this paper we will call them _split crossed_ to emphasize the presence of chosen filler squares. **Remark 2.10** (Dual version).: There is a dual version of a crossed double category that we will call _co-crossed_, it is obtained by replacing "opfibration" by "fibration" everywhere. Note then that the double category \(X\) is co-crossed if and only if \(X^{v}\) is crossed. **Remark 2.11**.: The requirement that every corner can be filled into an opcartesian square is equivalent to saying that every corner can be filled into a pre-opcartesian square and vertical composition of pre-opcartesian squares is pre-opcartesian. By a _pre-opcartesian_ square we mean a square satisfying (1) only for squares for which \(v\) is the identity. This equivalence is proven [4, Proposition 8.1.7] for a general fibration. **Example 2.12**.: If \(\mathcal{C}\) is a category with pullbacks, the double category \(\mathrm{Sq}(\mathcal{C})\) is co-crossed: a square is cartesian (with respect to the codomain functor \(d_{0}:\mathcal{C}^{2}\to\mathcal{C}\)) if and only if it is a pullback square in \(\mathcal{C}\). Clearly, vertical and horizontal composition of pullback squares yields a pullback square, and identity squares are pullbacks. For similar reasons, the following double categories are all co-crossed. In each of these, every square is cartesian: * \(\mathrm{PbSq}(\mathcal{C})\), * \(\mathrm{MPbSq}(\mathcal{C})\), * \(\mathrm{BOFib}(\mathcal{E})\). **Example 2.13**.: Let \((L,R)\) be an _algebraic weak factorization system_ ([6, 2.2]) on a category \(\mathcal{C}\) with pullbacks. There is an associated double category \(R\)-\(\mathbb{A}lg\) of \(R\)-algebras. It can be shown that the codomain functor of this double category is a fibration ([6, Proposition 8]) and moreover, \(d_{1},s\) are morphisms of fibrations. Thus \(R\)-\(\mathbb{A}lg\) is a co-crossed double category. Of note is that it is not these examples that will play a role later on, but rather their vertical duals (that are crossed). **Example 2.14**.: Assume \((T,m,i)\) is a 2-monad on a 2-category \(\mathcal{K}\) and let \((A,a)\) be a strict \(T\)-algebra. By its _resolution_, denoted \(\mathrm{Res}(A,a)\), we mean the following diagram in T-\(\mathrm{Alg}_{s}\): For a cartesian4 2-monad \(T\), \(\mathrm{Res}(A,a)\) is a category internal in T-\(\mathrm{Alg}_{s}\). 
Footnote 4: \(T\) preserves pullbacks and the naturality squares for \(m,i\) are pullbacks.

\[T^{3}A\;\substack{\longrightarrow\\ \longrightarrow\\ \longrightarrow}\;T^{2}A\;\substack{\longrightarrow\\ \longrightarrow}\;TA\]

Let now \(S\) be the free symmetric strict monoidal category \(2\)-monad on Cat. Recall that for a category \(\mathcal{A}\), \(S\mathcal{A}\) has objects the tuples of objects of \(\mathcal{A}\), and a morphism \((a_{1},\ldots,a_{n})\to(b_{1},\ldots,b_{n})\) is a tuple \(((f_{1},\ldots,f_{n}),\rho)\), where \(\rho\) is a permutation of the \(n\)-element set and \(f_{i}:a_{i}\to b_{\rho(i)}\) are morphisms in \(\mathcal{A}\). Denote by \(*\) the terminal \(S\)-algebra. Since \(S\) is cartesian, \(\operatorname{Res}(*)\) is a double category. Moreover, it is (split) crossed by [19, Example 4.4.5]. This double category has finite ordinals as objects, order-preserving maps as horizontal morphisms, permutations as vertical morphisms and squares being commutative squares in Set. Analogous results hold if we instead consider the free braided strict monoidal category \(2\)-monad on Cat. A special class of crossed double categories will be of interest to us:

#### 2.2.2. Codomain-discrete double categories

**Definition 2.15**.: A double category \(X\) will be called _codomain-discrete_ if every top-right corner can be uniquely filled into a square: This property amounts to the codomain functor \(d_{0}:X_{1}\to X_{0}\) being a discrete opfibration. In that case, \(d_{1},s\) are automatically morphisms of opfibrations and thus any codomain-discrete double category is crossed. We denote by \(\operatorname{CodDiscr}\subseteq\operatorname{Dbl}\) the full subcategory spanned by codomain-discrete double categories.

**Remark 2.16**.: Codomain-discrete double categories first appeared in [11, 2.3] as double categories satisfying the _star condition_. Of note is the fact that every codomain-discrete double category is flat but not necessarily invariant, as the following example demonstrates:

**Example 2.17**.: Let \(\mathcal{A},\mathcal{B}\) be categories. There is a double category \(X_{\mathcal{A},\mathcal{B}}\) such that:

* objects are the objects of \(\mathcal{A}\times\mathcal{B}\),
* vertical morphisms are morphisms in \(\mathcal{A}\times\mathcal{B}\) of form \((f,1_{b})\),
* horizontal morphisms are morphisms in \(\mathcal{A}\times\mathcal{B}\) of form \((1_{a},g)\),
* a square is a commutative square in \(\mathcal{A}\times\mathcal{B}\).

This double category is clearly codomain-discrete and thus flat.
It is not invariant because for example if we have \(\theta,\psi\) distinct isomorphisms in \(\mathcal{A}\), the following can't be filled into a square:

\[\begin{CD}(a,b)@>{(\theta,1_{b})}>>(a^{\prime},b)\\ @. @VV{(1_{a^{\prime}},f)}V\\ (a,b^{\prime})@>{(\psi,1_{b^{\prime}})}>>(a^{\prime},b^{\prime})\end{CD}\]

**Example 2.18**.: Given a 2-monad \(T\) on \(\operatorname{Cat}\) of form \(\operatorname{Cat}(T^{\prime})\) for \(T^{\prime}\) a cartesian monad on \(\operatorname{Set}\), the transpose of the resolution \(\operatorname{Res}(A,a)\) of a strict \(T\)-algebra is codomain-discrete. We will encounter this class of examples in Section 4.3.

### The category of corners

In [19], given a crossed double category \(X\), the category of corners \(\operatorname{Cnr}(X)\) is constructed in two steps: first, a 2-category of corners \(\mathcal{B}\) is constructed, and then the category of corners is obtained by taking connected components in each hom category of \(\mathcal{B}\), i.e. \(\operatorname{Cnr}(X)=(\pi_{0})_{*}\mathcal{B}\). In order to avoid very long proofs, we define \(\operatorname{Cnr}(X)\) for our notion of a crossed double category \(X\) straight away without ever introducing \(\mathcal{B}\) (which in the absence of the splitness assumption would only be a weak 2-category). The universal property of this construction will be studied in Subsection 4.2.

#### 2.3.1. Definition and examples

**Definition 2.19**.: Let \(X\) be a double category and assume we're given two bottom-left corners \((e,m),(e^{\prime},m^{\prime})\) for which the domains of \(e,e^{\prime}\) and codomains of \(m,m^{\prime}\) agree. A _2-cell_ \(\beta\) between them (denoted \(\beta:(e,m)\Rightarrow(e^{\prime},m^{\prime})\)) is a square \(\beta:m\to m^{\prime}\) (a morphism of \(X_{1}\)) for which \(d_{1}(\beta)\circ e=e^{\prime}\).

**Construction 2.20**.: Let \(X\) be a crossed double category. Define the _category of corners_ \(\operatorname{Cnr}(X)\) as follows. Its objects are the objects of \(X\), while a morphism \(a\to b\) is an equivalence class of corners, denoted \([u,g]:a\to b\):

\[\begin{CD}a\\ @V{u}VV\\ a^{\prime}@>{g}>>b\end{CD}\]

Here two corners \((u,g),(v,h)\) are _equivalent_ if and only if there exists a zigzag of \(2\)-cells between them: The identity on \(a\in X\) is the equivalence class \([1_{a},1_{a}]\), while the composite \([v,h]\circ[u,g]\) is defined to be the equivalence class of corners obtained by filling the middle corner **with a choice** of an opcartesian square: Note that the composite is well defined on the equivalence classes. It is also independent of the choice of the square \(\kappa\): if \(\kappa^{\prime}\) is another opcartesian square filling the corner, there is a unique square \(\beta\) such that the following holds: This square exhibits now the equality between the compositions:

**Proposition 2.21**.: \(\mathrm{Cnr}(X)\) is a category.

Proof.: Let \([f,g]:a\to b\) be a morphism in \(\mathrm{Cnr}(X)\). To show that \([f,g]\circ[1_{a},1_{a}]=[f,g]\), note that the horizontal identity square on \(f\) is by definition opcartesian so we might as well use it for the composite (the composite is independent of the choice) and the result follows. Analogously \([1_{b},1_{b}]\circ[f,g]=[f,g]\).
Consider now a composable triple \([f,g]\), \([f^{\prime},g^{\prime}]\), \([f^{\prime\prime},g^{\prime\prime}]\) and fill it to form a single corner as depicted below: Denote this composite corner by \([f^{\prime\prime},g^{\prime\prime}]\circ[f^{\prime},g^{\prime}]\circ[f,g]\) and call it the _ternary composite_. Now to define the composition \([f^{\prime\prime},g^{\prime\prime}]\circ[f^{\prime},g^{\prime}]\), choose the square \(\kappa_{2}\) as above. To define \(([f^{\prime\prime},g^{\prime\prime}]\circ[f^{\prime},g^{\prime}])\circ[f,g]\), choose the square \(\kappa_{3}\circ\kappa_{1}\) (as a vertical composite of opcartesian squares, it is opcartesian). We see that \(([f^{\prime\prime},g^{\prime\prime}]\circ[f^{\prime},g^{\prime}])\circ[f,g]\) is equal to the ternary composite. By an analogous argument (and using that opcartesian squares are closed under horizontal composites), \([f^{\prime\prime},g^{\prime\prime}]\circ([f^{\prime},g^{\prime}]\circ[f,g])\) also equals this ternary composite and thus composition is associative. If \(F:X\to Y\) is any double functor between crossed double categories, there is an induced functor \(\mathrm{Cnr}(F):\mathrm{Cnr}(X)\to\mathrm{Cnr}(Y)\) sending: (3) This gives us a functor \(\mathrm{Cnr}(-):\mathrm{Crossed}\to\mathrm{Cat}\). **Remark 2.22**.: If \(X\) is codomain-discrete, note that there is a 2-cell \(\beta:(u,g)\Rightarrow(v,h)\) between corners if and only if \(u=v\), \(g=h\) and \(\beta=1_{g}\) is the identity square. Thus the category \(\operatorname{Cnr}(X)\) has corners as morphisms with no equivalence relation involved. This has also been observed in [19, Corollary 5.4.7]. **Example 2.23**.: Let \(\mathcal{C}\) be a category with pullbacks and consider the double category \(\operatorname{PbSq}(\mathcal{C})^{v}\). The category \(\operatorname{Cnr}(\operatorname{PbSq}(\mathcal{C})^{v})\) has as objects the objects of \(\mathcal{C}\), while a morphism \(a\to b\) is an equivalence class of corners (usually called spans) like this: Note that there is a span isomorphism between two spans \((u,g),(v,h)\) if and only if there is a 2-cell between them if we regard them as corners. The composition of corners is defined using pullbacks. In other words, we have: \[\operatorname{Cnr}(\operatorname{PbSq}(\mathcal{C})^{v})=\operatorname{Span} (\mathcal{C}).\] **Example 2.24**.: Let \(\mathcal{C}\) be a category with pullbacks and consider the double category \(\operatorname{MPbSq}(\mathcal{C})^{v}\). By a similar reasoning as above we obtain that the category of corners corresponding to this double category is isomorphic to the category \(\operatorname{Par}(\mathcal{C})\) of _partial maps_ in \(\mathcal{C}\), as defined in [9, p. 246]. **Example 2.25**.: Let \(\mathcal{E}\) be a category with pullbacks and consider now the double category \(\operatorname{BOFib}(\mathcal{E})^{v}\). Morphisms in \(\operatorname{Cnr}(\operatorname{BOFib}(\mathcal{E})^{v})\) are equivalence classes of spans \((F,G)\) where \(F\) is a bijection on objects and \(G\) is a discrete opfibration as pictured below: This category of corners is isomorphic to \(\operatorname{Cof}(\mathcal{E})\), the category of internal categories and _cofunctors_, see for instance [8, Theorem 18]. **Example 2.26**.: If \((L,R)\) is an algebraic weak factorization system on a category \(\mathcal{C}\) with pullbacks, consider the co-crossed double category \(R\)-\(\mathbb{A}lg\) of \(R\)-algebras (Example 2.13). Its vertical dual \(R\)-\(\mathbb{A}lg^{v}\) is thus crossed. 
Its category of corners construction now gives the _category of weak maps_ \(\boldsymbol{Wk}_{l}(L,R)\) associated to the system \((L,R)\), see [7, Section 3.4 and Remark 13].

**Example 2.27**.: If \(X\) is split crossed, our category of corners construction agrees with that of [19, Corollary 5.4.5], as is easily verified. Recall from Example 2.14 the free symmetric strict monoidal category \(2\)-monad \(S\) and the crossed double category \(\operatorname{Res}(*)\). The objects of \(\operatorname{Cnr}(\operatorname{Res}(*))\) are finite ordinals, while a morphism \(m\to n\) is an equivalence class of corners consisting of a permutation followed by an order-preserving map: It can be proven that \(\operatorname{Cnr}(\operatorname{Res}(*))=\operatorname{FinSet}\), the category of finite ordinals and all functions (see [19, Theorem 6.3.1]). If \(B\) is the free braided strict monoidal category \(2\)-monad, \(\operatorname{Cnr}(\operatorname{Res}(*))=\operatorname{Vine}\), the category with objects being natural numbers and morphisms being _vines_, that is, "braids for which the strings can merge" (see [19, Theorem 6.3.2]).

#### 2.3.2. Some properties of \(\operatorname{Cnr}(X)\)

The following proposition captures the idea that "a square in \(X\) turns into a commutative square in \(\operatorname{Cnr}(X)\)":

**Proposition 2.28**.: Let \(X\) be a crossed double category. Given a square in \(X\) with top horizontal side \(m:a\to b\), vertical sides \(e^{\prime}:a\to a^{\prime\prime}\) and \(e:b\to b^{\prime\prime}\), and bottom horizontal side \(m^{\prime}:a^{\prime\prime}\to b^{\prime\prime}\), the following square commutes in \(\operatorname{Cnr}(X)\):

\[\begin{CD}a@>{[1,m]}>>b\\ @V{[e^{\prime},1]}VV @VV{[e,1]}V\\ a^{\prime\prime}@>{[1,m^{\prime}]}>>b^{\prime\prime}\end{CD}\]

Proof.: Denote \([u,g]:=[e,1]\circ[1,m]\) and denote by \(\kappa\) the opcartesian square we used for this composition. From opcartesianness there is a unique square \(\beta\) such that: This square \(\beta\) now exhibits the equality \([1,m^{\prime}]\circ[e^{\prime},1]=[e^{\prime},m^{\prime}]=[u,g]=[e,1]\circ[1,m]\).

The additional assumption requiring that every square is opcartesian simplifies the description of \(\operatorname{Cnr}(X)\) for a crossed double category \(X\):

**Lemma 2.29**.: Let \(X\) be a crossed double category in which every square is opcartesian. Then any \(2\)-cell \(\beta:[e,m]\Rightarrow[e^{\prime},m^{\prime}]\) between corners is vertically invertible. In particular two corners in \(\operatorname{Cnr}(X)\) are equivalent if and only if there exists a single (vertically invertible) \(2\)-cell between them.

Proof.: Consider a square \(\beta\) as pictured below. Since the vertical identity square \(1_{g}\) on the morphism \(g\) is opcartesian, there exists a unique square \(\gamma\) such that: Thus \(\gamma\circ\beta=1_{g}\). Post-composing with \(\beta\), we get: \[\beta\circ\gamma\circ\beta=1_{h}\circ\beta\] Since \(\beta\) is opcartesian, we get \(\beta\circ\gamma=1_{h}\) as well.

**Notation 2.30**.: Given a crossed double category \(X\), denote by \(\mathcal{E}_{X}\) the class of corners in \(\operatorname{Cnr}(X)\) of form \([f,1_{b}]\) and \(\mathcal{M}_{X}\) the class of corners of form \([1_{a},g]\). We will call these _vertical_ and _horizontal_ corners.

**Proposition 2.31**.: Let \(X\) be a crossed double category. Then the class \(\mathcal{E}_{X}\) has the right cancellation property. Both classes \(\mathcal{E}_{X},\mathcal{M}_{X}\) contain all identities and are closed under composition. We also have: \[\operatorname{Cnr}(X)=\mathcal{M}_{X}\circ\mathcal{E}_{X}.\] Moreover, if every square is opcartesian in \(X\), we have that \(\mathcal{E}_{X}\) is weakly orthogonal to \(\mathcal{M}_{X}\): \[\mathcal{E}_{X}\not\sqcap\mathcal{M}_{X}.\]

Proof.: To show the cancellation property, assume \([s,t]\circ[e,1]=[u,1]\in\mathcal{E}_{X}\).
We then have a square as pictured below left: The same square \(\beta\) now exhibits the equality \([s,t]=[\theta s,1]\) (pictured above right) and thus \([s,t]\in\mathcal{E}_{X}\). The fact that \(\mathcal{E}_{X},\mathcal{M}_{X}\) contain identities and are closed under composition, as well as the fact that \(\mathrm{Cnr}(X)=\mathcal{M}_{X}\circ\mathcal{E}_{X}\), are obvious. Assume now that every square is opcartesian in \(X\). Since \(\mathrm{Cnr}(X)=\mathcal{M}_{X}\circ\mathcal{E}_{X}\), to prove weak orthogonality it suffices to show that given two factorizations \([e,m]=[e^{\prime},m^{\prime}]\) of the same morphism, there exists a morphism of factorizations between them, i.e.: Since \([e,m]=[e^{\prime},m^{\prime}]\) and thanks to Lemma 2.29, there exists a single invertible square like this: It's now easy to verify that the corner \([\theta,1]\) makes both squares in the above diagram commute.

The classes \((\mathcal{E}_{X},\mathcal{M}_{X})\) give rise to an ordinary weak factorization system \((\widetilde{\mathcal{E}_{X}},\widetilde{\mathcal{M}_{X}})\) for which the first class is obtained by closing \(\mathcal{E}_{X}\) under codomain retracts and the second is obtained from \(\mathcal{M}_{X}\) by closing it under domain retracts. This is a well-known result so we omit its proof.

**Example 2.32**.: Recall the example \(\mathrm{Cnr}(\mathrm{PbSq}(\mathcal{C})^{v})=\mathrm{Span}(\mathcal{C})\). Since every square is opcartesian (a pullback), we obtain that the class \(\mathcal{E}_{\mathrm{PbSq}(\mathcal{C})^{v}}\) is weakly orthogonal to \(\mathcal{M}_{\mathrm{PbSq}(\mathcal{C})^{v}}\). Note that in this case, both classes are already closed under the required retracts. We obtain:

**Proposition 2.33**.: The two canonical classes of morphisms in the category \(\mathrm{Span}(\mathcal{C})\) form a weak factorization system.

## 3. Factorization systems and double categories

In this section we will be putting additional hypotheses on the crossed double category \(X\) to ensure that the classes \((\mathcal{E}_{X},\mathcal{M}_{X})\) of morphisms in \(\mathrm{Cnr}(X)\) have more desirable properties (namely, form a strict or an orthogonal factorization system). This gives us the direction: \[\text{double categories}\ \rightsquigarrow\ \text{factorization systems}.\] For the opposite direction, we introduce a construction that sends two classes \((\mathcal{E},\mathcal{M})\) of morphisms in a category \(\mathcal{C}\) to a certain double category \(D_{\mathcal{E},\mathcal{M}}\) of commutative squares. In Subsection 3.1 we then show that the mappings \(X\mapsto(\mathcal{E}_{X},\mathcal{M}_{X})\) and \((\mathcal{E},\mathcal{M})\mapsto D_{\mathcal{E},\mathcal{M}}\) induce an equivalence between the categories of strict factorization systems and codomain-discrete double categories. In Subsection 3.2 we prove analogous results for the categories of orthogonal factorization systems and _factorization double categories_: a symmetric variant of crossed double categories whose bottom-left corners satisfy a certain joint monicity property.

### Strict factorization systems

In [17] it has been shown that distributive laws in Span can equivalently be described as strict factorization systems. Given a codomain-discrete double category \(X\), the category of corners \(\mathrm{Cnr}(X)\) can be constructed using a distributive law in \(\mathrm{Span}(\mathrm{Cat})\)5 - this gives a first hint that there is a relationship between double categories and factorization systems.
Footnote 5: This is the original construction of \(\mathrm{Cnr}(X)\) for a codomain-discrete double category \(X\) in [19] **Definition 3.1**.: A _strict factorization system_\((\mathcal{E},\mathcal{M})\) on a category \(\mathcal{C}\) consists of two wide sub-categories \(\mathcal{E},\mathcal{M}\subseteq\mathcal{C}\) such that for every morphism \(f\in\mathcal{C}\) there exist unique \(e\in\mathcal{E},m\in\mathcal{M}\) such that: \(f=m\circ e\). **Definition 3.2**.: Denote by \(\mathcal{SFS}\) the category whose: * objects are strict factorization systems \(\mathcal{E}\subseteq\mathcal{C}\supseteq\mathcal{M}\), * a morphism \((\mathcal{E}\subseteq\mathcal{C}\supseteq\mathcal{M})\to(\mathcal{E}^{\prime} \subseteq\mathcal{C}^{\prime}\supseteq\mathcal{M}^{\prime})\) is a functor \(F:\mathcal{C}\to\mathcal{C}^{\prime}\) satisfying: \[F(\mathcal{E}) \subseteq\mathcal{E}^{\prime},\] \[F(\mathcal{M}) \subseteq\mathcal{M}^{\prime}.\] **Lemma 3.3**.: Let \(X\) be codomain-discrete. Then the classes \((\mathcal{E}_{X},\mathcal{M}_{X})\) form a strict factorization system on \(\mathrm{Cnr}(X)\). The assignment \(X\mapsto(\mathcal{E}_{X},\mathcal{M}_{X})\) induces a functor \(\mathrm{CodDiscr}\to\mathcal{SFS}\). Proof.: Recall from Remark 2.22 that for such \(X\), two morphisms \([u,g],[v,h]\) in \(\mathrm{Cnr}(X)\) are equal if and only if \(u=v,g=h\). From this it follows that the factorization \([u,g]=[1,g]\circ[u,1]\) is unique. If \(H:X\to Y\) is a double functor, the induced functor \(\mathrm{Cnr}(H):\mathrm{Cnr}(X)\to\mathrm{Cnr}(Y)\) (see (3)) satisfies \(\mathrm{Cnr}(H)(\mathcal{E}_{X})\subseteq\mathcal{E}_{Y}\), \(\mathrm{Cnr}(H)(\mathcal{M}_{X})\subseteq\mathcal{M}_{Y}\) and thus is a morphism in \(\mathcal{SFS}\). Denote the above functor simply by \(\mathrm{Cnr}(-):\mathrm{CodDiscr}\to\mathcal{SFS}\). **Example 3.4**.: Let \(\mathcal{A},\mathcal{B}\) be categories and \(X_{\mathcal{A},\mathcal{B}}\) the codomain-discrete double category from Example 2.17. The category of corners \(\mathrm{Cnr}(X_{\mathcal{A},\mathcal{B}})\) is isomorphic to just \(\mathcal{A}\times\mathcal{B}\) and this category admits a strict factorization system \((\mathcal{E},\mathcal{M})\), where: \[\mathcal{E} :=\{(1_{a},f)|a\in\mathcal{A},f\in\mathcal{B}\},\] \[\mathcal{M} :=\{(g,1_{b})|g\in\mathcal{A},b\in\mathcal{B}\}.\] **Construction 3.5**.: Let \((\mathcal{E},\mathcal{M})\) be two classes of morphisms in a category \(\mathcal{C}\), both closed under composition and containing all identities. We define a double category \(D_{\mathcal{E},\mathcal{M}}\) as follows: * The objects are the objects of \(\mathcal{C}\), * the category of objects and vertical morphisms is \(\mathcal{E}\), * the category of objects and horizontal morphisms is \(\mathcal{M}\), * the squares are commutative squares in \(\mathcal{C}\). If we have two classes \((\mathcal{E},\mathcal{M}),(\mathcal{E}^{\prime},\mathcal{M}^{\prime})\) on categories \(\mathcal{C},\mathcal{C}^{\prime}\) and \(F:\mathcal{C}\to\mathcal{C}^{\prime}\) a functor satisfying \(F(\mathcal{E})\subseteq\mathcal{E}^{\prime}\) and \(F(\mathcal{M})\subseteq\mathcal{M}^{\prime}\), there is an induced double functor \(D_{F}:D_{\mathcal{E},\mathcal{M}}\to D_{\mathcal{E}^{\prime},\mathcal{M}^{ \prime}}\) defined in the obvious way. **Lemma 3.6**.: Let \((\mathcal{E},\mathcal{M})\) be a strict factorization system on a category \(\mathcal{C}\). Then \(D_{\mathcal{E},\mathcal{M}}\) is codomain-discrete. 
The assignment \((\mathcal{E},\mathcal{M})\mapsto D_{\mathcal{E},\mathcal{M}}\) induces a functor \(\mathcal{SFS}\to\mathrm{CodDiscr}\).

Proof.: Every morphism in \(\mathcal{C}\) of form \(e\circ m\) can be uniquely factored as \(m^{\prime}\circ e^{\prime}\) with \(e^{\prime}\in\mathcal{E},m^{\prime}\in\mathcal{M}\). But this precisely means that \(D_{\mathcal{E},\mathcal{M}}\) is codomain-discrete. In the above construction we have seen that \((\mathcal{E},\mathcal{M})\mapsto D_{\mathcal{E},\mathcal{M}}\) is functorial, the rest is now obvious.

**Theorem 3.7**.: The functor \(\operatorname{Cnr}(-):\operatorname{CodDiscr}\to\mathcal{SFS}\) is the equivalence inverse to the functor \(D:\mathcal{SFS}\to\operatorname{CodDiscr}\) and so we have: \[\mathcal{SFS}\simeq\operatorname{CodDiscr}.\]

Proof.: We will show that there are natural isomorphisms \(1\cong D\circ\operatorname{Cnr}(-)\) and \(\operatorname{Cnr}(-)\circ D\cong 1\).

To see that \(1\cong D\circ\operatorname{Cnr}(-)\), let \(X\) be a codomain-discrete double category. First note that by Remark 2.22, the identity-on-objects functor \(\mathcal{E}\to\mathcal{E}_{X}\) sending a morphism \(e\mapsto(e,1)\) is an isomorphism. Similarly we have \(\mathcal{M}\cong\mathcal{M}_{X}\) via the identity-on-objects functor \(m\mapsto(1,m)\). There is now a double functor \(X\to D_{\mathcal{E}_{X},\mathcal{M}_{X}}\) that is identity on objects and whose vertical morphism and horizontal morphism components are given by the functors described above. To see that it is well-defined on squares, we'd need to prove the direction "\(\Rightarrow\)" in the picture below: (4) But this direction follows from Proposition 2.28. Since both double categories are flat, to show that this double functor is an isomorphism it suffices to show the direction "\(\Leftarrow\)" in Diagram (4). Consider the square used for the composition \((e,1)\circ(1,m)\): From the commutativity of the square on the above right we get \(e^{\prime}=\widehat{e}\), \(m^{\prime}=\widehat{m}\) and thus we obtain the left square in Diagram (4).

To see that \(\operatorname{Cnr}(-)\circ D\cong 1\), consider now a strict factorization system \((\mathcal{E},\mathcal{M})\) on \(\mathcal{C}\); we then have a functor \(\mathcal{C}\to\operatorname{Cnr}(D_{\mathcal{E},\mathcal{M}})\) that is identity on objects and sends \(f\mapsto(e,m)\), where \(f=me\) is the unique factorization with \(e\in\mathcal{E},m\in\mathcal{M}\). This functor is an isomorphism and preserves the classes \(\mathcal{E},\mathcal{M}\).
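Before turning to the orthogonal case, it is worth contrasting Theorem 3.7 with Example 2.27: in \(\operatorname{Cnr}(\operatorname{Res}(*))=\operatorname{FinSet}\) every map of finite ordinals factors as a permutation followed by an order-preserving map, but the permutation is in general not unique, so these two classes do not form a strict factorization system; this matches the fact that corners in \(\operatorname{Res}(*)\) are only identified up to 2-cells. A minimal illustrative sketch, not taken from the paper, with a map \([m]\to[n]\) encoded as a Python list of length \(m\) (the helper name `perm_then_monotone` is made up):

```python
# Minimal sketch: factor f: [m] -> [n] as an order-preserving map after a
# permutation, i.e. f = g . sigma with sigma a permutation of [m] and g monotone.

def perm_then_monotone(f):
    m = len(f)
    order = sorted(range(m), key=lambda i: f[i])  # stable argsort of f
    sigma = [0] * m
    for rank, i in enumerate(order):
        sigma[i] = rank                           # sigma sends i to its rank
    g = [f[i] for i in order]                     # the sorted (monotone) values
    return sigma, g

f = [2, 0, 2, 1]                                  # a map [4] -> [3]
sigma, g = perm_then_monotone(f)
assert g == sorted(f)                             # g is order-preserving
assert all(g[sigma[i]] == f[i] for i in range(len(f)))  # f = g . sigma
```

Swapping the two preimages of the repeated value \(2\) in this example yields a different permutation with the same monotone part, so the decomposition is not unique on the nose; uniqueness only holds after passing to equivalence classes of corners, which is exactly how \(\operatorname{Cnr}(\operatorname{Res}(*))\) is defined.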
### Orthogonal factorization systems

**Definition 3.8**.: An _orthogonal factorization system_ \((\mathcal{E},\mathcal{M})\) on a category \(\mathcal{C}\) consists of two wide sub-categories \(\mathcal{E},\mathcal{M}\subseteq\mathcal{C}\) satisfying6: Footnote 6: Note that this definition is equivalent to the more standard one in which the orthogonality \(\mathcal{E}\perp\mathcal{M}\) appears. See [1, Theorem 3.7]

* For every morphism \(f\in\mathcal{C}\) there exist \(e\in\mathcal{E},m\in\mathcal{M}\) such that \(f=m\circ e\), and if \(f=m^{\prime}e^{\prime}\) is a second factorization with \(e^{\prime}\in\mathcal{E},m^{\prime}\in\mathcal{M}\), there exists a unique morphism \(\theta\) so that this commutes: \[\begin{CD}a@>{e}>>a^{\prime}@>{m}>>b\\ @| @VV{\theta}V @|\\ a@>{e^{\prime}}>>a^{\prime\prime}@>{m^{\prime}}>>b\end{CD}\]
* we have that \(\mathcal{E}\cap\mathcal{M}=\{\text{isomorphisms in }\mathcal{C}\}\).

In the same way as in Definition 3.2 define the category \(\mathcal{OFS}\) with objects orthogonal factorization systems and morphisms being functors preserving both classes. To describe orthogonal factorization systems as certain double categories, we will introduce a more symmetric version of a crossed double category. Given a double category \(X\), denote: \[X^{*}:=((X^{v})^{h})^{T}. \tag{5}\] This is the double category obtained from \(X\) by taking both the vertical and horizontal opposites as well as the transpose.

**Definition 3.9**.: A square \(\lambda\) in a double category \(X\) will be called _bicartesian_ if it is opcartesian in both \(X\) and \(X^{*}\).
In elementary terms, this means that given a square \(\alpha\) with the same top-right corner as \(\lambda\), there exist unique squares \(\epsilon,\delta\) so that both the bottom left composite and the bottom right composite are equal to the square \(\alpha\): **Definition 3.10**.: A double category \(X\) is _top-right bicrossed_ if every top-right corner can be filled into a bicartesian square, and moreover bicartesian squares are closed under horizontal and vertical compositions and contain vertical and horizontal identities. In a top-right bicrossed double category \(X\), the two conditions of being opcartesian in \(X\) and in \(X^{*}\) can be expressed as a single condition as follows: **Lemma 3.11**.: Let \(\lambda\) be a bicartesian square in a top-right bicrossed double category \(X\) and let \(\alpha\) be any square with the same top-right corner. Then there exists a unique square \(\beta\) such that this equation holds: Proof.: From the definition of opcartesianness in \(X^{*}\) there is a unique square \(\gamma\) such that: (6) Because the horizontal identity square on a vertical morphism \(u\) is opcartesian in \(X\) (because \(X\) is top-right bicrossed), there exists a unique square \(\gamma^{\prime}\) such that: This gives us the **existence**. To prove the **uniqueness**, let \(\beta\) be a different square satisfying the equation in the Lemma. Then the composite \(\beta\circ 1_{\pi_{1}}^{\bullet}\) (vertical composite of \(\beta\) and the horizontal identity square on \(\pi_{1}\)) satisfies the equation (6) (with \(\beta\circ 1_{\pi_{1}}^{\bullet}\) in place of \(\gamma\)). Because \(\lambda\) is opcartesian in \(X^{*}\), this forces \(\beta\circ 1_{\pi_{1}}^{\bullet}=\gamma=\gamma^{\prime}\circ 1_{\pi_{1}}^{\bullet}\). Because \(1_{\pi_{1}}^{\bullet}\) is opcartesian in \(X\), this in turn forces \(\beta=\gamma^{\prime}\). **Example 3.12**.: In the double category \(\mathrm{Sq}(\mathcal{C})^{v}\), a square is bicartesian if and only if it's a pullback square. If \(\mathcal{C}\) has pullbacks, the double category \(\mathrm{Sq}(\mathcal{C})^{v}\) is top-right bicrossed. Both \(\mathrm{MPbSq}(\mathcal{C})^{v}\) (for a category \(\mathcal{C}\) with pullbacks) and \(\mathrm{BOFib}(\mathcal{E})^{v}\) are top-right bicrossed with every square being bicartesian. We will now focus on double categories in which every top-right corner can be filled into a square and every square is bicartesian. Such double categories are automatically crossed so we may again use the category of corners construction. Notice that in this case, two corners \((e,m),(e^{\prime},m^{\prime})\) are equivalent if and only if there is a single 2-cell between them (Lemma 2.29), and moreover every 2-cell has the following form (since the vertical identities on morphisms are bicartesian): (7) The following somewhat technical notion is introduced in this paper to guarantee the uniqueness of factorizations up to a unique morphism: **Definition 3.13**.: A top-left corner \((\pi_{1},\pi_{2})\) is said to be _jointly monic_ if, given squares \(\kappa_{1}\), \(\kappa_{2}\) pictured below: We have the following implication: \[(\pi_{1}\theta=\pi_{1}\theta^{\prime}\wedge\pi_{2}\psi=\pi_{2}\psi^{\prime}) \Rightarrow\theta=\theta^{\prime},\psi=\psi^{\prime}.\] **Example 3.14**.: In \(\mathrm{Sq}(\mathcal{C})\) a top-right corner \((\pi_{1},\pi_{2})\) is jointly monic if and only if the pair of morphisms \((\pi_{1},\pi_{2})\) is jointly monic in the category \(\mathcal{C}\). 
In \(\mathrm{PbSq}(\mathcal{C})\) a top-right corner is jointly monic if and only if the pair is jointly monic in \(\mathcal{C}\)**with respect to all isomorphisms**. **Example 3.15**.: In the double category \(\mathrm{MPbSq}(\mathcal{C})\) every top-left corner is jointly monic as we now show: Let there be squares \(\kappa_{1},\kappa_{2}\) as in the definition, then \(\theta=\psi\), \(\theta^{\prime}=\psi^{\prime}\) and the equality \(\pi_{1}\theta=\pi_{1}\theta^{\prime}\) forces \(\theta=\theta^{\prime}\) because \(\pi_{1}\) is a monomorphism. **Example 3.16**.: Every top-right corner is jointly monic in the double category \(\operatorname{BOFib}(\mathcal{E})\) as well: Let there be a top-left corner and squares as pictured below: Assume that \(FH=FH^{\prime}\), \(GP=GP^{\prime}\). We again get \(H=P,H^{\prime}=P^{\prime}\). Now since \(F\) is a bijection on objects, we have \(H_{0}=H_{0}^{\prime}\) (the object parts of the functors agree). Because \(G\) is a discrete opfibration, the square below is a pullback and we obtain \(H_{1}=H_{1}^{\prime}\) as well: Recall Notation 2.30. **Lemma 3.17**.: Let \(X\) be a double category in which every top-right corner can be filled into a square and every square is bicartesian. Then: \[\mathcal{E}_{X}\cap\mathcal{M}_{X}\subseteq\{\text{isomorphisms in }\operatorname{Cnr}(X)\}.\] Proof.: Let \([u,1]=[1,h]\) be a morphism in the intersection. There is then a 2-cell as follows (see the remark above diagram (7)): Since every square is bicartesian, \(\theta,h\) are isomorphisms, and so is \(\theta^{-1}=u\). Hence \([u,1]\) is an isomorphism in \(\operatorname{Cnr}(X)\) with the inverse being \([u^{-1},1]\). **Lemma 3.18**.: Let \(X\) be a double category in which every top-right corner can be filled into a square and every square is bicartesian. An equivalence class of corners \([u,g]\) is invertible in \(\operatorname{Cnr}(X)\) if and only if both \(u\) and \(g\) are isomorphisms. Proof.: Let \([u,g]\) be an isomorphism in \(\operatorname{Cnr}(X)\) with inverse \([v,h]\) as pictured together with the inverse laws below: (8) From the pictures below we obtain the following equalities: \[[v,1]\circ[u,g] =[1,\widehat{g}\psi],\] \[[u,g]\circ[1,h] =[\theta^{\prime}\widehat{u},1].\] Now consider the following composite: \[[v\theta^{\prime}\widehat{u},1]=[v,1]\circ[\theta^{\prime}\widehat{u},1]=[v, 1]\circ[u,g]\circ[1,h]=[1,\widehat{g}\psi]\circ[1,h]=[1,\widehat{g}\psi h]\] This composite belongs both in \(\mathcal{E}_{X}\) and \(\mathcal{M}_{X}\), so as in the proof of Lemma 3.17 we obtain that \((\widehat{g}\psi)h\) is an isomorphism that we denote by \(\Theta\). This implies that \(\Theta^{-1}(\widehat{g}\psi)h=1\) and so \(h\) is a split monomorphism. Since \(h(\widehat{g}\psi)=1\) by Equation (8), \(h\) is a split epimorphism. Thus \(h\) is an isomorphism and by similar reasoning, \(v\) is also an isomorphism. Hence \([v,h]\) is an isomorphism in \(\operatorname{Cnr}(X)\) with the inverse being given by \([v^{-1},1]\circ[1,h^{-1}]\) Note that for \(X=\operatorname{PbSq}(\mathcal{C})^{v}\) the above lemma gives the usual folklore characterization of isomorphisms in the category \(\operatorname{Span}(\mathcal{C})\) of spans. **Lemma 3.19**.: Let \(X\) be a double category in which every top-right corner can be filled into a square and every square is bicartesian. Assume in addition that \(X\) is invariant. 
Then: \[\mathcal{E}_{X}\cap\mathcal{M}_{X}\supseteq\{\text{isomorphisms in }\operatorname{Cnr}(X)\}.\]

Proof.: We will show that when \(X\) is horizontally invariant, we have: \[[u,g]\in\operatorname{Cnr}(X),\,g\text{ is an isomorphism }\Rightarrow[u,g]\in\mathcal{E}_{X}.\] Let \([u,g]\) be such a corner. From horizontal invariance we get the square (pictured below), that exhibits the equality \([u,g]=[\theta u,1]\): Thus, \([u,g]\in\mathcal{E}_{X}\). Dually, if \(X\) is vertically invariant, we have: \[[u,g]\in\operatorname{Cnr}(X),\,u\text{ is an isomorphism }\Rightarrow[u,g]\in\mathcal{M}_{X}.\] Now if \([u,g]\) is an isomorphism, by the previous lemma both \(u,g\) are isomorphisms, and by the above implications \([u,g]\) belongs to both \(\mathcal{E}_{X}\) and \(\mathcal{M}_{X}\).

**Lemma 3.20**.: Let \(X\) be a double category in which every top-right corner can be filled into a square and every square is bicartesian. Assume further that every top-left corner in \(X^{v}\) is jointly monic. Then the \((\mathcal{E}_{X},\mathcal{M}_{X})\)-factorization of a morphism in \(\operatorname{Cnr}(X)\) is unique up to a unique morphism.

Proof.: Assume that \([e,m]=[e^{\prime},m^{\prime}]\) are two \((\mathcal{E}_{X},\mathcal{M}_{X})\)-factorizations of a morphism in \(\operatorname{Cnr}(X)\). We wish to show that there is a unique morphism between them: (9) As in the proof of Proposition 2.31, one such morphism is given by the corner \([\theta,1]\), where \(\theta\) is the domain of the 2-cell square between \((e,m)\) and \((e^{\prime},m^{\prime})\): Assume that there is a different morphism \([s,t]:a^{\prime}\to a^{\prime\prime}\) making both squares in (9) commute. The commutativity of these two squares gives the following 2-cells: Assume now we had the following: \[\begin{split}\theta\widetilde{\theta}&=\theta^{\prime},\\ \widetilde{\psi}\psi&=\psi^{\prime}.\end{split} \tag{10}\] The square \(\beta^{\prime}\) would then exhibit the equality \([s,t]=[\theta,1]\).
Because the corner \((se,m^{\prime}t)\) is jointly monic in \(X^{v}\), to show (10) it suffices to show: \[\theta\widetilde{\theta}se=\theta^{\prime}se,\] \[m^{\prime}t\widetilde{\psi}\psi=m^{\prime}t\psi^{\prime}.\] The first equality holds because: \[\theta\widetilde{\theta}se=\theta e=e^{\prime}=\theta^{\prime}se,\] while the second equality holds because: \[m^{\prime}t\widetilde{\psi}\psi=m\psi=m^{\prime}=m^{\prime}t\psi^{\prime}.\]

We therefore propose the following terminology:

**Definition 3.21**.: By an (orthogonal) _factorization double category_ we mean a double category \(X\) with the following properties:

* \(X\) is invariant,
* every top-right corner in \(X\) can be filled into a square and every square is bicartesian,
* every top-left corner in \(X^{v}\) is jointly monic.

Denote by FactDbl the full subcategory of Dbl consisting of factorization double categories.

**Remark 3.22**.: Any factorization double category is automatically flat: Given two squares \(\alpha,\lambda\) with the same boundary, by Lemma 3.11 there exists a unique square \(\beta\) as follows: Now in \(X^{v}\) the corner \((u,v)\) is jointly monic and we have \(\theta u=1_{d}u\), \(v\psi=v1_{d}\). Thus \(\theta=1_{d}\) and \(\psi=1_{d}\). Invariance now forces \(\beta=\square_{d}\), the identity square on the morphism \(1_{d}\), and so \(\alpha=\lambda\).

Combining Lemmas 3.17, 3.19, 3.20 we obtain:

**Proposition 3.23**.: Let \(X\) be a factorization double category. Then the classes \((\mathcal{E}_{X},\mathcal{M}_{X})\) of vertical and horizontal corners form an orthogonal factorization system on the category \(\mathrm{Cnr}(X)\).

**Example 3.24** (Partial maps).: Let \(\mathcal{C}\) be a category with pullbacks and consider the double category \(\operatorname{MPbSq}(\mathcal{C})^{v}\). It is obviously flat and invariant with every square bicartesian. We have seen that corners in its vertical dual are jointly monic in Example 3.15. \(\operatorname{MPbSq}(\mathcal{C})^{v}\) is thus a factorization double category.
Combined with the description of the category of corners from Example 2.24 and Proposition 3.23 we obtain that the category \(\operatorname{Par}(\mathcal{C})\) of objects and partial maps admits an orthogonal factorization system given by vertical corners followed by horizontal ones: In [9] these are called the _domains_ and _total maps_ in \(\mathcal{C}\). **Example 3.25** (Categories and cofunctors).: If \(\mathcal{E}\) is a category with pullbacks, the double category \(\operatorname{BOFib}(\mathcal{E})^{v}\) is a factorization double category. In the category \(\operatorname{Cnr}(\operatorname{BOFib}(\mathcal{E})^{v})=\operatorname{ Cof}(\mathcal{E})\) every morphism can be factored as (the opposite of) a bijection on objects functor followed by a discrete opfibration (as mentioned in [8, Theorem 18]). By the results in this section, these classes form an orthogonal factorization system on \(\operatorname{Cof}(\mathcal{E})\). **Example 3.26**.: Given a fibration \(P:\mathcal{E}\to\mathcal{B}\), there is an associated sub-double category \(X_{P}\subseteq\operatorname{Sq}(\mathcal{E})\) whose vertical morphisms are \(P\)-vertical morphisms (those that are sent to isomorphisms by \(P\)), horizontal morphisms are cartesian lifts of morphisms in \(\mathcal{B}\), and squares are commutative squares in \(\mathcal{E}\). **Proposition 3.27**.: \(X_{P}\) is a factorization double category. Proof.: Invariance is straightforward. To show joint monicity, assume we're given the data as in Definition 3.13. Note then that from the existence of squares \(\kappa_{1},\kappa_{2}\) in \((X^{P})^{v}\) it follows that \(P\theta=(P\psi)^{-1}\), and from \(\theta\pi_{1}=\theta^{\prime}\pi_{1}\) we have \(P\theta=P\theta^{\prime}\). Since \(\pi_{2}\) is a cartesian lift, the following picture forces \(\psi\circ\theta^{\prime}=1\) and thus \(\psi=\psi^{\prime}\): Given a top-right corner \(\lambda,u\), the bicartesian filler square is given by the cartesian lift of the pair \((P\lambda,b^{\prime})\) and the unique canonical comparison morphism: The category of corners \(\operatorname{Cnr}(X_{P})\) is isomorphic to the category \(\mathcal{E}\) via the functor sending an equivalence class \([u,\lambda]\) to the composite \(\lambda\circ u\). From the results in this section we obtain that the category \(\mathcal{E}\) admits an orthogonal factorization system given by the class of \(P\)-vertical morphisms followed by cartesian morphisms. Factorization systems associated to fibrations are a special case of _simple reflective factorization systems_ associated to prefibrations and have been studied in [18]. **Non-example 3.28** (Spans).: Let \(\mathcal{C}\) be a category with pullbacks and consider the double category \(\operatorname{PbSq}(\mathcal{C})^{v}\) of (opposite) pullback squares. This is not a factorization double category because not every top-left corner in \(\operatorname{PbSq}(\mathcal{C})\) is jointly monic (see Example 3.14). We can not thus use Theorem 3.23 to obtain an orthogonal factorization system on \(\operatorname{Cnr}(\operatorname{PbSq}(\mathcal{C})^{v})=\operatorname{Span}( \mathcal{C})\) and in fact, the two canonical classes of spans in \(\operatorname{Span}(\mathcal{C})\) do not form one7. Footnote 7: They do form a weak factorization system as we’ve seen in Example 2.32 To see this, let \(\mathcal{C}=\operatorname{Set}\), denote by \(\operatorname{sw}:2\to 2\) the non-identity automorphism of the two-element set. Note that the class \([sw,1]:2\to 2\) in Span is not the identity morphism. 
Consider now the span \((!,!):*\to*\): As both \([sw,1]\) and \([1,1]\) in the place of the dotted line make the diagram below commute, we obtain that the factorization is not unique up to a unique isomorphism and thus the classes are not orthogonal.

Recall now the assignment \((\mathcal{E},\mathcal{M})\mapsto D_{\mathcal{E},\mathcal{M}}\) from Construction 3.5. We have:

**Proposition 3.29**.: Let \((\mathcal{E},\mathcal{M})\) be an orthogonal factorization system on a category \(\mathcal{C}\). Then \(D_{\mathcal{E},\mathcal{M}}\) is a factorization double category. The assignment \((\mathcal{E},\mathcal{M})\mapsto D_{\mathcal{E},\mathcal{M}}\) induces a functor \(\mathcal{OFS}\to\operatorname{FactDbl}\).

Proof.: Let's verify each point:

* **Invariance**: We show the horizontal invariance, the vertical invariance is done similarly. Because the classes \(\mathcal{E},\mathcal{M}\) are closed under composition, given \(u\in\mathcal{E}\) and two isomorphisms \(\theta,\psi\), the composite \(\psi^{-1}u\theta\) gives the unique square with the given boundary:
* **Filling corners into squares, every square bicartesian**: Consider a top-right corner as pictured below: The filler square is given by the \((\mathcal{E},\mathcal{M})\)-factorization of the morphism \(e^{\prime}m^{\prime}\) in \(\mathcal{C}\). Next, let there be a square as pictured below left: We wish to show that it is bicartesian. Assume there is another square (pictured above right) with the same top-right corner. Because \(me=m^{\prime}e^{\prime}\) are two factorizations of the same morphism, there is a unique isomorphism \(\theta\in\mathcal{E}\cap\mathcal{M}\) between them (pictured below left). It then gives a comparison square between the first square and the second square, as pictured below right: This gives the **existence**. To prove the **uniqueness**, assume that there is a different comparison square. Its vertical domain map then gives the morphism of factorizations \((e,m),(e^{\prime},m^{\prime})\) and is thus forced to be equal to \(\theta\). Thus the square is opcartesian in \(D_{\mathcal{E},\mathcal{M}}\). The proof that it is also opcartesian in \(\left(D_{\mathcal{E},\mathcal{M}}\right)^{*}\) is done the same way.
* **Joint monicity**: Let \((\pi_{1},\pi_{2})\), \(\kappa_{1}\), \(\kappa_{2}\) be the data in \(D_{\mathcal{E},\mathcal{M}}\) as pictured below:

In the other direction, the constructions \(\operatorname{Cnr}(-)\) and \(D\) are again mutually inverse, giving the orthogonal analogue of Theorem 3.7: \(\mathcal{OFS}\simeq\operatorname{FactDbl}\). To see that \(1\cong D\circ\operatorname{Cnr}(-)\), let \(X\) be a factorization double category and note that the identity-on-objects functor \(\mathcal{E}\to\mathcal{E}_{X}\) sending \(e\mapsto[e,1]\) is an isomorphism: assume we have \([e,1]=[e^{\prime},1]\), then there is a 2-cell like this: Since the double category \(X\) is invariant, the square is forced to be the identity and thus \(e=e^{\prime}\). Analogously there is an isomorphism \(\mathcal{M}\cong\mathcal{M}_{X}\). Define a double functor \(X\to D_{\mathcal{E}_{X},\mathcal{M}_{X}}\) so that it is identity on objects, on vertical morphisms sends \(e\mapsto[e,1]\) and on horizontal morphisms sends \(m\mapsto[1,m]\). Because both double categories are flat, to show that it is an isomorphism it's enough to prove the following: The direction "\(\Rightarrow\)" follows already from Proposition 2.28. For the direction "\(\Leftarrow\)" assume the right square commutes.
If we denote \([e,1]\circ[1,m]=[u,g]\), we obtain the square required above right as the following composite, where the upper square is the square used for the composition of the corners, and the lower square exists from the equality \([u,g]=[e^{\prime},m^{\prime}]\): To see that \(\operatorname{Cnr}(-)\circ D\cong 1\), let \((\mathcal{E},\mathcal{M})\) be an orthogonal factorization system on a category \(\mathcal{C}\) and define an identity-on-objects functor \(F:\operatorname{Cnr}(D_{\mathcal{E},\mathcal{M}})\to\mathcal{C}\) so that it sends the class of corners \([e,m]\) to \(m\circ e\). This is well defined because if \([e,m]=[e^{\prime},m^{\prime}]\), there is a 2-cell between \((e,m),(e^{\prime},m^{\prime})\) in \(D_{\mathcal{E},\mathcal{M}}\): But since squares in this double category are commutative squares in \(\mathcal{C}\), we get: \(me=m\psi\theta e=m^{\prime}e^{\prime}\). Functoriality of \(F\) is straightforward, faithfulness follows from the fact that \((\mathcal{E},\mathcal{M})\)-factorizations are unique up to a unique isomorphism, and fullness follows because every morphism in \(\mathcal{C}\) has an \((\mathcal{E},\mathcal{M})\)-factorization. Thus \(F\) is an isomorphism. ## 4. Codescent objects and double categories The purpose of this section is to put the previous ones into the broader perspective of 2-category theory. In Subsection 4.1 we give a brief exposition of a 2-categorical colimit called the _codescent object_. In Subsection 4.2 we study codescent objects of double categories: if \(X\) is crossed, the codescent object is given by the category of corners \(\operatorname{Cnr}(X)\) - this was in fact the original reason for introducing the category of corners of crossed double categories in [19]. As a colimit, the category \(\operatorname{Cnr}(X)\) enjoys a universal property: it is the universal category equipped with functors \((F:X_{0}\to\operatorname{Cnr}(X),\xi:h(X)\to\operatorname{Cnr}(X))\) satisfying a certain _naturality condition_ (Proposition 4.4). In Subsection 4.3 we use the results to obtain an explicit description of various lax morphism classifiers. Section 3.1 then gives us a conceptual reason for why they admit strict factorization systems. ### Review of codescent objects Denote by \(\Delta\) the category of finite ordinals and order-preserving maps and by \(\Delta_{2}\) its full subcategory spanned by the ordinals \([0],[1],[2]\). Let \(W:\Delta_{2}\to\operatorname{Cat}\) be the 2-functor that regards each ordinal as a category. **Definition 4.1**.: Let \(\mathcal{K}\) be a 2-category. By (_strict, reflexive_) _coherence data_ in \(\mathcal{K}\) we mean a 2-functor \(X:\Delta_{2}^{op}\to\mathcal{K}\). By the _lax codescent object_ of \(X\) we mean the \(W\)-weighted colimit of \(X\). In elementary terms, a \(W\)-weighted cocone for coherence data \(X\) is a pair \((F,\xi)\) of a \(1\)-cell \(F:X_{0}\to Y\) and a \(2\)-cell \(\xi:Fd_{1}\Rightarrow Fd_{0}\) satisfying the following equations: A \(W\)-weighted cocone \((F,\xi)\) with apex \(Y\) is then the lax codescent object for \(X\) if given any other cocone \((G,\psi)\) with apex \(Z\), there exists a unique map \(\theta:Y\to Z\) such that8: Footnote 8: This is the \(1\)-dimensional universal property. We omit mentioning the corresponding \(2\)-dimensional universal property here since for the case we’ll be interested in (\(\mathcal{K}=\text{Cat}\)) it follows automatically from the \(1\)-dimensional one. For the full definition see [5, 2.2]. \[\theta F =G, \tag{12}\] \[\theta\xi =\psi. 
\tag{11}\] ### Codescent objects and double categories **Lemma 4.2**.: Let \(X\) be a double category regarded as a diagram \(X:\Delta^{op}\to\text{Cat}\). There is a natural bijection between codescent cocones for \(X\) and pairs of functors \((F:X_{0}\to\mathcal{Y},\xi:h(X)\to\mathcal{Y})\) (recall \(h(X)\) is the category of objects and horizontal morphisms in \(X\)) that agree on objects and satisfy the following _naturality condition_: Proof.: A cocone \((F,\xi)\) for \(X\) consists of a functor \(F:X_{0}\to\mathcal{Y}\) and a natural transformation \(\xi:Fd_{1}\Rightarrow Fd_{0}:X_{1}\to\mathcal{Y}\). The cocone axioms mean precisely that given a composable pair of morphisms \((h,g)\) in \(h(X)\) and an object \(a\in X\), we have: \[\xi_{g}\circ\xi_{h} =\xi_{g\circ h},\] \[\xi_{s(a)} =1_{Fa}.\] In other words, \(\xi\) induces a functor \(h(X)\to\mathcal{Y}\) that sends an object \(a\in h(X)\) to \(Fa\) and a morphism \(g\in h(X)\) to \(\xi_{g}\). The naturality condition above is precisely the condition that \(\xi\) is a natural transformation. Similarly, the cocone \((F,\xi)\) is the codescent object if and only if the corresponding pair of functors is initial in the sense that given a different pair of functors \((G,\psi)\) satisfying the naturality condition, there is a unique map commuting with the functors: From now on, we will not distinguish between codescent cocones \((F,\xi)\) and pairs of functors satisfying the conditions above. **Proposition 4.3** (Invariance under transposition).: Let \(X\) be a double category. We have: \(\operatorname{CoDesc}(X)\cong\operatorname{CoDesc}(X^{T})\). Proof.: Because Cat admits cotensors with an arrow, it's enough to show just the one-dimensional universal property9, i.e. show that there's a natural bijection between the sets of \(W\)-weighted codescent cocones: Footnote 9: See [15, Page 306] \[\operatorname{Cocone}(X,\mathcal{C})\cong\operatorname{Cocone}(X^{T},\mathcal{ C})\] The bijection is given by \((F,\xi)\mapsto(\xi,F)\). **Proposition 4.4**.: Let \(X\) be a crossed double category. Then the pair of functors \((F:X_{0}\to\operatorname{Cnr}(X),\xi:h(X)\to\operatorname{Cnr}(X))\) sending \(u\mapsto[u,1]\), \(g\mapsto[1,g]\) is the codescent object of \(X\). Proof.: The naturality condition has already been verified in Proposition 2.28. Let now \((G:X_{0}\to Y,\psi:h(X)\to Y)\) be another cocone. We have to show that there is a unique functor \(\theta:\operatorname{Cnr}(X)\to Y\) commuting with the functors: The above commutativity forces \(\theta(a)=Fa=\xi(a)\) for an object \(a\in\operatorname{Cnr}(X)\), and on morphisms \(\theta([u,g])=\psi(g)\circ Gu\). It is routine to verify that this mapping is well defined and a functor. **Remark 4.5**.: Note that if \(X\) is a general double category, the category of corners can be generalized in terms of generators and relations as follows. The codescent object \(\operatorname{CoDesc}(X)\) has objects the objects of \(X\), while a morphism is an equivalence class of paths \([f_{1},\dots,f_{n}]\), with \(f_{i}\) being either a vertical or a horizontal morphism of \(X\). The equivalence relation on morphisms is then generated by the following: \[[f_{1},f_{2}] =[f_{2}\circ f_{1}]\text{ if both }f_{1},f_{2}\text{ are vertical or horizontal,}\] \[[1_{a}] =[]_{a}\text{ if }1_{a}\text{ is a vertical or a horizontal identity morphism in }X,\] \[[g,v] =[u,h]\text{ if there is a square }\alpha\text{ as pictured below:}\] To give an example, let \(G,H\) be groups. 
We can define a double category \(X\) with one object, vertical morphisms being elements of \(G\), horizontal morphisms being elements of \(H\), such that there are no non-identity squares in \(X\). Then \(\operatorname{CoDesc}(X)=G\ast H\) is the free product of the groups \(G,H\). ### Lax morphism classifiers **Definition 4.6**.: Let \((T,m,i)\) be a 2-monad on a 2-category \(\mathcal{K}\) and let \((A,a)\) be a strict \(T\)-algebra. By its _resolution_, denoted \(\operatorname{Res}(A,a)\), we mean the following coherence data in T-\(\operatorname{Alg}_{s}\): **Theorem 4.7**.: Assume the 2-category \(\text{T-Alg}_{s}\) admits lax codescent objects of resolutions of strict algebras. Then the inclusion 2-functor \(\text{T-Alg}_{s}\to\text{T-Alg}_{l}\) admits a left 2-adjoint: This left adjoint is given by the formula: \[(A,a)^{\prime}=\operatorname{CoDesc}(\operatorname{Res}(A,a)).\] Proof.: Lemma 3.2 in [16]. Given a strict \(T\)-algebra \((A,a)\), the algebra \((A^{\prime},a^{\prime}):=(A,a)^{\prime}\) has the property of being a _lax morphism classifier_: there is a lax \(T\)-morphism \((A,a)\rightsquigarrow(A^{\prime},a^{\prime})\) (the unit of the above adjunction) such that for any lax morphism \((F,\overline{F}):(A,a)\to(B,b)\) there exists a unique strict \(T\)-morphism \(G:(A^{\prime},a^{\prime})\to(B,b)\) such that the following commutes: **Definition 4.8**.: A monad \((T^{\prime},m^{\prime},i^{\prime})\) on a category \(\mathcal{E}\) is said to be _cartesian_ if \(T^{\prime}\) preserves pullbacks and the naturality squares for \(m^{\prime},i^{\prime}\) are pullbacks. If \(T^{\prime}:\mathcal{E}\to\mathcal{E}\) is a cartesian monad on a category \(\mathcal{E}\) with pullbacks, it preserves internal categories in \(\mathcal{E}\) and thus induces a cartesian 2-monad \(\operatorname{Cat}(T^{\prime})\) on the 2-category \(\operatorname{Cat}(\mathcal{E})\) of internal categories and functors10. Footnote 10: See for example [5, Remark 3.16]. We will make use of the following alternative definition of a discrete opfibration. If \(A\) is a category, by \(s:A_{1}\to A_{0}\) we mean the function that sends a morphism in \(A\) to its domain. **Assumption:** Let \((T,m,i)\) be a 2-monad on \(\operatorname{Cat}\) of form \(\operatorname{Cat}(T^{\prime})\) for a cartesian monad \(T^{\prime}\) on \(\operatorname{Set}\). Assume that \(T\) preserves codescent objects. Denote by \(U:\operatorname{T-Alg}_{s}\to\operatorname{Cat}\) the forgetful 2-functor. **Proposition 4.9**.: \(U\operatorname{Res}(A,a)\) is a double category. Its transpose, \(U\operatorname{Res}(A,a)^{T}\), is codomain-discrete. Proof.: The fact that \(U\operatorname{Res}(A,a)\) is a category internal in \(\operatorname{Cat}\) follows directly from the fact that \(T\) is a cartesian 2-monad. Denote now by \(s,t:A_{1}\to A_{0}\) the domain, codomain maps of the category \(A\). Regarding \(U\mathrm{Res}(A,a)\) as a diagram in Set: The domain functor \(d_{0}^{T}\) of \(U\mathrm{Res}(A,a)^{T}\) has object and morphism components these maps: \[(d_{0}^{T})_{0} =Tt\] \[(d_{0}^{T})_{1} =T^{2}t.\] To show that it's a discrete opfibration is to show that the square below is a pullback in Set (see Example 2.7): This follows from the fact that \(m\) is a cartesian natural transformation. 
**Notation 4.10**.: Given a strict \(T\)-algebra \((A,a)\) we denote: \[\mathrm{Cnr}(A,a):=\mathrm{Cnr}(U\mathrm{Res}(A,a)^{T}).\] Since by our assumption \(T\) preserves codescent objects, we can lift the codescent object \(\mathrm{Cnr}(A)\) from Cat to T-Alg\({}_{s}\). Combining Proposition 4.9 with Proposition 4.3, we obtain: **Theorem 4.11**.: Let \((T,m,i)\) be a \(2\)-monad on Cat of form \(\mathrm{Cat}(T^{\prime})\) for a cartesian monad \(T^{\prime}\) on Set. Assume that \(T\) preserves reflexive codescent objects. Then the lax morphism classifier for a \(T\)-algebra \((A,a)\) is given by the category of corners associated to the transpose of the resolution of this \(T\)-algebra. In other words: \[(A,a)^{\prime}=\mathrm{Cnr}(A,a).\] ### Examples **Example 4.12**.: Let \((T,m,i)\) be the free strict monoidal category \(2\)-monad on Cat. It is of form \(\mathrm{Cat}(T^{\prime})\) for the free monoid monad on Set, and it preserves reflexive codescent objects since it preserves sifted colimits (see [19, Example 4.3.7]). Let \((\mathcal{A},\otimes)\) be a strict monoidal category. The double category \(\mathrm{Res}(A,\otimes)\) has: * Objects the tuples of objects \((a_{1},\ldots,a_{n})\in\operatorname{ob}\,T\mathcal{A}\), * vertical morphisms being tuples of morphisms \((f_{1},\ldots,f_{n})\in\operatorname{mor}\,T\mathcal{A}\), * horizontal morphisms being _partial evaluations_11, that is, objects of \(T^{2}\mathcal{A}\) whose codomain is given by \(T\otimes\) and whose domain is given by the multiplication \(m_{\mathcal{A}}\). For instance: Footnote 11: See for instance [12]. \[(a_{1},a_{2},a_{3},a_{4})\xrightarrow{((a_{1}),(a_{2},a_{3}),(),(a_{4}))}(a_{ 1},a_{2}\otimes a_{3},I,a_{4})\] * squares being morphisms of \(T^{2}\mathcal{A}\). The fact that the transpose of this double category is codomain-discrete amounts to having a unique filler for every bottom-left corner in \(\operatorname{Res}(\mathcal{A},\otimes)\), for example consider the following: \[(a_{1},a_{2},a_{3},a_{4})\] \[\xrightarrow{(f_{1},f_{2},f_{3},f_{4})}\] \[\xrightarrow{((b_{1}),(b_{2},b_{3}),(),(b_{4}))}(b_{1},b_{2} \otimes b_{3},I,b_{4})\] The unique filler is given by square \(((f_{1}),(f_{2},f_{3}),(),(f_{4}))\in\operatorname{mor}\,T^{2}\mathcal{A}\). By Theorem 4.11, the lax morphism classifier is the category \(\operatorname{Cnr}(\mathcal{A},\otimes)\) described as follows. The objects are tuples of objects from \(\mathcal{A}\), while a morphism is a tuple \((e,(f_{1},\ldots,f_{n}))\) of a partial evaluation followed by a tuple of morphisms. For instance here is an example of a morphism \((a_{1},a_{2},a_{3})\rightarrow(b_{1},b_{2},b_{3},b_{4})\): \[(a_{1},a_{2},a_{3})\xrightarrow{((a_{1},a_{2}),(),(a_{3}),())}(a_{1}\otimes a _{2},I,a_{3},I)\] \[\xrightarrow{\big{\downarrow}(f_{1},f_{2},f_{3},f_{4})}\] \[(b_{1},b_{2},b_{3},b_{4})\] The strict monoidal structure is given by concatenation of lists. By Lemma 3.3, this category admits a strict factorization system given by partial evaluations followed by tuples of morphisms of \(\mathcal{A}\). The following is the extension of the previous example in the sense that we obtain it if we put \(X=\ast\): **Example 4.13**.: Fix a set \(X\). 
Consider a cartesian monad \(T\) on \(\operatorname{Set}^{X\times X}\) (the category of graphs with the set of objects being \(X\)) given by paths: \[(T\mathcal{C})_{A,B}=\operatorname{Path}_{\mathcal{C}}(A,B)=\{(f_{1},\ldots,f_ {m})|m\in\mathbb{N},\operatorname{cod}(f_{i})=\operatorname{dom}(f_{i+1}) \forall i<m\}\] Consider its extension \(\operatorname{Cat}(T)\) (that we again denote by \(T\)) to \(\operatorname{Cat}(\operatorname{Set}^{X\times X})=\)\(=\operatorname{Cat}^{X\times X}\), the 2-category of Cat-graphs whose set of objects is \(X\). A strict \(T\)-algebra \(\mathcal{C}\) is a Cat-graph equipped with a composition functor for each tuple \((A,B)\in X\times X\): \[\operatorname{Path}_{\mathcal{C}}(A,B)\to\mathcal{C}(A,B).\] It is easily verified that such \(\mathcal{C}\) is precisely a small 2-category with the set of objects being \(X\). Also, lax \(T\)-algebra morphisms are identity-on-objects lax functors. Any 2-category \(\mathcal{C}\) (regarded as a \(T\)-algebra) gives rise to its resolution \(U\mathrm{Res}(\mathcal{C})\), which is a diagram in \(\operatorname{Cat}^{X\times X}\). Denote its codescent object by \(\mathcal{C}^{\prime}\) - this is a Cat-graph with the set of objects being \(X\) that moreover has the structure of a 2-category. As colimits in \(\operatorname{Cat}^{X\times X}\) are computed pointwise, \(\mathcal{C}^{\prime}(x,y)\) is given by the codescent object of \(\mathrm{Res}(\mathcal{C})(x,y)\). Because the 2-monad multiplication \(m:T^{2}\Rightarrow T\) is a pointwise discrete opfibration, each \(\mathrm{Res}(\mathcal{C})(x,y)\) is a codomain-discrete double category and so \(\mathcal{C}^{\prime}(x,y)\) can be computed using the category of corners construction as follows. Objects in \(\mathcal{C}^{\prime}(x,y)\) are the objects of \(T\mathcal{C}(x,y)\), that is, paths of morphisms in the 2-category \(\mathcal{C}\). Morphisms are corners whose first component is given by a _partial evaluation 2-cell_ (an object of \(T^{2}\mathcal{C}(x,y)\)) and the second component is given by a tuple of 2-cells in \(\mathcal{C}\) (a morphism in \(T\mathcal{C}(x,y)\)). For instance this morphism \((f_{1},f_{2},f_{3},f_{4})\to(g_{1},g_{2},g_{3})\): The 2-category structure of the lax functor classifier \(\mathcal{C}^{\prime}\) is given by concatenation of paths and tuples of 2-cells. By Lemma 3.3, each hom category of \(\mathcal{C}^{\prime}\) admits a strict factorization system given by _partial evaluation 2-cells_ and tuples of 2-cells on \(\mathcal{C}\). Moreover, these strict factorization systems are stable under post- and precomposition with 1-cells of \(\mathcal{C}^{\prime}\). This description of the lax functor classifier 2-category has been sketched in [14, Page 246]. **Remark 4.14** (Colax morphism classifiers).: We can apply dualities to compute **co-lax** morphism classifier for a 2-monad \(T\) on \(\operatorname{Cat}\) of form \(\operatorname{Cat}(T^{\prime})\) as follows. First note that the opposite category 2-functor \((-)^{op}:\operatorname{Cat}^{co}\to\operatorname{Cat}\) induces a 2-isomorphism \[\operatorname{T-Alg}_{c}\cong T^{co}\text{-Alg}_{l},\] \[(A,a)\mapsto(A^{op},a^{op}).\] This implies that a \(T\)-algebra \((B,b)\) is the colax \(T\)-morphism classifier for \((A,a)\) if and only if \((B^{op},b^{op})\) is the lax \(T^{v}\)-morphism classifier for \((A^{op},a^{op})\). Now let \((A,a)\) be a strict \(T\)-algebra. 
The lax-\(T^{co}\)-morphism classifier for \((A^{op},a^{op})\) is a \(T^{co}\)-algebra \(\operatorname{Cnr}(A^{op})\), and thus the colax morphism classifier is given by the formula: \[(A,a)^{\prime}=\operatorname{Cnr}(A^{op})^{op}.\] For instance, the colax monoidal functor classifier for a strict monoidal category \((\mathcal{A},\otimes)\) again has tuples of objects in \(\mathcal{A}\) as object, and a morphism \((a_{1},a_{2},a_{3},a_{4})\to\to(b_{1},b_{2},b_{3},b_{4})\) is a corner (or rather, a _cospan_) like this: \[(a_{1},a_{2},a_{3},a_{4})\] \[(b_{1},b_{2}\otimes b_{3},I,b_{4})\xleftarrow{}((b_{1}),(b_{2},b_ {3}),(),(b_{4}))\ \ (b_{1},b_{2},b_{3},b_{4})\] **Example 4.15**.: While this example does not follow from the results as stated in this paper, it follows from their internal analogue in \(\operatorname{Cat}(\mathcal{E})\), where \(\mathcal{E}=\operatorname{Graph}\). Let \(\operatorname{fc}:\operatorname{Graph}\to\operatorname{Graph}\) be the free category on a graph monad. It is a cartesian monad, it thus admits an extension to a 2-monad \(T:=\operatorname{Cat}(\operatorname{fc})\) on the 2-category \(\operatorname{Cat}(\operatorname{Graph})\). This 2-category consists of structures like double categories except we can not compose squares or horizontal morphisms horizontally, there's only vertical composition. A strict \(T\)-algebra is a double category. It can also be seen that a colax \(T\)-algebra morphism is a colax double functor. Given a \(T\)-algebra \(X\), the construction \(\operatorname{Cnr}(X)\) of the colax double functor classifier agrees with the construction \(\mathbb{P}\text{ath}\,\,X\) of [10, The construction 1.1, Proposition 1.19]. The fact that it admits an internal strict factorization system was also proven in [10, 1.5 Proposition]. The internal versions of the results in this paper as well as the generalization of the category of corners to lax \(T\)-algebras will appear in the author's upcoming Ph.D. thesis.
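As a concrete complement to Example 4.12, applying a partial evaluation just means evaluating each group of a chosen bracketing with the monoid operation. The following small Python sketch illustrates this; the function name and the example monoid are illustrative and not part of any of the cited constructions.

```python
from functools import reduce

def apply_partial_evaluation(groups, op, unit):
    """Evaluate each group of a bracketing with the monoid operation.

    For instance groups = [[a1], [a2, a3], [], [a4]] is sent to
    [a1, a2*a3, unit, a4], matching the partial evaluation
    ((a1), (a2, a3), (), (a4)) of Example 4.12.
    """
    return [reduce(op, g, unit) for g in groups]

# Example with the additive monoid of integers:
print(apply_partial_evaluation([[1], [2, 3], [], [4]], lambda x, y: x + y, 0))
# -> [1, 5, 0, 4]
```

A corner in \(\operatorname{Cnr}(\mathcal{A},\otimes)\) is then such a partial evaluation followed by a tuple of morphisms, as described in Example 4.12.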
2304.11141
H2TF for Hyperspectral Image Denoising: Where Hierarchical Nonlinear Transform Meets Hierarchical Matrix Factorization
Recently, tensor singular value decomposition (t-SVD) has emerged as a promising tool for hyperspectral image (HSI) processing. In the t-SVD, there are two key building blocks: (i) the low-rank enhanced transform and (ii) the accompanying low-rank characterization of transformed frontal slices. Previous t-SVD methods mainly focus on the developments of (i), while neglecting the other important aspect, i.e., the exact characterization of transformed frontal slices. In this letter, we exploit the potentiality in both building blocks by leveraging the \underline{\bf H}ierarchical nonlinear transform and the \underline{\bf H}ierarchical matrix factorization to establish a new \underline{\bf T}ensor \underline{\bf F}actorization (termed as H2TF). Compared to shallow counter partners, e.g., low-rank matrix factorization or its convex surrogates, H2TF can better capture complex structures of transformed frontal slices due to its hierarchical modeling abilities. We then suggest the H2TF-based HSI denoising model and develop an alternating direction method of multipliers-based algorithm to address the resultant model. Extensive experiments validate the superiority of our method over state-of-the-art HSI denoising methods.
Jiayi Li, Jinyu Xie, Yisi Luo, Xile Zhao, Jianli Wang
2023-04-21T17:27:43Z
http://arxiv.org/abs/2304.11141v1
H2TF for Hyperspectral Image Denoising: Where Hierarchical Nonlinear Transform Meets Hierarchical Matrix Factorization ###### Abstract Recently, tensor singular value decomposition (t-SVD) has emerged as a promising tool for hyperspectral image (HSI) processing. In the t-SVD, there are two key building blocks: (i) the low-rank enhanced transform and (ii) the accompanying low-rank characterization of transformed frontal slices. Previous t-SVD methods mainly focus on the developments of (i), while neglecting the other important aspect, i.e., the exact characterization of transformed frontal slices. In this letter, we exploit the potentiality in both building blocks by leveraging the Hierarchical nonlinear transform and the Hierarchical matrix factorization to establish a new Tensor Factorization (termed as H2TF). Compared to shallow counter partners, e.g., low-rank matrix factorization or its convex surrogates, H2TF can better capture complex structures of transformed frontal slices due to its hierarchical modeling abilities. We then suggest the H2TF-based HSI denoising model and develop an alternating direction method of multipliers-based algorithm to address the resultant model. Extensive experiments validate the superiority of our method over state-of-the-art HSI denoising methods. Hyperspectral denoising, t-SVD, ADMM. ## I Introduction Hyperspectral images (HSIs) inevitably contain mixed noise due to sensor failures or complex imaging conditions [1, 2], which seriously affects subsequent applications. Traditional hand-crafted HSI denoising methods, e.g., low-rankness [3], total variation (TV) [4], sparse representations [5], and non-local self-similarity [6], utilize interpretable domain knowledge to design generalizable regularizations for HSI denoising. Their representation abilities may be inferior to data-driven methods using deep neural networks (DNNs) [7, 8, 9], which can learn representative denoising mappings via supervised learning with abundant training pairs. However, supervised deep learning methods mostly neglect the prior information of HSIs, which sometimes results in generalization issues over different HSIs and various types of noise. More recently, tensor singular value decomposition (t-SVD) attracts much attention in HSI denoising [10, 11]. The t-SVD views HSI as an implicit low-rank tensor and exploits the low-rankness in the transformed domain, which can more vividly characterize the structures of HSIs since it is flexible to select appropriate transforms and the accompanying low-rank characterization of the transformed frontal slices. Under such a framework, there are naturally two key building blocks: (i) The selection of the low-rank enhanced transform. A suitable transform can obtain a lower-rank transformed tensor and enhance the recovery quality [12, 13]. (ii) The characterization of low-rankness of transformed frontal slices. The implicit low-rankness of HSIs is exploited by the low-rank modeling of frontal slices in the transformed domain. Classical t-SVD-based methods mainly focused on the first building blocks, i.e., the design of different transforms. For example, the discrete Fourier transform (DFT) [14] was first used in the t-SVD, and then the discrete cosine transform (DCT) [15] was employed. Later methods exploited more representative and flexible transforms such as non-invertible transforms [16] and data-dependent transforms [17] to enhance the low-rankness of transformed frontal slices. 
These methods have achieved increasingly satisfactory results for HSI denoising [10, 11]. Nevertheless, these t-SVD methods pay less attention to the second building block, i.e., the exact characterization of transformed frontal slices. Specifically, they all employ shallow representations such as low-rank matrix factorization (MF) [13], QR factorization [18], and nuclear norm [12, 16] to characterize the transformed frontal slices. In this work, we exploit a more representative formulation to capture complex structures of transformed frontal slices. Specifically, we leverage the hierarchical matrix factorization (HMF), which tailors a hierarchical formulation of learnable matrices along with nonlinear layers to capture each frontal slice in the transformed domain. The hierarchical modeling ability of HMF makes it more representative to capture the complex structures of HSIs. Meanwhile, we leverage the hierarchical nonlinear transform (HNT) to enhance the low-rankness of transformed frontal slices. With the Hierarchical nonlinear transform and Hierarchical matrix factorization, we develop a new Tensor Factorization method (termed as H2TF) under the t-SVD framework. Correspondingly, we develop the H2TF-based HSI denoising model. Attributed to the stronger representation abilities of HMF than shallow MF or its surrogates, our H2TF-based model can better capture fine details of the underlying clean HSI than conventional t-SVD-based methods. Thus, our model is expected to deliver better HSI denoising results. Meanwhile, the parameters of H2TF can be inferred from the observed noisy HSI in an unsupervised manner. In summary, the contributions of this letter are: **(i)** We propose a new tensor factorization, i.e., the H2TF, which leverages the expressive power of two key building blocks--the HNT and the HMF, to respectively enhance the low-rankness of transformed data and characterize complex structures of transformed frontal slices. By virtue of their hierarchical modeling abilities, H2TF can faithfully capture fine details of the clean HSI, and thus is beneficial for effectively removing heavy noise in the HSI. **(ii)** We suggest an unsupervised H2TF-based HSI denoising model and develop an alternating direction method of multipliers (ADMM)-based algorithm. Extensive experiments on simulated and real-world data validate the superiority of our method over state-of-the-art (SOTA) HSI denoising methods, especially for details preserving and heavy noise removal. ## II The Proposed H2TF ### _The t-SVD framework_ We first introduce the general formulation of t-SVD. Suppose that the noisy HSI \(\mathcal{Y}\in\mathbb{R}^{h\times w\times b}\) admits \(\mathcal{Y}=\mathcal{X}+\mathcal{N}\), where \(\mathcal{X}\) denotes the clean HSI and \(\mathcal{N}\) denotes noise. To infer the underlying clean HSI \(\mathcal{X}\) from the observed \(\mathcal{Y}\), t-SVD method generally formulates the following model: \[\min_{\mathcal{Z},\theta}\;L(\mathcal{Y},\mathcal{X})+\sum_{k}\psi(\mathcal{Z} ^{(k)}),\;\mathrm{where}\;\mathcal{X}=\phi_{\theta}(\mathcal{Z}). \tag{1}\] Here, \(L(\mathcal{Y},\mathcal{X})\) denotes the fidelity term and \(\psi(\mathcal{Z}^{(k)})\) represents the low-rank characterization of \(\mathcal{Z}^{(k)}\) (which denotes the \(k\)-th frontal (spatial) slice of \(\mathcal{Z}\in\mathbb{R}^{h\times w\times b}\)[16]). 
\(\phi_{\theta}(\cdot):\mathbb{R}^{h\times w\times b}\rightarrow\mathbb{R}^{h \times w\times b}\) denotes a transform with learnable parameters \(\theta\), which transforms the low-rank representation \(\mathcal{Z}\) into the original domain. Sometimes the transform \(\phi_{\theta}(\cdot)\) may not be learnable (e.g., the fixed DFT [14]), and in those situations the optimization variable only includes \(\mathcal{Z}\). The philosophy of the t-SVD model (1) is to minimize the rank in the transformed domain, which can model the implicit low-rankness of HSI. There are naturally two key building blocks for exactly modeling the implicit low-rankness, i.e., the selection of the transform \(\phi_{\theta}(\cdot)\) and the exact low-rank characterization \(\psi(\cdot)\) of the transformed frontal slice \(\mathcal{Z}^{(k)}\). Most t-SVD-based methods focus on the design of different transforms \(\phi_{\theta}(\cdot)\) (see examples in [13, 16, 17]), but all of them pay less attention to the exact characterization of the transformed frontal slice. They mostly adopt shallow representations to characterize \(\mathcal{Z}^{(k)}\), e.g., MF [13, 19], QR factorization [18], and nuclear norm [15, 16]. However, these shallow representations may not be expressive enough to capture fine details of the clean HSI. Therefore, more representative methods are desired to enhance the representation abilities of the model in the transformed domain. ### _HMF for Characterizing \(\mathcal{Z}^{(k)}\)_ To cope with this challenge, we leverage the HMF (hierarchical matrix factorization) to characterize \(\mathcal{Z}^{(k)}\). The hierarchical modeling ability of HMF helps it more faithfully capture complex structures of the transformed frontal slice \(\mathcal{Z}^{(k)}\) than shallow counter partners, e.g., SVD, MF, and QR factorization. The standard MF used in previous t-SVD methods [13, 19] decomposes a low-rank matrix \(\mathbf{Z}\in\mathbb{R}^{h\times w}\) into two factors as \(\mathbf{Z}=\mathbf{W}_{2}\mathbf{W}_{1}\), where \(\mathbf{W}_{2}\in\mathbb{R}^{h\times r}\), \(\mathbf{W}_{1}\in\mathbb{R}^{r\times w}\), and \(r\) is the rank. To model the hierarchical structures of \(\mathbf{Z}\), we extend the MF to the product of multiple matrix factors \(\{\mathbf{W}_{d}\}_{d=1}^{l}\): \[\mathbf{Z}=\mathbf{W}_{l}\mathbf{W}_{l-1}\cdots\mathbf{W}_{1}, \tag{2}\] where \(\mathbf{W}_{d}\in\mathbb{R}^{r_{d}\times r_{d-1}}\), \(r_{l}=h\), and \(r_{0}=w\). It was shown in [20] that such a linear HMF can induce an implicit low-rank regularization on \(\mathbf{Z}\) when using gradient-based optimization. Generally, the larger \(l\) is (i.e., adding depth to the HMF), the tendency towards low-rank solutions goes stronger and oftentimes leads to better matrix recovery performances. Thus, the HMF is suitable to play the role of low-rank regularization in the t-SVD model (1). Nevertheless, the linear HMF (2) may not be sufficient to capture nonlinear interactions inside HSIs. It motivates us to utilize the nonlinear HMF [21, 22] to model the low-rank matrix \(\mathbf{Z}\) via \(\mathbf{Z}=\mathbf{W}_{l}\sigma(\mathbf{W}_{l-1}\cdots\mathbf{W}_{3}\sigma( \mathbf{W}_{2}\mathbf{W}_{1}))\), where \(\sigma(\cdot)\) is a nonlinear scalar function. Classical HMF-based methods [20, 21] utilize HMF to tackle the two-dimensional matrix. However, matrixing the HSI inevitably destroys its high-dimensional data structures. 
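For reference, the following is a minimal NumPy sketch of the linear HMF of Eq. (2) and of its nonlinear variant for a single matrix; the ranks and the choice of nonlinearity (a ReLU here, whereas the letter only assumes a generic scalar nonlinearity \(\sigma\)) are illustrative assumptions. The slice-wise tensor formulation introduced next avoids flattening the HSI into one such matrix.

```python
import numpy as np
from functools import reduce

# Illustrative sizes: r_0 = w, r_1, ..., r_l = h with small inner ranks.
h, w = 64, 64
r = (w, 12, 6, 3, h)                                                      # r_0, r_1, ..., r_l
Ws = [np.random.randn(r[d + 1], r[d]) * 0.1 for d in range(len(r) - 1)]   # W_1, ..., W_l

# Linear HMF, Eq. (2): Z = W_l ... W_2 W_1, so rank(Z) <= min(r).
Z_linear = reduce(lambda acc, W: W @ acc, Ws[1:], Ws[0])

# Nonlinear HMF: Z = W_l sigma(W_{l-1} ... sigma(W_2 W_1)).
sigma = lambda x: np.maximum(x, 0.0)
Z_nonlinear = Ws[-1] @ reduce(lambda acc, W: sigma(W @ acc), Ws[1:-1], Ws[0])
print(Z_linear.shape, Z_nonlinear.shape)                                  # (64, 64) (64, 64)
```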
Therefore, we suggest tailoring \(b\) nonlinear HMFs to model the transformed tensor \(\mathcal{Z}\) by using each HMF to represent one of the frontal slices of \(\mathcal{Z}\). Formally, we represent each frontal slice of \(\mathcal{Z}\) by \[\mathcal{Z}^{(k)}=\mathcal{W}_{l}^{(k)}\sigma(\mathcal{W}_{l-1}^{(k)}\cdots \mathcal{W}_{3}^{(k)}\sigma(\mathcal{W}_{2}^{(k)}\mathcal{W}_{1}^{(k)})),k=1,2, \cdots,b.\] The above HMFs can be equivalently formulated as the tensor formulation \(\mathcal{Z}=\mathcal{W}_{l}\Delta\sigma(\mathcal{W}_{l-1}\Delta\cdots \mathcal{W}_{3}\Delta\sigma(\mathcal{W}_{2}\Delta\mathcal{W}_{1}))\), where \(\Delta\) is the tensor face-wise product [23] and \(\{\mathcal{W}_{d}\in\mathbb{R}^{r_{d}\times r_{d-1}\times b}\}_{d=1}^{l}\) are some factor tensors. Compared to shallow counter partners, e.g., MF, QR factorization, and nuclear norm, the above nonlinear HMF can better capture complex hierarchical structures of HSIs due to its nonlinear hierarchical modeling abilities, which helps to better recover fine details of HSI and remove heavy noise. ### _The Proposed H2TF_ Next, we introduce our H2TF. Recall that two key building blocks in the t-SVD are the selection of the transform \(\phi_{\theta}(\cdot)\) and the characterization of the transformed frontal slice \(\mathcal{Z}^{(k)}\). We have leveraged the HMF to characterize \(\mathcal{Z}^{(k)}\), and we further leverage the HNT (hierarchical nonlinear transform) as the first building block \(\phi_{\theta}(\cdot)\): \[\phi_{\theta}(\mathcal{Z}):=\sigma(\cdots\sigma(\mathcal{Z}\times_{3}\mathbf{H} _{1})\times_{3}\cdots\times_{3}\mathbf{H}_{m-1})\times_{3}\mathbf{H}_{m},\] where \(\sigma(\cdot)\) is a nonlinear scalar function, \(\theta:=\{\mathbf{H}_{p}\in\mathbb{R}^{b\times b}\}_{p=1}^{m}\) are learnable parameters of HNT, and \(\times_{3}\) is the mode-3 tensor-matrix product [24]. It was throughout demonstrated [13] that the HNT can effectively enhance the low-rankness of transformed tensor and thus obtain a better low-rank representation than shallow transforms (e.g., DFT [14] and DCT [15]), which benefits the implicit low-rank modeling. **Definition 1** (H2TF).: _Finally, we can define the following factorization modality of a certain low-rank tensor \(\mathcal{X}\) parameterized by \(\{\mathcal{W}_{d}\}_{d=1}^{l}\) and \(\{\mathbf{H}_{p}\}_{p=1}^{m}\):_ \[\begin{split}\mathcal{X}=\phi_{\theta}\big{(}\mathcal{W}_{l} \Delta\sigma(\mathcal{W}_{l-1}\Delta\cdots\mathcal{W}_{3}\Delta\sigma( \mathcal{W}_{2}\Delta\mathcal{W}_{1}))\big{)},\\ \phi_{\theta}(\mathcal{Z}):=\underbrace{\sigma(\cdots\sigma(\mathcal{Z} \times_{3}\mathbf{H}_{1})\times_{3}\cdots\times_{3}\mathbf{H}_{m-1})\times_{3} \mathbf{H}_{m}}_{\mathrm{Hierarchical\ nonlinear\ transform}},\end{split} \tag{3}\] _which we call the H2TF representation of \(\mathcal{X}\)._ A general illustration of the proposed H2TF is shown in Fig. 1. H2TF benefits from the HMF to exploit complex hierarchical information of transformed frontal slices and the HNT to enhance the low-rankness in the transformed domain. With the hierarchical modeling abilities of H2TF, the characterization of HSIs would be more accurate. Therefore, H2TF can more faithfully capture fine details and rich textures of HSIs and remove heavy mixed noise. Now, we discuss the connections between H2TF and some popular matrix/tensor factorizations. 
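Before turning to those connections, the following is a minimal NumPy sketch of the H2TF map in Eq. (3). The ranks, the layer counts, and the choice of the nonlinearity \(\sigma\) (a ReLU here) are illustrative assumptions, not the settings used in the experiments.

```python
import numpy as np

def facewise(A, B):
    """Face-wise product: multiply matching frontal slices A[:, :, k] @ B[:, :, k]."""
    return np.einsum('ijk,jlk->ilk', A, B)

def mode3(Z, H):
    """Mode-3 product Z x_3 H: mixes the spectral (third) dimension by H."""
    return np.einsum('ijt,kt->ijk', Z, H)

def sigma(x):
    return np.maximum(x, 0.0)  # generic scalar nonlinearity (illustrative choice)

def h2tf(Ws, Hs):
    """Evaluate Eq. (3): hierarchical face-wise factorization followed by the
    hierarchical nonlinear transform along the spectral mode."""
    Z = Ws[0]                          # W_1, shape (r_1, r_0, b)
    for W in Ws[1:-1]:                 # sigma after every inner face-wise product
        Z = sigma(facewise(W, Z))
    Z = facewise(Ws[-1], Z)            # outermost factor W_l, no sigma
    for H in Hs[:-1]:                  # transform layers H_1, ..., H_{m-1}
        Z = sigma(mode3(Z, H))
    return mode3(Z, Hs[-1])            # final layer H_m, no sigma

# Illustrative shapes: h = w = 64, b = 31, ranks (r_0, ..., r_l), m = 2 layers.
h, w, b, ranks = 64, 64, 31, (64, 8, 16, 64)
Ws = [np.random.randn(ranks[d + 1], ranks[d], b) * 0.1 for d in range(len(ranks) - 1)]
Hs = [np.random.randn(b, b) * 0.1 for _ in range(2)]
X = h2tf(Ws, Hs)
print(X.shape)  # (64, 64, 31)
```

The face-wise products act independently on each of the \(b\) spectral slices, while the mode-3 products mix information across bands, which is where the transform can enhance the low-rankness of the representation.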
**Remark 1**.: _By changing the layer number of hierarchical matrix factorization (i.e., \(l\)) and the layer number of hierarchical nonlinear transform (i.e., \(m\)), H2TF includes many matrix/tensor factorizations as special cases:_ **(i)** _When \(l=2\), i.e., the HMF degenerates into the MF, our H2TF degenerates into the hierarchical low-rank tensor factorization [13]._ **(ii)** _When \(m=1\) and \(\mathbf{H}_{m}\) is an identity matrix (i.e., the transform \(\phi_{\theta}(\cdot)\) is an identity mapping), our H2TF degenerates into the plain HMFs [21, 22] applied on each frontal slice of the tensor separately. In the following, we interpret this case as "\(m=0\)" since the transform is neglected._ **(iii)** _When \(l=2\) and \(m=1\) with \(\mathbf{H}_{m}\) being the fixed inverse DFT matrix, our H2TF degenerates into the classical low-tubal-rank tensor factorization [19, 25]._ Moreover, H2TF can explicitly preserve the low-rankness of the tensor when omitting some nonlinearity, as stated below. **Lemma 1**.: _Suppose that \(\mathcal{X}=\phi\big{(}\mathcal{W}_{l}\Delta(\mathcal{W}_{l-1}\Delta\cdots\Delta\mathcal{W}_{1})\big{)}\in\mathbb{R}^{h\times w\times b}\), where \(\{\mathcal{W}_{d}\in\mathbb{R}^{r_{d}\times r_{d-1}\times b}\}_{d=1}^{l}\) (\(r_{l}=h\) and \(r_{0}=w\)) are factor tensors, \(\phi(\mathcal{Z}):=\mathcal{Z}\times_{3}\mathbf{F}^{-1}\) is the inverse DFT, and \(\mathbf{F}^{-1}\) is the inverse DFT matrix (which is a special case of H2TF). Then we have \(\mathrm{rank}_{t}(\mathcal{X})\leq\min\{r_{0},r_{1},\cdots,r_{l}\}\), where \(\mathrm{rank}_{t}(\cdot)\) denotes the tensor tubal-rank [12, 13, 14]._ Lemma 1 indicates that H2TF can preserve the low-rankness in the linear special case, where the degree of low-rankness (the upper bound of the tubal-rank) is conditioned on the sizes of the factor tensors. Therefore, we can readily control the degree of low-rankness by tuning the sizes of the factor tensors in H2TF. ### _H2TF for HSI Denoising_ H2TF is a potential tool for multi-dimensional data analysis and processing. We consider HSI denoising as a representative real-world application. By applying the H2TF representation (3) into (1), we can obtain the following HSI denoising model: \[\min_{\{\mathcal{W}_{d}\}_{d=1}^{l},\{\mathbf{H}_{p}\}_{p=1}^{m}}L(\mathcal{Y},\mathcal{X}),\ \mathrm{where}\ \mathcal{X}=\phi_{\theta}\big{(}\mathcal{W}_{l}\Delta\sigma(\mathcal{W}_{l-1}\Delta\cdots\mathcal{W}_{3}\Delta\sigma(\mathcal{W}_{2}\Delta\mathcal{W}_{1}))\big{)}.\] In the HSI denoising problem, we consider the fidelity term as \(L(\mathcal{Y},\mathcal{X})=\|\mathcal{Y}-\mathcal{X}-\mathcal{S}\|_{F}^{2}+\alpha_{1}\|\mathcal{S}\|_{\ell_{1}}\), where \(\|\cdot\|_{F}^{2}\) denotes the Frobenius norm and we introduce \(\mathcal{S}\in\mathbb{R}^{h\times w\times b}\) to represent sparse noise (which often contains impulse noise and stripes). The \(\ell_{1}\)-norm enforces sparsity on \(\mathcal{S}\) so that the sparse noise can be eliminated. Here, \(\alpha_{1}\) is a trade-off parameter. Meanwhile, our H2TF can be readily combined with other proven techniques to enhance the denoising abilities. Here, we consider the hybrid spatial-spectral TV (HSSTV) regularization [26] to further capture spatial and spatial-spectral local smoothness of HSIs.
The HSSTV is formulated as \(\|\mathcal{X}\|_{\mathrm{HSSTV}}:=\alpha_{2}\|\mathcal{X}\|_{\mathrm{TV}}+\alpha_{3}\|\mathcal{X}\|_{\mathrm{SSTV}}\), where \(\|\mathcal{X}\|_{\mathrm{TV}}:=\|\nabla_{x}\mathcal{X}\|_{\ell_{1}}+\|\nabla_{y}\mathcal{X}\|_{\ell_{1}}\), \(\|\mathcal{X}\|_{\mathrm{SSTV}}:=\|\nabla_{x}(\nabla_{z}\mathcal{X})\|_{\ell_{1}}+\|\nabla_{y}(\nabla_{z}\mathcal{X})\|_{\ell_{1}}\), and \(\alpha_{i}\) (\(i=2,3\)) are trade-off parameters. Here, the derivative operators are defined as \((\nabla_{x}\mathcal{X})_{(i,j,k)}:=\mathcal{X}_{(i+1,j,k)}-\mathcal{X}_{(i,j,k)}\), \((\nabla_{y}\mathcal{X})_{(i,j,k)}:=\mathcal{X}_{(i,j+1,k)}-\mathcal{X}_{(i,j,k)}\), and \((\nabla_{z}\mathcal{X})_{(i,j,k)}:=\mathcal{X}_{(i,j,k+1)}-\mathcal{X}_{(i,j,k)}\), where \(\mathcal{X}_{(i,j,k)}\) denotes the \((i,j,k)\)-th element of \(\mathcal{X}\). Based on the formulations of the fidelity term and HSSTV, the proposed H2TF-based HSI denoising model is formulated as \[\min_{\{\mathcal{W}_{d}\}_{d=1}^{l},\{\mathbf{H}_{p}\}_{p=1}^{m},\mathcal{S}}\|\mathcal{Y}-\mathcal{X}-\mathcal{S}\|_{F}^{2}+\alpha_{1}\|\mathcal{S}\|_{\ell_{1}}+\|\mathcal{X}\|_{\mathrm{HSSTV}}, \tag{4}\] \[\mathrm{where}\ \mathcal{X}=\phi_{\theta}\big{(}\mathcal{W}_{l}\Delta\sigma(\mathcal{W}_{l-1}\Delta\cdots\mathcal{W}_{3}\Delta\sigma(\mathcal{W}_{2}\Delta\mathcal{W}_{1}))\big{)}.\] Compared to previous t-SVD-based HSI denoising methods [10, 11], H2TF has powerful representation abilities brought by its hierarchical structures and thus could better capture fine details of HSIs. Besides, the parameters of H2TF are unsupervisedly inferred from the noisy HSI by optimizing (4), without the requirement of a training process. ### _ADMM-Based Algorithm_ To tackle the problem (4), we develop an ADMM-based algorithm. By introducing auxiliary variables \(\mathcal{V}_{i}\) (\(i=1,2,3,4\)), (4) can be equivalently formulated as \[\min_{\{\mathcal{W}_{d}\}_{d=1}^{l},\{\mathbf{H}_{p}\}_{p=1}^{m},\mathcal{S},\{\mathcal{V}_{i}\}_{i=1}^{4}}\|\mathcal{Y}-\mathcal{X}-\mathcal{S}\|_{F}^{2}+\alpha_{1}\|\mathcal{S}\|_{\ell_{1}}+\alpha_{2}\|\mathcal{V}_{1}\|_{\ell_{1}}+\alpha_{2}\|\mathcal{V}_{2}\|_{\ell_{1}}+\alpha_{3}\|\mathcal{V}_{3}\|_{\ell_{1}}+\alpha_{3}\|\mathcal{V}_{4}\|_{\ell_{1}},\] \[\mathrm{s.t.}\ \mathcal{V}_{1}=\nabla_{x}\mathcal{X},\ \mathcal{V}_{2}=\nabla_{y}\mathcal{X},\ \mathcal{V}_{3}=\nabla_{x}(\nabla_{z}\mathcal{X}),\ \mathcal{V}_{4}=\nabla_{y}(\nabla_{z}\mathcal{X}),\] where \(\mathcal{X}=\phi_{\theta}\big{(}\mathcal{W}_{l}\Delta\sigma(\mathcal{W}_{l-1}\Delta\cdots\mathcal{W}_{3}\Delta\sigma(\mathcal{W}_{2}\Delta\mathcal{W}_{1}))\big{)}\).
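The derivative operators \(\nabla_{x},\nabla_{y},\nabla_{z}\) used above are plain forward differences along the two spatial axes and the spectral axis. A minimal NumPy sketch follows; the circular boundary handling is an assumption, since the letter does not state its boundary convention.

```python
import numpy as np

def grad_x(X):  # forward difference along the first (height) axis
    return np.roll(X, -1, axis=0) - X

def grad_y(X):  # forward difference along the second (width) axis
    return np.roll(X, -1, axis=1) - X

def grad_z(X):  # forward difference along the third (band) axis
    return np.roll(X, -1, axis=2) - X

def hsstv(X, alpha2, alpha3):
    """||X||_HSSTV = alpha2 * ||X||_TV + alpha3 * ||X||_SSTV with l1 norms."""
    tv = np.abs(grad_x(X)).sum() + np.abs(grad_y(X)).sum()
    sstv = np.abs(grad_x(grad_z(X))).sum() + np.abs(grad_y(grad_z(X))).sum()
    return alpha2 * tv + alpha3 * sstv
```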
The corresponding augmented Lagrangian function is \[\mathcal{L}_{\mu}(\{\mathcal{W}_{d}\}_{d=1}^{l},\{\mathbf{H}_{p}\}_{p=1}^{m},\mathcal{S},\{\mathcal{V}_{i}\}_{i=1}^{4},\{\Lambda_{i}\}_{i=1}^{4})=\|\mathcal{Y}-\mathcal{X}-\mathcal{S}\|_{F}^{2}+\alpha_{1}\|\mathcal{S}\|_{\ell_{1}}+\alpha_{2}\|\mathcal{V}_{1}\|_{\ell_{1}}+\alpha_{2}\|\mathcal{V}_{2}\|_{\ell_{1}}+\alpha_{3}\|\mathcal{V}_{3}\|_{\ell_{1}}+\alpha_{3}\|\mathcal{V}_{4}\|_{\ell_{1}}+\frac{\mu}{2}\|\nabla_{x}\mathcal{X}+\frac{\Lambda_{1}}{\mu}-\mathcal{V}_{1}\|_{F}^{2}+\frac{\mu}{2}\|\nabla_{y}\mathcal{X}+\frac{\Lambda_{2}}{\mu}-\mathcal{V}_{2}\|_{F}^{2}+\frac{\mu}{2}\|\nabla_{x}(\nabla_{z}\mathcal{X})+\frac{\Lambda_{3}}{\mu}-\mathcal{V}_{3}\|_{F}^{2}+\frac{\mu}{2}\|\nabla_{y}(\nabla_{z}\mathcal{X})+\frac{\Lambda_{4}}{\mu}-\mathcal{V}_{4}\|_{F}^{2},\] where \(\mu\) is the penalty parameter, \(\Lambda_{i}\) (\(i=1,2,3,4\)) are multipliers, and \(\mathcal{X}\) is defined as in (3). The joint minimization problem can be decomposed into easier subproblems, followed by the update of the Lagrangian multipliers. The \(\mathcal{V}_{i}\) (\(i=1,2,3,4\)) subproblems are \[\begin{cases}\min_{\mathcal{V}_{1}}\frac{\mu}{2}\|\nabla_{x}\mathcal{X}^{t}+\frac{\Lambda_{1}^{t}}{\mu}-\mathcal{V}_{1}\|_{F}^{2}+\alpha_{2}\|\mathcal{V}_{1}\|_{\ell_{1}},\\ \min_{\mathcal{V}_{2}}\frac{\mu}{2}\|\nabla_{y}\mathcal{X}^{t}+\frac{\Lambda_{2}^{t}}{\mu}-\mathcal{V}_{2}\|_{F}^{2}+\alpha_{2}\|\mathcal{V}_{2}\|_{\ell_{1}},\\ \min_{\mathcal{V}_{3}}\frac{\mu}{2}\|\nabla_{x}(\nabla_{z}\mathcal{X}^{t})+\frac{\Lambda_{3}^{t}}{\mu}-\mathcal{V}_{3}\|_{F}^{2}+\alpha_{3}\|\mathcal{V}_{3}\|_{\ell_{1}},\\ \min_{\mathcal{V}_{4}}\frac{\mu}{2}\|\nabla_{y}(\nabla_{z}\mathcal{X}^{t})+\frac{\Lambda_{4}^{t}}{\mu}-\mathcal{V}_{4}\|_{F}^{2}+\alpha_{3}\|\mathcal{V}_{4}\|_{\ell_{1}},\end{cases}\] which are solved in closed form by elementwise soft-thresholding.
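The closed-form solution in question is the elementwise soft-thresholding operator, which also solves the \(\mathcal{S}\) subproblem below. A minimal sketch, with illustrative variable names:

```python
import numpy as np

def soft(D, tau):
    """Elementwise soft-thresholding: the minimizer of 0.5 * ||D - V||_F^2 + tau * ||V||_1."""
    return np.sign(D) * np.maximum(np.abs(D) - tau, 0.0)

# e.g. the V1 update (grad_x as in the sketch above; Lam1 is the multiplier, mu the penalty):
# V1 = soft(grad_x(X_t) + Lam1 / mu, alpha2 / mu)
```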
We include four HSIs and three multi-spectral images (MSIs) as simulated datasets. The HSIs are _WDC_ (\(256\times 256\times 32\)), _PaviaC_ (\(256\times 256\times 32\)), _PaviaC_ (\(256\times 256\times 32\)), and _Indian_ (\(145\times 145\times 32\)). The MSIs are _Beads_ (\(256\times 256\times 31\)), _Cloth_ (\(256\times 256\times 31\)), and _Cups_ (\(256\times 256\times 31\)) in the CAVE dataset [29]. The noise settings of simulated data are explained as below. **Case 1**: All bands are added with Gaussian noise of standard deviation 0.2. **Case 2**: The Gaussian noise for Case 1 is kept. Besides, all bands are added with impulse noise with sampling rate 0.1. **Case 3**: The same as Case 2 plus 50% of bands corrupted by deadlines. The number of deadlines for each chosen band is generated randomly from 6 to 10, and their spatial width is chosen randomly from 1 to 3. **Case 4**: The same as Case 2 plus 40% of bands corrupted by stripes. The number of stripes in each corrupted band is chosen randomly from 6 to 15. **Case 5**: The same as Case 2 plus both the deadlines in Case 3 and the stripes in Case 4. To test our method in real scenarios, we choose two real-world noisy HSIs _Shanghai_ (\(300\times 300\times 32\)) and _Urban_ (\(307\times 307\times 32\)) as real-world experimental datasets. ### _Experimental Results_ #### Iii-B1 Results The quantitative results on simulated data are reported in Table I. Our H2TF obtains better quantitative results than other competitors. H2TF outperforms other TV and tensor factorization-based methods (LRTDTV, SSTV-LRTF, RCTV, and HLRTF), which shows the stronger representation abilities of H2TF than existing shallow tensor factorizations thanks to the hierarchical structures of H2TF. Some visual results on simulated and real data are shown in Figs. 2-3. H2TF generally outperforms other competitors in two aspects. First, H2TF can more effectively remove heavy mixed noise. Second, H2TF preserves fine details of HSIs better than other methods. The superior performances of H2TF are mainly due to its hierarchical modeling abilities, which help to better characterize fine details of HSI and robustly capture the underlying structures of HSI under extremely heavy noise. More visual results can be found in supplementary. #### Iii-B2 Discussions The HMF is an important building block in H2TF. We test the influence of the layer number of HMF (i.e., \(l\)); see Fig. 4 (a). A suitable layer number of HMF (e.g., \(l=5\)) can obtain both good performances and a lightweight model. The HNT is another important building block. We change the layer number of HNT to test its influence; see Fig. 4 (b). Also, a proper layer number of HNT (e.g., \(m=2\)) can bring good performances. According to Lemma 1, the sizes of factor tensors in HMF, i.e., \(\{r_{d}\}_{d=1}^{4}\), determine the degree of low-rankness. Hence, we test such connections by changing the sizes of factor tensors; see Fig. 4 (c) (Here, \(r_{0}\) and \(r_{5}\) are fixed as the sizes of observed data and \(\{r_{d}\}_{d=1}^{4}\) are selected in \(\{(1,2,4,8),(2,4,8,16),\)\((3,6,12,24),\cdots,(20,40,80,160)\}\)). When the sizes (rank) are too small, the model lacks representation abilities and when the sizes (rank) are too large, the model overfits. Nevertheless, our method is quite robust w.r.t. \(\{r_{d}\}_{d=1}^{4}\). ## IV Conclusions We propose the H2TF for HSI denoising. 
Our H2TF simultaneously leverages the hierarchical matrix factorization and the hierarchical nonlinear transform to compactly represent HSIs with powerful representation abilities, which can more faithfully capture fine details of HSIs than classical tensor factorization methods. Comprehensive experiments validate the superiority of H2TF over SOTA methods, especially for HSI details preserving and heavy noise removal.
2307.01602
Coping with seasons: evolutionary dynamics of gene networks in a changing environment
In environments that vary frequently and unpredictably, bet-hedgers can overtake the population. Diversifying bet-hedgers have a diverse set of offspring so that, no matter the conditions they find themselves in, at least some offspring will have high fitness. In contrast, conservative bet-hedgers have a set of offspring that all have an in-between phenotype compared to the specialists. Here, we use an evolutionary algorithm of gene regulatory networks to de novo evolve the two strategies and investigate their relative success in different parameter settings. We found that diversifying bet-hedgers almost always evolved first, but then eventually got outcompeted by conservative bet-hedgers. We argue that even though similar selection pressures apply to the two bet-hedger strategies, conservative bet-hedgers could win due to the robustness of their evolved networks, in contrast to the sensitive networks of the diversifying bet-hedgers. These results reveal an unexplored aspect of the evolution of bet-hedging that could shed more light on the principles of biological adaptation in variable environmental conditions.
Csenge Petak, Lapo Frati, Melissa H. Pespeni, Nick Cheney
2023-07-04T09:41:15Z
http://arxiv.org/abs/2307.01602v1
# Coping with seasons: evolutionary dynamics of gene networks in a changing environment

###### Abstract.

In environments that vary frequently and unpredictably, bet-hedgers can overtake the population. Diversifying bet-hedgers have a diverse set of offspring so that, no matter the conditions they find themselves in, at least some offspring will have high fitness. In contrast, conservative bet-hedgers have a set of offspring that all have an in-between phenotype compared to the specialists. Here, we use an evolutionary algorithm of gene regulatory networks to de novo evolve the two strategies and investigate their relative success in different parameter settings. We found that diversifying bet-hedgers almost always evolved first, but then eventually got outcompeted by conservative bet-hedgers. We argue that even though similar selection pressures apply to the two bet-hedger strategies, conservative bet-hedgers could win due to the robustness of their evolved networks, in contrast to the sensitive networks of the diversifying bet-hedgers. These results reveal an unexplored aspect of the evolution of bet-hedging that could shed more light on the principles of biological adaptation in variable environmental conditions.

Evolution, Gene regulatory networks, Environmental variability
cancer cells (Beng and Seng, 2013). There has been much theoretical work using mathematical and agent-based models to understand the conditions and scenarios in which the different kinds of bet-hedging strategies could evolve. The general consensus among these studies is that while the two bet-hedging strategies are fundamentally similar in how and when they evolve (Beng and Seng, 2013; Seng, 2013; Seng, 2013), higher frequency of environmental change favors the conservative strategy (Beng and Seng, 2013; Seng, 2013), and stronger selection pressure favors the diversifying BH strategy (Beng and Seng, 2013; Seng, 2013). However, these models didn't include a complex genotype-to-phenotype mapping function, and thus the strategies weren't evolved from scratch. Instead, the probability of producing an alternative phenotype like a diversifying BH was part of the genotype as an explicit evolvable variable (Beng and Seng, 2013).

The phenotypes of biological organisms are determined from their genotypes through a complex nonlinear mapping. Models of gene regulatory networks (GRNs, where nodes represent genes and edges represent activating or repressing directional regulatory interactions) are commonly used as conceptual proxies for genotype-to-phenotype mapping functions. Since the structure of GRNs is evolvable and shapes the kind of phenotypic variation that is available for natural selection, they are often used in studies investigating the evolution of robustness and evolvability (Beng and Seng, 2013; Seng, 2013; Seng, 2013).

In this study, instead of using traditional mathematical models, we used an evolutionary algorithm to model the evolution of GRNs to investigate the emergence and success of diversifying and conservative BH strategies under different conditions. This approach allowed us to find these strategies without biasing or limiting our model; the strategies evolved without an explicit incentive through the evolution of different network structures, which led to some unexpected results.

## 2. Methods

### Genotypes and phenotypes

In most computational models of GRNs, regulatory interactions between genes are simulated by an adjacency matrix \(W\) of size \(N\times N\), representing a weighted, directed graph. The expression levels of the genes making up the phenotype are then calculated through the iterative multiplication of \(W\) by a vector of gene expression levels \(\vec{p}\) with a non-linear transformation.
In our model, the initial vector \(\vec{p}\) (representing gene products coming from the parent, i.e., maternal factors) was a one-hot vector of length \(N=50\) in all experiments. In order to generate a phenotype for each individual, this fixed input vector was iteratively multiplied 100 times by the individual's matrix as follows:

\[\vec{p}_{t+1}=\sigma\left(W\vec{p}_{t}\right)\]
\[\sigma\left(x\right)=\frac{1}{1+e^{-10x}}\]

where \(W\vec{p}\) describes the "strength" of interaction between genes and \(\sigma\) is a sigmoid function. The value of \(\vec{p}\) is bounded between 0 and 1. The individual's phenotype was the value of the gene expression levels \(\vec{p}\) after this iterative process.

Figure 1. A) Maximum fitness (blue) and average standard deviation among offspring of the same parent (orange) decreased over time. Grey vertical line: target switch. B) Phenotypes of the highest fitness parents along with 20 of their offspring. Rows: offspring, columns: genes, colors: expression level. Gene is off – purple, on – yellow, half expressed – turquoise. Example phenotypes in order of appearance: specialist, diversifying BH, specialist, specialist, conservative BH.

### The evolutionary algorithm

At the beginning of each experiment, a population of 1000 haploid, asexual individuals was generated along with the two target vectors \(\vec{A}\) and \(\vec{B}\). \(\vec{A}\) was a series of \(N/2\) 1s followed by \(N/2\) 0s, and \(\vec{B}\) was \(1-\vec{A}\); see **Fig 1B**, first and third example phenotypes. Each experiment started with season A, during which the fitness of the individuals was calculated based on their distance from \(\vec{A}\):

\[f_{A}(\vec{p})=1-\frac{\sum\limits_{i=1}^{N}\lvert\vec{p}_{i}-\vec{A}_{i}\rvert}{N}\]

Every \(G\) generations (the season length) the target changed from \(\vec{A}\) to \(\vec{B}\) or from \(\vec{B}\) to \(\vec{A}\). After each individual's phenotype and fitness was calculated, the individuals were sorted based on fitness and the top \(\mu\) were selected to survive to the next generation and to each generate \((popsize/\mu)-1\) offspring, keeping a constant population size (\(\mu+\lambda\) Evolution Strategy). Offspring were mutated at \(N*m\) positions by adding a random value drawn from a normal distribution \(\sim N(0,0.5)\).

### Experiment

Experiments were run for 75 environmental switches for 6 different season lengths (\(G\)): 20, 50, 100, 300, 400, 500, and for 3 mutation rates (\(m\)) and truncation sizes (\(\mu\)/pop size): 0.05, 0.1, 0.2 for season lengths 50, 300 and 500. Each combination of parameters was repeated 10 times.

In order to calculate the mutational robustness of an evolved diversifying and conservative BH network, we mutated and evaluated the networks cumulatively 20 times. At each step, we quantified how much of a specialist, diversifying, and conservative BH they were based on the following coefficients: the diversifying BH coefficient was the standard deviation of offspring fitnesses given one of the targets, the conservative BH coefficient was the proportion of genes that are half expressed, and the specialist coefficient was the maximum between the average fitness of the offspring calculated for each of the two environments. Code is available at: [https://github.com/Cpetak/coping_with_seasons_GRN](https://github.com/Cpetak/coping_with_seasons_GRN)
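To make the genotype-to-phenotype map and the selection loop described above concrete, the following is a minimal NumPy sketch of the phenotype iteration, the fitness function, and the mutation operator. The variable names, the position of the one-hot input, the initial distribution of \(W\), and the choice of which matrix entries receive mutations are our own illustrative assumptions; the authors' actual implementation is in the repository linked above.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50  # number of genes

def phenotype(W, n_iter=100):
    """Iterate the GRN map p_{t+1} = sigma(W p_t), starting from a one-hot maternal vector."""
    p = np.zeros(N)
    p[0] = 1.0  # one-hot initial gene-product vector (which entry is hot is an assumption)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-10.0 * (W @ p)))  # steep sigmoid, keeps p in (0, 1)
    return p

def fitness(p, target):
    """f(p) = 1 - (mean absolute deviation from the target expression pattern)."""
    return 1.0 - np.abs(p - target).sum() / N

def mutate(W, m=0.1):
    """Perturb N*m randomly chosen entries of W with N(0, 0.5) noise (0.5 read as std. dev.)."""
    W = W.copy()
    idx = rng.choice(W.size, size=int(N * m), replace=False)
    W.flat[idx] += rng.normal(0.0, 0.5, size=idx.size)
    return W

# Seasonal targets: A = first half of the genes on, B = the complement of A.
target_A = np.concatenate([np.ones(N // 2), np.zeros(N // 2)])
target_B = 1.0 - target_A

W0 = rng.normal(0.0, 0.5, size=(N, N))  # illustrative random genotype
print(fitness(phenotype(W0), target_A), fitness(phenotype(W0), target_B))
```

Repeatedly applying `mutate` to an evolved network and re-evaluating the resulting phenotypes corresponds to the cumulative-mutation robustness probe described above.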
## 3. Results

Populations evolved to have a lower decrease in average fitness upon a switch in the environment in all of our experiments. The maximum fitness the population reached by the end of a season also decreased over time, **Fig 1A**. This was due to the slow but steady incorporation of in-between 0.5 values into the phenotype, meaning that instead of genes being on or off, more and more genes were half expressed in the individuals over the generations. Therefore, by the end of most experiments, **conservative BHs** took over the population, **Fig 1B**, rightmost example phenotype.

In the majority of the runs across different settings of the parameters, we also observed the rise and eventual fall of the **diversifying BH**. Our results showed that during approximately the first third of the simulations, for a short period of time after each environmental switch the diversifying BH quickly grew in frequency (highlighted by the increase of the average standard deviation among the offspring of a single parent in **Fig 1A**, and the second example phenotype in **Fig 1B**). However, in most cases this form of bet-hedging was quickly replaced by specialists and conservative BHs. We observed the initial success of the diversifying BH in all experiments where the season length was \(\leq\) 100 generations, as well as at season length 300 with a medium or low mutation rate. At season length 500, this strategy was only observed in combination with a low mutation rate or low truncation size. Apart from a single run (\(G=500\), \(m=0.05\), \(\mu\)/pop size = 0.2), the diversifying strategy was eventually lost. In contrast, the incorporation of in-between values into the phenotype was observed in every experiment. However, as we increased the season length, mutation rate, or truncation size, fewer and fewer genes were half expressed after the 75 environmental switches, i.e., the conservative BH didn't appear or appeared later, while the diversifying BH reached higher frequencies and remained longer in the population. **In summary, a season length of 300 was found to be ideal for the emergence and success of the diversifying BH, and increasing either the mutation rate or the truncation size favored the conservative strategy**.

Next, we looked at the **mutational robustness** of an evolved diversifying and conservative bet-hedger GRN, see **Fig 2**. The conservative BH strategy was found to be considerably robust to random mutations. When the random mutations did change the phenotype, we found that it became even more of a conservative BH and less of a specialist. This robustness could have been the result of the sparse networks underlying the conservative BH phenotype (few edges with large weights, most edges 0 or small weight, data not shown). On the other hand, the diversifying BH drastically lost its ability to produce alternative phenotypes. In most replicate experiments the mutated GRNs produced only one type of specialist after a few rounds of mutations. These networks were less sparse, though the degree distribution similarly followed a power law.

## 4. Discussion

In this study, we investigated when and how bet-hedging strategies evolve in a frequently changing environment. In contrast to previous work, we used an agent-based evolutionary algorithm of gene regulatory networks. This allowed us to evolve diversifying and conservative BHs without explicitly selecting for them or hard-coding the strategies. We found that across different settings of the frequency of environmental change, strength of selection and mutation rate, **the diversifying BH evolved first, followed by the conservative BH**.
The three parameters mentioned above only changed the degree to which these strategies evolved, in a manner in line with the results of previous studies. A higher frequency of environmental change favored the conservative strategy (Bartos et al., 2016; Goyal et al., 2017), while a smaller truncation size favored the diversifying BH strategy (Bartos et al., 2016; Goyal et al., 2017).

At the beginning of every simulation, in the first environment, the populations quickly adapted and found the optimal solution. When the environment changed, suddenly the previously least fit individuals got to survive and reproduce while the previously fit lineages went extinct. After a huge drop in average fitness, the population adapted again to find the new fitness peak. The effect of a new mutation that causes the individual to be a bet-hedger, either by having an in-between phenotype or by having some proportion of the offspring be the opposite phenotype, is at first disadvantageous. If such a mutation appears, it needs to persist in the population, despite being selected against, until the environment changes. However, when the environment does change, the bet-hedger has a huge advantage over the specialist. The fitness of the conservative BH remains the same in-between value, which is now much higher than that of the specialist of the previous environment. Similarly, since the diversifying BH produces the alternative phenotype in some proportion of its offspring, those individuals will now survive and reproduce. This explains why we saw the bet-hedging strategies increase in frequency right after the environment changed (**Fig 1**). While the conservative BH and specialist strategies are purely exploitative, diversifying BHs can be thought of as an interesting solution to the tension between exploration and exploitation, in that there is structure to the variation they create.

Our hypothesis for why we saw the initial success of the diversifying BH followed by its replacement by the conservative BH has to do with **how quickly the alternative strategies can be found and the mutational robustness of the evolved GRNs.** As the populations traverse the genotype space over the generations, going from parts of the landscape that produce one target phenotype to parts that produce the other phenotype, individuals can end up on the border of this high-dimensional space where a few mutations push them into the other phenotype. Thus, the diversifying strategy could have been quickly found and selected for in our simulations, while the genotype that corresponds to an in-between phenotype could have been further away in the genotype space. Despite this advantage, our results suggest that the diversifying strategy is inherently more unstable. Offspring of a diversifying BH could easily become a specialist for the current environment due to random mutations, which then outcompetes the bet-hedger unless the environment switches right away. In contrast, once the conservative phenotype is found, it is robust to mutations; thus, conservative BHs are less likely to produce specialists that would drive them to extinction (**Fig 2**).

In conclusion, during adaptation to environmental variability, we observed the evolution of GRNs that were capable of generating the two alternative optimal phenotypes given random mutations (diversifying BHs), even without the implementation of gene duplication and deletion that was used in previous studies that found the evolution of this behavior in GRNs [5, 13].
We also saw the evolution of conservative BHs, as they outcompeted the diversifying strategy in most of our experiments. We argue that this dynamic can be explained by the robustness of the two strategies.

## Acknowledgments

This material is based upon work supported by the 2021-2022 University of Vermont Dr. Roberto Fabri Fialho Research Award to C.P. and the National Science Foundation Grant No. 2008413. Computations were performed on the Vermont Advanced Computing Core supported in part by NSF Award No. 1827314.
2305.17619
AI Coach Assist: An Automated Approach for Call Recommendation in Contact Centers for Agent Coaching
In recent years, the utilization of Artificial Intelligence (AI) in the contact center industry is on the rise. One area where AI can have a significant impact is in the coaching of contact center agents. By analyzing call transcripts using Natural Language Processing (NLP) techniques, it would be possible to quickly determine which calls are most relevant for coaching purposes. In this paper, we present AI Coach Assist, which leverages the pre-trained transformer-based language models to determine whether a given call is coachable or not based on the quality assurance (QA) questions asked by the contact center managers or supervisors. The system was trained and evaluated on a large dataset collected from real-world contact centers and provides an effective way to recommend calls to the contact center managers that are more likely to contain coachable moments. Our experimental findings demonstrate the potential of AI Coach Assist to improve the coaching process, resulting in enhancing the performance of contact center agents.
Md Tahmid Rahman Laskar, Cheng Chen, Xue-Yong Fu, Mahsa Azizi, Shashi Bhushan, Simon Corston-Oliver
2023-05-28T03:29:59Z
http://arxiv.org/abs/2305.17619v1
AI Coach Assist: An Automated Approach for Call Recommendation in Contact Centers for Agent Coaching ###### Abstract In recent years, the utilization of Artificial Intelligence (AI) in the contact center industry is on the rise. One area where AI can have a significant impact is in the coaching of contact center agents. By analyzing call transcripts using Natural Language Processing (NLP) techniques, it would be possible to quickly determine which calls are most relevant for coaching purposes. In this paper, we present "AI Coach Assist", which leverages the pre-trained transformer-based language models to determine whether a given call is coachable or not based on the quality assurance (QA) questions asked by the contact center managers or supervisors. The system was trained and evaluated on a large dataset collected from real-world contact centers and provides an effective way to recommend calls to the contact center managers that are more likely to contain coachable moments. Our experimental findings demonstrate the potential of AI Coach Assist to improve the coaching process, resulting in enhancing the performance of contact center agents. ## 1 Introduction AI has the potential to revolutionize many industries, including the contact center industry. With the growing demand for high-quality customer service, contact centers are constantly seeking ways to improve their processes and enhance their agents' performance. One way to achieve this goal is by providing effective coaching and feedback to agents, which can help them identify areas of improvement and develop the necessary skills to provide exceptional customer service. As a common practice, contact center managers or supervisors manually select call recordings to listen in, and grade agents' performance using a rubric that contains questions such as "_did the agent greet the customer by name_" or "_did the agent properly resolve the customer issue_" to score the call in order to verify if the agent is following the company's preferred protocol. The grades given by the managers along with their comments are then shared with the agents to improve their performance. However, with the large volume of calls that contact centers receive, it is very challenging for managers or supervisors to determine which calls are most important for agent coaching. Thus, the traditional approaches to randomly select calls for agent coaching has the following limitations: * **Time-consuming process:** Coaching agents can be a time-consuming process, particularly for managers and supervisors who must manually review large numbers of calls to identify which calls are most relevant for coaching. * **Inefficient use of resources:** Without an efficient and effective process for determining which calls are most relevant for coaching, resources may be wasted on calls that are not critical for improving agent performance. This is where NLP could be useful. By analyzing call transcripts using NLP models, it could be possible to recommend calls to the contact center managers/supervisors that are most relevant for coaching purposes. This will lead to an improved coaching experience by prioritizing the calls for analysis that are more likely to contain coachable moments, resulting in saving time for the contact center managers as well as improving agent performance, ultimately leading to better customer satisfaction. 
For the purpose of improving real-world contact centers, we present the AI Coach Assist system to assist contact center managers or supervisors by suggesting calls that could be more useful for agent coaching. In this paper, we explore the concept of our proposed AI Coach Assist system, which leverages the advantage of fine-tuning a pre-trained transformer-based language model Devlin et al. (2019); Sanh et al. (2019); Liu et al. (2019); Lan et al. (2020); Zhong et al., 2022). Moreover, we provide a detailed overview of its development process (implementation and preparation of a balanced dataset to avoid biases), as well as our experimental findings. In addition, we demonstrate how it could be productionized in real-world contact centers to assist managers/supervisors. Note that our model does not automate the scoring of employee performance or replace human review. Instead, our model is intended to help contact center supervisors by recommending calls for coaching their employees instead of the traditional random sampling of calls. ## 2 Related Work The significant performance gain achieved via leveraging transformer-based language models Vaswani et al. (2017); Devlin et al. (2019); Liu et al. (2019); Lan et al. (2020) in a wide range of NLP tasks in recent years has also led to the use of transformer-based models in the contact center industry Laskar et al. (2022); Chen et al. (2022); Khasanova et al. (2022). The successful deployment of these models in industries has helped many organizations to enhance their processes, resulting in improved customer satisfaction. In recent years, several studies Fu et al. (2022) have explored the potential of AI-powered call analysis (e.g., entity recognition, sentiment analysis, etc.), along with providing real-time assistance to contact center agents. In addition to these studies, several commercial solutions have been developed that offer AI-powered call analysis and AI assistance for agents in contact centers. Some of these solutions also offer real-time feedback to agents during calls123, allowing them to adjust their behavior and improve their performance in real-time. However, to the best of our knowledge, there is no prior commercial application that assists contact center managers by suggesting calls that could be the most useful to coach agents. Footnote 1: [https://cloud.google.com/solutions/contact-center](https://cloud.google.com/solutions/contact-center), accessed in Feb 2023. Footnote 2: [https://cresta.com/product/agent-assist/](https://cresta.com/product/agent-assist/), accessed in Feb 2023. One potential approach for this purpose could be the use of automatic call recommendation, where calls are analyzed using NLP techniques and suggested to the contact center managers based on various factors, such as agents' behavior, issue resolution, customer satisfaction, sales success, etc. These suggested calls can then be analyzed by the managers for coaching purposes to provide relevant feedback to agents. In this regard, we propose _AI Coach Assist_, a system that leverages the transformer architecture to effectively analyze the full call transcripts in contact centers and recommends contact center managers with calls that are more likely to contain coachable moments for a given query. In the following section, we describe how we construct a dataset, which we denote as _QA Scorecard_, to train and evaluate our proposed AI Coach Assist system. ## 3 The QA Scorecard Dataset We collected our data from real-world contact centers. 
The dataset consists of customer-agent call conversation transcripts generated using Automatic Speech Recognition (ASR) systems, along with annotations indicating whether a call is coachable or not. The process of annotating the dataset was carefully designed and implemented, as the annotations were performed by real-world contact center managers and supervisors who analyzed the whole conversation/transcript. In this way, we ensure the high quality of the dataset. The data annotation works as follows, the managers/supervisors assign a score to the call based on the performance of the agent for a particular question. We consider a call as coachable for a particular question if the call achieves less than 50% scores, otherwise, we consider the call for that particular question as not coachable. The dataset was collected over a period of one year and includes a diverse range of call types from different industries, with a variety of customer interactions, reflecting the real-world complexities of the contact center industry. The resulting dataset consists of a large number of call transcripts and annotations, providing a robust representation of real-world customer-agent interactions. Note that a total of 58 questions are curated, which are distributed among training, validation, and test sets. While constructing the training, validation, and test splits, we observe that the class distribution (whether coachable or not coachable) for many question-transcript pairs was imbalanced. Thus, to ensure an unbiased dataset (as well as to avoid model overfitting), for each question, we ensured that the ratio between _coachable_ and _not coachable_ classes (or vice-versa) to be at most 1:2. In Table 1, we describe the distribution of our dataset based on our training, validation, and test set. Meanwhile, to evaluate the performance of _AI Coach Assist_ based on the type of the questions, we also categorize the questions into 11 types using human annotators. We show the question types with example questions for each type in Table 2. ## 4 Our Proposed Approach We treat the AI Coach Assist model as a text classification model that combines the query/question given by the contact center manager or supervisor with the call transcript to predict whether a given call is coachable or not. Due to the recent success of fine-tuning pre-trained transformer models for text classification Devlin et al. (2019); Liu et al. (2019); Lan et al. (2020), we also leverage the pre-trained language models based on the transformer architecture for this task. As we are doing text classification instead of generation, we give input data to the pre-trained language model as follows (see Figure 1): at first, we create a text sequence by concatenating the question and the call transcript. Then, this concatenated text sequence is given as input to the language model to learn the contextual relationship between the sentences. The pre-trained transformer language model is fine-tuned to output a probability score for each input sequence, indicating the likelihood that the call is coachable or not, for the given question. Whether a question-transcript pair is _coachable_ or _not coachable_ is determined based on the probability score of the class having the higher score. Since our objective is to build the AI Coach Assist system for real-world contact centers, we consider the following two cases while selecting the pre-trained language models: (i) Utilize a model to ensure high efficiency:We choose DistilBERT Sanh et al. 
(2019) for this scenario. DistilBERT is a distilled version of BERT Devlin et al. (2019), designed to be smaller and faster while retaining a similar level of performance. Despite its smaller size, DistilBERT has been shown to perform similarly to BERT on many NLP tasks, making it a suitable alternative for many NLP applications. This makes it a popular choice for real-world scenarios where computational resources are limited but the preference is to deploy a fast and optimized model in production.

(ii) Utilize a model to ensure higher accuracy: For this purpose, we leverage the DialogLED model Zhong et al. (2022), which was pre-trained on long dialog conversations, having more similarities with our customer-agent conversation dataset. Though in comparison to DistilBERT, the DialogLED model may require higher computational resources for production deployment, it fulfills our criteria of using a model that may provide higher accuracy for being pre-trained on long dialog conversations, mimicking the customer-agent conversations in the real world. In addition, DialogLED can also process long text sequences, contrary to the 512-token limit of most transformer-based models Devlin et al. (2019); Sanh et al. (2019); Liu et al. (2019); Lan et al. (2020). This makes DialogLED a suitable choice to build the AI Coach Assist system since the average length of the transcripts in our QA Scorecard dataset is longer than 512 words.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline **Split** & **Total Samples** & **Not Coachable** & **Coachable** & **Avg. Question Length** & **Avg. Transcript Length** \\ \hline Training & 12065 & 6521 & 5544 & 9.77 & 659.53 \\ \hline Validation & 1653 & 891 & 762 & 9.62 & 664.55 \\ \hline Test & 3435 & 1855 & 1580 & 9.77 & 727.77 \\ \hline \hline \end{tabular} \end{table} Table 1: Data distribution on each split (train/valid/test) based on the total number of question-transcript pairs, _coachable_ and _not coachable_ labels, and the average length of questions and transcripts.

\begin{table} \begin{tabular}{l l} \hline \hline **Question Type** & **Example Question** \\ \hline Account Verification & _Did the agent verify the customer’s email address?_ \\ \hline Addressing Customer & _Did the agent use the customer’s name appropriately?_ \\ \hline Behavioral & _Did the agent show proper empathy statements?_ \\ \hline Closing & _Did the agent properly end the call?_ \\ \hline Providing Complete Information & _Did the agent mention the payment terms in detail?_ \\ \hline Customer Identification & _Did the agent verify the customer’s information?_ \\ \hline Customer Satisfaction & _Was the customer happy?_ \\ \hline Greeting & _Did the agent properly greet the customer?_ \\ \hline Information Collection & _Did the agent collect all necessary information from the customer?_ \\ \hline Issue Identification & _Could the agent properly identify the issue?_ \\ \hline Issue Resolution & _Could the agent resolve the issue?_ \\ \hline \hline \end{tabular} \end{table} Table 2: Example Questions based on Question Types.

## 5 Experiments

In this section, we first present the experimental settings and the implementation details of our proposed model. Then we discuss our experimental findings in detail.

### Implementation

For the DialogLED model, we adopt the DialogLED-base4 model from the HuggingFace library Wolf et al. (2020). Specifically, we used the _LEDForSequenceClassification_ class, which adds a classification head on top of the LED (Longformer-Encoder-Decoder) model Beltagy et al.
(2020). We ran our experiments in GCP5 on an _n1-standard-32_ machine with 4 _Nvidia T4_ GPUs. A total of \(3\) epochs were run, with the training batch size set to \(2^{6}\), and the maximum sequence length set to \(1024\). The learning rate was set to \(2e-5\). For the DistilBERT model, we leverage its base model from HuggingFace7. We also set the learning rate for DistilBERT to \(2e-5\) and ran 3 epochs with the training batch size set to 16 while the maximum sequence length set to \(512\). Note that for both models, these hyperparameters were tuned based on the performance in the validation set. The best-performing method in the validation set was then used for evaluation on the test set. Footnote 4: [https://huggingface.co/MingZhong/DialogLED-base-16384](https://huggingface.co/MingZhong/DialogLED-base-16384) Footnote 5: [https://console.cloud.google.com/](https://console.cloud.google.com/) Footnote 6: Larger batch size leads to _Out of GPU Memory_ errors. Footnote 7: [https://huggingface.co/distilbert-base-cased](https://huggingface.co/distilbert-base-cased) ### Results & Discussions In this section, we first present the results of our base models. Then we conduct some ablation tests and also compare our proposed models with some classical machine learning baselines to further validate the effectiveness of our approach. Finally, we study the advantages and limitations of our model based on various question types. Figure 1: An overview of our proposed AI Coach Assist model. Given a query and a transcript, the transformer-based language model will determine whether a call is coachable or not. For the given query: “Did the agent properly greet the customer?”, on the left (a), we show an example transcript where the agent did proper greeting, i.e., mentioned his/her name as well as the company name. On the right (b), we show an example transcript where the agent did not properly greet the customer as the agent name and the company name were not mentioned. #### 5.2.1 Performance of the Base Models In this section, we compare the performance of using DialogLED and DistilBERT as the base model for the AI Coach Assist system. Though we consider _precision_ and _accuracy_ as the main criteria for the production deployment of this system, for this performance evaluation we also consider _recall_ and _f1_ in addition to _precision_ and _accuracy_. We observe from our results given in Table 3 that the DialogLED model outperforms its counterpart DistilBERT model in terms of all metrics (_precision, recall, f1, and accuracy_). The DialogLED-based model also ensures scores above 60 in all 4 metrics. Moreover, in terms of accuracy and f1, it achieves a score of 70.52 and 65.76, respectively. Meanwhile, both models achieve comparatively lower recall scores, noticeably the DistilBERT model achieves a recall score even below 60. However, in our criteria for production deployment, a highly precise model is more important, with both DialogLED and DistilBERT achieving higher precision scores (67.92 and 62.53, respectively) in comparison to their recall scores (63.72 and 58.39, respectively). The superior performance using DialogLED over DistilBERT in all these metrics demonstrates the effectiveness of fine-tuning a language model for contact center telephone transcripts that is pre-trained on dialog conversations. 
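For concreteness, the pairwise fine-tuning setup described in Section 5.1 might look roughly like the sketch below. This is an outline rather than the authors' training script: the checkpoint name follows Footnote 4, the learning rate and maximum sequence length follow the values reported above, and the single (question, transcript, label) example as well as the label ordering are invented for illustration.

```python
import torch
from transformers import AutoTokenizer, LEDForSequenceClassification

MODEL = "MingZhong/DialogLED-base-16384"  # checkpoint referenced in Footnote 4
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = LEDForSequenceClassification.from_pretrained(MODEL, num_labels=2)

def encode(question, transcript, max_len=1024):
    # The question and the call transcript are concatenated as a text pair (cf. Figure 1).
    return tokenizer(question, transcript, truncation=True,
                     max_length=max_len, padding="max_length", return_tensors="pt")

# One illustrative training example; label 1 = coachable, 0 = not coachable (our ordering).
batch = encode("Did the agent properly greet the customer?",
               "Agent: Hello, thank you for calling. How can I help you today? Customer: ...")
labels = torch.tensor([1])

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
optimizer.zero_grad()
out = model(**batch, labels=labels)  # cross-entropy loss over the two classes
out.loss.backward()
optimizer.step()

# At inference time, the class with the higher probability decides the prediction.
model.eval()
with torch.no_grad():
    probs = torch.softmax(model(**batch).logits, dim=-1)
print(probs)
```

The DistilBERT variant described above would follow the same pattern with `AutoModelForSequenceClassification`, the `distilbert-base-cased` checkpoint (Footnote 7), and `max_length=512`.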
Moreover, since customer-agent conversations can also be quite long and may not fit within the 512 tokens limit of DistilBERT-like models (as shown in Table 1), the ability of DialogLED to process input text of larger size may also help it to achieve better performance. In the following section, we conduct some ablation studies to further investigate the effectiveness of our models. #### 5.2.2 Ablation Studies In this section, we conduct some ablation studies to investigate our approach of concatenating the query and the transcript as input for our transformer-based language models (DialogLED/DistilBERT), as well as how the sequence length impacts the overall performance of DialogLED. We show the results from our ablation study in Table 4. For our first ablation test, we remove the query from the input text to better study the relationship between the query and the transcript. We find that for both models the accuracy is dropped by a great margin if the query is removed. The removal of the query from the input text leads to an accuracy drop of 10.90% for DialogLED and 8.03% for DistilBERT. In terms of precision, the performance is deteriorated by 14.96% and 5.13%, for DialogLED and DistilBERT, respectively. These findings demonstrate that the model learns to predict the _coachable_ and _not coachable_ moments in transcripts for the given query based on the concatenated representation of the query and the transcript. For our other ablation test, we reduce the input sequence length from our DialogLED model. We find that reducing the input sequence length from 1024 to 512 and 256 leads to a huge drop in accuracy (dropped by 4.71% and 9.76%, respectively) and precision (dropped by 2.81% and 7.02%, respectively). This demonstrates the effectiveness of using the DialogLED model which can process longer input sequences. Moreover, we observe that when the size of the input sequence length for DialogLED is 512 (same as DistilBERT), it still outperforms DistilBERT in terms of both accuracy and precision. This further gives an implication that the utilization of a model that is pre-trained on conversational data is more \begin{table} \begin{tabular}{l c c} \hline \hline **Model** & **Precision** & **Accuracy** \\ \hline DialogLED & 67.92 & 70.52 \\ - _without query_ & 57.76 & 62.83 \\ - _reduced sequence length = 512_ & 66.01 & 67.52 \\ - _reduced sequence length = 256_ & 63.15 & 63.64 \\ \hline DistilBERT & 62.53 & 66.25 \\ - _without query_ & 59.32 & 61.22 \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation Tests on the QA Scoreccard dataset. \begin{table} \begin{tabular}{c c c c} \hline \hline **Model** & **Precision** & **Recall** & **F1** & **Accuracy** \\ \hline DialogLED & 67.92 & 63.72 & 65.76 & 70.52 \\ \hline DistilBERT & 62.53 & 58.39 & 60.39 & 66.25 \\ \hline \hline \end{tabular} \end{table} Table 3: Performance Comparisons between the AI Coach Assist models on our QA Scoreccard dataset. \begin{table} \begin{tabular}{c c c} \hline \hline **Model** & **Precision** & **Accuracy** \\ \hline TF-IDF + SVM & 57.9 & 57.7 \\ \hline TF-IDF + Decision Tree & 58.0 & 60.8 \\ \hline TF-IDF + Random Forest & 59.3 & 60.1 \\ \hline TF-IDF + Naive Bayes & 52.5 & 53.3 \\ \hline DialogLED & 67.9 & 70.5 \\ \hline DistilBERT & 62.5 & 66.3 \\ \hline \hline \end{tabular} \end{table} Table 5: Performance Comparisons between some baselines and proposed models on the QA Scoreccard dataset. helpful to improve the performance of the Ai Coach Assist system. 
#### 5.2.3 Performance against other Baselines

In this section, we compare our proposed models for the AI Coach Assist system, _DialogLED_ and _DistilBERT_, with some baseline models to further study their effectiveness. Below, we describe the baseline models that we use for comparisons:

**TF-IDF with Classical Machine Learning Models as Baselines:** We use TF-IDF as keyword-based features for some classical machine learning models, such as Support Vector Machine (SVM) [1], Random Forest [14], Decision Tree [15], and Naive Bayes [16], as our baseline models for comparisons.

We show our experimental results in Table 5 and observe that both of our proposed models (the DialogLED model, which obtains the highest accuracy, and the DistilBERT model, which ensures high efficiency) for AI Coach Assist outperform all TF-IDF feature-based classical machine learning approaches. On average, the DistilBERT model and the DialogLED model outperform the baseline models by 8.97% and 12.48% in terms of precision, and by 16.20% and 17.78% in terms of accuracy, respectively.

#### 5.2.4 Performance based on Question Types

In this section, we conduct an in-depth analysis of the proposed models for AI Coach Assist: DialogLED and DistilBERT. For our analysis, we investigate their performance on different question types. In Figure 2, we show their accuracy on each question type. We observe that for most question types, DialogLED outperforms DistilBERT (the only exceptions are the following question types: _Addressing Customer_, _Behavioral_, and _Customer Identification_). Among the questions where DialogLED outperforms DistilBERT (7 out of 11), the highest performance gains are in the _Greeting_ and _Account Verification_ question types. For _Greeting_, it achieves the best accuracy with a score of 84.62, while for _Account Verification_, the accuracy is 83.33. Meanwhile, even though the DialogLED model achieves an accuracy of at least 60 for all question types, the DistilBERT model achieves quite low scores for some question types (e.g., only 57% for the _Closing_ and _Customer Satisfaction_ question types). The DialogLED model finds the _Behavioral_ and the _Issue Resolution_ question types most challenging, as its accuracy drops below 70 there. Among these two question types, the _Behavioral_ question type achieves the lowest accuracy score of 63.59, followed by _Issue Resolution_, with an accuracy of 64.0.

Figure 2: Performance of DialogLED and DistilBERT on our QA Scorecard dataset based on each Question Type.

## 6 Usage in Real World Contact Centers

In this section, we discuss how the AI Coach Assist system can be used in real-world contact centers. Since determining which calls are coachable is not required in real time, but rather after the call is over, the inference speed of the model may not be an issue in this regard. Moreover, for contact centers where computing resources are not a problem, our DialogLED-based model could be used, as it achieves better accuracy than its DistilBERT counterpart. Since the size of the trained DialogLED model is 648 MB, the DistilBERT model, which takes only 263 MB, could be used in scenarios where computing resources are limited. We also prototype our proposed system for usage in a real contact center.
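As a sketch of how such a prototype might surface calls to a supervisor, the snippet below scores candidate calls with the fine-tuned classifier from the previous sketch and returns a capped, ranked list for a given question. The call-record fields (`call_id`, `agent_id`, `transcript`) and the per-agent cap are illustrative choices; the cap reflects the mitigation of suggesting only a limited number of calls per agent, discussed in the Ethics Statement below.

```python
def coachable_probability(question, transcript):
    """Probability that a call contains a coachable moment for the given question."""
    model.eval()
    with torch.no_grad():
        logits = model(**encode(question, transcript)).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

def recommend_calls(question, calls, top_k=5, max_per_agent=2):
    """Rank candidate calls for manual review by a supervisor, limiting suggestions per agent."""
    scored = sorted(calls,
                    key=lambda c: coachable_probability(question, c["transcript"]),
                    reverse=True)
    picked, per_agent = [], {}
    for call in scored:
        if per_agent.get(call["agent_id"], 0) < max_per_agent:
            picked.append(call["call_id"])
            per_agent[call["agent_id"]] = per_agent.get(call["agent_id"], 0) + 1
        if len(picked) == top_k:
            break
    return picked
```

The returned calls are only suggestions; as described next, the supervisor still listens to the call or reads the transcript before drawing any conclusion about the agent.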
Since directly predicting a score to a call might impact the evaluation of agent performance in contact centers, as such metrics could be used by managers for performance evaluation of agents, in our prototype we rather recommend a list of calls to the managers that are highly likely to contain coachable moments for a particular question type. Thus, instead of using those calls for a direct performance evaluation of agents, the managers still require to listen to the conversation or read the ASR-generated transcript. More particularly, using our proposed AI Coach Assist, we help the managers with a list of calls that they may use to manually grade agent performance, contrary to the existing methods of random call selection. In this way, the proposed prototype of AI Coach Assist may not cause any ethical concerns. ## 7 Conclusion In this paper, we presented _AI Coach Assist_, a transformer-based pairwise sentence classification model that combines the query/question given by the contact center manager or supervisor with the call transcript to determine which calls are most relevant for coaching purposes. The evaluation results demonstrate the potential of AI Coach Assist to transform the way contact centers coach their agents, providing an efficient and effective method that recommends calls that are the most relevant for coaching purposes. This will help to improve the coaching process and enhance the performance of contact center agents, leading to better customer satisfaction. Note that our model is intended to help contact center supervisors to be more effective in coaching their employees by improving over the random sampling of calls. The model does not automate the evaluation of employee performance by replacing human review. In the future, we will study how to improve the performance on the question types where the model performs poorly. We will also study how to utilize other question-answering models (Laskar et al., 2020, 2022d) or leverage generative large language models (OpenAI, 2023; Anil et al., 2023) that can point out the reason for a call being _coachable_ and _not coachable_. ### Limitations As our models are trained on customer-agent conversations in English, they might not be suitable to be used in other domains, types of inputs (i.e written text), or languages. Moreover, as we demonstrated in the paper that the model has limitations in certain question types, the user needs to decide which question types to be used when deploying the system in production. Though the DialogLED model performs better, it requires higher computing resources. On the contrary, even though the DistilBERT model consumes lower memory, its performance is poorer than the DialogLED model. ### Ethics Statement * **Data Annotation:** Since the calls are annotated by real-world contact center managers/supervisors, we did not require any additional compensation for this annotation. Rather, we develop a system where the managers/supervisors put their scores for different call conversations in their contact centers. To map the questions to different question types, Labelbox8 was used for data annotation and the annotators were provided with adequate compensation (above minimum wages). Footnote 8: [https://labelbox.com/](https://labelbox.com/) * **Privacy:** There is a data retention policy available so that the call transcripts will not be used if the user does not give permission to use their call conversations for model training and evaluation. 
To protect user privacy, sensitive data such as personally identifiable information (e.g., credit card number, phone number) were removed while collecting the data. * **Intended Use by Customers:** Note that our model is intended to help contact center supervisors to be more effective in coaching their employees by improving over the random sampling of calls. The model does not automate the scoring of employee performance or replace human review. * **Prevention of Potential Misuses:** Below, we discuss some of the potential misuses of the system and our suggestions to mitigate them: _(i) Automatic Performance Reviews of Agents by Considering all Recommended Calls as Bad Calls:_ One potential misuse of the system could be the evaluation of agent performance by considering all recommended calls as bad calls without any manual review of the call. To mitigate this, we can do the following: * Contact center supervisors that use this system must be properly instructed that this system does not determine whether an agent performs badly in a certain call. Rather, the intention of the proposed system is to only suggest a set of calls to the managers (instead of randomly selecting calls) that they need to manually review to determine whether the agent requires coaching or not. _(ii) Considering Agents with More Recommended Calls as an Indicator to Poorer Agent Performance:_ Another potential misuse of the system is if contact center managers start considering that if more calls are recommended by our system for a particular agent, then the agent is more likely to perform poorly. To prevent this, we can do the following: * We may suggest some positive calls as well as negative calls to the managers. Here, positive calls are the ones that our system rates with a very high score and categorizes as not requiring any coaching. Whereas negative calls are the ones that our system rates with quite lower scores and classifies as coaching required. To avoid any misuse of the suggested calls, the proposed AI Coach Assist system should never reveal to the managers whether a call requires coaching or not. Rather it should only allow the managers to make the final decision on whether the call is a positive call or a negative call. Once the suggested calls are manually reviewed by the managers and categorized as positive by them, these calls can then be used to train other agents that require improvement in certain areas, whereas a call categorized as negative can be used to train a particular agent who did not perform well (i.e., requires coaching) in that specific call. * In addition, to avoid suggesting too many calls for the same agent, the system may suggest only a certain number of calls (not above a pre-defined limit) per agent to the managers. _(iii) Using Bad Questions For Model Development:_ In some contact centers, there may be questions that are used for evaluating agent performance which may contain any potential biases toward a specific race or gender. We can prevent this in the following way: * The system should only respond to a pre-selected set of questions that were used during the training phase of the model. Any questions that may pose any ethical concerns or potential biases should not be used while training the model such that these questions can also be automatically ignored during the inference phase. * **License:** We maintained the licensing requirements accordingly while using different tools (e.g., HuggingFace). 
## Acknowledgements We appreciate the reviewers for their excellent review comments that helped us to improve the quality of this paper. We would also like to thank **Shayna Gardiner** and **Elena Khasanova** for reviewing the ethical concern of the proposed system.
2305.08739
On the angular control of rotating lasers by means of line calculus on hyperboloids
We propose a new paradigm for modelling and calibrating laser scanners with rotation symmetry, as is the case for Lidars or for galvanometric laser systems with one or two rotating mirrors. Instead of bothering about the intrinsic parameters of a physical model, we use the geometric properties of the device to model it as a specific configuration of lines, which can be recovered by a line-data-driven procedure. Compared to universal data-driven methods that train general line models, our algebraic-geometric approach only requires a few measurements. For example, a galvanometric laser scanner with two mirrors is modelled as a grid of hyperboloids represented by a grid of 3x3 lines, providing a new type of lookup table: containing not more than 9 elements, lines rather than points, where we replace the approximating interpolation with exact affine combinations of lines. The proposed method is validated in a realistic virtual setting. As a collateral contribution, we present a robust algorithm for fitting ruled surfaces of revolution on noisy line measurements.
Rudi Penne, Ivan De Boi, Steve Vanlanduit
2023-05-05T11:46:44Z
http://arxiv.org/abs/2305.08739v1
# On the angular control of rotating lasers by means of line calculus on hyperboloids ###### Abstract We propose a new paradigm for modelling and calibrating laser scanners with rotation symmetry, as is the case for Lidars or for galvanometric laser systems with one or two rotating mirrors. Instead of bothering about the intrinsic parameters of a physical model, we use the geometric properties of the device to model it as a specific configuration of lines, which can be recovered by a line-data-driven procedure. Compared to universal data-driven methods that train general line models, our algebraic-geometric approach only requires a few measurements. For example, a galvanometric laser scanner with two mirrors is modelled as a grid of hyperboloids represented by a grid of \(\mathbf{3}\times\mathbf{3}\) lines, providing a new type of lookup table: containing not more than 9 elements, lines rather than points, where we replace the approximating interpolation with exact affine combinations of lines. The proposed method is validated in a realistic virtual setting. As a collateral contribution, we present a robust algorithm for fitting ruled surfaces of revolution on noisy line measurements. **Keywords:** Line geometry, galvanometric laser scanners, line variety sensor models, data-driven calibration, hyperboloid fitting, Plucker coordinates ## 1 Introduction The intrinsic calibration of a sensor is typically done by determining a number of parameters in some proposed sensor model that aims to represent the physical reality of the involved hardware Sturm et al (2011); Hartley and Zisserman (2004). Often, this strategy implies non-flexible models with unstable parameter values (Lu and Payandeh (2010); ZC (1993), Chapter 3 in Van Hamme (2016)). In spite of its rich tradition and literature, calibration remains a tedious and time consuming task, to be repeated when conditions change, not always obtaining the required accuracy. The shortcomings of the calibration by matching a rigid physical device model have recently been admitted by leading scientists in the field Schops et al (2019). The inaccuracies and instabilities inherent to the current calibration procedures are troublesome in applications where intrinsic localization, registration and sensor fusion are involved Dimitrievski et al (2019). And last but not least, intrinsic calibration procedures based on a physical model cope with the determination of physical parameters that can rarely be measured directly, and are moreover rather virtual than physical, due to the idealised abstract nature of the model. An alternative strategy is the so-called universal-model-based method, which considers a sensor as a black box that connects its control variables (camera pixel coordinates, mirror angles for laser reflection,...) to the observed world. The calibration of this mapping is established by a data-driven procedure, requiring the availability of sufficiently large datasets that allow interpolation Schops et al (2019); Peternell and Pottmann (1999), or lookup tables Cao et al (2020), or the training of neural networks or Gaussian processes Roitsch et al (2019); Wissel et al (2015); Sagan et al (2022); Mallasto and Feragen (2018); Boi et al (2022). An important issue of this approach is that it requires world point clouds with reliable coordinates, which significantly cover the work space. 
In this article we make use of geometric sensor models, assigning a world line to each sensor query Grossberg and Nayar (2005); Ye and Yu (2014); Ponce et al (2017a); Breiding et al (2018). Lines naturally represent the way many sensors observe the world (beams of light). Our approach still assumes a specific model, given by a _line variety_ (in the algebraic geometric sense), but it bypasses the intrinsic physics of the device. The lines that belong to this line model can be obtained by direct measurements, as opposed to the parameters of a physical model-based calibration. A practical drawback of line measurements might be that they require determining the position of several collinear points. However, this extra work is rewarded with the possibility of reducing noise and outliers in the point measurements by means of robust line fitting Fischler and Bolles (1981a). Furthermore, a line model provides stable transformations to other reference frames (extrinsic calibration Sels et al (2018); Fusiello et al (2015); Miraldo and Araujo (2014)). The calibration algorithms as presented in Miraldo and Araujo (2015); Trager et al (2017a); Ponce et al (2017a) are line-model-based, but they still use (non-obvious) parameter models and appear to be too complicated for practical purposes. Alternatively, some authors avoid restrictions on the involved line variety, calibrating a universal line model through a data-driven learning process Tu and Zhang (2018); De Boi et al (2022); Breiding et al (2018). These universal approaches have the advantage that they are not based on geometric assumptions (except for the straight line assumption), but they need the availability of a large set of line measurements and suffer from a lack of theoretical accuracy guarantees.

This paper demonstrates the benefit of proposing a specific type of line variety as the sensor model, supported by natural geometric assumptions. In this way, we compromise between model-line-based and data-line-based approaches. In many applications we use a sensor that corresponds to a two-dimensional line variety (a camera with two pixel coordinates, a laser scanner with two control parameters,...), which is called a _line congruence_ Trager et al (2017); Ponce et al (2017); Tas and Gursoy (2018). In this article we elaborate line-model sensors with rotational symmetry, as is the case for scanners with rotating lasers (Lidar) or for a galvanometric laser scanning system where a fixed laser beam is reflected by one or two rotating mirrors. We prove that the corresponding line varieties are covered by ruled quadratic surfaces of revolution.

As an important application, we present a novel, fast and efficient line-based calibration procedure for a two-mirror galvanometric laser scanner (2M-GLS) (Figure 2). These laser scanners appear in several applications Stafne et al (2000); Duma and Duma (2017); Li (2008); Pokorny (2014) due to their "good characteristics of high deflection speed, high positioning repeatability and concise structure" Tu and Zhang (2018). In the majority of the publications, the authors restrict themselves to situations where a 2M-GLS measures a plane or a two-dimensional surface Mao et al (2018). In such situations there is no need to go beyond point-based calibrations. However, for a complete 3D-range, sensor calibration must provide the 3D-line (in some reference frame) for each selected pair of rotation angles of the two mirrors that reflect a fixed incoming laser beam.
Model-based methods for the 3D-calibration of a 2M-GLS are given by Manakov et al (2011); Cui et al (2009). However, these methods have to determine (too) many parameters of (a model of) the device geometry. They cope with the disadvantages that are listed at the beginning of the introduction (unstable and tedious to implement), giving rise to non-convex optimization problems that suffer from local minima. In Tu and Zhang (2018); De Boi et al (2022) the authors propose to calibrate a 2M-GLS by a data set of line measurements, which is more related to our approach. However, their method completely differs from the proposed procedure, because they calibrate a universal line model through a statistical learning process, without bothering about the algebraic and geometric structure of the involved line congruence. In order to present the mathematical tools for this article in a self-contained manner, we provide the complete description of the hyperboloids or cones of revolution that are obtained by the laser reflections in the case of one rotating mirror (Section 2). This leads to the specific Plucker coordinates of these laser reflections as presented in Section 3. As a collateral application, we present a robust algorithm in the Appendix for recovering a ruled surface of revolution from noisy line data. An important contribution and innovation in this article appears in Section 5, where we derive a representation of the lines of one hyperboloid of revolution as a stable one-parameter combination of three generating lines, directly related to the angular variable that controls the mirror rotation. This result was accomplished thanks to the rational parameterization of affine combinations on the circle as presented in Section 4. In Section 6 we show how this result yields an algorithm to predict laser reflections, first for one rotating mirror, and then extended to the concept of a three by three hyperboloid grid for modelling and calibrating a two-mirror galvanometric laser scanner (Section 7). We believe that this article offers a novel and fundamental contribution to the field of sensor modelling and calibration, especially useful for laser scanners with rotational components. We show how certain sensors can be represented by line congruences that on their turn can be represented by a limited base set of lines. For example, a galvanometric laser scanner with one mirror can be represented by 3 lines, and in the case of two mirrors by 9 lines. We discovered how to generate the whole line congruence from these bases by a linear line calculus that is directly related to the angular control parameters. The correctness of our _hyperboloid grid method_ is validated by mathematical proofs, the accuracy and stability by the synthetic experiments in Section 8, by means of a virtual 2M-GLS that simulates real world hardware. We observe a very accurate and precise performance, as well as a favourable comparison with the data-based calibration of Boi et al (2022) that is known to outperform physical parameter models and to match other statistical learning models. This success is mainly explained by the stability of the calculus on hyperboloid grids introduced in Section 5 (validated in Section 8), due to the use of a stable rational parameterization to represent the mirror rotation. In addition, the line-based algorithm for fitting ruled quadrics of revolution, as presented in the Appendix, definitely improves the robustness of the proposed calibration method. 
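Since lines are handled through Plücker coordinates from Section 3 onwards, the following minimal helpers illustrate one common convention for this representation (a unit direction vector together with a moment vector). The normalisation and sign conventions used later in the paper may differ in detail.

```python
import numpy as np

def plucker_from_points(p, q):
    """Plücker coordinates (d, m) of the line through points p and q:
    unit direction d and moment m = p x d (independent of the point chosen on the line)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    d = (q - p) / np.linalg.norm(q - p)
    return d, np.cross(p, d)

def closest_point_to_origin(d, m):
    """The point of the line closest to the origin is d x m."""
    return np.cross(d, m)

def reciprocal_product(d1, m1, d2, m2):
    """Vanishes exactly when the two lines are coplanar (i.e., they intersect or are parallel)."""
    return d1 @ m2 + d2 @ m1
```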
## 2 Line reflections by rotating mirrors This section describes the well known geometry of the reflections of a fixed incoming laser beam by a mirror that rotates about a fixed axis, offering the opportunity to introduce our terminology. We assume that this rotation axis \(A\), the laser beam \(L\) and its reflections can be modelled by (straight) spatial lines, and the mirror by a (flat) plane that contains the axis \(A\). Typically, only one side of this rotating mirror performs the reflection, so that it makes no sense to allow a rotation angle range that exceeds \(180^{\circ}\). For most physical devices, this range is even more restricted. A particular position of the rotating mirror \(\mathcal{M}(n)\) is determined by its unit normal \(n\), which is supposed to point in the sense of mirror reflection. So, if the laser \(L\) is directed by the unit vector \(r_{L}\), compatible with the incoming orientation, then the reflected line \(R(n)\) has direction vector \(r_{n}\) (according to the orientation of reflection): \[r_{n}=r_{L}-2(r_{L}\cdot n)n. \tag{1}\] Further, let us agree that the normalised direction of the mirror rotation axis \(A\) is denoted by \(r_{A}\), and the plane through the origin and perpendicular to \(A\) by \(\mathcal{D}_{0}\) (Figure 1). Notice that \(\mathcal{D}_{0}\) differs from the plane containing \(r_{L}\) and \(r_{n}\), unless the incoming laser happens to be orthogonal to \(A\). For this reason we decompose (incident and reflected) line directions \(r\) into a component along \(A\) and a component perpendicular to \(A\) (in \(\mathcal{D}_{0}\)): \[r=(r\cdot r_{A})r_{A}+r^{\perp}=r^{\parallel}+r^{\perp}. \tag{2}\] We will always assume that the incident laser hits (is not parallel to) the mirror, so \(r^{\perp}\) is not the zero vector. Let \(R(n_{1})\) and \(R(n_{2})\) be reflections of the same incident laser \(L\) for different mirror positions \(\mathcal{M}(n_{1})\) and \(\mathcal{M}(n_{2})\) during the rotation about axis \(A\). Let \(r_{1}\) and \(r_{2}\) abbreviate \(r(n_{1})\) and \(r(n_{2})\) respectively (Figure 1). Proposition 1: 1. \(r_{1}^{\parallel}=r_{2}^{\parallel}\)_._ 2. \(R(n_{1})\)_,_ \(R(n_{2})\) _and_ \(L\) _cross_ \(A\) _at equal distance, sharing a common closest point_ \(p\) _on_ \(A\)_. So, if_ \(\mathcal{D}_{p}\) _denotes the plane through_ \(p\) _and perpendicular to_ \(A\)_, then_ \(p\) _is the center of a circle_ \(\mathcal{C}_{p}\) _in_ \(\mathcal{D}_{p}\)_, intersecting_ \(L\)_,_ \(R(n_{1})\) _and_ \(R(n_{2})\) _in_ \(q\)_,_ \(q_{1}\) _and_ \(q_{2}\)_, respectively. Furthermore:_ \[\left\langle q_{1}-p,q_{2}-p\right\rangle=\left\langle r_{1}^{\perp},r_{2}^{\perp}\right\rangle=2\left\langle n_{1},n_{2}\right\rangle.\] (3) Proposition 1 implies that all line reflections of a fixed laser by a continuously rotating mirror (over some angle range) can be equally well obtained by the continuous rotation of the first reflected line (over the double range). It is a well known geometric fact that the rotation of a line around a given fixed axis \(A\) sweeps a one-sheeted hyperboloid of revolution \(\mathcal{H}\) Odehnal et al (2001). \(\mathcal{H}\) can be considered as a union of lines but equally well as the union of circles (perpendicular to \(A\)). The smallest of these circles, \(\mathcal{C}_{p}\) in Proposition 1, is called the _gorge circle_ of this surface of revolution. 
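The reflection and decomposition formulas above are easy to check numerically. The following sketch (Python/NumPy; the axis, laser direction and mirror angles are arbitrary illustration values, not taken from any device) reflects a fixed incoming direction for two mirror positions whose planes contain the rotation axis, and verifies the two statements of Proposition 1: the axial components of the reflections coincide, and their perpendicular components enclose twice the mirror rotation angle.

```python
import numpy as np

def reflect(r_L, n):
    """Reflect a direction r_L in a mirror plane with unit normal n (Eq. 1)."""
    return r_L - 2.0 * np.dot(r_L, n) * n

def decompose(r, r_A):
    """Split a direction into components parallel and perpendicular to the axis (Eq. 2)."""
    r_par = np.dot(r, r_A) * r_A
    return r_par, r - r_par

# Illustration values: rotation axis along z, oblique incoming laser direction.
r_A = np.array([0.0, 0.0, 1.0])
r_L = np.array([1.0, 0.0, -0.5]) / np.linalg.norm([1.0, 0.0, -0.5])

def mirror_normal(angle):
    """Unit normal of the mirror plane; the plane contains the axis, so n is perpendicular to r_A."""
    return np.array([np.cos(angle), np.sin(angle), 0.0])

r1 = reflect(r_L, mirror_normal(0.2))
r2 = reflect(r_L, mirror_normal(0.5))
r1_par, r1_perp = decompose(r1, r_A)
r2_par, r2_perp = decompose(r2, r_A)

# Proposition 1(a): the axial components of all reflections coincide.
assert np.allclose(r1_par, r2_par)

# Eq. (3): the perpendicular components enclose twice the mirror rotation (0.3 rad here).
cos_angle = np.dot(r1_perp, r2_perp) / (np.linalg.norm(r1_perp) * np.linalg.norm(r2_perp))
print(np.degrees(np.arccos(cos_angle)))  # ~34.4 degrees = 2 * 0.3 rad
```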
We conclude in the following theorem, where we take care of the singular situations: **Theorem 1**.: _If the incoming laser beam \(L\) is not perpendicular to the mirror rotation axis \(A\), and if \(L\cap A=\emptyset\) then the reflected lines belong to one system of rulers of a one-sheeted hyperboloid of revolution, \(\mathcal{H}(L,A)\), completely determined by \(L\) and \(A\). Indeed, the gorge circle of \(\mathcal{H}(L,A)\) is given by \(\mathcal{C}_{p}\), and its pitch \(\rho\) by:_ \[\rho=r_{n}\cdot r_{A}=-r_{L}\cdot r_{A}.\] _The incoming laser \(L\) belongs to the second system of rulers on \(\mathcal{H}(L,A)\). If \(L\) intersects \(A\), then \(\mathcal{H}(L,A)\) degenerates into a cone, or even into a flat pencil if \(L\) happens to intersect \(A\) perpendicularly. Finally, if \(L\perp A\) and \(L\cap A=\emptyset\), then \(\mathcal{H}(L,A)\) degenerates into the set of tangents to \(\mathcal{C}_{p}\) in \(\mathcal{D}_{p}\)._ Figure 1: Both the incoming and the reflected laser are rulers of the same hyperboloid of revolution. The next step is to consider a 2M-GLS, a sensor consisting of a single fixed laser \(L\) that is internally reflected by two sequential mirrors, each rotating about an individual axis, denoted by \(A\) and \(B\) in order of reflection. The control of the individual rotating mirrors is typically galvano-driven, allowing two independent user parameters, denoted by \(\alpha\) and \(\beta\) respectively (Figure 2). Figure 2: The setup of a two-mirror galvanometric laser scanner (2M-GLS). Note that we only observe the outgoing lasers of the galvanometer after the second reflection by the mirror \(\mathcal{M}(B,\beta)\) that rotates about the axis \(B\). An arbitrary value of the parameter \(\alpha\) that controls the position of the first mirror \(\mathcal{M}(A,\alpha)\) generates a reflection \(L(\alpha)\) of the initial laser \(L\), which is in turn the incident laser for the rotating mirror \(\mathcal{M}(B,\beta)\). Because the laser line that is generated by a 2M-GLS is determined by a pair of angle settings \((\alpha,\beta)\), it can be denoted by \(R(\alpha,\beta)\). Theorem 1 translates into: **Theorem 2**.: _The outgoing lasers \(R(\alpha,\beta)\) of a 2M-GLS lie on a family of (possibly degenerate) co-axial hyperboloids of revolution \(\mathcal{H}(L(\alpha),B)\), each of which is generated by an individual laser \(L(\alpha)\) that is reflected by rotating the second mirror \(\mathcal{M}(B,\beta)\)._ Varieties of lines with two degrees of freedom, such as the line reflections produced by two rotating mirrors, are called _line congruences_ Tas and Gursoy (2018). In our case we coin the name _two-mirror congruence_. **Warning:** The centres of the different hyperboloids \(\mathcal{H}(L(\alpha),B)\), being the points \(p(\alpha)\) on \(B\) with minimal distance to \(L(\alpha)\), are not equal (except in degenerate cases). Therefore, the congruence of laser lines emitted by a 2M-GLS does not constitute a _linear line congruence_ Pottmann and Wallner (2001); Pottmann et al (1999); Ponce et al (2017); Tas and Gursoy (2018). If we consider the intermediate state of the sensor, after the first rotating mirror \(\mathcal{M}(A,\alpha)\), then the reflected beams of the incoming laser \(L\) also lie on a hyperboloid, \(\mathcal{H}(L,A)\). 
If we fix the second mirror at position \(\beta_{1}\), \(\mathcal{M}(B,\beta_{1})\), while rotating the first mirror, then we observe the sensor emitting a mirror reflection of \(\mathcal{H}(L,A)\) by \(\mathcal{M}(B,\beta_{1})\). Of course, this mirror image is also a one-sheeted hyperboloid of revolution, denoted by \(\mathcal{H}(L,A,B,\beta_{1})\), containing the doubly reflected laser beams \(R(\alpha,\beta_{1})\) (with varying \(\alpha\)). Consequently, we can be more specific about the description of the two-mirror congruence as given by Theorem 2. **Corollary 1**.: _The outgoing lasers \(R(\alpha,\beta)\) of a 2M-GLS belong to a congruence that can be considered as the disjoint union of either of the following two systems of hyperboloids of revolution:_ 1. _A system of (co-axial) hyperboloids, each of them determined by lines_ \(R(\alpha_{1},\beta)\) _with constant_ \(\alpha_{1}\)_._ 2. _A system of hyperboloids with each of them determined by lines_ \(R(\alpha,\beta_{1})\) _with constant_ \(\beta_{1}\)_._ Observe that the axes of the second system of hyperboloids in Corollary 1 sweep an additional hyperboloid of revolution, not participating in the two-mirror congruence, but sharing its axis with the hyperboloids of the first system. ## 3 Plucker coordinates of reflections of a single laser by a rotating mirror We refer to Pottmann and Wallner (2001) for an introduction to line coordinates and line geometry in a projective geometric setting, or to Odehnal et al (2001) for a Euclidean definition of line coordinates. In our context it is natural to work over the real numbers \(\mathbb{R}\) as a base field. A line \(R\) in Euclidean 3-space is determined by its direction vector \(r\) and a point \(q\). In order to get rid of the randomness in selecting \(q\) on \(R\), we replace \(q\) by the moment \(m=q\times r\), which is independent of the choice of \(q\) on \(R\), and only depends on the scale of \(r\). Observe that \(q\times kr=k(q\times r)\), so the sixtuple \((r,m)\) gives well defined homogeneous coordinates for \(R\), _Plucker coordinates_, mapping this line in 3-space to a point \(\pi(R)\) in \(\mathbb{P}^{5}\). Furthermore, since \(r\cdot m=0\), this point belongs to the so-called _Klein quadric_\(\mathcal{K}\) in \(\mathbb{P}^{5}\): \[\mathcal{K}=\{(x_{1}:x_{2}:x_{3}:x_{4}:x_{5}:x_{6})\in\mathbb{P}^{5}\,|\,x_{1 }x_{4}+x_{2}x_{5}+x_{3}x_{6}=0\}.\] It can be proven that every point of \(\mathcal{K}\) either represents the Plucker coordinates of a Euclidean line, or it represents a "line at infinity" (where \(x_{1}=x_{2}=x_{3}=0\)). Finally, for a Euclidean line \(R\), we can tie down the random homogeneous factor by normalizing its direction vector: \(||r||=1\). To avoid the final ambiguity, we will always assume that each line \(R\) has a given orientation. A major objective of this paper is to control the Plucker coordinates of the laser reflections by means of the rotation angle of the mirror. In order to present the algebraic calculus of laser reflections more easily, we will assume for the moment that the origin coincides with the point \(p\in A\) that has minimal distance to the incoming laser \(L\), implying that \(\mathcal{D}_{0}=\mathcal{D}_{p}\) (Section 2). Later we will see that this choice does not affect the derived formulas. 
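As a small numerical illustration of this coordinatisation (Python/NumPy, with arbitrary example values), the sketch below builds the Plucker coordinates \((r,m)\) of a line from a point and a direction, confirms that the result does not depend on which point of the line is chosen, and checks the Klein quadric relation \(r\cdot m=0\).

```python
import numpy as np

def plucker(q, r):
    """Plucker coordinates (r, m) of the line through point q with unit direction r."""
    r = r / np.linalg.norm(r)
    m = np.cross(q, r)          # moment; independent of the choice of q on the line
    return np.concatenate([r, m])

q = np.array([1.0, 2.0, 0.5])
r = np.array([0.0, 0.6, 0.8])

L1 = plucker(q, r)
L2 = plucker(q + 3.7 * r, r)    # another point on the same line
assert np.allclose(L1, L2)

# Klein quadric / Grassmann-Plucker relation: r . m = 0.
assert abs(np.dot(L1[:3], L1[3:])) < 1e-12
```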
Recall from Proposition 1 that the laser reflections \(R(n)\) corresponding to different positions \(\mathcal{M}(n)\) of the rotating mirror share an identical pitch \(\rho=r_{n}\cdot r_{A}\), where we assume that the direction vectors \(r_{n}\) (of \(R(n)\)) and \(r_{A}\) (of the rotation axis \(A\)) are normalised and oriented such that \(\rho>0\). Consequently, the projections \(r^{\perp}\) on \(\mathcal{D}_{0}\) of the reflected directions \(r\) all have identical norm \(||r^{\perp}||=\sqrt{1-\rho^{2}}\). Furthermore, Proposition 1 implies that each \(r^{\perp}\) is perpendicular to \(q_{n}-p=q_{n}=R(n)\cap\mathcal{D}_{0}\) (\(=\) closest point of \(R(n)\) to the axis \(A\)). If \(L\) does not intersect \(A\), all these points \(q_{n}\) belong to the gorge circle \(\mathcal{C}_{0}\) of the hyperboloid \(\mathcal{H}(L,A)\) with radius \(\sigma_{0}=||q||\) (Figure 1). Finally, recall that the relative (oriented) angles of rotation between the reflected lines \(R(n)\) are determined by the mirror rotation \(\mathcal{M}(n)\): \[\left\langle r_{1}^{\perp},r_{2}^{\perp}\right\rangle=\left\langle q_{1},q_{2}\right\rangle=2\left\langle n_{1},n_{2}\right\rangle.\] Our next observation is that the moments \(m_{n}=q_{n}\times r_{n}\) of the reflected lines \(R(n)\) appear to behave in a similar way to the directions, except for the special case where \(L\) intersects \(A\) (in \(p=q_{n}=\) the origin), in which case \(m_{n}\) is the zero vector. **Proposition 2.**_Assume the previous notations and assumptions, in particular the origin is located at \(p\in A\), and assume that \(L\) does not intersect \(A\). Then the laser reflections \(R(n)\) corresponding to different positions \(\mathcal{M}(n)\) of the rotating mirror share an identical moment pitch \(\mu=m_{n}\cdot r_{A}\). Furthermore, if \(m_{n}^{\perp}=m_{n}-\mu r_{A}\) denotes the moment projection on \(\mathcal{D}_{0}\), then \(m_{n}^{\perp}\) is parallel to \(r_{n}^{\perp}\) with \(||m_{n}^{\perp}||=\sigma_{0}\cdot\rho\) (where \(\sigma_{0}=||q_{n}||=||q||\) is the radius of \(\mathcal{C}_{0}\))._ **Proof.** Due to our choice of the origin, \(q\) belongs to \(\mathcal{D}_{0}\), where it is orthogonal to \(r_{n}^{\perp}\). Recall that \(R(n)\) is oriented by \(r_{n}\) in conformity with the sense of the reflection, and that the mirror axis \(A\) is oriented by \(r_{A}\) such that \(\rho=r_{n}\cdot r_{A}=-r_{L}\cdot r_{A}>0\). Notice that in case the skew oriented lines \(A\) and \(R(n)\) cross "positively", which means that the undercrossing line passes the overcrossing one from left to right, \(r_{n}^{\perp}\) is obtained from \(q_{n}\) by a clockwise quarter turn in \(\mathcal{D}_{0}\) as viewed from \(r_{A}\). Also note that for each mirror position \(\mathcal{M}(n)\) the crossing sign of \(R(n)\) relative to \(A\) is the same, namely the opposite of the crossing sign of \(L\) and \(A\). So, due to the right-hand-rule for the orientation of the cross product \(m_{n}=q_{n}\times r_{n}\), and because \(r_{n}\cdot r_{A}>0\), we see that \(\mu=m_{n}\cdot r_{A}<0\) if and only if \(R(n)\) crosses \(A\) positively. Because both \(r_{n}\perp q_{n}\) and \(m_{n}\perp q_{n}\), the projections \(m_{n}^{\perp}\) and \(r_{n}^{\perp}\) are aligned in \(\mathcal{D}_{0}\). So, \(m_{n}^{\perp}=kr_{n}^{\perp}\), where \(k>0\) if and only if \(\mu<0\). We conclude that the sign of the moment pitch is the same for every laser reflection \(R(n)\). 
Let us now compute the size of the moment pitch: \[\mu = (q_{n}\times r_{n})\cdot r_{A}\] \[= (q_{n}\times(r_{n}^{\perp}+\rho r_{A}))\cdot r_{A}\] \[= (q_{n}\times r_{n}^{\perp})\cdot r_{A}\] \[= \pm||q_{n}\times r_{n}^{\perp}||\] where we used that \((q_{n}\times r_{A})\perp r_{A}\) and \((q_{n}\times r_{n}^{\perp})\parallel r_{A}\). But \(q_{n}\perp r_{n}^{\perp}\), so \[|\mu|=||q_{n}||\cdot||r_{n}^{\perp}||=||q_{n}||\sqrt{1-\rho^{2}},\] which finishes the proof that \(\mu\) is independent of the mirror position. In addition, \(||m_{n}||=||q_{n}||\cdot||r_{n}||=||q_{n}||=||q||\), whence \[||m_{n}^{\perp}||^{2}=||m_{n}||^{2}-|\mu|^{2}=||q||^{2}\rho^{2}.\] \(\blacksquare\) Proposition 2 immediately implies (the left of Figure 3): **Corollary 2**.: _If the origin is chosen to be the point on the mirror axis A that is closest to the skewly incoming laser beam \(L\), and if we denote the Plucker coordinates of two laser reflections by \(\pi(R(n_{1}))=(r_{1},m_{1})\) and \(\pi(R(n_{2}))=(r_{2},m_{2})\) then_ \[r_{1}\cdot r_{A} = r_{2}\cdot r_{A}(=\rho),\] \[m_{1}\cdot r_{A} = m_{2}\cdot r_{A}(=\mu),\] \[\left\langle m_{1}^{\perp},m_{2}^{\perp}\right\rangle = \left\langle r_{1}^{\perp},r_{2}^{\perp}\right\rangle,\] _as oriented angles (viewed from \(r_{A}\))._ ## 4 Affine combination of cocircular points Using the assumptions and notations of Section 3, we have shown that for different mirror positions \(\mathcal{M}(n_{1}),\mathcal{M}(n_{2}),\mathcal{M}(n_{3}),\ldots\) we can consider three circles in the plane \(\mathcal{D}_{0}\), centered at the origin (the left of Figure 3): * containing the points \(q_{1},q_{2},q_{3},\ldots\) * containing the direction projections \(r_{1}^{\perp},r_{2}^{\perp},r_{3}^{\perp},\ldots\) * containing the moment projections \(m_{1}^{\perp},m_{2}^{\perp},m_{3}^{\perp},\ldots\) Figure 3: **Left:** The relative angles of the shown points are the same for each of the three circles. They represent the laser reflections \(R(n_{i})=(r_{i},m_{i})\) by their projected directions \(r_{i}^{\perp}\) (norm \(\sqrt{1-\rho^{2}}\)), by their throat points \(q_{i}\) (throat radius \(\sigma_{0}\)), and by their projected moments \(m_{i}^{\perp}\) (norm \(\sigma_{0}\rho\)). **Right:** Four points with the same relative angles as in the left diagram, prepared for Theorem 3. If \(M=tB+(1-t)C\) then \(t\) parametrizes the affine combination of \((A,B,C)\) that yields \(D\). Furthermore, on each circle we observe identical oriented angles between points that correspond to the same laser reflections \(R(n_{i})\) and \(R(n_{j})\), determined by the (rotation) angle between \(n_{i}\) and \(n_{j}\) (up to a factor 2). As we will see, this implies that we can use the same _affine combinations_ for all these circles. In the next section we will prove that these affine combinations on the circle can moreover be carried over to the Plucker coordinates of the reflected lines. Let \(A,B,C\) be three non-collinear points in some plane; then we can uniquely express each point \(D\) in this plane as an affine combination of \(A,B,C\): \[D=xA+yB+zC,\ \ \ \text{with}\ x+y+z=1.\] Because \(z=1-x-y\) we count 2 dof for these combinations, which matches the dimension of the plane. Notice that \(A,B,C\) determine a circumscribing circle \(\mathcal{C}\). Now we will restrict ourselves to generating only points \(D\) on this circle \(\mathcal{C}\), leaving us with only 1 dof for the coefficients \((x,y,z)\). 
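As an aside, these affine coefficients can already be computed numerically without the closed form derived next: since \(A,B,C\) are not collinear, the two coordinate equations of \(D=xA+yB+zC\) together with \(x+y+z=1\) form an invertible \(3\times 3\) linear system. The sketch below (Python/NumPy, with hypothetical angles and an arbitrarily chosen ruler) solves this system for cocircular points and then checks numerically the claim announced above and made precise in Theorem 4: the same coefficients carry over to the Plucker coordinates of the rulers of a surface of revolution.

```python
import numpy as np

def circle_point(theta):
    return np.array([np.cos(theta), np.sin(theta)])

def affine_coefficients(theta_base, theta_query):
    """Solve D = x*A + y*B + z*C with x + y + z = 1 for points on the unit circle
    given by their angles (direct linear solve; Theorem 3 gives a closed form)."""
    A = np.vstack([np.column_stack([circle_point(t) for t in theta_base]),
                   np.ones(3)])
    rhs = np.append(circle_point(theta_query), 1.0)
    return np.linalg.solve(A, rhs)

def ruler(theta, r0=np.array([0.3, 0.0, 0.954]), q0=np.array([0.5, 0.2, 0.0])):
    """Plucker coordinates of the line obtained by rotating a fixed line
    (point q0, direction r0) about the z-axis over the angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    r, q = Rz @ r0, Rz @ q0
    return np.concatenate([r, np.cross(q, r)])

theta_base, theta_query = [0.0, 0.4, 0.9], 0.65
coeff = affine_coefficients(theta_base, theta_query)

# The coefficients found on the circle also combine the 6-dimensional Plucker
# coordinates of the corresponding rulers (cf. Theorem 4 and Section 6).
predicted = sum(c * ruler(t) for c, t in zip(coeff, theta_base))
assert np.allclose(predicted, ruler(theta_query))
```

The paper replaces this linear solve by the closed-form rational parameterization of Theorem 3, which ties the coefficients directly to the rotation angle and, as discussed in the algorithmic details of Section 6, behaves stably when the chords are chosen appropriately.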
In this section we will express these coefficients as rational functions in a parameter that is explicitly determined by the relative angles between \(A,B,C,D\). The fundamental idea leading to our formulas is to parametrize the affine coefficients by the location of the point of intersection \(M\) of the lines \(AD\) and \(BC\) (the right of Figure 3). **Theorem 3**.: _Let \(a=|BC|\), \(b=|AC|\) and \(c=|AB|\) denote the edges of the triangle \(ABC\), and let \(D=xA+yB+zC\) be a point on the circumscribing circle \(\mathcal{C}\) of this triangle, with \(x+y+z=1\). If \(M=AD\cap BC=tB+(1-t)C\) then_ \[(x\ \ y\ \ z)=\frac{(1\ \ t\ \ t^{2})\cdot T}{(1\ \ t\ \ t^{2})\cdot N}, \tag{4}\] _where_ \[T=\left(\begin{array}{ccc}0&0&-b^{2}\\ a^{2}&-b^{2}&2b^{2}-c^{2}\\ -a^{2}&b^{2}-c^{2}&c^{2}-b^{2}\end{array}\right)\ \ \ \text{and}\ \ \ N=\left(\begin{array}{c}-b^{2}\\ a^{2}+b^{2}-c^{2}\\ -a^{2}\end{array}\right). \tag{5}\] **Proof:** It can be proven that the necessary and sufficient condition on the barycentric coordinates \((x,y,z)\) for \(D\) to lie on the circumcircle \(\mathcal{C}\) is given by (Fact 4 in Volenec (2004)): \[a^{2}yz+b^{2}xz+c^{2}xy=0. \tag{6}\] Because \(M=AD\cap BC\), this point can be given barycentric coordinates w.r.t. \(\{A,D\}\) as well as \(\{B,C\}\) (Figure 3): \[M=tB+(1-t)C=sA+(1-s)D.\] Eliminating \(M\) and solving for \(D\) we obtain: \[D=\frac{-s}{1-s}A+\frac{t}{1-s}B+\frac{1-t}{1-s}C. \tag{7}\] Since the sum of these coefficients equals \(1\), they must be equal to the barycentric coordinates \((x,y,z)\), expressed as functions of \(t\) and \(s\). Substituting the barycentric coordinates of \(D\) as given by Eqn. 7 in the circle condition of Eqn. 6, we can solve for \(s\): \[s=\frac{a^{2}(t^{2}-t)}{(b^{2}-c^{2})t-b^{2}}. \tag{8}\] Finally, substituting this expression for \(s\) in Eqn. 7 yields the claimed formula of Eqn. 4. \(\blacksquare\) Observe that we do not lose generality by assuming that \(\mathcal{C}\) equals the unit circle. Indeed, the affine coefficients remain invariant under scaling and translations: \[D=xA+yB+zC\Rightarrow wD+Z=x(wA+Z)+y(wB+Z)+z(wC+Z).\] Furthermore, it can easily be seen that this affine combination is also not affected by rotations, such that we can choose \(A=(1,0)\). Consequently, the computation of the coefficients \((x,y,z)\) in Eqn. 4 only depends on the relative angles between the points. ## 5 Affine combination of reflected lines of a single laser by a rotating mirror Consider four laser reflections \(R(n_{i})\) by four mirror positions \(\mathcal{M}(n_{i})\)\((i=1,\ldots,4)\). From Theorem 1 in Section 2 we know that the lines \(R(n_{i})\) belong to a ruled surface of revolution, a one-sheeted hyperboloid in general, or one of its degenerations in singular cases. If \(\pi(R(n_{i}))=(r_{i},m_{i})\) denote the Plucker coordinates, and if \(r_{i}^{\perp}\) denote the projections of \(r_{i}\) on \(\mathcal{D}_{0}\), then the relative angle of revolution between \(R(n_{i})\) and \(R(n_{j})\) can be written as \[\theta_{ij}=\left\langle r_{i}^{\perp},r_{j}^{\perp}\right\rangle=2\left\langle n_{i},n_{j}\right\rangle. \tag{9}\] **Theorem 4**.: _Let us represent the rotation angles of four laser reflections by points \(P_{1},\ldots,P_{4}\) on a (unit) circle, that is, the arc between \(P_{i}\) and \(P_{j}\) equals \(\theta_{ij}\). 
Then, the affine combination \(P_{4}=x_{1}P_{1}+x_{2}P_{2}+x_{3}P_{3}\) (with \(x_{1}+x_{2}+x_{3}=1\)) also applies to the Plucker coordinates of the reflected lines:_ \[\pi(R(n_{4}))=x_{1}\pi(R(n_{1}))+x_{2}\pi(R(n_{2}))+x_{3}\pi(R(n_{3})). \tag{10}\] **Proof.** Let us first assume the origin at the centre \(p\) of \(\mathcal{H}(L,A)\), which is a hyperboloid in general, or a cone in case \(L\) intersects \(A\). For now, we exclude the degenerate case where \(L\) intersects \(A\) perpendicularly, implying that all reflections belong to the same plane. In Corollary 2 it is stated that the relative angles of the projected moments of the reflected lines \(R(n_{i})\) are identical to the relative angles of revolution (Section 3): \[\left\langle m_{i}^{\perp},m_{j}^{\perp}\right\rangle=\theta_{ij}=\left\langle r_{i}^{\perp},r_{j}^{\perp}\right\rangle. \tag{11}\] So, \[r_{4}^{\perp} = xr_{1}^{\perp}+yr_{2}^{\perp}+zr_{3}^{\perp}\] \[m_{4}^{\perp} = xm_{1}^{\perp}+ym_{2}^{\perp}+zm_{3}^{\perp}\] Furthermore, \(\pi(R(n_{i}))=(r_{i}^{\perp}+\rho r_{A},m_{i}^{\perp}+\mu r_{A})\). Using \(x+y+z=1\): \[x\pi(R(n_{1}))+y\pi(R(n_{2}))+z\pi(R(n_{3})) =\] \[(xr_{1}^{\perp}+yr_{2}^{\perp}+zr_{3}^{\perp}+(x+y+z)\rho r_{A}, xm_{1}^{\perp}+ym_{2}^{\perp}+zm_{3}^{\perp}+(x+y+z)\mu r_{A}) =\] \[(r_{4}^{\perp}+\rho r_{A},m_{4}^{\perp}+\mu r_{A}) =\] \[\pi(R(n_{4}))\] In case \(L\) intersects \(A\) perpendicularly, things become simpler. Then, the reflected lines belong to a flat pencil, all assumed to intersect in the origin. In this case, \(\pi(R(n_{i}))=(r_{i}^{\perp}+\rho r_{A},0,0,0)\), and hence the previous argument still holds, restricted to the first three Plucker coordinates. Next, we drop the assumption about the location of the origin in 3-space. The general situation can be transformed to the special situation (as described above) by a translation, which is a linear transformation \(T_{4}\) of \(\mathbb{P}^{3}\) (represented by a \(4\times 4\) matrix). One can prove that this induces a linear transformation \(T_{6}\) for the line coordinates \(\pi(L)\) (represented by a \(6\times 6\) matrix) Pottmann and Wallner (2001). The proof is now finished by the fact that linear transformations preserve affine combinations. \(\blacksquare\) ## 6 Data-driven calibration of rotating laser reflections The previous explanation enables us to predict a laser reflection by a rotating mirror \(\mathcal{M}(n)\), once three line reflections are known for three mirror positions. Notice that we bypass the geometry of the incoming laser beam \(L\) relative to the mirror axis \(A\); nor do we need the spatial position of the mirror plane that corresponds to an (initial) angle. Notice that the described procedure equally well applies to devices with rotating lasers instead of rotating mirrors. **input:** relative angles \(\left\langle n_{i},n_{j}\right\rangle\) for three mirror positions \(\mathcal{M}(n_{1}),\mathcal{M}(n_{2}),\mathcal{M}(n_{3})\), and the coordinates of the corresponding laser reflections: \(\pi(R(n_{1})),\pi(R(n_{2})),\pi(R(n_{3}))\). **query:** \(n_{4}\), or rather \(\left\langle n_{i},n_{4}\right\rangle\) for some \(i=1,2,3\). **output:** \(\pi(R(n_{4}))\). **The algorithm:** 1. Transform the mirror positions to rotation angles of the reflected lines: \[\theta_{ij}=\left\langle r_{i}^{\perp},r_{j}^{\perp}\right\rangle=2\left\langle n_{i},n_{j}\right\rangle.\] 2. Compute \(T\) and \(N\) as stated in Theorem 3 (Eqn. 5). 
This can be done by representing the three base angles and the fourth query angle on a (unit) circle, or directly in terms of \(\cos(\theta_{ij})\) and \(\sin(\theta_{ij})\). 3. Compute parameter \(t\). Combine \(t\), \(T\) and \(N\) to obtain the affine coefficients \(x,y,z\) (Eqn. 4). 4. Return \(\pi(R(n_{4}))=x\pi(R(n_{1}))+y\pi(R(n_{2}))+z\pi(R(n_{3}))\). **Algorithmic details:** * When this algorithm is applied in a real world situation, we assume only small deviations from the mathematical conditions: the laser beam is always kept fixed, the rotation axis for the mirror is always kept fixed, the mirror shape is close to a plane, the mirror reflection is close to perfect (hardly damaged by scratches and holes). * The first step of the algorithm may be more involved in certain practical situations. Indeed, the control of the rotating mirror is done by user parameters \(\omega_{i}\) that are not necessarily equal to the geometric angles between the \(n_{i}\). For instance, the mirror rotation might be galvanically driven, requiring input control in volts. The transformation from voltages to geometric angles might or might not be linear. Even if the user is allowed to use angular values for the input parameters, they are not necessarily identical to the geometric angles due to system noise. In this case, we obtain the angles directly from the measured reflection lines: \(\theta_{ij}=\left\langle r_{i}^{\perp},r_{j}^{\perp}\right\rangle\). The transformation \(\omega_{ij}=\omega_{i}-\omega_{j}\mapsto\theta_{ij}\) can be obtained by analytic or probabilistic interpolation. * For the computation of the parameter \(t\) it is recommended to permute \(\{A,B,C\}\) in Theorem 3 if needed, such that the chords \(AD\) and \(BC\) intersect inside the circle: \(M=AD\cap BC=tB+(1-t)C\). This guarantees that \(t\in[0,1]\) and significantly improves the stability. ## 7 The 3 by 3 line grid calibration of a 2M-GLS This section is motivated by a _galvanometer_, a sensor consisting of a single fixed laser \(L\) that is internally reflected by two sequential mirrors, each rotating about an individual axis, denoted by \(A\) and \(B\) in order of reflection. The control of the individual rotating mirrors is typically galvano-driven, allowing two independent user parameters, denoted by \(\alpha\) and \(\beta\) respectively (Figure 2). As explained in Section 6, we may assume that we can express mirror angles in radians. Assume for the moment that we fix the second mirror at angle \(\beta_{1}\). Using the algorithm of Section 6, we can predict an outgoing line \(R(\alpha,\beta_{1})\) by means of three observed lines \(R(\alpha_{1},\beta_{1})\), \(R(\alpha_{2},\beta_{1})\) and \(R(\alpha_{3},\beta_{1})\) corresponding to three positions of the first rotating mirror \(\mathcal{M}(A,\alpha)\): \[\pi(R(\alpha,\beta_{1}))=x_{\alpha}\pi(R(\alpha_{1},\beta_{1}))+y_{\alpha}\pi(R(\alpha_{2},\beta_{1}))+z_{\alpha}\pi(R(\alpha_{3},\beta_{1})), \tag{12}\] where the affine coefficients \((x_{\alpha},y_{\alpha},z_{\alpha})\) are computed by Formula 4. Note that these coefficients do not depend on the specific choice \(\beta_{1}\) for the position of the second mirror. Indeed, the relative angle \(\theta_{ij}\) between \(R(\alpha_{i},\beta_{1})\) and \(R(\alpha_{j},\beta_{1})\) is the opposite of the corresponding relative angle on \(\mathcal{H}(L,A)\). 
More precisely, \[|\theta_{ij}|=2|\alpha_{j}-\alpha_{i}|.\] **Theorem 5**.: _A two-mirror galvanometric laser scanner is intrinsically calibrated by the knowledge of \(3\times 3\) emitted lasers \(R(\alpha_{i},\beta_{j})\) corresponding to a grid of \(3\times 3\) combinations of mirror pairs \((\alpha_{i},\beta_{j})\) (\(i=1,2,3\) and \(j=1,2,3\))._ **Proof.** We show that for each given query pair \((\alpha,\beta)\), we can predict the corresponding double reflected laser \(R(\alpha,\beta)\) by means of the given laser grid. To this end, we first compute the affine coefficients \((x_{\alpha},y_{\alpha},z_{\alpha})\) for a fixed \(\beta_{j}\). In principle, the resulting coefficients are identical for each choice of \(\beta_{j}\) (\(j=1,2,3\)). Consequently, we obtain: \[\pi(R(\alpha,\beta_{1})) = x_{\alpha}\pi(R(\alpha_{1},\beta_{1}))+y_{\alpha}\pi(R(\alpha_{2},\beta_{1}))+z_{\alpha}\pi(R(\alpha_{3},\beta_{1})).\] \[\pi(R(\alpha,\beta_{2})) = x_{\alpha}\pi(R(\alpha_{1},\beta_{2}))+y_{\alpha}\pi(R(\alpha_{2},\beta_{2}))+z_{\alpha}\pi(R(\alpha_{3},\beta_{2})).\] \[\pi(R(\alpha,\beta_{3})) = x_{\alpha}\pi(R(\alpha_{1},\beta_{3}))+y_{\alpha}\pi(R(\alpha_{2},\beta_{3}))+z_{\alpha}\pi(R(\alpha_{3},\beta_{3})).\] These three laser lines belong to a system of rulers of the hyperboloid \(\mathcal{H}(L(\alpha),B)\), defined by the mirror axis \(B\) and the incoming laser beam \(L(\alpha)\), which is the reflection of \(L\) by the \(\alpha\)-position of the first mirror. Applying the algorithm of Section 6 once more, we obtain: \[\pi(R(\alpha,\beta))=x_{\beta}\pi(R(\alpha,\beta_{1}))+y_{\beta}\pi(R(\alpha,\beta_{2}))+z_{\beta}\pi(R(\alpha,\beta_{3})).\] \(\blacksquare\) ## 8 Experiments In order to validate our hyperboloid grid model, we apply the method of Section 7 to synthetically generated data. The aim is to predict the set of Plucker coordinates for a given pair of mirror rotations. The benefit of working with synthetic data is that an exact underlying ground truth can be established. To this end, we built a setup in a virtual environment in the game engine Unity (version 2020.2.5f1). We placed two rotating mirrors and a laser in a configuration that can also be found in, for instance, a Polytec PSV-400 laser Doppler vibrometer (Figure 4). A real time demonstration of the setup in which the mirrors rotate to reflect an incoming laser beam can be seen in [https://youtu.be/GNTjmJvdTpw](https://youtu.be/GNTjmJvdTpw). We generated laser beams for twelve rotation angles for the first mirror and sixteen for the second mirror, resulting in a \(12\times 16\) grid of 192 lines. To measure the Plucker coordinates of those (reflected) laser beams, we placed a detection plane in front of the setup and recorded where the laser beams intersect that plane. All reflections and the detection of intersections are handled by the built-in Unity physics engine. An overview of the virtual setup can be found in Figure 4. The reflected laser beams for a set of co-axial hyperboloids are visualised in detail in Figure 5. The detection plane is then moved and rotated into eight positions. The simulation scale is chosen such that the distances of the detection planes vary from approximately 1000 to 2600 millimetres. Consequently, for each pair of mirror rotation angles (which uniquely generate a single laser beam), we obtain eight points. Strictly speaking, we only need to put the detection plane in two positions. 
However, to simulate real world conditions, we added Gaussian noise to the 3D coordinates of the detected points. We performed our simulations at seven noise levels with standard deviations equal to 0, 1, 2, 4, 6, 8 or 10 millimetres. For each of the noise levels, we generated 50 sets of lines. We average our findings over those 50 sets to eliminate statistical artefacts in the noise of the data. We apply a best-fit method as described in Lesueur and Nozick (2014) to calculate the Plucker coordinates for the straight line generated by the mirror rotation pairs. For each noise level we select a _basegrid_, being a subgrid of lines from the \(12\times 16\) dataset. We consider the following basegrid sizes: \(3\times 3\), \(4\times 4\), \(6\times 6\), \(8\times 8\) and \(8\times 11\). This allows us to investigate the influence of the number of training lines on the accuracy of the calibration. To avoid unnecessary numerical problems, the angles in these subgrids are (uniformly) spread out in the range of the rotation angles of the sensor mirrors. The angle pairs of the data sets that do not participate in the basegrid provide a test set, for which we use the Unity-generated lasers with zero noise as ground truth. Figure 4: The virtual setup. A laser beam is reflected by two rotating mirrors. The reflected laser beams hit a detection plane. The 3D coordinates of the points (the pink dots) are recorded. The aim now is to predict the lines in the test sets when only the two mirror rotation angles are given. The proposed method uses the basegrid to recover the two-mirror congruence as a double system of hyperboloids of revolution (Corollary 1). This line congruence is compactly represented as a \(3\times 3\) grid that enables laser predictions by means of affine grid combinations (Theorem 5). A procedure for a robust fitting of a hyperboloid grid to a basegrid of noisy line measurements is described in Appendix A. Because the quality of this fitting has a significant share in the accuracy of our method, we present it here as an intermediate validation in the framework of the previously described synthetic experiment. The results are shown in Figure 6. The gain (noise reduction) is most apparent for grids of size \(6\times 6\) and up. Figure 5: Lines rotated around a central axis form a hyperboloid. In a galvanometric setup, the first mirror rotation defines which hyperboloid, while the second mirror rotation determines the line on that hyperboloid. We compare our method to the semi-data driven method described in Boi et al (2022), where the authors validate the performance and feasibility of semi-data driven approaches by means of Gaussian processes. The method of Boi et al (2022) outperforms current state-of-the-art physical-based calibrations, and performs at least as well as other existing statistical or machine learning methods, which makes it an appropriate reference to compare our method with. Following the procedure in Section D of Boi et al (2022), a Gaussian process is trained for each of the six components of the Plucker coordinates Rasmussen (2004). In the implementation of the Gaussian processes, we used a periodic kernel with 
automatic relevance determination as suggested by Boi et al (2022): \[\begin{split}k_{PER}(\mathbf{x},\mathbf{x}^{\prime})&= \sigma_{f}^{2}\exp\left(-\frac{2}{l_{\alpha}^{2}}\sin^{2}\left(\frac{|\alpha- \alpha^{\prime}|}{2}\right)\right)\\ &\qquad\cdot\exp\left(-\frac{2}{l_{\beta}^{2}}\sin^{2}\left( \frac{|\beta-\beta^{\prime}|}{2}\right)\right).\end{split} \tag{13}\] In order to evaluate the prediction quality of any method, we need a measure for the difference between two spatial lines. In our experiments we worked with several distance measures, but they appeared to agree with respect to the final conclusions. In the presentation of our results, we use the line distance measure as suggested by Pottmann and Wallner (2001). For the computation of this measure, we need to define two fixed parallel planes, chosen so as to bound our region of interest. As a matter of fact, we chose them perpendicular to the Z-axis (more or less the direction of the outgoing beams), one through the origin, the other at a distance of 10 metres. Two lines intersect these two planes in four points \(\mathbf{g_{1}},\mathbf{g_{2}},\mathbf{h_{1}}\) and \(\mathbf{h_{2}}\) (same indices for the same line, same letters for the same plane). We calculate the so-called _line segment distance_ \(d\) as follows: \[d^{2}=||\mathbf{g_{1}}-\mathbf{g_{2}}||^{2}+||\mathbf{h_{1}}-\mathbf{h_{2}}||^{2 }+(\mathbf{g_{1}}-\mathbf{g_{2}})\cdot(\mathbf{h_{1}}-\mathbf{h_{2}}). \tag{14}\] Figure 6: The means of the line segment errors with respect to the ground truth, for the measured lines as well as for the corrected lines (by hyperboloid grid fitting). The boxplots are grouped by five grid sizes and within each group ordered by three noise levels during the measurement of the 8 points (at a distance of at most 3 m) that are used for the line measurements: standard deviations of 1, 6 and 10 mm. The line segment distance is computed to evaluate the error of each predicted line with respect to its ground truth. We took the average for all the lines per test set. This results in 50 averages per combination of grid size and noise level. This is done for the proposed method as well as for the GP-method. An overview of the results can be found in Figure 7 and in Figure 8. Figure 7: The means of the line segment distances between the predicted test lines (by the proposed method) and the ground truth. The boxplots are grouped by five grid sizes and within each group ordered by seven noise levels during the measurement of the 8 points (at a distance of at most 3 m) that are used for line fitting: standard deviations of 0, 1, 2, 4, 6, 8 and 10 mm. Note that the line segments that we used in our error measure have a length of at least 10 metres, which should be taken into account in the interpretation of the prediction error on the vertical axis of the figures (expressed in metres). For example, an lsd-error of 0.1 m for a predicted line is a line segment deviation of at most 1 cm per metre. The runs with zero noise confirm that the proposed hyperboloid grid calibration is an exact method, even when using a minimal \(3\times 3\) grid. We also notice that for a measurement noise expressed by a standard deviation of \(\sigma\) mm (within a work space of 2 to 3 m), the line prediction error appears to be lower than \(2\sigma\) mm (per metre) assuming a basegrid size of at least \(4\times 4\), and even bounded by \(\sigma\) mm (per metre) if 
From our experiments there seems to be no convincing motivation to use basegrid sizes larger than \(8\times 8\). On the other hand, we observe that boxes are stretched out (between first and third quartiles) in cases where point measurements suffer from large noise levels (\(\sigma>7\) mm within the workspace region). This is explained by the fact that the basegrid data is corrected and fixed by a hyperboloid grid fit, such that the prediction errors for every test line are determined by the quality of this fit (Appendix A), which can be an unlucky estimate if the data noise happens to be unfortunate. If we investigate the results of the GP-method for the same Unity-data (Figure 8), then we observe that \(3\times 3\) grids are too small to teach a useful Gaussian process. Its performance takes over the proposed method from the moment the GP is trained by basegrids larger than \(8\times 8\). In case of larger measurement noise, the variance of the GP results appears to be smaller than for the proposed method. This is due to the fact that a Gaussian process keeps on balancing the measurement noise during the prediction of the test lines. The datasets generated and analysed during the current study are publicly available in the github repository [https://github.com/IvanDeBoi/Line-Calculus-on-Hyperboloids](https://github.com/IvanDeBoi/Line-Calculus-on-Hyperboloids). Figure 8: The means of the line segment distances between the predicted test lines (using a GP) and the ground truth. The boxplots are grouped by five grid sizes and within each group ordered by the seven noise levels. For the minimal training set (\(3\times 3\)), the GP-model predicts values so far from the ground truth that they are no longer of any significance. The data has become to sparse to work with. ## 9 Conclusions and further research This paper offered a completely new method for the modeling and 3D calibration of a galvanometric laser scanner with two mirrors. As a matter of fact, the proposed calibration paradigm applies to any laser scanner with rotational symmetry, such as other galvanometric systems or a Lidar, sensors with a rapidly growing number of applications. Our study provides a deeper understanding how many sensors can be naturally represented as a specific line variety, and how it pays off to discover the type of this variety by a mathematical analysis. The proposed line model is more specific than previously published general line models, but the calibration merely consists of measured line data, and does not need to recover intrinsic parameters of a physical model. As a main contribution we model a two-mirror-GLS as a _hyperboloid grid congruence_ that can be represented in a compressed way by a \(3\times 3\) basegrid of data lines. This is a significant simplification compared to the use of lookup tables commonly used in the 2D or 3D calibration of a GLS. We derived a formula that translates angular control parameters into simple affine combinations of these \(3\times 3\) grid lines, enabling our calibration model to make fast predictions. In a follow-up article we describe how this formula allows us to find an analytic solution for the reverse engineering problem: how to determine the pair of mirror angles that generate the laser reflection that hits a given 3D target point. 
The hyperboloid grid model for a two-mirror galvanometric laser scanner and the affine combination formula for the \(3\times 3\) grid is an interesting theoretical result, but in order to validate its practical performance, and in order to compare it to a statistical training model (GP-method), we chose to fit a hyperboloid grid on larger training grids. To this end, we designed a new algorithm for the robust fitting of a hyperboloid of revolution on given rulers. As it is the case for every regression model, this choice implies the advantage of noise reduction, but the disadvantage of neglecting noise. Fitting on training grids of size at least \(6\times 6\) causes line prediction errors that are comparable or smaller than the point measure errors. The results of the GP-method are inferior to the proposed methods for small grid sizes and for limited point measure noise levels. If the noise level is represented by a standard deviation of 8 mm or more (in the work space region up to 3 m), and if a basegrid is used of size at least \(8\times 8\), the GP-method performs more accurately and more precisely. This is due to the fact that a Gaussian process can be seen as a universal smoother, excellent at filtering out noise. On the other hand, the GP-method trains 6 separate line coordinates, and most often they do not satisfy the Grassmann-Plucker relation. Consequently, it fails to deliver an effective line. This can be taken care of by post-corrections, or by using a GP with manifold constraints, but it is an additional complication. In Boi et al (2022) it is shown that the violation of the Grassmann-Plucker relation becomes less apparent when using larger training sets. Also, if the mirror quality of a real galvanometric laser scanner significantly deviates from our ideal mathematical assumptions, the GP-predictions will be more accurate than the idealised hyperboloid grid predictions. On the other hand, discrepancies between the ideal predictions of our method and an observed laser from the real world sensor can detect defects or flaws in the device. This suggests that our line model can also be applied as a tool for quality control. ## Appendix A Fitting a one-sheeted hyperboloid of revolution to given noisy rulers Examples of approximation methods for ruled surfaces are presented in Pottmann and Randrup (1998); Hofer et al (2005); Pottmann and Wallner (1999). However, in these approaches, the ruled surfaces are fitted to given point data, rather than to measured lines, as is the case in our situation. Let \(\mathcal{L}=\{L_{1},L_{2},\ldots,L_{n}\}\) be the noisy data lines that are supposed to be rotated images of some (unknown) line around some (unknown) axis \(A\). We assume that the \(l_{i}\) are presented by their normalised Plucker coordinates: \(L_{i}=(r_{i},m_{i})\). **Step 1:** Considered as points, the correct normalised directions \(r_{i}\) belong to a circle centred at a point \(x\in A\), in a plane \(\mathcal{D}_{x}\) perpendicular to \(A\). So, the direction vector \(r_{A}\) of \(A\) can be recovered as the normal of a fitting plane. This plane \(\mathcal{D}_{x}\) can be approximated by a robust technique such as RANSAC or MLESAC Fischler and Bolles (1981b); Torr and Zisserman (2000). Observe that at this stage we only need to recover the normal \(r_{A}\) of \(\mathcal{D}_{x}\). 
In Section 7 we have measurements of rulers of several hyperboloids \(\mathcal{H}(L(\alpha),B)\) at our disposal, all sharing the same axis \(A\) of revolution, enabling us to increase the accuracy of the direction of this axis by computing the median or a trimmed mean of the computed \(r_{A}\) of the individual hyperboloids (\(||r_{A}||=1\)). It is an option to neglect from now on data lines \(L_{i}\) whose directions \(r_{i}\) have been considered as outliers in the previous step. **Step 2:** Once we have found a reliable \(r_{A}\), we can recover the pitch \(\rho\) of the hyperboloid as the mean of the dot products \(r_{i}\cdot r_{A}\). We ensure that the directions are oriented such that all these dot products have positive signs. In this way, we can reduce the noise on the axial component \(r_{i}^{\parallel}=(r_{i}\cdot r_{A})r_{A}\) of the line directions \(r_{i}\): \[r_{i}^{\parallel}\mapsto\rho r_{A}.\] (A1) **Step 3:** We can also "correct" the rotational components \(r_{i}^{\perp}=r_{i}-r_{i}^{\parallel}\), provided that we know the relative angles \(\theta_{ij}\) between each pair, which is normally the case when the control of the involved galvanometric device is reliable. We proceed as follows. Based on the measured data, and the previously recovered axis direction \(r_{A}\), we can consider the noisy projections \(r_{i}^{\perp}=r_{i}-(r_{i}\cdot r_{A})r_{A}\). The norms \(||r_{i}^{\perp}||\) will all be corrected to \(\sqrt{1-\rho^{2}}\), so we can focus on their directions \(d_{i}=r_{i}^{\perp}/||r_{i}^{\perp}||\), points on the unit circle in the plane \(\mathcal{D}_{0}\) perpendicular to \(r_{A}\). We can represent them by a radian parameter \(pr_{i}\). The direction noise of the projections \(r_{i}^{\perp}\) is reduced by solving a constrained optimization problem in \(n\) unknowns \(cpr_{i}\), representing the corrected radian parameters \(pr_{i}\). More precisely, we minimise the sum of squared distances between \(cpr_{i}\) and the noisy \(pr_{i}\) constrained by the given relative angles \(\theta_{ij}\). Actually, a closed form solution for the \(cpr_{i}\) is obtained by means of Lagrange multipliers. Finally, we map the \(cpr_{i}\) back to unit vectors \(cd_{i}\) in the plane perpendicular to \(r_{A}\). The correction of the noisy directions \(r_{i}\) of the data lines is done as follows: \[r_{i}\mapsto\sqrt{1-\rho^{2}}cd_{i}+\rho r_{A}.\] (A2) **Step 4:** Having a reliable direction \(r_{A}\) at our disposal also facilitates the recovery of the complete axis \(A=(r_{A},m_{A})\). To this end, we lean on the property that the exact normalised rulers \(EL=(er,em)\) of the same regulus of a ruled surface of revolution have a constant _bilinear product_ with the exact normalized axis \(EA=(er_{A},em_{A})\), given by Pottmann and Wallner (2001) \[\Omega(EL,EA)=er\cdot em_{A}+em\cdot er_{A}.\] (A3) This motivates us to find the corrected \(m_{A}\) as a least-squares approximation for the equations \[0=\Omega(L_{i},A)-\Omega(L_{j},A)=(m_{i}-m_{j})\cdot r_{A}+(r_{i}-r_{j})\cdot m_{A},\] (A4) augmented with the Grassmann-Plucker relation for the line \(A\): \(r_{A}\cdot m_{A}=0\), which might be multiplied by a weight factor if one needs to increase the importance of delivering a real line \(A\). All these equations are linear, because we assume that \(r_{A}\) is known. In composing this overdetermined system of equations, we use the noisy data \((r_{i},m_{i})\) of the measured inliers \(L_{i}\), and the recovered direction \(r_{A}\). 
Notice that we really need to obtain \(r_{A}\) in a previous step, because the equations \(\Omega(L_{i},A)-\Omega(L_{j},A)=0\) are not sufficient to determine \(A\). For example, rulers \(R\) in the second regulus of the same hyperboloid satisfy \(\Omega(EL,R)=0\) for each ruler \(EL\) of the first regulus. Because a set of three rulers \(\{L_{1},L_{2},L_{3}\}\) can serve for a minimal solver that recovers the moment \(m_{A}\) of \(A\), we encounter here another opportunity to integrate a robust consensus procedure by random sampling, now eliminating rulers \(L_{i}\) with moment outliers. Furthermore, to avoid numerical instabilities, we recommend selecting the pairs \(\{L_{i},L_{j}\}\) for the equations \(\Omega(L_{i},A)-\Omega(L_{j},A)=0\) such that we maximize \(||r_{i}-r_{j}||\). **Step 5:** Next, for each of the data lines \(L_{i}\), we can compute its (perpendicular) distance \(\sigma_{i}\) to the recovered axis \(A\), and its closest point \(p_{i}\) on \(A\). Without noise, all these points \(p_{i}\) coincide with the centre \(p\) of the gorge circle of the hyperboloid, and all these distances \(\sigma_{i}\) are equal to its radius \(\sigma\). So, we recover \(p\) and \(\sigma\) as the means of the \(p_{i}\) and the \(\sigma_{i}\) respectively. In Section 7 we have measurements of rulers of several hyperboloids \(\mathcal{H}(L(\alpha),B)\) at our disposal, all sharing the same axis, enabling us to apply the previous steps for each of them, yielding a gorge centre on the recovered axes of these hyperboloids. The mean of these gorge centres provides a stable point \(p_{s}\) on the common axis, which gives rise to a more accurate computation for the moment as \(m_{A}=p_{s}\times r_{A}\). In any case, we use the reconstructed axis \(A\) to approximate the centre \(p\) and the radius \(\sigma\) of the gorge circle \(\mathcal{C}_{p}\), enabling us to recover the gorge points of the exact rulers \(EL_{i}\): \(q_{i}=\mathcal{C}_{p}\cap EL_{i}\). Indeed, the directions \(pq_{i}\) are exactly the quarter turns of the corrected projected directions \(cd_{i}\) (Proposition 1), which together with the condition \(||pq_{i}||=\sigma\) determines the location of \(q_{i}\). **Step 6:** Finally, we correct the moments \(m_{i}\) of \(L_{i}\) by \(q_{i}\times r_{i}\), where we use the corrected \(r_{i}\). ## Disclosures The authors declare no conflicts of interest. ## Data availability A real time demonstration of the virtual experimental setup can be seen in [https://youtu.be/GNTjmJvdTpw](https://youtu.be/GNTjmJvdTpw). The datasets generated and analysed during the current study are publicly available in the github repository [https://github.com/IvanDeBoi/Line-Calculus-on-Hyperboloids](https://github.com/IvanDeBoi/Line-Calculus-on-Hyperboloids). ## Dedication This article is dedicated to the memory of Henry Crapo, who introduced the first author to the fascinating world of projective line geometry.
2306.11738
Water-assisted electron capture exceeds photorecombination in biological conditions
A decade ago, an electron-attachment process called interatomic Coulombic electron capture has been predicted to be possible through energy transfer to a nearby neighbor. It has been estimated to be competitive with environment-independent photorecombination, but its general relevance has yet to be established. Here, we evaluate the capability of alkali and alkaline earth metal cations to capture a free electron by assistance from a nearby water molecule. We introduce a characteristic distance $r_{IC}$ for this energy transfer mechanism in equivalence to the F\"orster radius. Our results show that water-assisted electron capture dominates over photorecombination beyond the second hydration shell of each cation for electron energies above a threshold. The assisted capture reaches distances equivalent to a fifth to seventh solvation shell for the studied cations. The far reach of the assisted electron capture is of significant general interest to the broad spectrum of research fields dealing with low-energy electrons, in particular radiation-induced damage of biomolecules. The here introduced distance measure will enable quantification of the role of the environment for assisted electron attachment.
Axel Molle, Oleg Zatsarinny, Thomas Jagau, Alain Dubois, Nicolas Sisourat
2023-06-15T22:19:45Z
http://arxiv.org/abs/2306.11738v1
# Water-assisted electron capture exceeds photorecombination ###### Abstract A decade ago, an electron-attachment process called _interatomic coulombic electron capture_ has been predicted to be possible through energy transfer to a nearby neighbour. It has been estimated to be competitive with environment-independent photorecombination but its general relevance has yet to be established. Here, we evaluate the capability of alkali and alkaline earth metal cations to capture a free electron by assistance from a nearby water molecule. We introduce a characteristic distance \(r_{\text{IC}}\) for this energy transfer mechanism in equivalence to the Forster radius. Our results show that water-assisted electron capture dominates over photorecombination beyond the second hydration shell of each cation for electron energies above a threshold. The assisted capture reaches distances equivalent to a 5th to 7th solvation shell for the studied cations. The far reach of assisted electron capture is of significant general interest to the broad spectrum of research fields dealing with low-energy electrons, in particular radiation-induced damage of biomolecules. The here introduced distance measure will enable to quantify the role of the environment for assisted electron attachment. 13\({}^{\text{th}}\) December 2022, revised 1st Feb 2023 ## 1 Introduction Water is the vital prerequisite for life on Earth. Understanding its interaction with the respective solute is therefore essential. An important class of solutes is minerals that are dissolved in their ionic form and can be classified thereby. The alkali metals lithium, sodium and potassium are singly positively charged in their dissolved form in water, the alkaline earth metals beryllium, magnesium and calcium are doubly charged. The right oxidation number is important for many chemical reactions within biological organisms. Changing the oxidation state of an ion to a biochemically more advantageous form is not straightforward for the organism. Radiation experienced for instance from the sun, xrays or radioactive material can change directly or indirectly the oxidation state of irradiated elements. The _interatomic coulombic electron capture_ (ICEC) effect is a less-investigated example among the various processes that can lead to a change of oxidation number. Contrary to resonant electron thermalisation, ICEC works in support of an electron attachment by assistance of surrounding atoms and molecules. We show in this work that the mere presence of a solvent water molecule can make a significant contribution to an increased attachment probability of slow electrons to dissolved nutrient ions. This can have a considerable effect on bioavailability of the nutrients as well as on the propagation of free charges through the organism in the wake of initial ionising irradiation. Interatomic coulombic electron capture is a non-local energy transfer process facilitating recombination of a free electron with an ion by ionisation of a neighbour. Schematically depicted in Figure 1, an electron can attach for instance to a magnesium (II) cation by transfer of excess energy to a nearby water molecule. Water then releases another electron in order to rid itself from the energy. For these reaction partners, this process leaves both species positively charged and emits the propagating electron faster than the initial one. 
In the context of a dissolved alkali or alkaline earth metal \(A\) which appears in its ionic form of charge (\(q+1\)) in water, interatomic coulombic electron capture can generally be expressed as \[e^{-}+A^{+1(+q)}\ \ +H_{2}O\longrightarrow A^{0(+q)}\ \ +H_{2}O^{+}+e^{-} \tag{1}\] where \(A\) can be lithium _Li_, sodium _Na_ or potassium \(K\) for alkali metals with \(q=0\), and beryllium _Be_, magnesium _Mg_ or calcium _Ca_ for alkaline earth metals with \(q=1\). Since the capture is aided here by a molecule of the solvent water, the process may equally be called _environment-assisted electron capture_.[1] ICEC is emerging as a research field.[2] However, experimental investigations are still lacking. Furthermore, the computation of ICEC observables is a challenge. The virtual-photon approximation is a robust asymptotic formula that allowed the initial postulation of the existence of ICEC in the year 2009.[1, 3] So far, this is the only approach that is technically able to handle systems relevant to biology. The molecular R-matrix approach is being explored and has been successful for very small molecular systems.[4] It has shown that the virtual-photon approximation provides a lower limit and reproduces the correct trend, but omits overlap and interference of wavefunctions as well as molecular distortions from intermolecular close-range interactions.[5, 6] Beyond the molecular aspect, electronic dynamics of ICEC have been investigated successfully in relation to nanowires and embedded quantum dots.[7, 8, 9, 10, 11] In a mean-field approach, ICEC has been proposed to be possible in macroscopic trapped cold-atom systems.[11] As the process has thereby manifested itself as fundamental in various fields of interest, its original name 'interatomic coulombic electron capture' has been expanded to include 'intermolecular', 'inter-quantum dot', or, in an attempt to generalize the term to any two subsystems, 'interparticle coulombic electron capture', all under the same acronym ICEC. In this work, we investigate the ICEC process in microhydrated cations using the virtual-photon approximation. Based on this approach, our results show that the presence of water molecules significantly increases the electron attachment cross sections to the cations due to ICEC. Furthermore, we introduce a characteristic distance \(r_{\mathrm{IC}}\) for ICEC and demonstrate that the latter dominates over photorecombination beyond the second hydration shell of each cation for electron energies above a threshold. The paper is organized as follows: In section 2, we present the theoretical approach employed in this work. The computational details are provided in section 3. In section 4, we discuss the results for alkali monocations and alkaline earth dications. Finally, the conclusions of this work are reported in section 5. Figure 1: Schematic reaction of interatomic coulombic electron capture (ICEC) by a magnesium (II) cation _Mg\({}^{2+}\)_ through ionisation of a nearby water molecule \(H_{2}O\). The cation recombines with a free electron \(e^{-}\) to form a singly charged **ion**. The excess energy is transferred to the water molecule which is ionised. This leads to a water molecule cation and a free electron emitted from the water molecule with different velocity. ## 2 Theoretical Derivation Intermolecular coulombic electron capture was first investigated in the virtual photon approximation [1, 3]. 
Within this approach and for the systems considered here, the corresponding cross section for the electron capture into a specific state \(i\) of the metal ion is given by \[\sigma_{\mathrm{IC}_{i}}=\frac{3}{4\pi r^{6}}\,\left(\frac{\hbar c}{h\nu}\right)^{4}\sigma_{\mathrm{H\!O}}\,\sigma_{A^{+}\to A_{i}^{0}}. \tag{2}\] In the above equation, \(\sigma_{A^{+}\to A_{i}^{0}}\) is the partial photorecombination cross section of \(A^{+}\) and \(\sigma_{\mathrm{H\!O}}\) is the water photoionisation cross section. The exchanged energy \(h\nu\) is the sum of the free electron energy \(\epsilon\) and the ionisation potential \(V_{A_{i}^{0}}\) of the capturing state, \(h\nu=\epsilon+V_{A_{i}^{0}}\), due to energy conservation. The distance between the two partners is denoted \(r\). The total ICEC cross section is thus \[\sigma_{\mathrm{IC}}=\sum_{i}\sigma_{\mathrm{IC}_{i}} \tag{3}\] where the sum runs over all open ICEC channels. Note that these equations accommodate the possibility of local resonances in the molecular subsystems described by the respective cross sections, \(\sigma_{\mathrm{H\!O}}\) for the assisting partner and \(\sigma_{A^{+}\to A_{i}^{0}}\) for the recombining cation. However, the interactions between the partners are neglected. In its assumption of distinguishable subsystems, the approach can be characterised as an asymptotic formula. Interpreting the partial cross section for assisted capture \(\sigma_{\mathrm{IC}_{i}}\) as a function of the photorecombination cross section \(\sigma_{A^{+}\to A_{i}^{0}}\), their ratio (\(\sigma_{\mathrm{IC}_{i}}/\sigma_{A^{+}\to A_{i}^{0}}\)) expresses the amplification factor arising from the assisting partner, in this case the water molecule. This ratio is, by its nature, a dimensionless numerical coefficient. By rearrangement of the quantities in Eq. (2), \[\left(\frac{\sigma_{\mathrm{IC}_{i}}}{\sigma_{A^{+}\to A_{i}^{0}}}\right)r^{6}=\frac{3}{4\pi}\,\left(\frac{\hbar c}{h\nu}\right)^{4}\sigma_{\mathrm{H\!O}}\doteq(distance)^{6}, \tag{4}\] we identify a quantity representing a length scale purely on the grounds of consistent physical units. Implicitly, the transferred energy \(h\nu\) and consequently the argument of the photoionisation cross section \(\sigma_{\mathrm{H\!O}}(h\nu)\) both depend on the energy released by the specific capturing state \(i\). This fact shall be indicated in the following by explicitly stating the index \(i\) on those quantities. We can interpret the identified distance of Eq. (4) as a characteristic length \[r_{\mathrm{IC}_{i}}:=\left(\frac{3}{4\pi}\,\left(\frac{\hbar c}{h\nu_{i}}\right)^{4}\sigma_{\mathrm{H\!O}}^{(i)}\right)^{\frac{1}{6}} \tag{5}\] for water-assisted electron capture into the capturing state \(i\). The ratio of this parameter \(r_{\mathrm{IC}_{i}}\) over a particular distance \(r\) between the recombining partner and the assisting water molecule, (\(r_{\mathrm{IC}_{i}}/r\)), is then equivalent to the amplification factor (\(\sigma_{\mathrm{IC}_{i}}/\sigma_{A^{+}\to A_{i}^{0}}\)) in terms of the respective cross sections as \[\frac{\sigma_{\mathrm{IC}_{i}}}{\sigma_{A^{+}\to A_{i}^{0}}}=\left(\frac{r_{\mathrm{IC}_{i}}}{r}\right)^{6}. \tag{6}\] In this sense, the present water molecule can be seen as _stimulating_ the recombination of cation and free electron.
The specific characteristic length \(r_{\mathrm{IC}_{i}}\) introduced here thereby indicates the intermolecular distance at which the partial cross section for assisted capture into a specific capturing state \(i\) is of equal magnitude to the partial cross section of photorecombination into the same capture state. Note that this characteristic length \(r_{\mathrm{IC}_{i}}\) does not depend on the photorecombination cross section itself. The additivity of the individual cross sections allows us to similarly define a total characteristic distance \(r_{\mathrm{IC}}\) for water-assisted capture. Introducing the partial photorecombination cross sections as statistical weights \[w_{i}:=\frac{\sigma_{A^{+}\to A_{i}^{0}}}{\left(\sum_{j}\sigma_{A^{+}\to A_{j}^{0}}\right)}\;, \tag{7}\] such that each \(w_{i}\leq 1\) for any \(i\) and their sum \(\sum_{i}w_{i}\equiv 1\) for any electron energy, we find an overall total characteristic length \[r_{\mathrm{IC}}:=\left(\sum_{i}w_{i}\left(r_{\mathrm{IC}_{i}}\right)^{6}\right)^{\frac{1}{6}}. \tag{8}\] As a consequence, the competitive impact of the overall water-assisted electron capture with respect to the environment-independent photorecombination can be expressed by the ratio \[\frac{\sigma_{\mathrm{IC}}}{\sigma_{\mathrm{PR}}}=\left(\frac{r_{\mathrm{IC}}}{r}\right)^{6} \tag{9}\] between the total ICEC cross section and the total photorecombination cross section. If the capturing cation is closer to the water molecule than the distance \(r_{\mathrm{IC}}\), which is a function of the incident energy \(\epsilon\), then ICEC dominates over environment-independent photorecombination. The quantum yield, or efficiency, of the environment-assisted electron capture with respect to the total electron capture from both processes is thus distance-dependent as \[\eta_{\mathrm{IC}}=\frac{\sigma_{\mathrm{IC}}}{\sigma_{\mathrm{PR}}+\sigma_{\mathrm{IC}}}=\frac{r_{\mathrm{IC}}^{6}}{r^{6}+r_{\mathrm{IC}}^{6}}\;. \tag{10}\] The characteristic distance \(r_{\mathrm{IC}}\) can be interpreted analogously to the Forster radius in the case of intermolecular energy transfer between two fluorescent molecules. Known as _Forster resonant energy transfer_ (FRET) for bound electronic excitations, the same distance dependence with respect to a characteristic length arises there.[12] The characteristic distance \(r_{\mathrm{IC}}\) is thereby exactly that distance between the electron captor and its reaction partner at which the efficiency \(\eta_{\mathrm{IC}}\) of ICEC measures 50%. This means the partner-assisted capture cross section at this distance has equal magnitude to that of the environment-independent photorecombination. Note that this is a definition for a single reaction partner. In an environment with multiple independent partner molecules counted with index \(N\), each will contribute to the overall many-partner ICEC cross section in the environment, \(\sum_{N}\sigma_{\mathrm{IC}}\left(r_{N}\right)\), depending on its individual distance \(r_{N}\) from the energy donor. The introduced characteristic distance \(r_{\mathrm{IC}}\) remains a pairwise measure independent of the individual partner distance. In the following, we consider the case of only one partner molecule. The reported cross sections and ICEC radius therefore represent lower limits. While photorecombination data is sometimes hard to come by, photoionisation cross sections are often well tabulated.
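The combination rules of Eqs. (5), (8) and (10) translate directly into a few lines of code. The following minimal Python sketch combines state-specific distances \(r_{\mathrm{IC}_{i}}\), weighted by partial photorecombination cross sections, into the total \(r_{\mathrm{IC}}\) and evaluates the distance-dependent efficiency \(\eta_{\mathrm{IC}}\); the input numbers are purely illustrative placeholders and not results of this work.

```python
def total_r_ic(r_ic_states_nm, pr_cross_sections):
    """Total characteristic distance of Eq. (8) in nm.

    r_ic_states_nm    -- one r_IC_i per open capture channel, in nm
    pr_cross_sections -- partial photorecombination cross sections sigma_{A+ -> A_i^0}
                         (any common unit; only their ratios enter the weights w_i of Eq. (7))
    """
    total = sum(pr_cross_sections)
    weights = [s / total for s in pr_cross_sections]            # Eq. (7)
    return sum(w * r ** 6 for w, r in zip(weights, r_ic_states_nm)) ** (1.0 / 6.0)


def efficiency(r_ic_nm, r_nm):
    """ICEC quantum yield eta_IC of Eq. (10) at partner distance r."""
    return r_ic_nm ** 6 / (r_nm ** 6 + r_ic_nm ** 6)


# Illustrative example with two open capture channels:
r_total = total_r_ic([1.30, 1.05], [0.7, 0.3])
print(f"r_IC = {r_total:.2f} nm, eta_IC at r = 0.5 nm: {efficiency(r_total, 0.5):.3f}")
```

In practice, the partial photorecombination cross sections entering the weights are themselves obtained from photoionisation data via the Milne relation, as described next.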
Let \(g_{A^{+}}\) therefore denote the statistical weight of the capturing cation, describing the number of electronic states equivalent to the initial state, and let \(g_{A_{i}^{0}}\) be the equivalent multiplicity of the \(i\)th capturing state; then the incident electron energy and the emitted photon energy relate the photorecombination cross section \(\sigma_{A^{+}\rightarrow A_{i}^{0}}\) to the photoionisation cross section \(\sigma_{A^{+}\leftarrow A_{i}^{0}}\) through the _principle of detailed balancing_,[13] \[\left(2m_{e}c^{2}\right)\epsilon\,g_{A^{+}}\;\sigma_{A^{+}\rightarrow A_{i}^{0}}=\left(h\nu_{i}\right)^{2}g_{A_{i}^{0}}\,\sigma_{A^{+}\leftarrow A_{i}^{0}}\;, \tag{11}\] known in this case as the Milne relation.[14] This relation has also been used by Gokhberg and Cederbaum to reformulate the partial cross section for assisted electron-capture in terms of the more accessible photoionisation cross section of (excited) state \(i\).[1] The discussed quantities represent functions of the energy, either directly through the continuum energy \(\epsilon\) of the captured electron and the ionisation threshold \(V_{A_{i}^{0}}\) associated with the capturing state \(i\), or indirectly through their energy difference transferred as photon energy \(h\nu\). Here, \(V_{A_{i}^{0}}\) represents a discrete set of energies and \(\epsilon\) a particular value within the energy continuum. Numerically, however, \(\epsilon\) is usually represented by a finite discrete collection of values together with partial photorecombination cross sections at that value, \(\sigma_{A^{+}\to A^{0}_{i}}(\epsilon)\), or as associated photoionisation cross sections \(\sigma_{A^{+}\leftarrow A^{0}_{i}}(\epsilon+V_{A^{0}_{i}})\). In the following, we therefore indicate the composite index \((i,\epsilon)\) to remind the reader of the implicit energy dependence on both the incident-electron energy \(\epsilon\) and the ionisation threshold \(V_{A^{0}_{i}}\). Taking care of the interdependence of the photoionisation cross sections through the exchanged energy \(h\nu_{i,\epsilon}\), the total characteristic distance may be estimated as a weighted sum \[r_{\mathrm{IC}_{\epsilon}}=\left(\frac{3\,(\hbar c)^{4}}{4\pi}\,\sum_{i}u_{i,\epsilon}\,\frac{\sigma_{H\!O}^{(i,\epsilon)}}{(h\nu_{i,\epsilon})^{4}}\right)^{\frac{1}{6}} \tag{12}\] where the statistical weights in terms of partial photoionisation cross sections take the form \[u_{i,\epsilon}=\frac{(h\nu_{i,\epsilon})^{2}\,g_{A^{0}_{i}}\;\sigma_{A^{+}\leftarrow A^{0}_{i}}}{\sum_{j}\,(h\nu_{j,\epsilon})^{2}\,g_{A^{0}_{j}}\;\sigma_{A^{+}\leftarrow A^{0}_{j}}}\,. \tag{13}\] These weights are in themselves independent of the reaction partner and mix the available capturing states to form the sum \(\sum_{i}u_{i,\epsilon}\equiv 1\) for any incident electron energy \(\epsilon\), while the fraction of the ionisation cross section \(\sigma_{H\!O}^{(i,\epsilon)}\) of the reaction partner over the fourth power of the transferred energies \(h\nu_{i,\epsilon}\) determines the scale of the characteristic distance \(r_{\text{IC}}\). Although Eq. (7) appears simpler, it represents the same quantity as Eq. (13), linked through the Milne relation, and the latter has been used in conjunction with the available photoionisation data to compute \(r_{\text{IC}}\).
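Since Eqs. (12) and (13) are the operational formulas used with tabulated photoionisation data, a possible way to organise their evaluation is sketched below in Python. All channel inputs (thresholds, multiplicities, cross sections) are hypothetical placeholders, not values from the databases described in the next section.

```python
import math

HBAR_C_EV_NM = 197.327  # hbar*c in eV*nm
MB_TO_NM2 = 1e-4        # 1 Mb = 1e-4 nm^2

def r_ic_total(epsilon_ev, channels, sigma_water_mb):
    """Total characteristic distance r_IC(epsilon) in nm via Eqs. (12)-(13).

    epsilon_ev     -- kinetic energy of the incident electron, in eV
    channels       -- list of (V_i [eV], g_i, sigma_pi_i) per capturing state i, where
                      sigma_pi_i(h_nu) is that state's photoionisation cross section in Mb
    sigma_water_mb -- callable returning the water photoionisation cross section in Mb
                      at photon energy h_nu [eV] (zero below the 12.6 eV threshold)
    """
    terms = []
    for v_i, g_i, sigma_pi in channels:
        h_nu = epsilon_ev + v_i                       # exchanged energy h*nu_{i,eps}
        u_unnorm = h_nu ** 2 * g_i * sigma_pi(h_nu)   # numerator of Eq. (13)
        f_i = sigma_water_mb(h_nu) * MB_TO_NM2 / h_nu ** 4
        terms.append((u_unnorm, f_i))
    norm = sum(u for u, _ in terms)
    if norm == 0.0:                                   # no channel contributes at this energy
        return 0.0
    weighted = sum(u / norm * f for u, f in terms)    # the weighted sum inside Eq. (12)
    return (3.0 * HBAR_C_EV_NM ** 4 / (4.0 * math.pi) * weighted) ** (1.0 / 6.0)
```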
### Relevant Quantitative Limits In the following, we examine the upper bound in magnitude, as well as the near-threshold and the large-energy behaviour of the introduced characteristic distance \(r_{\text{IC}}\). These findings are of value to estimate more generally the viability of a potential experimental investigation: What length scale is to be expected, how does it behave for very low electron energies, what is to be expected for high electron energies? These questions arise immediately when evaluating whether a certain experimental setup may allow to measure ICEC. 1. **Upper bound:** The specific characteristic distance \(r_{\text{IC}_{i}}\) for assisted capture into state \(i\) as given by Eq. (5) is independent of the electron-capturing species, i.e. it solely depends on the photoionisation cross section \(\sigma_{H\!O}\) of the assisting water molecule as function of photon energy \(h\nu\). In general, this photoionisation cross section is a finite positive quantity. It vanishes for energies below the ionisation threshold. For the water molecule, this ionisation threshold is 12.6 eV [15]. Similarly, the photoionisation cross section also tends to approach zero with increasing energy in the high-energy regime. This implies, there exists a global maximum. More particularly, we are interested in the global maximum of the (auxiliary) function \[f(h\nu):=\frac{\sigma_{H\!O}(h\nu)}{(h\nu)^{4}}\,,\text{ since }r_{\text{IC}_{i}}(h\nu)=\left(\frac{3(\hbar c)^{4}}{4\pi}\,f(h\nu)\right)^{ \frac{1}{6}}\,.\] (14) We can therefore define the length \[r_{\text{max}}:=\left(\frac{3(\hbar c)^{4}}{4\pi}\,\max_{h\nu}\left[f(h\nu) \right]\right)^{\frac{1}{6}}\] (15) through the global upper bound of the function \(f\). This upper bound is by its definition independent of the kinetic energy of the electron incident on the cation. Independent of the particular index \(i\) for the electron-capturing state, any specific characteristic distance \(r_{\text{IC}_{i}}\) is thereby bound from above, as \[r_{\text{IC}_{i}}(h\nu)\leq r_{\text{max}}\,.\] (16) This has a direct implication on the total characteristic distance which represents according to Eq.(8) a weighted sum over the bound quantities \(\{r_{\text{IC}_{i}}\}\) as \[(r_{\text{IC}})^{6}=\sum_{i}w_{i}\,(r_{\text{IC}_{i}})^{6}\leq(r_{\text{max}}) ^{6}\sum_{i}w_{i}\,.\] (17) By their definition, the sum over weights \(\{w_{i}\}\) is unity at any (photon) energy where at least one electron-capture channel is open, \[\sum_{i}w_{i}=\begin{cases}1,\,\text{if }\exists i\in\mathbb{N}\mid w_{i}\neq 0 \\ 0,\,\text{if }w_{i}=0\;\forall\;i\end{cases}\quad.\] (18) Hence, the total characteristic distance \(r_{\text{IC}}\) is bound from above, \[r_{\text{IC}}(\epsilon)\leq r_{\text{max}}\,,\] (19) for all energies by the same quantity \(r_{\text{max}}\) as each individual specific distances \(r_{\text{IC}_{i}}\) associated with electron-capture state \(i\). Note that \(r_{\text{max}}\) is independent of the particular electron-captor. This suggests \(r_{\text{max}}\) as an easily accessible quantity to estimate the length scale associated to assisted electron-capture for any given assisting partner species. The bound on the length-scale of assisted capture is defined solely by the stimulating partner. 2. **Low energies:** In the context of energy exchange through assisted-electron capture, the photoionisation cross sections of electron captor and assisting partner are linked through the exchanged photon energy \(h\nu\). 
This quantity is however dependent on the kinetic energy of the incident electron \(\epsilon\), which is a continuous degree of freedom, as well as the ionisation potential \(V_{A_{i}^{0}}\) of the particular capturing state \(i\), which is a discrete degree of freedom, \[h\nu=\epsilon+V_{A_{i}^{0}}\,.\] (20) We do not have direct control over which state captures the incident electron, but rather have to be aware that, in the general case, there is a set of capturing states \(\{i\}\) with a discrete set of transferred energies \(\{h\nu_{i,\epsilon}\}\) for any fixed incident energy \(\epsilon\). In order to meet the ionisation threshold \(V_{\text{\it H}O}\) of the assisting partner, the kinetic energy of the free electron needs to fulfil the criterion \[\epsilon=h\nu-V_{A_{i}^{0}}\geq V_{\text{\it H}O}-V_{A_{i}^{0}}\] (21) for at least one capture state. The sign of the difference in ionisation potentials indicates whether the energy transfer is endo- or exothermic, in other words, whether the energy-accepting electron on the assisting partner is emitted with a lower, or respectively higher, kinetic energy than \(\epsilon\). We assume, without loss of generality, that the capturing-state index \(i\) orders the states by decreasing ionisation potential, \(V_{A_{0}^{0}}\geq V_{A_{1}^{0}}\geq\dots\). Then \(i=0\) marks an electron-capture into the ground state and the energy difference \[V_{\text{\it H}O}-V_{A_{0}^{0}}=:\epsilon_{0}\] (22) represents the energy threshold for assisted electron capture. Depending on the choice of reaction partners, this quantity can be positive, i.e. \(\epsilon_{0}>0\). Then the free electron needs a kinetic energy of at least \(\epsilon_{0}\) to allow for assisted capture to occur. This is the case for the alkali cations \(Li^{+}\), \(Na^{+}\), and \(K^{+}\) where \(\epsilon_{0}\) ranges from 7.16 eV [16, 17, 18] to 8.20 eV [17, 19, 20, 21, 22, 23] with respect to water. [15, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34] In the case where \(\epsilon_{0}\) vanishes, the energy transfer between captor and assisting energy acceptor is energy neutral. Similarly, the energy threshold can be negative, i.e. \(\epsilon_{0}<0\). This is the case for electron-capture by the alkaline earth cations \(Be^{2+}\) and \(Mg^{2+}\) in assistance from water. [16, 17, 35, 36] In that case, even an infinitely slow free electron, i.e. \(\epsilon=0\), allows for an assisted capture into at least one capturing state. Moreover, a negative threshold suggests that even pseudo-free electrons, for example from Rydberg states with ionisation potentials smaller than \(|\epsilon_{0}|\), may energetically be captured through the mechanism of assisted electron capture. In that respect, ICEC merges into a related process known as electron-transfer-mediated decay (ETMD(3)). [37] For an advantageous choice of both reaction partners, the characteristic distance may already have a significant size at vanishing electron energy, \(\epsilon=0\). That means there is in that case at least one capture state \(i\) for which \(u_{i,\epsilon}>0\). Each state-specific characteristic distance is governed by the same function \(f(h\nu)\) of the transferred photon energy \(h\nu\). In the limit of an infinitely slow incident electron, where \(\epsilon=0\), the transferred energy \(h\nu_{i}\) reduces to the ionisation potential \(V_{A_{i}^{0}}\) of the respective capturing state \(i\).
The characteristic distance for assisted capture is purely determined by the water molecule's photoionisation cross section at the energy of the specific ionisation potential as \[r_{\mathrm{IC}_{i}}(\epsilon=0)=\left(\frac{3(\hbar c)^{4}}{4\pi}\,f(V_{A_{i}^{ 0}})\right)^{\frac{1}{6}}\,.\] (23) Since \(f\) is positive and bound from above, there is a particular capture state \(k\) which represents the biggest value in the discrete set of function values \(\{f(V_{A_{i}^{0}})\}\), such that \[f(V_{A_{i}^{0}})\leq\sup_{i}\left\{f(V_{A_{i}^{0}})\right\}=f(V_{A_{k}^{0}}) \text{ for a particular }k\in\{i\}.\] (24) The specific characteristic distance for every state is therefore limited by that distance \(r_{\mathrm{sup}}=:r_{\mathrm{IC}_{k}}(\epsilon=0)\) of the identified capture state \(k\) such that for any state \(i\) \[r_{\mathrm{IC}_{i}}(\epsilon=0)\leq r_{\mathrm{sup}}\leq r_{\mathrm{max}}\,.\] (25) The total characteristic distance for assisted electron-capture being a weighted sum over the individual capturing states is thereby also bound. Since \[\left(r_{\mathrm{IC}}(\epsilon=0)\right)^{6}=\sum_{i}w_{i}\left(r_{\mathrm{IC }_{i}}\right)^{6}\leq\left(r_{\mathrm{sup}}\right)^{6}\,\sum_{i}w_{i}\,,\] (26) the total characteristic distance is bound by the single biggest contribution \(r_{\mathrm{sup}}\) in the discrete set \(\{r_{\mathrm{IC}_{i}}\}_{i}\), so that also \[r_{\mathrm{IC}}(\epsilon=0)\leq r_{\mathrm{sup}}\leq r_{\mathrm{max}}\,.\] (27) 3. **High energies:** In the context of this work, the regime of high energies is reached if the incident electron energy \(\epsilon\) is much greater than the largest ionisation threshold \(V_{A_{0}^{0}}\) in the energy-ordered set of thresholds \(\{V_{A_{0}^{0}}\}\) respective to the discrete quasi-infinite set of capture states \(i\). The exchanged energy between electron-captor and assisting partner molecule, \[h\nu_{0}(\epsilon\gg V_{A_{0}^{0}})=\epsilon+V_{A_{0}^{0}}\approx\epsilon+V_{ A_{1}^{0}}\approx\ldots\approx\epsilon+V_{A_{i}^{0}}+\ldots\approx\epsilon\] (28) is therefore approximately equal to the pure kinetic energy of the incident electron. This implies that in the high energy limit, every specific characteristic distance \(r_{\mathrm{IC}_{i}}\) asymptotically approaches the same function in energy \(r_{\infty}(\epsilon)\). The overall asymptotic limit is thereby also given by \[\lim_{\{V_{A_{i}^{0}}\}\ll\epsilon}r_{\mathrm{IC}}(\epsilon)=r_{\infty}( \epsilon)=\left(\frac{3(\hbar c)^{4}}{4\pi}\,f(\epsilon)\right)^{\frac{1}{6}} \leq r_{\mathrm{max}}\,.\] (29) That implies that the total characteristic distance for assisted electron-capture becomes independent of the specific captor in the high energy regime. ## 3 Methods We have employed the virtual-photon approximation to compute the ICEC cross sections [1, 3]. This method assumes distinctly separated subsystems. In consistence with this approximation, all quantities used for cations and water molecules are therefore for the systems in the gas phase (i.e. isolated). The evaluation of the characteristic distance and total cross section requires the set of photoionisation data for ground and excited states of the captor according to Eq. (12). The results for the characteristic distance are therefore to be seen as an asymptotic result. We stress again that they include possible local resonances as far as they are covered by the respective photoionisation cross section of the individual reaction partner. 
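Before turning to the specific databases, note that the captor-independent bound \(r_{\mathrm{max}}\) of Eq. (15) can be estimated from any tabulated photoionisation cross section of the assisting partner. A minimal Python sketch is given below; the tabulated rows are illustrative placeholders only and are not entries of the database used in this work.

```python
import math

HBAR_C_EV_NM = 197.327  # hbar*c in eV*nm
MB_TO_NM2 = 1e-4        # 1 Mb = 1e-4 nm^2

def r_max_from_table(photon_energies_ev, sigmas_mb):
    """Upper bound r_max of Eq. (15), estimated on the tabulated grid, in nm."""
    best = max(s * MB_TO_NM2 / e ** 4 for e, s in zip(photon_energies_ev, sigmas_mb))
    return (3.0 * HBAR_C_EV_NM ** 4 / (4.0 * math.pi) * best) ** (1.0 / 6.0)

# Placeholder grid rising from the water ionisation threshold near 12.6 eV:
energies = [12.6, 13.0, 13.4, 14.0, 16.0, 18.0]
sigmas = [0.5, 4.0, 8.0, 9.5, 14.0, 18.0]
print(f"r_max estimate: {r_max_from_table(energies, sigmas):.2f} nm")
```

A finer energy grid, with interpolation between the tabulated points as described below, sharpens this estimate.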
Below we report the databases employed in this work and the procedure followed to interpolate and extrapolate the missing data.

### Partial Photoionisation Cross Sections of Metals

To our current knowledge, the most extensive consistent set of excited-state ionisation cross sections of atoms and ions is provided by the topbase[16] dataset, which is a purely theoretical database of R-matrix calculations. It allows us to gather data for 25 capturing states of the lithium cation \(Li^{+}\), 33 capturing states for the sodium cation \(Na^{+}\), 25 capturing states for the beryllium cation \(Be^{2+}\), 33 capturing states for the magnesium cation \(Mg^{2+}\), and 36 states for the calcium cation \(Ca^{2+}\). Potassium is not available within this database. Computations with the Dirac-based B-spline R-matrix approach[38, 39] were carried out in 2010 outside the topbase project,[22] and were experimentally confirmed later.[23] This allows us to use consistent data for 14 capturing states of the potassium cation \(K^{+}\), which would otherwise not be possible.

### Photoionisation Cross Sections of Water

For the assisting water molecule, photoionisation cross section data for the ground-state ionisation is sufficient. It is advantageous if the photoionisation cross section is provided as a function of photon energy \(h\nu\) instead of emitted electron energy. This allows us to treat the ionisation of the water molecule as a single function, in accord with Eq. (12). The Leiden database for photodissociation and photoionisation of astrophysically relevant molecules has been used.[24] Within this database, data for the water molecule stem from a considerable number of experimental sources.[24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34] This dataset has been used to arrive at an estimate for the order of the characteristic distance of the ICEC process assisted by the water molecule according to Eqs. (12) & (15). Therefore, we expect a characteristic distance of the order \[\mathcal{O}(r_{\mathrm{IC}})\leq\max_{h\nu}\left[\left(\frac{3\left(\hbar c\right)^{4}}{4\pi}\,\frac{\sigma_{H\!O}(h\nu)}{(h\nu)^{4}}\right)^{\frac{1}{6}}\right]=1.0091656\,\frac{\mathrm{nm~{}Ryd}^{\frac{4}{6}}}{\mathrm{Mb}^{\frac{1}{6}}}\,\,\max_{h\nu}\left(\frac{\sigma_{H\!O}(h\nu)}{(h\nu)^{4}}\right)^{\frac{1}{6}}\,, \tag{30}\] which shows that a natural choice of units for the characteristic distance is nanometres when energies are handled in Rydberg and cross sections in megabarn. These are thus the employed units for the calculations, even though we present energies in electron volts throughout the discussion. Note that this estimate is independent of the electron captor itself. As a consequence of the available data set, the characteristic distance \(r_{\mathrm{IC}}\) for water-assisted ICEC is at most \(r_{\mathrm{max}}=1.45306\) nm, which corresponds to a photon energy of about 13.41 eV.[24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34]

### Interpolation and Extrapolation

The tables for the various capturing states may provide data points at differing incident electron energies, which lead to mismatching photon energies in the data table for the water molecule. Interpolation between the given data points is therefore necessary. We have interpolated linearly to intermediate points. Where data was missing at the upper end of the energy range, a capturing state's photoionisation cross section has been extrapolated from the last 10% of the available data, but from at least the last 10 data points.
For simplicity and in accordance with a high-energy power law of proportionality to \(\epsilon^{-3.5-\ell}\),[40] we extrapolated unavailable data by a simple power law as \(\ln\sigma=a+b\ln\epsilon\) with extrapolation parameters \(a\) and \(b\). ## 4 Results and Discussion The respective characteristic distances \(r_{\mathrm{IC}}\) as function of incident electron energy \(\epsilon\) are depicted in Figure 2 for assisted electron capture by alkali cations of lithium (I) \(Li^{+}\), sodium (I) \(Na^{+}\), and potassium (I) \(K^{+}\), as well as alkaline earth cations beryllium (II) \(Be^{2+}\), magnesium (II) \(Mg^{2+}\), and calcium (II) \(Ca^{2+}\) through ionisation of a water molecule. Key quantities are summarized in Table 1. To compare the reach of environment assisted electron capture with respect to the respective dimension of hydration shells, the total ICEC radius is depicted in multiples of the first hydration shell radius in Figure 3. We stress that the ICEC radius represents a single-acceptor quantity. In a liquid environment, the contribution from each partner molecule adds to the total cross section. The interpretation of the ICEC radius in terms of the process's reach is therefore only a lower limit within the virtual photon approximation. ### Alkali Monocations With an energetic threshold between 7.16 eV (\(Li^{+}\)) and 8.20 eV (\(K^{+}\)) to overcome before assisted electron capture opens, the characteristic distance \(r_{\rm IC}\) shows a sharp onset between 1.15 nm (\(Li^{+}\)) and 0.90 nm (\(K^{+}\)) for capture into the ground state \(s\) shell with a clear step up to 1.32 nm for \(Li^{+}\) and 1.28 nm for \(K^{+}\) at the threshold of the lowest \(p\) shell. Each of these opening onsets of assisted electron capture show clearly fine fluctuations within the first eV. This is the signature of molecular resonances in the water photoionisation cross section. The maximum of the characteristic distance between 1.39 nm (\(Li^{+}\)) to 1.43 nm (\(K^{+}\)) reaches very close to its captor-independent analytical limit \(r_{\rm max}\) of 1.45 nm set solely by the assisting water molecule. Its position between 12.45 eV (\(Li^{+}\)) and 12.24 eV (\(K^{+}\)) coincides roughly with its respective continuum threshold energy. Above this threshold, the individual contributions of capturing states as well as the total characteristic distance clearly wear the large-energy signature dictated by the photoionisation function of the assisting water molecule. While decreasing with increasing energy, the ICEC radius remains above 1.23 nm for energies up to 18 eV. This is large in comparison to the respective hydration shells. Intermolecular coulombic electron capture reaches thereby significantly beyond the third solvation shell for each alkali metal already from the opening of the channel. The absolute assisted-capture radius is relatively similar across the elements but the first hydration shell grows with the atomic number. This leads to differences in the ICEC radius relative to the first solvation shell radius \(r_{1}\). At the opening plateau of assisted ground state capture (\(2s\)), it reaches beyond 6.1 \(r_{1}\) for lithium. It reaches 4.6 \(r_{1}\) for sodium at the opening plateau of assisted ground-state capture into 3\(s\), and still reaches beyond 3.5 \(r_{1}\) for the much larger potassium for the opening plateau of ground state capture into 4\(s\). 
Figure 3 shows the respective total ICEC radius in units of the first solvation shell radius \(r_{1}\) for the investigated alkali monocations as well as alkaline earth dications. With additional channels open at higher incident energies, all three capturing cations show ICEC radii larger than 5.0 \(r_{1}\) up to 17 eV.

\begin{table} \begin{tabular}{c c c c c} \hline \hline species & threshold energy & optimal energy & maximal radius & first hydration shell \\ \(A^{+}\) & \(\epsilon_{0}=V_{\!H\!O}-V_{A_{0}^{0}}\) [eV] & \(\epsilon\) where \(r_{\rm IC}=\) max. [eV] & \(\max\limits_{\epsilon}r_{\rm IC}\) [nm] & \(r_{1}\) [nm] \\ \hline \(Li^{+}\) & 7.16 eV & 12.45 eV & 1.39 nm & 0.19 nm [41, 42, 43, 44, 45] \\ \(Na^{+}\) & 7.50 eV & 12.52 eV & 1.41 nm & 0.24 nm [41, 46] \\ \(K^{+}\) & 8.20 eV & 12.24 eV & 1.43 nm & 0.26 nm [41] \\ \(Be^{2+}\) & -5.64 eV & 12.02 eV & 1.25 nm & 0.17 nm [47, 48] \\ \(Mg^{2+}\) & -2.32 eV & 12.14 eV & 1.33 nm & 0.20 nm [41] \\ \(Ca^{2+}\) & 0.82 eV & 5.56 eV & 1.40 nm & 0.23 nm [41] \\ \hline \hline \end{tabular} \end{table} Table 1: Key values of the distance \(r_{\rm IC}\) for water-assisted electron capture.

### Alkaline Earth Dications

The dications of alkaline earth metals show an ICEC radius above 1 nm for a significantly larger energy range than the alkali monocations, and more variation in their ICEC radius behaviour among each other, owing to their larger charge and spread-out energetic thresholds of the capture channels. The difference between the ionisation potentials of beryllium (II) and water is -5.64 eV. This indicates that the assisted capture is in fact already open for an incident electron with vanishing kinetic energy. The same holds for magnesium (II), while calcium (II) has an assisted-capture threshold at 0.82 eV, which is close to zero but positive (cf. Table 1). As a result, \(Be^{2+}\) already allows assisted capture into the \(2s\) and \(2p\) shells at vanishing incident energies, so that the ICEC radius already displays the decaying tail of \(\sigma_{\!H\!O}/(h\nu)^{4}\) with increasing energy. This trend is interrupted by distinct steps upwards due to channel openings for capture into higher shells. The magnesium (II) cation with its intermediate threshold of -2.32 eV allows only \(3s\) capture at vanishing incident energy, but the second assisted capture channel into \(3p\) opens at 2.05 eV. This produces an overall increase of the ICEC radius with increasing incident energy up to its maximum of 1.33 nm at 12.14 eV. The characteristic distance for assisted capture by a calcium (II) cation looks arguably most similar to that of the alkali monocations in comparison to the other two alkaline earth dications: It shows a positive energetic threshold for ground state capture, a significant stepwise increase with the opening of the second capture channel, here \(3d\) capture at 3.13 eV, and a global maximum reached closely thereafter, here at 5.56 eV. The absolute maximum ICEC radius of 1.40 nm is closer to those reached by the alkali monocations and only 3.33% short of the limiting \(r_{\rm max}\), despite the curve's shift to lower incident energies.

Figure 2: Characteristic distance \(r_{\rm IC}\) of ICEC by alkali and alkaline earth cations in assistance from a water molecule as a function of incident electron energy \(\epsilon\). Key numbers of the energy threshold, maximal characteristic distance and its energetic position are summarized in Table 1.
The solid line indicates the total characteristic distance in nanometres and relative to the respective hydration shell radii \(r_{n}\). The five strongest contributing capture channels are indicated by dashed lines. Their channel openings have been marked, as well as some higher channel openings and the continuum threshold for assisted capture (\(\infty\)). The maximal allowed characteristic distance \(r_{\rm max}\) of 1.45306 nm is exclusively determined by the assisting water molecule.

With respect to the first hydration shell radius \(r_{1}\), ICEC on alkaline earth dications has a large reach beyond 4.7 \(r_{1}\) over an energy range of more than 14.5 eV, starting from the thresholds at which at least two assisted-capture channels are open, owing to their higher charge and tighter hydration radius compared to the alkali monocations. While the alkaline earth dications fall short of the maximal allowed \(r_{\rm max}\) when comparing their absolute characteristic distance to their alkali monocationic counterparts, they actually reach further with respect to their respective hydration shell radius \(r_{1}\) (cf. Figure 3): The ICEC radius \(r_{\rm IC}\) reaches up to 7.35 \(r_{1}\) for \(Be^{2+}\), up to 6.65 \(r_{1}\) for \(Mg^{2+}\), and up to 6.09 \(r_{1}\) for \(Ca^{2+}\). Particularly above 13 eV incident electron energy, the characteristic distances with respect to the first hydration shell radius are comparable for the pairs \(Li^{+}\) and \(Be^{2+}\) as well as \(K^{+}\) and \(Ca^{2+}\), and show the predicted high-energy tail behaviour of slow decay with increasing electron energy. The assisted capture radius \(r_{\rm IC}\) still reaches significantly beyond 4.5 \(r_{1}\) at 18 eV for all the investigated metal cations.

## 5 Conclusions

In this work, we introduced the characteristic distance \(r_{\rm IC}\) for interparticle coulombic electron capture (ICEC) as a measure of quantum efficiency with respect to environment-independent photorecombination. In analogy to the Forster radius for Forster resonant energy transfer, this \(r_{\rm IC}\) allows the reach of ICEC to be interpreted. We have furthermore presented experimentally relevant limits that can easily be evaluated to classify the significance of ICEC for any given pair of electron captor and assisting partner. Notably, the ICEC radius as a function of incident electron energy is mainly shaped by the photoionisation cross section of the assisting partner molecule. The reach of ICEC was evaluated for the bio-relevant alkali monocations \(Li^{+}\), \(Na^{+}\), \(K^{+}\) and alkaline earth dications \(Be^{2+}\), \(Mg^{2+}\), and \(Ca^{2+}\) with assistance from a water molecule. Assisted capture of a free electron dominates significantly over environment-independent photorecombination even for distances between the reaction partners far beyond the third hydration shell radius. The assisted capture radius \(r_{\rm IC}\) exceeded even four times the respective first hydration shell radius \(r_{1}\) over an energy range of at least 10 eV. The maximum reach ranges between 5.5 \(r_{1}\) for \(K^{+}\) and 7.4 \(r_{1}\) for \(Be^{2+}\). The alkaline earth metal dications \(Be^{2+}\), \(Mg^{2+}\) and \(Ca^{2+}\) are active at low incident energy. The ICEC reaction pathway is here already open at vanishing incident energy for the smaller dications and opens for calcium at 0.82 eV.
The investigated alkali monocations \(Li^{+}\), \(Na^{+}\) and \(K^{+}\) show a clear energetic threshold between 7.16 eV and 8.20 eV for the incident electron to be captured through the ICEC channel. The introduced measures \(r_{\mathrm{IC}}\) and \(r_{\mathrm{max}}\) for the reach of ICEC will ease the design of future dedicated experimental measurements. The choice of the assisting partner molecule in particular is essential. In other environmental contexts, the introduced maximal value \(r_{\mathrm{max}}\) can be estimated quickly: while it is 1.453 nm for an assisting water molecule, it would be 1.635 nm for an assisting ethanol molecule and 2.357 nm in the case of a carbon dioxide molecule as assisting partner. Species with a higher photoionisation cross section are therefore expected to assist ICEC over even longer distances. The far reach of assisted electron capture has considerable implications for our understanding of reactions induced by slow electrons in any environment, but particularly in the context of propagating radiation damage in biological systems: Slow secondary electrons induced by radiation damage in biological systems predominantly recombine with solvated cations through water-molecule-assisted capture rather than via photorecombination. This occurs within at least the first and second hydration shells when the threshold energy is met. It reduces the cation's bio-chemical availability. A tertiary electron emerges from the assisting water molecule with a different energy. While the number of free electrons remains constant during this process, the kinetic energy changes according to the difference in ionisation thresholds.

Figure 3: Characteristic distances \(r_{\rm IC}\) of ICEC by alkali and alkaline earth metals in assistance by a water molecule, in units of the first solvation shell radius \(r_{1}\). The calcium dication \(Ca^{2+}\), for instance, has an energetic threshold of 0.82 eV for the incident free electron to allow an assisted electron capture by energy transfer to a water molecule. From this energetic onset, ICEC opens for capture into the \(4s\) shell. The capture through assistance by water shows a strong gain in reach and dominates over photorecombination up to 2.69 \(r_{1}\) above 0.95 eV, from where it shows a plateau. As the additional assisted-capture channel into \(3d\) opens at 3.13 eV, the assisted-capture radius \(r_{\rm IC}\) increases steeply with energy to reach up to 5.73 \(r_{1}\) at 3.26 eV. From there it presents a shallow increase with energy until reaching the maximum of 6.11 \(r_{1}\) at 5.56 eV, from where it slowly descends with increasing energy. It still reaches up to 4.65 \(r_{1}\) at 18 eV incident electron energy.
2304.08410
About the Expressive Power and Complexity of Order-Invariance with Two Variables
Order-invariant first-order logic is an extension of first-order logic FO where formulae can make use of a linear order on the structures, under the proviso that they are order-invariant, i.e. that their truth value is the same for all linear orders. We continue the study of the two-variable fragment of order-invariant first-order logic initiated by Zeume and Harwath, and study its complexity and expressive power. We first establish coNExpTime-completeness for the problem of deciding if a given two-variable formula is order-invariant, which tightens and significantly simplifies the coN2ExpTime proof by Zeume and Harwath. Second, we address the question of whether every property expressible in order-invariant two-variable logic is also expressible in first-order logic without the use of a linear order. We suspect that the answer is ``no''. To justify our claim, we present a class of finite tree-like structures (of unbounded degree) in which a relaxed variant of order-invariant two-variable FO expresses properties that are not definable in plain FO. By contrast, we show that if one restricts their attention to classes of structures of bounded degree, then the expressive power of order-invariant two-variable FO is contained within FO.
Bartosz Bednarczyk, Julien Grange
2023-04-17T16:24:37Z
http://arxiv.org/abs/2304.08410v5
# About the expressive power and complexity of order-invariance with two variables ###### Abstract. Order-invariant first-order logic is an extension of first-order logic (FO) where formulae can make use of a linear order on the structures, under the proviso that they are order-invariant, _i.e._ that their truth value is the same for all linear orders. We continue the study of the two-variable fragment of order-invariant first-order logic initiated by Zeume and Harwath, and study its complexity and expressive power. We first establish coNExpTime-completeness for the problem of deciding if a given two-variable formula is order-invariant, which tightens and significantly simplifies the coN2ExpTime proof by Zeume and Harwath. Second, we address the question of whether every property expressible in order-invariant two-variable logic is also expressible in first-order logic without the use of a linear order. While we were not able to provide a satisfactory answer to the question, we suspect that the answer is "no". To justify our claim, we present a class of finite tree-like structures (of unbounded degree) in which a relaxed variant of order-invariant two-variable FO expresses properties that are not definable in plain FO. On the other hand, we show that if one restricts their attention to classes of structures of bounded degree, then the expressive power of order-invariant two-variable FO is contained within FO. Key words and phrases:Finite model theory, order-invariance, two-variable logic, complexity ## 1. Introduction The main goal of finite model theory is to understand formal languages describing finite structures: their complexity and their expressive power. Such languages are ubiquitous in computer science, starting from descriptive complexity, where they are used to provide machine-independent characterisations of complexity classes, and ending up on database theory and knowledge-representation, where formal languages serve as fundamental querying formalism. A classical idea in finite model theory is to employ invariantly-used relations, capturing the data-independence principle in databases: it makes sense to give queries the ability to exploit the presence of the order in which the data is stored in the memory, but at the same time we would like to make query results independent of this specific ordering. It is not immediately clear that the addition of an invariantly-used linear order to first-order logic (FO) allow us to gain anything on the standpoint of expressive power. And indeed, as long as we consider arbitrary (_i.e._ not necessarily finite) structures it does not, which is a direct consequence of FO having the Craig Interpolation Property. However, as it was first shown by Gurevich [10, Thm. 5.3], the claim holds true over finite structures: order-invariant FO is more expressive than plain FO. Unfortunately, order-invariant FO is poorly understood. As stated in [1], one of the reasons why progress in understanding order-invariance is rather slow is the lack of logical toolkit. The classical model-theoretic methods based on types were proposed only recently [1], and order-invariant FO is not even a logic in the classical sense, as its syntax is undecidable. Moreover, the availability of locality-based methods is limited: order-invariant FO is known to be Gaifman-local [12, Thm. 2] but the status of its Hanf-locality remains open. 
This suggests that a good way to understand order-invariant FO is to first look at its fragments, _e.g._ the fragments with a limited number of variables. ### Our contribution We continue the line of research initiated in [11], which aims to study the complexity and the expressive power of order-invariant FO\({}^{2}\), the two-variable fragment of order-invariant FO. From a complexity point of view, it is known that order-invariant FO\({}^{2}\) has a coNExpTime-complete validity problem (which is inherited from FO\({}^{2}\) with a single linear order, see [13, Thm. 1.2]), and that whether a given FO\({}^{2}\)-formula is order-invariant is decidable in coNExpTime[11, Thm. 12]. From an expressive power point of view, order-invariant FO\({}^{2}\) is more expressive than plain FO\({}^{2}\) as it can count globally, _cf._[11, Example 2]. It remains open [11, Sec. 7], however, whether it is true that every order-invariant FO\({}^{2}\)-formula is equivalent to an FO-formula without the linear order predicate. This paper contributes to the field in the three following ways: * We provide a tight bound for deciding order-invariance for FO\({}^{2}\); namely, we show that this problem is coNExpTime-complete. Our proof method relies on establishing an exponential-size counter-model property, and is significantly easier than the proof of [11, Thm. 12]. * We present a class \(\mathit{C}_{\mathit{tree}}\) of tree-like structures, inspired by [13], and show that there exists an FO\({}^{2}\)-formula that is _order-invariant over \(\mathit{C}_{\mathit{tree}}\)_ (but not over all finite structures!) which is not equivalent to any FO-formula without the linear order predicate. This leads us to believe that the answer to the question of [11, Sec. 7] of whether the expressive power of order-invariant FO\({}^{2}\) lies inside FO is "_no_". The problem remains open, though. * In stark contrast to the previous result, we show that order-invariant FO\({}^{2}\) cannot express properties beyond the scope of FO over classes of structures of bounded degree. We show that this upper bound remains when adding counting to FO\({}^{2}\). This work is an extended version of [1] and [15]. ## 2. Preliminaries We employ standard terminology from finite model theory, assuming that the reader is familiar with the syntax and the semantics of first-order logic (FO) [10, Sec. 2.1], basics on computability and complexity [10, Secs. 2.2-2.3], and order-invariant queries [10, Secs. 5.1-5.2]. By FO\((\Sigma)\) we denote the first-order logic with equality (written FO when \(\Sigma\) is clear from the context) on a finite signature \(\Sigma\) composed of relation and constant symbols. By FO\({}^{2}\) we denote the fragment of FO in which the only two variables are \(x\) and \(y\). **Structures.** Structures are denoted by calligraphic upper-case letters \(\mathcal{A},\mathcal{B}\) and their domains are denoted by the corresponding Roman letters \(A,B\). We assume that structures have non-empty, _finite_ domains. We write \(\varphi[R/S]\) to denote the formula obtained from \(\varphi\) by replacing each occurrence of the symbol \(R\) with \(S\). We write \(\varphi(\bar{x})\) to indicate that all the free variables of \(\varphi\) are in \(\bar{x}\). A sentence is a formula without free variables. By \(\mathcal{A}\mathord{\restriction}_{\Delta}\) we denote the substructure of the structure \(\mathcal{A}\) restricted to the set \(\Delta\subseteq A\). 
**Order-invariance.** A sentence \(\varphi\in\operatorname{FO}^{2}(\Sigma\cup\{<\})\), where \(<\) is a binary relation symbol not belonging to \(\Sigma\), is said to be _order-invariant_ if for every finite \(\Sigma\)-structure \(\mathcal{A}\), and every pair of strict linear orders \(<_{0}\) and \(<_{1}\) on \(A\), \((\mathcal{A},<_{0})\models\varphi\) if and only if \((\mathcal{A},<_{1})\models\varphi\). It is then convenient to omit the interpretation for the symbol \(<\), and to write \(\mathcal{A}\models\varphi\) if \((\mathcal{A},<)\models\varphi\) for any (or, equivalently, every) linear order \(<\). Note that \(\varphi\) is _not_ order-invariant if there is a structure \(\mathcal{A}\) and two linear orders \(<_{0},<_{1}\) on \(A\) such that \((\mathcal{A},<_{0})\models\varphi\) and \((\mathcal{A},<_{1})\not\models\varphi\). The set of order-invariant sentences using two variables is denoted \(<\)-inv \(\operatorname{FO}^{2}\). While determining whether an \(\operatorname{FO}\)-sentence is order-invariant is undecidable [15, Ex. 9.3], the situation improves when we allow only two variables: checking order-invariance for \(\operatorname{FO}^{2}\)-formulae was shown to be in \(\operatorname{coN2ExpTime}\) in [11, Thm. 12].1 Footnote 1: The authors of [11] incorrectly stated the complexity in their Thm. 12, mistaking “invariance” with “non-invariance”. Decision problems.The _finite satisfiability_ (resp. _validity_) _problem_ for a logic \(\mathcal{L}\) asks whether an input sentence \(\varphi\) from \(\mathcal{L}\) is satisfied in some (resp. every) finite structure. Recall that the finite satisfiability and validity for \(\operatorname{FO}\) are undecidable [13, 14], while for \(\operatorname{FO}^{2}\) they are respectively \(\operatorname{NExpTime}\)-complete and \(\operatorname{coNExpTime}\)-complete, _cf_. [1, Thm. 5.3] and [15, Thm. 3]. Note that \(\varphi\) is finitely valid iff \(\neg\varphi\) is finitely unsatisfiable. Definability and similarity.Let \(\mathcal{L},\mathcal{L}^{\prime}\) be two logics defined over the same signature, and \(\mathcal{C}\) be a class of finite structures on this signature. We say that a property \(\mathscr{D}\subseteq\mathcal{C}\) is _definable_ (or expressible) in \(\mathcal{L}\) on \(\mathcal{C}\) if there exists an \(\mathcal{L}\)-sentence \(\varphi\) such that \(\mathscr{D}=\{\mathcal{A}\in\mathcal{C}:\ \mathcal{A}\models\varphi\}\). When \(\mathcal{C}\) is the class of all finite structures, we omit it. We say that \(\mathcal{L}\subseteq\mathcal{L}^{\prime}\) on \(\mathcal{C}\) if every property on \(\mathcal{C}\) definable in \(\mathcal{L}\) is also definable in \(\mathcal{L}^{\prime}\). Since a sentence which does not mention the linear order predicate is trivially order-invariant, we get the inclusion \(\operatorname{FO}^{2}\subseteq<\)-inv \(\operatorname{FO}^{2}\). This inclusion is strict [11, Example 2]. The _quantifier rank_ of a formula is the maximal number of quantifiers in a branch of its syntactic tree. Given two \(\Sigma\)-structures \(\mathcal{A}_{0}\) and \(\mathcal{A}_{1}\), and \(\mathcal{L}\) being one of \(\operatorname{FO}\), \(\operatorname{FO}^{2}\) and \(<\)-inv \(\operatorname{FO}^{2}\), we write \(\mathcal{A}_{0}\equiv_{k}^{\mathcal{L}}\mathcal{A}_{1}\) if \(\mathcal{A}_{0}\) and \(\mathcal{A}_{1}\) satisfy the same \(\mathcal{L}\)-sentences of quantifier rank at most \(k\). In this case, we say that \(\mathcal{A}_{0}\) and \(\mathcal{A}_{1}\) are \(\mathcal{L}\)_-similar at depth \(k\)_. 
We write \(\mathcal{A}_{0}\simeq\mathcal{A}_{1}\) if \(\mathcal{A}_{0}\) and \(\mathcal{A}_{1}\) are isomorphic. **Atomic types.** An (atomic) \(1\)_-type_ over \(\Sigma\) is a maximal satisfiable set of atoms or negated atoms from \(\Sigma\) with a free variable \(x\). Similarly, an (atomic) \(2\)_-type_ over \(\Sigma\) is a maximal satisfiable set of atoms or negated atoms with free variables \(x,y\). Note that the total number of atomic \(1\)- and \(2\)-types over \(\Sigma\) is bounded exponentially in \(|\Sigma|\). We often identify a type with the conjunction of all its elements. The sets of \(1\)-types and \(2\)-types over the signature consisting of the symbols appearing in \(\varphi\) are respectively denoted \(\boldsymbol{\alpha}_{\varphi}\) and \(\boldsymbol{\beta}_{\varphi}\). Given a structure \(\mathcal{A}\) and an element \(a\in A\) we say that \(a\) _realises_ a \(1\)-type \(\alpha\) if \(\alpha\) is the unique \(1\)-type such that \(\mathcal{A}\models\alpha[a]\). We then write \(\operatorname{tp}^{0}_{\mathcal{A}}(a)\) to refer to this type. Similarly, for (non-necessarily distinct) \(a,b\in A\), we denote by \(\operatorname{tp}^{0}_{\mathcal{A}}(a,b)\) the unique \(2\)-type _realised_ by the pair \((a,b)\), _i.e._ the \(2\)-type \(\beta\) such that \(\mathcal{A}\models\beta[a,b]\). Finally, given a linearly ordered \(\Sigma\)-structure \((\mathcal{A},<)\), we split \(\operatorname{tp}^{0}_{(\mathcal{A},<)}(a,b)\) into \(\operatorname{tp}^{0}_{<}(a,b)\) and \(\operatorname{tp}^{0}_{\mathcal{A}}(a,b)\), where \(\operatorname{tp}^{0}_{<}(a,b)\) is one of \(\{x<y\},\{x>y\}\) and \(\{x=y\}\). ### Gaifman graphs and degree The _Gaifman graph_ \(\mathcal{G}_{\mathcal{A}}\) of a structure \(\mathcal{A}\) is the simple graph with vertices in \(A\) and undirected edges between any pair of distinct elements that appear in the same tuple of some relation of \(\mathcal{A}\). By \(\operatorname{dist}_{\mathcal{A}}(a,b)\) we denote the distance between \(a\) and \(b\) in \(\mathcal{G}_{\mathcal{A}}\), defined in the usual way. For \(B\subseteq A\), we note \(N_{\mathcal{A}}(B)\) the set of elements at distance exactly \(1\) from \(B\) in \(\mathcal{G}_{\mathcal{A}}\). In particular, \(B\cap N_{\mathcal{A}}(B)=\emptyset\). The _degree_ of \(\mathcal{A}\) is the maximal degree of its Gaifman graph. The class \(C\) of \(\Sigma\)-structures is said to have _bounded degree_ if there exists some \(d\in\mathbb{N}\) such that the degree of every \(\mathcal{A}\in C\) is at most \(d\). ## 3. Complexity of the invariance problem We study the complexity of the problem of deciding if an input formula \(\varphi\in\operatorname{FO}^{2}\) is order-invariant. Starting with the lower bound, let us consider the following program, inspired by [12, 13].

```
Input: An \(\operatorname{FO}^{2}\)-formula \(\varphi\).
1. If \(\neg\varphi\) has a model with a single-element domain, return False.   // a corner case
2. Let \(\psi_{<}:=\exists x\left(P(x)\land\forall y(y<x)\right)\) for a fresh unary predicate \(P\).   // not \(<\)-inv on all \(P\)-expansions of \(\mathcal{A}\) as soon as \(|A|\geq 2\)
3. Return True if \((\neg\varphi)\to\psi_{<}\) is \(<\)-invariant and False otherwise.   // the reduction
```
**Procedure 1:** From validity to \(<\)-invariance

The above procedure provides a Turing reduction from finite \(\operatorname{FO}^{2}\)-validity to testing order-invariance of \(\operatorname{FO}^{2}\)-sentences: Procedure 1 returns True iff its input is finitely valid.
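To see concretely why a formula like \(\psi_{<}\) fails to be order-invariant once the domain has at least two elements and \(P\) is interpreted non-trivially, one can simply enumerate all \(P\)-expansions and all linear orders of a tiny domain. The following self-contained Python sketch (an illustration only, not part of the reduction) evaluates the property "the \(<\)-maximal element satisfies \(P\)" — the reading of \(\psi_{<}\) in which \(y\) ranges over the remaining elements — and checks whether its truth value depends on the chosen linear order.

```python
from itertools import permutations, product

def psi_less(P, order):
    """True iff the <-maximal element of `order` (a tuple listing the domain
    from smallest to largest) satisfies the unary predicate P."""
    return order[-1] in P

def order_invariant_on(domain):
    """Brute-force check of order-invariance of psi_less over all P-expansions
    of a structure with the given domain."""
    for bits in product([False, True], repeat=len(domain)):
        P = {a for a, b in zip(domain, bits) if b}
        values = {psi_less(P, order) for order in permutations(domain)}
        if len(values) > 1:          # some order satisfies psi_<, some does not
            return False, P
    return True, None

print(order_invariant_on(["a"]))        # (True, None): invariant on one-element domains
print(order_invariant_on(["a", "b"]))   # (False, {'a'}): not invariant once |A| >= 2 and P is non-trivial
```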
Its correctness follows from a straightforward case analysis. Hence, from the complexity of the finite validity problem for \(\operatorname{FO}^{2}\)[14, Thm. 3], we conclude: **Corollary 3.1**.: _Testing whether an \(\operatorname{FO}^{2}\) formula is order-invariant is \(\operatorname{\textsc{coNExpTime}}\)-hard._ Our upper bound uses the following fact, immediate from the definition of order-invariance. **Fact 3.2**.: An \(\operatorname{FO}^{2}\) sentence \(\varphi\) is _not_ order-invariant iff the sentence \(\varphi[<\!/\!<_{0}]\land\neg\varphi[<\!/\!<_{1}]\) is finitely satisfiable over structures interpreting \(<_{0}\) and \(<_{1}\) as linear orders over the domain. Let \(\operatorname{FO}^{2}_{-}[<_{0},<_{1}]\) be composed of sentences of the shape \(\varphi[<\!/\!<_{0}]\land\neg\varphi[<\!/\!<_{1}]\) for all sentences \(\varphi\in\operatorname{FO}^{2}(\Sigma\cup\{<\})\) over any signature \(\Sigma\). We always assume that (decorated) symbols \(<\) are interpreted as linear orders over the domain. To simplify the reasoning about such formulae, we rewrite them in the following way. Take any \(\varphi\in\operatorname{FO}^{2}(\Sigma\cup\{<\})\): * By applying transformations from [1, Sec. 3, p. 57-58], we may assume that all predicates appearing in \(\varphi\) are of arity at most two. * Next, we get rid of constants symbols by introducing fresh unary relations and enforcing that they are interpreted by exactly one element. In a similar spirit, we can remove all nullary symbols from the vocabulary: for each of them, we introduce a fresh unary predicate symbol and enforce that either no element satisfy it, or all elements do. * Finally, we reduce \(\varphi[</<_{0}]\wedge\neg\varphi[</<_{1}]\) to a Scott-like _normal form_, _cf._[1, Sec. 4], [10, Sec. 3.1]. It suffices to apply [1, Lemma 1] (providing such a form for \(\operatorname{FO}^{2}\) with a linear order predicate) to both \(\varphi[</<_{0}]\) and \(\neg\varphi[</<_{1}]\) and take their conjunction. By summarizing all the above steps, we conclude: **Corollary 3.3**.: _For any \(\operatorname{FO}^{2}_{-}[<_{0},<_{1}]\)-sentence there is an equi-satisfiable, polynomial-time computable \(\operatorname{FO}^{2}_{-}[<_{0},<_{1}]\)-sentence (over a purely relational signature composed of predicates of arity only \(1\) or \(2\)) having the form:_ \[\bigwedge_{i=0}^{1}\left(\forall x\forall y\;\chi_{i}(x,y)\wedge\bigwedge_{j=1 }^{m_{i}}\forall x\exists y\;\gamma_{i}^{j}(x,y)\right),\] _where the decorated \(\chi\) and \(\gamma\) are quantifier-free and the \(<_{i}\) do not appear in \(\chi_{1-i}\) and \(\gamma_{1-i}^{j}\)._ Given a model \(\mathcal{A}\models\varphi\) of a formula \(\varphi\) in normal form and elements \(a,b\in A\) such that \(\mathcal{A}\models\gamma_{i}^{j}(a,b)\), we call \(b\) a \(\gamma_{i}^{j}\)_-witness_ for \(a\) (or simply a witness). The core of our upper bound proof is the following small model theorem, employing the circular witnessing scheme by Gradel, Kolaitis, and Vardi [1, Thm. 4.3]. **Lemma 3.4**.: _Any finitely satisfiable sentence \(\varphi\in\operatorname{FO}^{2}_{-}[<_{0},<_{1}]\) in normal form has a model with \(\mathcal{O}(|\varphi|^{3}\cdot 2^{|\varphi|})\) elements._ Proof.: Let \(M:=\max\left(m_{0},m_{1}\right)\) and let \(\mathcal{A}\) be a model of \(\varphi\). 
We are going to construct from \(\mathcal{A}\) a model \(\mathcal{B}\models\varphi\) whose domain \(B:=W_{0}\cup W_{1}\cup W_{2}\cup W_{3}\) (where the sets \(W_{i}\) are constructed below) has cardinality at most \(224\;|\varphi|^{3}\cdot 2^{|\varphi|}\). Call a \(1\)-type _rare_ if it is realised by at most \(32M\) elements in \(\mathcal{A}\). Let the set \(S\) be composed of all elements of \(\mathcal{A}\) of rare \(1\)-types, and of the \(8M\) minimal and \(8M\) maximal (w.r.t. each of \(<_{0}^{\mathcal{A}}\), \(<_{1}^{\mathcal{A}}\)) realisations of each non-rare \(1\)-type in \(\mathcal{A}\). Define \(W_{0}\) as the set composed of all elements realising rare \(1\)-types, as well as the \(M\) minimal and \(M\) maximal (w.r.t. each of \(<_{0}^{\mathcal{A}}\) and \(<_{1}^{\mathcal{A}}\)) realisations of each non-rare \(1\)-type in \(\mathcal{A}\). Put the remaining elements of \(S\) into \(W_{1}\). We clearly have \(|W_{0}\cup W_{1}|\leq 32M\cdot|\boldsymbol{\alpha}_{\varphi}|\). The idea behind \(W_{0}\) is that this set contains "dangerous" elements, _i.e._ the ones for which \(\mathcal{A}\mathord{\restriction}_{W_{0}}\) may be uniquely determined by \(\varphi\). Elements from \(W_{1}\) will help to restore the satisfaction of \(\forall\exists\) conjuncts. According to the terminology from [1], such elements would be called kings and the royal court. We next close \(W_{0}\cup W_{1}\) twice under taking witnesses. More precisely, let \(W_{2}\) be any \(\subseteq\)-minimal subset of \(A\) so that all elements from \(W_{0}\cup W_{1}\) have all the required \(\gamma_{i}^{j}\)-witnesses in \(W_{0}\cup W_{1}\cup W_{2}\). Similarly, we define \(W_{3}\) to be any \(\subseteq\)-minimal subset of \(A\) so that all elements from \(W_{0}\cup W_{1}\cup W_{2}\) have all the required \(\gamma_{i}^{j}\)-witnesses in \(W_{0}\cup W_{1}\cup W_{2}\cup W_{3}\). Observe that: \[|W_{2}|\leq 2M|W_{0}\cup W_{1}|\leq 2M\cdot 32M|\boldsymbol{\alpha}_{\varphi}|=64M^{2}|\boldsymbol{\alpha}_{\varphi}|\text{ and }|W_{3}|\leq 2M|W_{2}|\leq 128M^{3}|\boldsymbol{\alpha}_{\varphi}|.\] Consider the structure \(\mathcal{B}:=\mathcal{A}\mathord{\restriction}_{W_{0}\cup W_{1}\cup W_{2}\cup W_{3}}\). We see that: \[|B|\leq|W_{0}\cup W_{1}|+|W_{2}|+|W_{3}|\leq(32M+64M^{2}+128M^{3})|\boldsymbol{\alpha}_{\varphi}|\leq 224M^{3}|\boldsymbol{\alpha}_{\varphi}|\leq 224\;|\varphi|^{3}\cdot 2^{|\varphi|}.\] Note that universal formulae are preserved under substructures, thus \(<_{0}^{\mathcal{B}},<_{1}^{\mathcal{B}}\) are linear orders over \(B\) and \(\mathcal{B}\) satisfies the \(\forall\forall\)-conjuncts of \(\varphi\). Hence, the only reason for \(\mathcal{B}\) to not be a model of \(\varphi\) is the lack of required \(\gamma_{i}^{j}\)-witnesses for elements from the set \(W_{3}\). We fix this issue by reinterpreting binary relations between the sets \(W_{3}\) and \(W_{1}\). Before we start, we are going to collect, for each non-rare 1-type \(\alpha\), pairwise-disjoint sets of \(M\) minimal and \(M\) maximal (w.r.t. each of \(<^{\mathcal{A}}_{0}\), \(<^{\mathcal{A}}_{1}\)) realisations of \(\alpha\) from \(W_{1}\). Formally: Fix a non-rare \(\alpha\). Let \(V^{0}_{\alpha}\) be composed of the \(M\) \(<_{0}\)-minimal elements of \(1\)-type \(\alpha\) in \(\mathcal{A}|_{W_{1}}\). Next, let \(V^{1}_{\alpha}\) be composed of the \(M\) \(<_{0}\)-maximal elements of \(1\)-type \(\alpha\) in \(\mathcal{A}|_{W_{1}\setminus V^{0}_{\alpha}}\).
Similarly, let \(V^{2}_{\alpha}\) be composed of the \(M\) \(<_{1}\)-minimal realisations of \(\alpha\) in \(\mathcal{A}|_{W_{1}\setminus(V^{0}_{\alpha}\cup V^{1}_{\alpha})}\). Finally, let \(V^{3}_{\alpha}\) be composed of the \(M\) \(<_{1}\)-maximal realisations of \(\alpha\) in \(\mathcal{A}|_{W_{1}\setminus(V^{0}_{\alpha}\cup V^{1}_{\alpha}\cup V^{2}_{\alpha})}\). Put \(V_{\alpha}:=\bigcup_{k=0}^{3}V^{k}_{\alpha}\). Notice that all the components of \(V_{\alpha}\) are pairwise disjoint (by construction), and they are well-defined since we included sufficiently many elements in \(W_{1}\). Going back to the proof, we fix any element \(a\) from \(W_{3}\) that violates some of the \(\forall\exists\)-conjuncts of \(\varphi\), and fix any \(\forall\exists\)-conjunct \(\psi:=\forall x\exists y\;\gamma^{j}_{i}(x,y)\) whose satisfaction is violated by \(a\). Since \(\mathcal{A}\models\varphi\) we know that there is an element \(b\in A\) such that \(b\) is a \(\gamma^{j}_{i}\)-witness for \(a\) in \(\mathcal{A}\); let \(\alpha\) be the 1-type of \(b\) in \(\mathcal{A}\). Observe that \(\alpha\) is not rare (otherwise \(b\in W_{0}\), and hence \(b\in B\)), and \(a\neq b\). Moreover, either \(b<^{\mathcal{A}}_{i}a\) or \(a<^{\mathcal{A}}_{i}b\) holds. Thus, we take \(V^{2i+k}_{\alpha}\) (where \(k\) equals 0 if \(b<^{\mathcal{A}}_{i}a\) and 1 otherwise) to be the corresponding set of \(M\) minimal/maximal \(<_{i}\)-realisations of \(\alpha\) lying on the same side of \(a\) as \(b\) does. Now it suffices to take the \(j\)-th element \(b_{j}\) from \(V^{2i+k}_{\alpha}\) and change the binary relations between \(a\) and \(b_{j}\) in \(\mathcal{B}\) so that \(\operatorname{tp}^{0}_{\mathcal{A}}(a,b)=\operatorname{tp}^{0}_{\mathcal{B}}(a,b_{j})\) holds (which can be done as \(b\) and \(b_{j}\) have the same 1-type). We repeat the process for all remaining \(\gamma^{j}_{i}\) formulae violated by \(a\). We stress that it is not a coincidence that we use the \(j\)-th element \(b_{j}\) from the corresponding set \(V^{2i+k}_{\alpha}\) to be a fresh \(\gamma^{j}_{i}\)-witness for \(a\): this guarantees that we never redefine the connection between \(a\) and the same element twice. Observe that all elements from \(B\) that had \(\gamma^{j}_{i}\)-witnesses before our redefinition of certain 2-types still do have them (as we did not touch the 2-types between them and their witnesses), \(\mathcal{B}\) still satisfies the \(\forall\forall\)-component of \(\varphi\) (since each modified 2-type does not violate \(\varphi\) in \(\mathcal{A}\), it does not violate \(\varphi\) in \(\mathcal{B}\)) and \(a\) has all required witnesses. By repeating the strategy for all the other elements from \(W_{3}\) violating \(\varphi\), we obtain the desired "small" model of \(\varphi\).

Lemma 3.4 yields an NExpTime algorithm for deciding satisfiability of \(\operatorname{FO}^{2}_{-}[<_{0},<_{1}]\) formulae: convert an input into normal form, guess an exponential-size model and verify modelhood with a standard model-checking algorithm (in PTime [1, Prop. 4.1]). After applying Fact 3.2 and Corollary 3.1 we conclude:

**Theorem 3.5**.: _Checking if an \(\operatorname{FO}^{2}\)-formula is order-invariant is \(\operatorname{coNExpTime}\)-complete._

## 4. Can order-invariant \(\operatorname{FO}^{2}\) express properties beyond the scope of FO?

While we do not solve the question stated in the heading of this section, we provide a partial solution. Let \(\mathcal{C}\) be some class of finite structures.
A sentence \(\varphi\in\operatorname{FO}^{2}(\Sigma\cup\{<\})\), where \(<\) is a binary relation symbol not belonging to \(\Sigma\), is said to be _order-invariant over \(\mathcal{C}\)_ if for every finite \(\Sigma\)-structure \(\mathcal{A}\) in \(\mathcal{C}\), and every pair of strict linear orders \(<_{0}\) and \(<_{1}\) on \(A\), \((\mathcal{A},<_{0})\models\varphi\) iff \((\mathcal{A},<_{1})\models\varphi\). Note that this is a weakening of the classical condition of order-invariance, and that the usual definition is recovered when \(\mathcal{C}\) is the class of all finite structures. In what follows, we present a class \(\mathit{C}_{\mathit{tree}}\) over the vocabulary \(\Sigma_{\mathit{C}_{\mathit{tree}}}:=\{T,D,S\}\) of tree-like finite structures, and a sentence \(\varphi\in\operatorname{FO}^{2}[\Sigma_{\mathit{C}_{\mathit{tree}}}\cup\{<\}]\) "expressing even depth" that is order-invariant over \(\mathit{C}_{\mathit{tree}}\) but not equivalent to any first-order sentence over \(\Sigma_{\mathit{C}_{\mathit{tree}}}\). A _dendroid_ is a finite \(\Sigma_{\mathit{C}_{\mathit{tree}}}\)-structure \(\mathcal{A}\) that, intuitively, is a complete directed binary tree decorated with a binary parent-child relation \(T^{\mathcal{A}}\), a descendant relation \(D^{\mathcal{A}}\), and a sibling relation \(S^{\mathcal{A}}\). Formally, a \(\Sigma_{\mathit{C}_{\mathit{tree}}}\)-structure \(\mathcal{A}\) is called a _dendroid_ if there is a positive integer \(n\) such that

* \(A=\{0,1\}^{\leq n}\) (_i.e._ the set of all binary words of length at most \(n\)),
* \(T^{\mathcal{A}}=\{(w,w0),(w,w1)\ \mid\ w\in A,|w|<n\}\),
* \(D^{\mathcal{A}}=(T^{\mathcal{A}})^{+}\) (_i.e._ \(D^{\mathcal{A}}\) is the transitive closure of \(T^{\mathcal{A}}\)), and
* \(S^{\mathcal{A}}=\{(w0,w1),(w1,w0)\ \mid\ w\in A,|w|<n\}\).

We call the number \(n\) the _depth_ of \(\mathcal{A}\), and call the length of a node \(v\in A\) the _level_ of \(v\). We also use the terms "root" and "leaf" in the usual way. The class \(\mathit{C}_{\mathit{tree}}\) is the class of all dendroids.

**Lemma 4.1**.: _If \(\mathcal{A},\mathcal{B}\) are dendroids of depth \(\geq 2^{q+1}\) then \(\mathcal{A}\equiv_{q}\mathcal{B}\)._

Proof.: This is a tedious generalisation of the winning strategy for the duplicator in the \(q\)-round Ehrenfeucht-Fraïssé games on linear orders [13, Thm 3.6 Proof #1].

As an immediate corollary we get:

**Corollary 4.2**.: _There is no \(\operatorname{FO}(\Sigma_{\mathit{C}_{\mathit{tree}}})\)-formula \(\varphi_{\text{even}}\) such that for every \(\mathcal{A}\in\mathit{C}_{\mathit{tree}}\) we have \(\mathcal{A}\models\varphi_{\text{even}}\) iff the depth of \(\mathcal{A}\) is even._

In contrast to the above corollary, we will show that the even depth query can be defined as an \(\operatorname{FO}^{2}(\Sigma_{\mathit{C}_{\mathit{tree}}}\cup\{<\})\)-formula which is order-invariant over \(\mathit{C}_{\mathit{tree}}\) (but unfortunately not over the class of all finite structures). Henceforth we consider _ordered dendroids_, _i.e._ dendroids that are additionally linearly-ordered by \(<\). Given such an ordered dendroid \(\mathcal{T}\), and an element \(c\) with children \(a,b\), we say that \(a\) is the _left child_ of \(c\) iff \(a<^{\mathcal{T}}b\) holds. Otherwise we call \(a\) the _right child_ of \(c\).
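To make the dendroid encoding concrete, here is a small sketch of ours (purely illustrative, not part of the formal development): it builds the dendroid of a given depth as a set of binary words with the relations \(T\), \(D\), \(S\), and determines left and right children with respect to an arbitrary linear order, represented as a rank function.

```python
from itertools import product

def dendroid(n):
    """Return the dendroid of depth n: the domain {0,1}^{<=n} together with the
    parent-child relation T, its transitive closure D, and the sibling relation S,
    all given as sets of pairs of binary words."""
    A = [""] + ["".join(w) for k in range(1, n + 1) for w in product("01", repeat=k)]
    T = {(w, w + b) for w in A if len(w) < n for b in "01"}
    D = {(u, w) for u in A for w in A if len(u) < len(w) and w.startswith(u)}
    S = {(w + "0", w + "1") for w in A if len(w) < n} | {(w + "1", w + "0") for w in A if len(w) < n}
    return A, T, D, S

def left_child(rank, a, b):
    """Among the two children a, b of a common parent, return the left child,
    i.e. the <-smaller one, where the linear order < is encoded by `rank`."""
    return a if rank[a] < rank[b] else b

# Example: an ordered dendroid of depth 2 under the lexicographic order.
A, T, D, S = dendroid(2)
rank = {w: i for i, w in enumerate(sorted(A))}
assert left_child(rank, "10", "11") == "10"
```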
A _zig-zag_ in the ordered \(\mathcal{T}\) is a sequence of elements \(a_{0},a_{1},\ldots,a_{n}\), where \(a_{n}\) is a leaf of \(\mathcal{T}\), \(a_{0}\) is the root of \(\mathcal{T}\), \(a_{2i+1}\) is the right child of \(a_{2i}\) for any \(i\geq 0\) and \(a_{2i}\) is the left child of \(a_{2i-1}\) for any \(i\geq 1\). A zig-zag is _even_ if its last element is the left child of its parent, and _odd_ otherwise. The underlying trees in dendroids are complete and binary, thus:

**Observation 4.3**.: _An ordered dendroid \(\mathcal{T}\) has an even zig-zag iff \(\mathcal{T}\) is of even depth. Moreover, if \(\mathcal{T}\) is a dendroid of even (resp. odd) depth then for any linear order \(<\) over its domain the ordered dendroid \((\mathcal{T},<)\) has an even (resp. odd) zig-zag._

Proof.: Immediate by induction after observing that \(\mathcal{A}\mathord{\restriction}_{\{0,1\}^{\leq n}}\), for any positive integer \(n\) smaller than the depth of \(\mathcal{A}\), is also a dendroid.

The above observation suggests that a good way to express the evenness of the depth of a dendroid is to state the existence of an even zig-zag; this is precisely the property that we are going to describe with an \(\operatorname{FO}^{2}\)-formula. Let us first introduce a few useful macros: \[\mathtt{ROOT}(x):=\neg\exists y\;T(y,x)\qquad\mathtt{LEAF}(x):=\neg\exists y\;T(x,y)\qquad\mathtt{2nd}(x):=\exists y\;T(y,x)\wedge\mathtt{ROOT}(y)\] \[\mathtt{LS}(x):=\exists y\;S(x,y)\wedge x<y\qquad\mathtt{RS}(x):=\exists y\;S(x,y)\wedge y<x\] The first two macros have an obvious meaning. The third macro identifies a child of the root, while the last two macros identify, respectively, the left and the right siblings (according to the linear order \(<\)). Our desired formula \(\varphi_{\text{even-zig-zag}}\) is then: \[\exists x\;\bigl{(}[\mathtt{LEAF}(x)\wedge\mathtt{LS}(x)]\land[\forall y\;(\mathtt{2nd}(y)\wedge D(y,x))\to\mathtt{RS}(y)]\] \[\wedge[\forall y\;(\neg\mathtt{ROOT}(y)\land\neg\mathtt{2nd}(y)\wedge D(y,x)\wedge\mathtt{RS}(y))\to\exists x\;T(x,y)\wedge\mathtt{LS}(x)]\] \[\wedge[\forall y\;(\neg\mathtt{ROOT}(y)\land\neg\mathtt{2nd}(y)\wedge D(y,x)\wedge\mathtt{LS}(y))\to\exists x\;T(x,y)\wedge\mathtt{RS}(x)]\bigr{)}\] Note that the above formula, by fixing a leaf, fixes the whole path from such a leaf to the root (since root-to-leaf paths in trees are unique). To say that such a path is an even zig-zag, we need a base of induction (the first line) stating that the selected leaf is a left child and the root's child lying on this path is its right child, as well as an inductive step stating that every left (resp. right) child on the path has a parent which is itself a right (resp. left) child, with the obvious exception of the root and its child. From there, it is easily shown that:

**Proposition 4.4**.: _An ordered dendroid \(\mathcal{T}\) satisfies \(\varphi_{\text{even-zig-zag}}\) iff it has even depth._

Proof.: To prove the right-to-left implication, we use Observation 4.3 to infer the existence of an even zig-zag \(a_{0},a_{1},\ldots,a_{2n}\) in \(\mathcal{T}\). Taking \(a_{2n}\) as a witness for the existential quantifier in front of \(\varphi_{\text{even-zig-zag}}\) and going back to the definition of an even zig-zag, we get \(\mathcal{T}\models\varphi_{\text{even-zig-zag}}\). For the other direction, consider a leaf \(a\) satisfying the properties enforced in \(\varphi_{\text{even-zig-zag}}\). There is a unique path \(\rho=a_{0},a_{1},\ldots,a_{n}=a\) from the root of \(\mathcal{T}\) to \(a\).
The first line of \(\varphi_{\text{even-zig-zag}}\) guarantees that \(a_{n}\) is a left child and \(a_{1}\) is a right child. We then show by induction, relying on the last two lines of \(\varphi_{\text{even-zig-zag}}\), that for any \(i\geq 0\), \(a_{2i+1}\) is the right child of \(a_{2i}\), and for \(i\geq 1\), \(a_{2i}\) is the left child of \(a_{2i-1}\). Thus \(\rho\) is an even zig-zag. By invoking Observation 4.3 again, we get that \(\mathcal{T}\) has even depth.

As a direct consequence of the previous statement, observe that our formula \(\varphi_{\text{even-zig-zag}}\) is order-invariant over \(C_{\text{tree}}\): whether an ordered dendroid has even depth only depends on the underlying dendroid, and not on the particulars of its linear order. Recalling Corollary 4.2, we conclude the following:

**Theorem 4.5**.: _There exists a class of finite structures \(C_{\text{tree}}\) and an \(FO^{2}(\Sigma_{C_{\text{tree}}}\cup\{<\})\)-sentence which is order-invariant over \(C_{\text{tree}}\), but is not equivalent to any \(FO(\Sigma_{C_{\text{tree}}})\) sentence._

## 5. Expressive power when the degree is bounded

We have seen in the previous section that if we relax the order-invariance constraint (namely, by requiring invariance only on a restricted class of structures), then one is able to define, with two variables, properties that lie beyond the expressive power of FO. We conjecture that this is still the case when requiring invariance over the class of all finite structures. In this section, we go the other way, and show that when one considers only classes of bounded degree, then \(<\)-inv \(FO^{2}\) can only express FO-definable properties. Note that although the class \(C_{\text{tree}}\) from Section 4 contains tree-like structures, the descendant relation makes this a dense class of structures (as it contains cliques of arbitrarily large size), and in particular \(C_{\text{tree}}\) does not have bounded degree.

### Overview of the result

We give an upper bound to the expressive power of order-invariant \(FO^{2}\) when the degree is bounded:

**Theorem 5.1**.: _Let \(C\) be a class of bounded degree. Then \(<\)-inv \(FO^{2}\subseteq FO\) on \(C\)._

For the remainder of this section, we fix a signature \(\Sigma\), an integer \(d\) and a class \(C\) of \(\Sigma\)-structures of degree at most \(d\). Let us now show the skeleton of our proof. The technical part of the proof will be the focus of Sections 5.2 and 5.3. Our general strategy is to show the existence of a function \(f:\mathbb{N}\to\mathbb{N}\) such that every formula \(\varphi\in<\)-inv \(FO^{2}\) of quantifier rank \(k\) is equivalent on \(C\) (_i.e._ satisfied by the same structures of \(C\)) to an FO-formula \(\psi\) of quantifier rank at most \(f(k)\). To prove this, it is enough to show that for any two structures \(\mathcal{A}_{0},\mathcal{A}_{1}\in\mathcal{C}\) such that \(\mathcal{A}_{0}\equiv_{f(k)}^{\mathrm{FO}}\mathcal{A}_{1}\), we have \(\mathcal{A}_{0}\equiv_{k}^{<\mathrm{inv}\;\mathrm{FO}^{2}}\mathcal{A}_{1}\). Indeed, the class of structures satisfying a formula \(\varphi\) of \(<\)-inv \(\mathrm{FO}^{2}\) of quantifier rank \(k\) is a union of equivalence classes for the equivalence relation \(\equiv_{k}^{<\mathrm{inv}\;\mathrm{FO}^{2}}\), whose intersection with \(\mathcal{C}\) is in turn the intersection of \(\mathcal{C}\) with a union of equivalence classes for \(\equiv_{f(k)}^{\mathrm{FO}}\). It is folklore (see, _e.g._, [10, Cor.
3.16]) that the equivalence relation \(\equiv_{f(k)}^{\mathrm{FO}}\) has finite index, and that each of its equivalence classes is definable by an \(\mathrm{FO}\)-sentence of quantifier rank \(f(k)\). Then \(\psi\) is just the finite disjunction of the \(\mathrm{FO}\)-sentences defining the equivalence classes appearing in this union. In order to show that \(\mathcal{A}_{0}\equiv_{k}^{<\mathrm{inv}\;\mathrm{FO}^{2}}\mathcal{A}_{1}\), we will construct in Section 5.2 two particular orders \(<_{0}\), \(<_{1}\) on these respective structures, and we will prove in Section 5.3 that \[(\mathcal{A}_{0},<_{0})\equiv_{k}^{\mathrm{FO}^{2}}(\mathcal{A}_{1},<_{1})\,. \tag{5.1}\] This concludes the proof, since any sentence \(\theta\in<\)-inv \(\mathrm{FO}^{2}\) with quantifier rank at most \(k\) holds in \(\mathcal{A}_{0}\) iff it holds in \((\mathcal{A}_{0},<_{0})\) (by definition of order-invariance), iff it holds in \((\mathcal{A}_{1},<_{1})\) (by (5.1)), iff it holds in \(\mathcal{A}_{1}\).

### Constructing linear orders on \(\mathcal{A}_{0}\) and \(\mathcal{A}_{1}\)

Recall that our goal is to find a function \(f\) such that, given two structures \(\mathcal{A}_{0}\), \(\mathcal{A}_{1}\) in \(\mathcal{C}\) such that \[\mathcal{A}_{0}\equiv_{f(k)}^{\mathrm{FO}}\mathcal{A}_{1}\,, \tag{5.2}\] we are able to construct two linear orders \(<_{0},<_{1}\) such that \((\mathcal{A}_{0},<_{0})\equiv_{k}^{\mathrm{FO}^{2}}(\mathcal{A}_{1},<_{1})\). In this section, we define \(f\) and we detail the construction of such orders. The proof of \(\mathrm{FO}^{2}\)-similarity between \((\mathcal{A}_{0},<_{0})\) and \((\mathcal{A}_{1},<_{1})\) will be the focus of Section 5.3. Let us now explain how we define \(f\). For that, we need to introduce the notion of neighbourhood and neighbourhood type. These notions are defined in Section 5.2.1. We then explain in Section 5.2.2 how to divide neighbourhood types into rare ones and frequent ones. Finally, the details of the construction are given in Section 5.2.3.

#### 5.2.1. Neighbourhoods

Let us now define the notion of neighbourhood of an element in a structure. Let \(c\) be a new constant symbol, and let \(\mathcal{A}\in\mathcal{C}\). For \(k\in\mathbb{N}\) and \(a\in A\), the (pointed) \(k\)_-neighbourhood_ \(\mathcal{N}_{\mathcal{A}}^{k}(a)\) of \(a\) in \(\mathcal{A}\) is the \((\Sigma\cup\{c\})\)-structure whose restriction to the vocabulary \(\Sigma\) is the substructure of \(\mathcal{A}\) induced by the set \(N_{\mathcal{A}}^{k}(a)=\{b\in A:\operatorname{dist}_{\mathcal{A}}(a,b)\leq k\}\,,\) and where \(c\) is interpreted as \(a\). In other words, it consists of all the elements at distance at most \(k\) from \(a\) in \(\mathcal{A}\), together with the relations they share in \(\mathcal{A}\); the center \(a\) being marked by the constant \(c\). We sometimes refer to \(N_{\mathcal{A}}^{k}(a)\) as the \(k\)-neighbourhood of \(a\) in \(\mathcal{A}\) as well, but the context will always make clear whether we refer to the whole substructure or only its domain. The \(k\)_-neighbourhood type_ \(\tau=\) neigh-\(\mathrm{tp}_{\mathcal{A}}^{k}(a)\) of \(a\) in \(\mathcal{A}\) is the isomorphism class of its \(k\)-neighbourhood. We say that \(\tau\) is a \(k\)-neighbourhood type over \(\Sigma\), and that \(a\) is an _occurrence_ of \(\tau\).
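As a concrete reading of this definition, the following sketch (ours; the names and the representation of structures are purely illustrative) computes the domain \(N_{\mathcal{A}}^{k}(a)\) by breadth-first search in the Gaifman graph of the structure; the pointed \(k\)-neighbourhood itself would additionally carry the relations induced on this set, with \(a\) marked by the constant \(c\).

```python
from collections import deque

def k_neighbourhood(adj, a, k):
    """Return the set of elements at distance at most k from a, where `adj`
    maps each element to the set of its neighbours in the Gaifman graph
    (two elements are adjacent iff they occur together in a tuple of some relation)."""
    dist = {a: 0}
    frontier = deque([a])
    while frontier:
        u = frontier.popleft()
        if dist[u] == k:          # do not expand beyond radius k
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                frontier.append(v)
    return set(dist)
```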
We denote by \(|\mathcal{A}|_{\tau}\) the number of occurrences of \(\tau\) in \(\mathcal{A}\), and we write \([\![\mathcal{A}_{0}]\!]_{k}=^{t}[\![\mathcal{A}_{1}]\!]_{k}\) to mean that for every \(k\)-neighbourhood type \(\tau\), \(|\mathcal{A}_{0}|_{\tau}\) and \(|\mathcal{A}_{1}|_{\tau}\) are either equal, or both larger than \(t\). Let \(\textsc{NeighType}_{k}^{d}\) denote the set of \(k\)-neighbourhood types over \(\Sigma\) occurring in structures of degree at most \(d\). Note that \(\textsc{NeighType}_{k}^{d}\) is a finite set. The interest of this notion resides in the fact that when the degree is bounded, FO is exactly able to count the number of occurrences of neighbourhood types up to some threshold [10]. We will only use one direction of this characterization, namely: **Proposition 5.2**.: _For all integers \(k\) and \(t\), there exists some \(\hat{f}(k,t)\in\mathbb{N}\) (which also depends on the bound \(d\) on the degree of structures in \(C\)) such that for all structures \(\mathcal{A}_{0},\mathcal{A}_{1}\in C\),_ \[\mathcal{A}_{0}\equiv_{\hat{f}(k,t)}^{FO}\mathcal{A}_{1}\quad\to\quad\llbracket \mathcal{A}_{0}\rrbracket_{k}=^{t}\llbracket\mathcal{A}_{1}\rrbracket_{k}\,.\] We now exhibit a function \(\Theta:\mathbb{N}\to\mathbb{N}\) such that, if \(\llbracket\mathcal{A}_{0}\rrbracket_{k}=^{\Theta(k)}\llbracket\mathcal{A}_{1} \rrbracket_{k}\), then one can construct \(<_{0},<_{1}\) satisfying (5.1). Proposition 5.2 then ensures that \(f:k\mapsto\hat{f}(k,\Theta(k))\) fits the bill. Let us now explain how the function \(\Theta\) is chosen. #### 5.2.2. Frequency of a neighbourhood type Let us denote \(\left|\textsc{NeighType}_{k}^{d}\right|\) as \(N\). Recall that every \(\mathcal{A}\in C\) has degree at most \(d\). What this means is that if we consider the set \(\textsc{Freq}[\mathcal{A}]_{k}\) of \(k\)-neighbourhood types that have enough occurrences in \(\mathcal{A}\) (where "enough" will be given a precise meaning later on), each type in \(\textsc{Freq}[\mathcal{A}]_{k}\) must have many occurrences that are scattered across \(\mathcal{A}\). Not only that, but we can also make sure that such occurrences are far from all the occurrences of every \(k\)-neighbourhood type not in \(\textsc{Freq}[\mathcal{A}]_{k}\), which by definition have few occurrences in \(\mathcal{A}\). Since the degree is bounded, \(N\) is bounded too, which prevents our distinction (which will be formalized later on) between rare neighbourhood types and frequent neighbourhood types from being circular. Such a dichotomy is introduced and detailed in [11]; we simply adapt this construction to our needs. In the remainder of this section, we describe this construction at a high level, and leave the technical details (such as the exact bounds) to the reader. The proof of the following lemma (in the vein of [1]) is straightforward, and relies on the degree boundedness hypothesis. Intuitively, Lemma 5.3 states that when the degree is bounded, it is not possible for all the elements of large sets to be concentrated in one corner of the structure, thus making it possible to pick elements in each set that are scattered across the structure. 
**Lemma 5.3**.: _Given three integers \(m\), \(\delta\), \(s\), there exists a threshold \(g(m,\delta,s)\in\mathbb{N}\) such that for all \(\mathcal{A}\in C\), all \(B\subseteq A\) of size at most \(s\), and all subsets \(C_{1},\cdots,C_{n}\subseteq A\) (with \(n\leq N\)) of size at least \(g(m,\delta,s)\), it is possible to find elements \(c_{j}^{1},\cdots,c_{j}^{m}\in C_{j}\) for all \(j\in\{1,\cdots,n\}\), such that for all \(j,j^{\prime}\in\{1,\cdots,n\}\) and \(i,i^{\prime}\in\{1,\cdots,m\}\), \(\text{dist}_{\mathcal{A}}(c_{j}^{i},B)>\delta\) and \(\text{dist}_{\mathcal{A}}(c_{j}^{i},c_{j^{\prime}}^{i^{\prime}})>\delta\) if \((j,i)\neq(j^{\prime},i^{\prime})\)._

Note that the \(N\) in this lemma could be replaced by any constant. Our goal is, given a structure \(\mathcal{A}\in C\), to partition the \(k\)-neighbourhood types into two classes: the frequent types, and the rare types. The property we wish to ensure is that there exist in \(\mathcal{A}\) some number \(m\) (which will be made precise later on, but only depends on \(k\)) of occurrences of each one of the frequent \(k\)-neighbourhood types which are both

* at distance greater than \(\delta\) (which, as for \(m\), is a function of \(k\) and will be fixed in the following) from one another, and
* at distance greater than \(\delta\) from every occurrence of a rare \(k\)-neighbourhood type.

To establish this property, we would like to use Lemma 5.3, with \(s\) being the total number of occurrences of all the rare \(k\)-neighbourhood types, and \(C_{1},\cdots,C_{n}\) being the sets of occurrences of the \(n\) distinct frequent \(k\)-neighbourhood types. The number \(N\) of different \(k\)-neighbourhood types of degree at most \(d\) is bounded by a function of \(k\) (as well as \(\Sigma\) and \(d\), which are fixed). Hence, we can proceed according to the following (terminating) algorithm to make the distinction between frequent and rare types:

1. First, let us mark every \(k\)-neighbourhood type as frequent.
2. Among the types which are currently marked as frequent, let \(\tau\) be one with the smallest number of occurrences in \(\mathcal{A}\).
3. If \(|\mathcal{A}|_{\tau}\) is at least \(g(m,\delta,s)\) (\(g\) being the function from Lemma 5.3) where \(s\) is the total number of occurrences of all the \(k\)-neighbourhood types which are currently marked as rare, then we are done and the marking frequent/rare is final. Otherwise, mark \(\tau\) as rare, and go back to step 2 if there remains at least one frequent \(k\)-neighbourhood type.

Notice that we can go at most \(N\) times through step 2, where \(N\) depends only on \(k\). Furthermore, each time we add a type to the set of rare \(k\)-neighbourhood types, we have the guarantee that this type has few occurrences (namely, less than \(g(m,\delta,s)\), where \(s\) can be bounded by a function of \(k\)). It is thus apparent that the threshold \(t\) such that a \(k\)-neighbourhood type \(\tau\) is frequent in \(\mathcal{A}\) iff \(|\mathcal{A}|_{\tau}\geq t\) can be bounded by some \(T\) depending only on \(k\) - importantly, \(T\) is the same for all structures of \(C\). Let us now make the above more formal. For \(t\in\mathbb{N}\) and \(\mathcal{A}\in C\), let \(\textsc{Freq}[\mathcal{A}]_{k}^{\geq t}\subseteq\textsc{NeighType}_{k}^{d}\) denote the set of \(k\)-neighbourhood types which have at least \(t\) occurrences in \(\mathcal{A}\). By applying the procedure presented above, we derive the following lemma:

**Lemma 5.4**.: _Let \(k,m,\delta\in\mathbb{N}\)._
There exists \(T\in\mathbb{N}\) such that for every \(\mathcal{A}\in C\), there exists some \(t\leq T\) such that_ \[t\geq g(m,\delta,\sum_{\tau\notin\textsc{Freq}[\mathcal{A}]_{k}^{\geq t}}| \mathcal{A}|_{\tau})\,.\] Let \(\textsc{Freq}[\mathcal{A}]_{k}:=\textsc{Freq}[\mathcal{A}]_{k}^{\geq t}\) for the smallest threshold \(t\) given in Lemma 5.4. Some \(k\)-neighbourhood type \(\tau\in\textsc{NeighType}_{k}^{d}\) is said to be _frequent_ in \(\mathcal{A}\in C\) if it belongs to \(\textsc{Freq}[\mathcal{A}]_{k}\); that is, if \(|\mathcal{A}|_{\tau}\geq t\). Otherwise, \(\tau\) is said to be _rare_. With the definition of \(g\) in mind, Lemma 5.4 can then be reformulated as follows: in every structure \(\mathcal{A}\in C\), one can find \(m\) occurrences of each frequent \(k\)-neighbourhood type which are at distance greater than \(\delta\) from one another and from the set of occurrences of every rare \(k\)-neighbourhood type. All that remains is for us to give a value (depending only on \(k\)) to the integers \(m\) and \(\delta\): let \(M:=\max\{|\tau|:\tau\in\textsc{NeighType}_{k}^{d}\}\) (\(M\) indeed exists, and is a function of \(k\) - recall that the signature \(\Sigma\) and the degree \(d\) are assumed to be fixed). Let us consider \[m:=2\cdot(k+1)\cdot M!\qquad\text{and}\qquad\delta:=4k\,. \tag{5.3}\] We then define \(\Theta(k)\) as the integer \(T\) provided by Lemma 5.4 for these values of \(m\) and \(\delta\). The threshold \(\Theta(k)\) indeed only depends on \(k\). Finally, notice that if \(\llbracket\mathcal{A}_{0}\rrbracket_{k}=^{\Theta(k)}\llbracket\mathcal{A}_{1 }\rrbracket_{k}\,,\) then \(\textsc{Freq}[\mathcal{A}_{0}]_{k}=\textsc{Freq}[\mathcal{A}_{1}]_{k}\,.\) As discussed in Section 5.2.1, there exists a function \(f\) such that \(\mathcal{A}_{0}\equiv^{\mathrm{FO}}_{f(k)}\mathcal{A}_{1}\) entails \(\llbracket\mathcal{A}_{0}\rrbracket_{k}=^{\Theta(k)}\llbracket\mathcal{A}_{1} \rrbracket_{k}\). We also make sure that \(f(k)\geq\Theta(k)\cdot N+1\) for every \(k\). Let us now consider \(\mathcal{A}_{0},\mathcal{A}_{1}\in C\) such that \(\mathcal{A}_{0}\equiv^{\mathrm{FO}}_{f(k)}\mathcal{A}_{1}\) for such an \(f\). If \(\textsc{Freq}[\mathcal{A}_{0}]_{k}=\emptyset\), then \(|\mathcal{A}_{0}|\leq\Theta(k)\cdot N\). This guarantees that \(\mathcal{A}_{0}\simeq\mathcal{A}_{1}\), and in particular that \(\mathcal{A}_{0}\equiv^{<\mathrm{inv}\,\,\mathrm{FO}^{2}}_{k}\mathcal{A}_{1}\). From now on, we suppose that there is at least one frequent \(k\)-neighbourhood type. The construction of two linear orders \(<_{0}\) and \(<_{1}\) satisfying \((\mathcal{A}_{0},<_{0})\equiv^{\mathrm{FO}^{2}}_{k}(\mathcal{A}_{1},<_{1})\) is the object of Section 5.2.3. #### 5.2.3. Construction of \(<_{0}\) and \(<_{1}\) This section is dedicated to the definition of two linear orders \(<_{0},<_{1}\) on \(\mathcal{A}_{0},\mathcal{A}_{1}\in\mathcal{C}\). We then prove in Section 5.3 that \((\mathcal{A}_{0},<_{0})\) and \((\mathcal{A}_{1},<_{1})\) are FO\({}^{2}\)-similar at depth \(k\). Recall that by hypothesis, \(\mathcal{A}_{0}\) and \(\mathcal{A}_{1}\) are FO-similar at depth \(f(k)\), which entails that they have the same number of occurrences of each \(\tau\in\textsc{NeighType}_{k}^{d}\) up to a threshold \(\Theta(k)\). 
To construct our two linear orders, we need to define the notion of \(k\)-environment: given \(\mathcal{A}\in\mathcal{C}\), a linear order \(<\) on \(A\), \(k\in\mathbb{N}\) and an element \(a\in A\), we define the _\(k\)-environment_ \(\mathcal{E}\textsc{nv}_{(\mathcal{A},<)}^{k}(a)\) _of \(a\) in \((\mathcal{A},<)\)_ as the restriction of \((\mathcal{A},<)\) to the \(k\)-neighbourhood of \(a\) in \(\mathcal{A}\), where \(a\) is the interpretation of the constant symbol \(c\). Note that the order is not taken into account when determining the domain of the substructure (it would otherwise be \(A\), given that any two distinct elements are adjacent for \(<\)). The _\(k\)-environment type_ \(\textsc{env-tp}_{(\mathcal{A},<)}^{k}(a)\) is the isomorphism class of \(\mathcal{E}\textsc{nv}_{(\mathcal{A},<)}^{k}(a)\). In other words, \(\textsc{env-tp}_{(\mathcal{A},<)}^{k}(a)\) contains the information of \(\mathcal{N}_{\mathcal{A}}^{k}(a)\) together with the order of its elements in \((\mathcal{A},<)\). Given \(\tau\in\textsc{NeighType}_{k}^{d}\), we define \(\textsc{Env}(\tau)\) as the set of \(k\)-environment types whose underlying \(k\)-neighbourhood type is \(\tau\). For \(i\in\{0,1\}\), we aim to partition \(A_{i}\) into \(2(2k+1)+2\) segments: \[A_{i}=X_{i}\cup\bigcup_{j=0}^{2k}(L_{i}^{j}\cup R_{i}^{j})\cup M_{i}\,.\] Once we have set a linear order on each segment, the linear order \(<_{i}\) on \(A_{i}\) will result from the concatenation of the orders on the segments as follows: \[(A_{i},<_{i}):=X_{i}\cdot L_{i}^{0}\cdot L_{i}^{1}\cdots L_{i}^{2k}\cdot M_{i}\cdot R_{i}^{2k}\cdots R_{i}^{1}\cdot R_{i}^{0}\,.\] Each segment \(L_{i}^{j}\), for \(j\in\{0,\cdots,2k\}\), is itself decomposed into two segments \(\textit{NL}_{i}^{j}\cdot\textit{UL}_{i}^{j}\). The \(\textit{UL}_{i}^{j}\) for \(j\in\{k+1,\cdots,2k\}\) will be empty; they are defined solely in order to keep the notations uniform. The 'N' stands for "neighbour" and the 'U' for "universal", for reasons that will soon become apparent. Symmetrically, each \(R_{i}^{j}\) is decomposed into \(\textit{UR}_{i}^{j}\cdot\textit{NR}_{i}^{j}\), with empty \(\textit{UR}_{i}^{j}\) as soon as \(j\geq k+1\). For \(i\in\{0,1\}\) and \(r\in\{0,\cdots,2k\}\), we define \(S_{i}^{r}\) as \[S_{i}^{r}:=X_{i}\cup\bigcup_{j=0}^{r}(L_{i}^{j}\cup R_{i}^{j})\,.\] Let us now explain how the segments are constructed in \(\mathcal{A}_{0}\); see Figure 1 for an illustration.

Figure 1: The black curvy edges represent the edges between elements belonging to different segments. Edges between elements of the same segment are not represented here. The order \(<_{0}\) grows from the left to the right.

For every \(\tau\in\textsc{Freq}[\mathcal{A}_{0}]_{k}\), let \(\tau_{1},\cdots,\tau_{|\textsc{Env}(\tau)|}\) be an enumeration of \(\textsc{Env}(\tau)\). Recall that we defined \(M\) in Section 5.2.2 as \(\max\{|\tau|:\tau\in\textsc{NeighType}^{d}_{k}\}\). Thus, we have \(|\textsc{Env}(\tau)|\leq M!\) for every \(\tau\in\textsc{NeighType}^{d}_{k}\).
In particular, by definition of frequency, and by choice of \(m\) and \(\delta\) in (5.3), Lemma 5.4 ensures that we are able to pick, for every \(\tau\in\textsc{Freq}[\mathcal{A}_{0}]_{k}\), every \(l\in\{1,\cdots,|\textsc{Env}(\tau)|\}\) and every \(j\in\{0,\cdots,k\}\), two elements \(a[\tau_{l}]_{L}^{j}\) and \(a[\tau_{l}]_{R}^{j}\) which have \(\tau\) as \(k\)-neighbourhood type in \(\mathcal{A}_{0}\), such that all the \(a[\tau_{l}]_{*}^{j}\) are at distance at least \(4k+1\) from each other and from any occurrence of a rare \(k\)-neighbourhood type in \(\mathcal{A}_{0}\). Construction of \(X_{0}\) and \(N\!L_{0}^{0}\).We start with the set \(X_{0}\), which contains all the occurrences of rare \(k\)-neighbourhood types, together with their \(k\)-neighbourhoods. Formally, the domain of \(X_{0}\) is \(\bigcup_{a\in A_{0}:\ \textsc{neigh-tp}^{k}_{\mathcal{A}_{0}}(a)\notin \textsc{FREQ}[\mathcal{A}_{0}]_{k}}N^{k}_{\mathcal{A}_{0}}(a)\,.\) We set \(N\!L_{0}^{0}:=N_{\mathcal{A}_{0}}(X_{0})\) (the set of neighbours of elements of \(X_{0}\)), and define the order \(<_{0}\) on \(X_{0}\) and on \(N\!L_{0}^{0}\) in an arbitrary way. Construction of \(U\!L_{0}^{j}\).If \(k<j\leq 2k\), then we set \(U\!L_{0}^{j}:=\emptyset\). Otherwise, for \(j\in\{0,\cdots,k\}\), once we have constructed \(L_{0}^{0},\cdots,L_{0}^{j-1}\) and \(N\!L_{0}^{j}\), we construct \(U\!L_{0}^{j}\) as follows. The elements of \(U\!L_{0}^{j}\) are \(\bigcup_{\tau\in\textsc{FREQ}[\mathcal{A}_{0}]_{k}}\bigcup_{l=1}^{|\textsc{ Env}(\tau)|}N^{k}_{\mathcal{A}_{0}}(a[\tau_{l}]_{L}^{j})\,.\) Note that \(U\!L_{0}^{j}\) does not intersect the previously constructed segments, by choice of the \(a[\tau_{l}]_{L}^{j}\) and of \(\delta=4k\) in (5.3). Furthermore, the \(N^{k}_{\mathcal{A}_{0}}(a[\tau_{l}]_{L}^{j})\) are pairwise disjoint, hence we can fix \(<_{0}\) freely and independently on each of them. Unsurprisingly, we order each \(N^{k}_{\mathcal{A}_{0}}(a[\tau_{l}]_{L}^{j})\) so that \(\textsc{env-tp}^{k}_{(\mathcal{A}_{0},<_{0})}(a[\tau_{l}]_{L}^{j})=\tau_{l}\). This is possible because for every \(\tau\in\textsc{Freq}[\mathcal{A}_{0}]_{k}\) and each \(l\), \(\textsc{neigh-tp}^{k}_{\mathcal{A}_{0}}(a[\tau_{l}]_{L}^{j})=\tau\) by choice of \(a[\tau_{l}]_{L}^{j}\). Once each \(N^{k}_{\mathcal{A}_{0}}(a[\tau_{l}]_{L}^{j})\) is ordered according to \(\tau_{l}\), the linear order \(<_{0}\) on \(U\!L_{0}^{j}\) can be completed in an arbitrary way. Note that every possible \(k\)-environment type extending a frequent \(k\)-neighbourhood type in \(\mathcal{A}_{0}\) occurs in each \(U\!L_{0}^{j}\). The \(U\!L_{0}^{j}\) are _universal_ in that sense. Construction of \(N\!L_{0}^{j}\).Now, let us see how the \(N\!L_{0}^{j}\) are constructed. For \(j\in\{1,\cdots,2k\}\), suppose that we have constructed \(L_{0}^{0},\cdots,L_{0}^{j-1}\). The domain of \(N\!L_{0}^{j}\) consists of all the neighbours (in \(\mathcal{A}_{0}\)) of the elements of \(L_{0}^{j-1}\) not already belonging to the construction so far. Formally, \(N_{\mathcal{A}_{0}}(L_{0}^{j-1})\setminus(X_{0}\cup\bigcup_{m=0}^{j-2}L_{0}^{ m})\). The order \(<_{0}\) on \(N\!L_{0}^{j}\) is chosen arbitrarily. Construction of \(R_{0}^{j}\).We construct similarly the \(R_{0}^{j}\), for \(j\in\{0,\cdots,2k\}\), starting with \(N\!R_{0}^{0}:=\emptyset\), then \(U\!R_{0}^{0}\) which contains each \(a[\tau_{l}]_{R}^{0}\) together with its \(k\)-neighbourhood in \(\mathcal{A}_{0}\) ordered according to \(\tau_{l}\), then \(N\!R_{0}^{1}:=N_{\mathcal{A}_{0}}(R_{0}^{0})\), then \(U\!R_{0}^{1}\), etc. 
Note that the \(a[\tau_{l}]_{R}^{j}\) have been chosen so that they are far enough in \(\mathcal{A}_{0}\) from all the segments that have been constructed so far, allowing us once more to order their \(k\)-neighbourhood in \(\mathcal{A}_{0}\) as we see fit.

Construction of \(M_{0}\). \(M_{0}\) contains all the elements of \(A_{0}\) besides those already belonging to \(S_{0}^{2k}\). The order \(<_{0}\) chosen on \(M_{0}\) is arbitrary.

**Transfer on \(\mathcal{A}_{1}\).** Suppose that we have constructed \(S_{0}^{2k}\). We can make sure, retrospectively, that the index \(f(k)\) in (5.2) is large enough so that there exists a set \(S\subseteq A_{1}\) so that \(\mathcal{A}_{0}\!\restriction_{S_{0}^{2k}\cup N_{\mathcal{A}_{0}}(S_{0}^{2k})}\simeq\mathcal{A}_{1}\!\restriction_{S}\) (this is ensured as long as \(f(k)\geq|S_{0}^{2k}\cup N_{\mathcal{A}_{0}}(S_{0}^{2k})|+1\), which can be bounded by a function of \(k\), independent of \(\mathcal{A}_{0}\) and \(\mathcal{A}_{1}\)). Let \(\varphi_{0}:\mathcal{A}_{0}\!\restriction_{S_{0}^{2k}}\to\mathcal{A}_{1}\!\restriction_{S^{\prime}}\) be the restriction to \(S_{0}^{2k}\) of said isomorphism (where \(S^{\prime}\subseteq S\) is the image of \(S_{0}^{2k}\)), and let \(\varphi_{1}\) be its inverse. By construction, the \(k\)-neighbourhood of every \(a\in S_{0}^{k}\) is included in \(S_{0}^{2k}\); hence every such \(a\) has the same \(k\)-neighbourhood type in \(\mathcal{A}_{0}\) as \(\varphi_{0}(a)\) has in \(\mathcal{A}_{1}\). We transfer along \(\varphi_{0}\) all the segments, with their order, from \((\mathcal{A}_{0},<_{0})\) to \(\mathcal{A}_{1}\), thus defining \(X_{1},N\!L_{1}^{j},U\!L_{1}^{j},\cdots\) as the respective images by \(\varphi_{0}\) of \(X_{0},N\!L_{0}^{j},U\!L_{0}^{j},\cdots\), and define \(M_{1}\) as the counterpart to \(M_{0}\). Note that the properties concerning neighbourhood are transferred; _e.g._ all the neighbours of an element in \(L_{1}^{j}\), \(1\leq j<2k\), belong to \(L_{1}^{j-1}\cup L_{1}^{j}\cup L_{1}^{j+1}\,.\) By construction, we get the following lemma:

**Lemma 5.5**.: _For each \(a\in S_{0}^{k},\) we have env-tp\({}^{k}_{(\mathcal{A}_{0},<_{0})}(a)=\) env-tp\({}^{k}_{(\mathcal{A}_{1},<_{1})}(\varphi_{0}(a))\,.\)_

Lemma 5.5 has two immediate consequences:

* The set \(X_{1}\) contains the occurrences in \(\mathcal{A}_{1}\) of all the rare \(k\)-neighbourhood types (just forget about the order on the \(k\)-environments, and remember that \(\mathcal{A}_{0}\) and \(\mathcal{A}_{1}\) have the same number of occurrences of each rare \(k\)-neighbourhood type).
* All the universal segments \(U\!L_{1}^{j}\) and \(U\!R_{1}^{j}\), for \(0\leq j\leq k\), contain at least one occurrence of each environment in \(\textsc{Env}(\tau)\), for each \(\tau\in\textsc{Freq}[\mathcal{A}_{0}]_{k}\).

Our construction also guarantees the following result:

**Lemma 5.6**.: _For each \(a,b\in S_{0}^{k}\), we have tp\({}^{0}_{(\mathcal{A}_{0},<_{0})}(a,b)=\) tp\({}^{0}_{(\mathcal{A}_{1},<_{1})}(\varphi_{0}(a),\varphi_{0}(b))\,.\)_

In particular, for \(a=b\in S_{0}^{k}\), we have tp\({}^{0}_{(\mathcal{A}_{0},<_{0})}(a)=\) tp\({}^{0}_{(\mathcal{A}_{1},<_{1})}(\varphi_{0}(a))\,.\)

### Proof of the \(\textsc{FO}^{2}\)-similarity of \((\mathcal{A}_{0},<_{0})\) and \((\mathcal{A}_{1},<_{1})\)

In this section, we aim to show the following result:

**Proposition 5.7**.: _We have that \((\mathcal{A}_{0},<_{0})\equiv^{\textsc{FO}^{2}}_{k}(\mathcal{A}_{1},<_{1})\,.\)_

#### 5.3.1. The two-pebble Ehrenfeucht-Fraïssé game

To establish Proposition 5.7, we use Ehrenfeucht-Fraïssé games with two pebbles.
These games have been introduced by Immerman and Kozen [11]. Let us adapt their definition to our context. The \(k\)_-round two-pebble Ehrenfeucht-Fraisse game on \((\mathcal{A}_{0},<_{0})\) and \((\mathcal{A}_{1},<_{1})\)_ is played by two players: the spoiler and the duplicator. The spoiler tries to expose differences between the two structures, while the duplicator tries to establish their indistinguishability. There are two pebbles associated with each structure: \(p_{0}^{x}\) and \(p_{0}^{y}\) on \((\mathcal{A}_{0},<_{0})\), and \(p_{1}^{x}\) and \(p_{1}^{y}\) on \((\mathcal{A}_{1},<_{1})\). Formally, these pebbles can be seen as the interpretations in each structure of two new constant symbols, but it will be convenient to see them as moving pieces. At the start of the game, the duplicator places \(p_{0}^{x}\) and \(p_{0}^{y}\) on elements of \((\mathcal{A}_{0},<_{0})\), and \(p_{1}^{x}\) and \(p_{1}^{y}\) on elements of \((\mathcal{A}_{1},<_{1})\). The spoiler wins if the duplicator is unable to ensure that tp\({}^{0}_{(\mathcal{A}_{0},<_{0})}(p_{0}^{x},p_{0}^{y})=\) tp\({}^{0}_{(\mathcal{A}_{1},<_{1})}(p_{1}^{x},p_{1}^{y})\). Otherwise, the proper game starts. Note that in the usual definition of the starting position, the pebbles are not on the board; however, it will be convenient to have them placed in order to uniformize our invariant. This change is not profound and does not affect the properties of the game. For each of the \(k\) rounds, the spoiler starts by choosing a structure and a pebble in this structure, and places this pebble on an element of the chosen structure. In turn, the duplicator must place the corresponding pebble in the other structure on an element of that structure. The spoiler wins at once if \(\operatorname{tp}^{0}_{(\mathcal{A}_{0},<_{0})}(p_{0}^{x},p_{0}^{y})\neq \operatorname{tp}^{0}_{(\mathcal{A}_{1},<_{1})}(p_{1}^{x},p_{1}^{y})\). Otherwise, another round is played. If the spoiler has not won after \(k\) rounds, then the duplicator wins. The main interest of these games is that they capture the expressive power of \(\operatorname{FO}^{2}\)[10]. 
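The winning condition checked after each round can be spelled out concretely; the following sketch is ours (the encoding of relations and orders is purely illustrative): the two pebbled pairs must satisfy exactly the same atomic formulas over the original vocabulary, the order, and equality.

```python
def same_atomic_type(rel0, rank0, rel1, rank1, pebbles0, pebbles1):
    """Check that (p0x, p0y) in structure 0 and (p1x, p1y) in structure 1 satisfy
    the same atomic formulas in x, y.  `rel0`/`rel1` map relation names to sets of
    tuples; `rank0`/`rank1` encode the linear orders as rank functions."""
    (p0x, p0y), (p1x, p1y) = pebbles0, pebbles1
    if (p0x == p0y) != (p1x == p1y):                                  # equality atom x = y
        return False
    if (rank0[p0x] < rank0[p0y]) != (rank1[p1x] < rank1[p1y]):        # order atom x < y
        return False
    mirror = {p0x: p1x, p0y: p1y}
    probes = [(p0x,), (p0y,), (p0x, p0x), (p0x, p0y), (p0y, p0x), (p0y, p0y)]
    for R in rel0:                                                    # unary and binary relational atoms
        for t in probes:
            if (t in rel0[R]) != (tuple(mirror[e] for e in t) in rel1[R]):
                return False
    return True
```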
We will only need the fact that these games are correct: **Theorem 5.8**.: _If the duplicator has a winning strategy in the \(k\)-round two-pebble Ehrenfeucht-Fraisse game on \((\mathcal{A}_{0},<_{0})\) and \((\mathcal{A}_{1},<_{1})\), then \((\mathcal{A}_{0},<_{0})\equiv_{k}^{FO^{2}}(\mathcal{A}_{1},<_{1})\,.\)_ Thus, in order to prove Proposition 5.7, we show that the duplicator wins the \(k\)-round two-pebble Ehrenfeucht-Fraisse game on \((\mathcal{A}_{0},<_{0})\) and \((\mathcal{A}_{1},<_{1})\,.\) For that, let us show by a decreasing induction on \(r=k,\cdots,0\) that the duplicator can ensure, after \(k-r\) rounds, that the three following properties (described below) hold: \[\forall i\in\{0,1\},\forall\alpha\in\{x,y\},\ p_{i}^{\alpha}\in S _{i}^{r}\ \to\ p_{1-i}^{\alpha}=\varphi_{i}(p_{i}^{\alpha})\] ( \[S_{r}\] ) \[\forall\alpha\in\{x,y\},\ \operatorname{env-tp}^{r}_{(\mathcal{A}_{0},<_{0})}(p_{0}^{\alpha})=\operatorname{env-tp}^{r}_{(\mathcal{A}_{1},<_{1})}( p_{1}^{\alpha})\] ( \[E_{r}\] ) \[\operatorname{tp}^{0}_{(\mathcal{A}_{0},<_{0})}(p_{0}^{x},p_{0}^ {y})=\operatorname{tp}^{0}_{(\mathcal{A}_{1},<_{1})}(p_{1}^{x},p_{1}^{y})\] ( \[T_{r}\] ) The first property, \((S_{r})\), guarantees that if a pebble is close (in a sense that depends on the number of rounds left in the game) to one of the \(<_{i}\)-minimal or \(<_{i}\)-maximal elements, the corresponding pebble in the other structure is located at the same position with respect to this \(<_{i}\)-extremal element. As for \((E_{r})\), it states that two corresponding pebbles are always placed on elements sharing the same \(r\)-environment type. Once again, the safety distance decreases with each round that goes. Finally, \((T_{r})\) controls that both pebbles have the same relative position (both with respect to the order and the original vocabulary) in the two ordered structures. In particular, the duplicator wins the game if \((T_{r})\) is satisfied at the beginning of the game, and after each of the \(k\) rounds of the game. #### 5.3.2. Base case: proofs of \((S_{k})\), \((E_{k})\) and \((T_{k})\) We start by proving \((S_{k})\), \((E_{k})\) and \((T_{k})\). At the start of the game, the duplicator places both \(p_{0}^{x}\) and \(p_{0}^{y}\) on the \(<_{0}\)-minimal element of \((\mathcal{A}_{0},<_{0})\), and both \(p_{1}^{x}\) and \(p_{1}^{y}\) on the \(<_{1}\)-minimal element of \((\mathcal{A}_{1},<_{1})\). In particular, \[p_{1}^{x}=p_{1}^{y}=\varphi_{0}(p_{0}^{x})=\varphi_{0}(p_{0}^{y})\,.\] This ensures that \((S_{k})\) holds, while \((E_{k})\) and \((T_{k})\) respectively follow from Lemmas 5.5--5.6. #### 5.3.3. Strategy for the duplicator We now describe the duplicator's strategy to ensure that \((S_{r})\), \((E_{r})\) and \((T_{r})\) hold no matter how the spoiler plays. Suppose that we have \((S_{r+1})\), \((E_{r+1})\) and \((T_{r+1})\) for some \(0\leq r<k\), after \(k-r-1\) rounds of the game. Without loss of generality, we may assume that, in the \((k-r)\)-th round of the Ehrenfeucht-Fraisse game between \((\mathcal{A}_{0},<_{0})\) and \((\mathcal{A}_{1},<_{1})\), the spoiler moves \(p_{0}^{x}\) in \((\mathcal{A}_{0},<_{0})\). Let us first explain informally the general idea behind the duplicator's strategy. 1. If the spoiler plays around the endpoints (by which we mean the elements that are \(<_{i}\)-minimal and maximal), the duplicator has no choice but to play a tit-for-tat strategy, _i.e_. 
to respond to the placement of \(p_{i}^{\alpha}\) near the endpoints by moving \(p_{1-i}^{\alpha}\) on \(\varphi_{i}(p_{i}^{\alpha})\). If the duplicator does not respond this way, then the spoiler will be able to expose the difference between \((\mathcal{A}_{0},<_{0})\) and \((\mathcal{A}_{1},<_{1})\) in the subsequent moves, by forcing the duplicator to play closer and closer to the endpoint, which will prove to be impossible at some point. On top of that, the occurrences of rare neighbourhood types are located in \((\mathcal{A}_{i},<_{i})\) near the \(<_{i}\)-minimal element. If the duplicator does not play according to \(\varphi_{0}\) in this area, it will be easy enough for the spoiler to win the game. The reason we introduced the segments \(N\!L_{i}^{j},U\!L_{i}^{j},N\!R_{i}^{j}\) and \(U\!R_{i}^{j}\) is precisely to bound the area in which the duplicator must implement the tit-for-tat strategy. Indeed, as soon as a pebble is placed in \(M_{i}\), there is no way for the spoiler to join the endpoints in less than \(k\) moves while forcing the duplicator's hand. The case where the spoiler plays near the endpoints corresponds to Case ((I)) below, and is detailed in Section 5.3.4. 2. Next, suppose that the spoiler places a pebble, say \(p_{0}^{x}\), next (in \(\mathcal{A}_{0}\)) to \(p_{0}^{y}\), _i.e_. such that \(p_{0}^{x}\in N^{1}_{\mathcal{A}_{0}}(p_{0}^{y})\). The duplicator must place \(p_{1}^{x}\) on an element whose relative position to \(p_{1}^{y}\) is the same as the relative position of \(p_{0}^{x}\) with respect to \(p_{0}^{y}\). Note that once this is done, the spoiler can change variable, and place \(p_{0}^{y}\) (or \(p_{1}^{y}\), if they decide to play in \((\mathcal{A}_{1},<_{1})\)) in \(N^{1}_{\mathcal{A}_{0}}(p_{0}^{x})\), thus forcing the duplicator to play near \(p_{1}^{x}\). In order to prevent the spoiler from being able, in \(k\) such moves, to expose the difference between \((\mathcal{A}_{0},<_{0})\) and \((\mathcal{A}_{1},<_{1})\), the duplicator must make sure, with \(r\) rounds left, that \(p_{0}^{x}\) and \(p_{1}^{x}\) (as well as \(p_{0}^{y}\) and \(p_{1}^{y}\)) share the same \(r\)-environment in \((\mathcal{A}_{0},<_{0})\) and \((\mathcal{A}_{1},<_{1})\). This will guarantee that the duplicator can play along if the spoiler decides to take \(r\) moves adjacent (in \(\mathcal{A}_{i}\)) to one another. The case where the spoiler places a pebble next (in the structure without ordering) to the other pebble is our Case ((II)), and is treated in Section 5.3.5. 3. Suppose now that the spoiler's move does not fall under the previous templates. Let us assume that the spoiler plays in \((\mathcal{A}_{0},<_{0})\), and moves \(p_{0}^{x}\) to the left of \(p_{0}^{y}\) (_i.e_. such that \((\mathcal{A}_{0},<_{0})\models p_{0}^{x}<p_{0}^{y}\)). In order to play according to the remarks from Cases 1 and 2, the duplicator must place \(p_{1}^{x}\) on an element which shares the same \(r\)-environment with \(p_{0}^{x}\) (where \(r\) is the number of rounds left in the game), which is not near the endpoints. It must be the case that the \(k\)-neighbourhood type of \(p_{0}^{x}\) in \(\mathcal{A}_{0}\) is frequent, since it is not near the endpoints of \((\mathcal{A}_{0},<_{0})\), hence not in \(X_{0}\). By construction, every universal segment \(U\!L_{1}^{j}\), for \(0\leq j\leq k\), contains elements of each \(k\)-environment type extending any frequent \(k\)-neighbourhood type. In particular, it contains an element having the same \(r\)-environment as \(p_{0}^{x}\). 
The duplicator will place \(p_{1}^{x}\) on such an element in the leftmost segment \(U\!L_{1}^{j}\) which is not considered to be near the endpoints (this notion depends on the number \(r\) of rounds left in the game). This is detailed in Cases ((III)) and ((V)) (for the symmetrical case where \(p_{0}^{x}\) is placed to the right of \(p_{0}^{y}\)) below. However, we have to consider a subcase, where \(p_{1}^{y}\) is itself in the leftmost segment \(L_{1}^{j}\) which is not near the endpoints. Indeed, in this case, placing \(p_{1}^{x}\) as discussed may result in \(p_{1}^{x}\) being to the right of \(p_{1}^{y}\), or being in \(N^{1}_{\mathcal{A}_{1}}(p_{1}^{y})\); either of which being game-losing to the duplicator. However, since \(p_{1}^{y}\) was considered to be near the endpoints in the previous round of the game, we know that the duplicator played a tit-for-tat strategy at that point, which allows us to replicate the placement of \(p_{0}^{x}\) according to \(\varphi_{0}\). This subcase, as well as the equivalent subcase where the spoiler places \(p_{0}^{x}\) to the right of \(p_{0}^{y}\), are formalized in Cases ((IV)) and ((VI)) below. We are now ready to describe formally the strategy implemented by the duplicator: 1. If \(p_{0}^{x}\in S_{0}^{r}\), then the duplicator responds by placing \(p_{1}^{x}\) on \(\varphi_{0}(p_{0}^{x})\). This corresponds to the tit-for-tat strategy implemented when the spoiler plays near the endpoints, as discussed in Case 1. 2. Else, if \(p_{0}^{x}\notin S_{0}^{r}\), and \(p_{0}^{x}\in N^{1}_{\mathcal{A}_{0}}(p_{0}^{y})\), then \((E_{r+1})\) ensures that there exists an isomorphism \(\psi:\mathcal{E}\mathrm{nv}_{(\mathcal{A}_{0},<_{0})}^{r+1}(p_{0}^{y}) \to\mathcal{E}\mathrm{nv}_{(\mathcal{A}_{1},<_{1})}^{r+1}(p_{1}^{y})\,\). The duplicator responds by placing \(p_{1}^{x}\) on \(\psi(p_{0}^{x})\). This makes formal the duplicator's response to a move next to the other pebble, as discussed in Case 2 above. 3. Else suppose that \((\mathcal{A}_{0},<_{0})\models p_{0}^{x}<p_{0}^{y}\) and \(p_{0}^{y}\notin L_{0}^{r+1}\). Note that \(\tau:=\mathrm{neigh}\text{-}\mathrm{tp}_{\mathcal{A}_{0}}^{k}(p_{0}^{x})\in \mathrm{Freq}[\mathcal{A}_{0}]_{k}\), since \(p_{0}^{x}\notin X_{0}\). Let \(\tau_{l}:=\mathrm{env}\text{-}\mathrm{tp}_{(\mathcal{A}_{0},<_{0})}^{k}(p_{0} ^{x})\). The duplicator responds by placing \(p_{1}^{x}\) on \(\varphi_{0}(a[\tau_{l}]_{L}^{r+1})\). 4. Else, if \((\mathcal{A}_{0},<_{0})\models p_{0}^{x}<p_{0}^{y}\) and \(p_{0}^{y}\in L_{0}^{r+1}\), then the duplicator moves \(p_{1}^{x}\) on \(\varphi_{0}(p_{0}^{x})\) (by \((S_{r+1})\), \(p_{0}^{x}\) indeed belongs to the domain of \(\varphi_{0}\)). 5. Else, suppose that \((\mathcal{A}_{0},<_{0})\models p_{0}^{y}<p_{0}^{x}\) and \(p_{0}^{y}\notin R_{0}^{r+1}\). This case is symmetric to Case ((III)). Similarly, the duplicator opts to play \(p_{1}^{x}\) on \(\varphi_{0}(a[\tau_{l}]_{R}^{r+1})\), where \(\tau_{l}:=\mathrm{env}\text{-}\mathrm{tp}_{(\mathcal{A}_{0},<_{0})}^{k}(p_{0} ^{x})\). 6. If we are in none of the cases above, it means that the spoiler has placed \(p_{0}^{x}\) to the right of \(p_{0}^{y}\), and that \(p_{0}^{y}\in R_{0}^{r+1}\). This case is symmetric to Case ((IV)). Once again, the duplicator places \(p_{1}^{x}\) on \(\varphi_{0}(p_{0}^{x})\). 
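Schematically, and purely as an illustration (this is our own sketch, not part of the proof; every argument below is an oracle standing for a notion defined in the text, and the names are ours), the duplicator's case analysis can be rendered as follows.

```python
def duplicator_response(p0x, p0y, r, *, in_S0, in_L0, in_R0, adjacent0, less0,
                        phi0, psi, witness_left, witness_right):
    """Return the element of A_1 on which the duplicator places p1x, given the
    spoiler's move p0x in (A_0, <_0) with r rounds left.  The keyword arguments
    are oracles: membership in S_0^r, L_0^{r+1}, R_0^{r+1}; adjacency and order
    in A_0; the partial isomorphism phi_0; the environment isomorphism psi
    around the y-pebbles; and the chosen occurrences a[tau_l]_L^{r+1},
    a[tau_l]_R^{r+1} for tau_l the k-environment type of p0x."""
    if in_S0(p0x, r):                      # Case (I): near the endpoints, tit for tat
        return phi0(p0x)
    if adjacent0(p0x, p0y):                # Case (II): copy inside the (r+1)-environment of the y-pebbles
        return psi(p0x)
    if less0(p0x, p0y):                    # the spoiler played to the left of p0y
        if not in_L0(p0y, r + 1):          # Case (III): jump to the universal segment UL_1^{r+1}
            return phi0(witness_left(p0x, r + 1))
        return phi0(p0x)                   # Case (IV): p0y sits in L_0^{r+1}, fall back to phi_0
    if not in_R0(p0y, r + 1):              # Case (V): symmetric to Case (III)
        return phi0(witness_right(p0x, r + 1))
    return phi0(p0x)                       # Case (VI): symmetric to Case (IV)
```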
It remains to show that this strategy satisfies our invariants: under the inductive assumption that \((S_{r+1})\), \((E_{r+1})\) and \((T_{r+1})\) hold, for some \(0\leq r<k\), we need to show that this strategy ensures that \((S_{r})\), \((E_{r})\) and \((T_{r})\) hold. We treat each case in its own section: Section 5.3.4 is devoted to Case ((I)) while Section 5.3.5 covers Case ((II)). Both Cases ((III)) and ((IV)) are treated in Section 5.3.6. Cases ((V)) and ((VI)), being their exact symmetric counterparts, are left to the reader. **Remark 5.9**.: Note that some properties need no verification. Since \(p_{0}^{y}\) and \(p_{1}^{y}\) are left untouched by the players, \((S_{r+1})\) ensures that half of \((S_{r})\) automatically holds, namely that \[\forall i\in\{0,1\},\quad p_{i}^{y}\in S_{i}^{r}\quad\to\quad p_{1-i}^{y}= \varphi_{i}(p_{i}^{y})\,.\] Similarly, the part of \((E_{r})\) concerning \(p_{0}^{y}\) and \(p_{1}^{y}\) follows from \((E_{r+1})\): \[\mathrm{env}\text{-}\mathrm{tp}_{(\mathcal{A}_{0},<_{0})}^{r}(p_{0}^{y})= \mathrm{env}\text{-}\mathrm{tp}_{(\mathcal{A}_{1},<_{1})}^{r}(p_{1}^{y})\,.\] Lastly, notice that once we have shown that \((E_{r})\) holds, it follows that \[\left\{\begin{array}{l}\mathrm{tp}_{\mathcal{A}_{0}}^{0}(p_{0}^{x})=\mathrm{tp }_{\mathcal{A}_{1}}^{0}(p_{1}^{x})\\ \mathrm{tp}_{\mathcal{A}_{0}}^{0}(p_{0}^{y})=\mathrm{tp}_{\mathcal{A}_{1}}^{0} (p_{1}^{y})\end{array}\right.\] #### 5.3.4. When the spoiler plays near the endpoints: Case ((I)) In this section, we treat the case where the spoiler places \(p_{0}^{x}\) near the \(<_{0}\)-minimal or \(<_{0}\)-maximal element of \((\mathcal{A}_{0},<_{0})\). Obviously, what "near" means depends on the number of rounds left in the game; the more rounds remain, the more the duplicator must be cautious regarding the possibility for the spoiler to reach an endpoint and potentially expose a difference between \((\mathcal{A}_{0},<_{0})\) and \((\mathcal{A}_{1},<_{1})\). As we have stated in Case ((I)), with \(r\) rounds left, we consider a move on \(p_{0}^{x}\) by the spoiler to be near the endpoints if it is made in \(S_{0}^{r}\). In that case, the duplicator responds along the tit-for-tat strategy, namely by placing \(p_{1}^{x}\) on \(\varphi_{0}(p_{0}^{x})\). Let us now prove that this strategy guarantees that \((S_{r})\), \((E_{r})\) and \((T_{r})\) hold. Recall from Note 5.9 that part of the task is already taken care of. **Proof of \((S_{r})\) in Case ((I)).** We have to show that \(\forall i\in\{0,1\},\ p_{i}^{x}\in S_{i}^{r}\ \to\ p_{1-i}^{x}=\varphi_{i}(p_{i}^{x})\,.\) This follows directly from the duplicator's strategy, since \(p_{1}^{x}=\varphi_{0}(p_{0}^{x})\) (thus \(p_{0}^{x}=\varphi_{1}(p_{1}^{x})\)). **Proof of \((E_{r})\) in Case ((I)).** We need to prove that \(\text{env-tp}^{r}_{(\mathcal{A}_{0},<_{0})}(p_{0}^{x})=\text{env-tp}^{r}_{( \mathcal{A}_{1},<_{1})}(p_{1}^{x})\,,\) which is a consequence of Lemma 5.5 given that \(p_{1}^{x}=\varphi_{0}(p_{0}^{x})\) and \(r<k\). **Proof of \((T_{r})\) in Case ((I)).** First, suppose that \(p_{0}^{y}\in S_{0}^{r+1}\). By \((S_{r+1})\), we know that \(p_{1}^{y}=\varphi_{0}(p_{0}^{y})\). Thus, Lemma 5.6 allows us to conclude that \(\text{tp}^{0}_{(\mathcal{A}_{0},<_{0})}(p_{0}^{x},p_{0}^{y})=\text{tp}^{0}_{( \mathcal{A}_{1},<_{1})}(p_{1}^{x},p_{1}^{y})\). Otherwise, \(p_{0}^{y}\notin S_{0}^{r+1}\) and \((S_{r+1})\) entails that \(p_{1}^{y}\notin S_{1}^{r+1}\). 
We have two points to establish: \[\text{tp}^{0}_{\mathcal{A}_{0}}(p_{0}^{x},p_{0}^{y}) =\text{tp}^{0}_{\mathcal{A}_{1}}(p_{1}^{x},p_{1}^{y}) \tag{5.4}\] \[\text{tp}^{0}_{<_{0}}(p_{0}^{x},p_{0}^{y}) =\text{tp}^{0}_{<_{1}}(p_{1}^{x},p_{1}^{y}) \tag{5.5}\] Notice that \[\left\{\begin{array}{l}\text{tp}^{0}_{\mathcal{A}_{0}}(p_{0}^{x},p_{0}^{y}) =\text{tp}^{0}_{\mathcal{A}_{0}}(p_{0}^{x})\cup\text{tp}^{0}_{\mathcal{A}_{0} }(p_{0}^{y})\\ \text{tp}^{0}_{\mathcal{A}_{1}}(p_{1}^{x},p_{1}^{y})=\text{tp}^{0}_{\mathcal{A }_{1}}(p_{1}^{x})\cup\text{tp}^{0}_{\mathcal{A}_{1}}(p_{1}^{y})\end{array}\right.\] This is because, by construction, the neighbours in \(\mathcal{A}_{i}\) of an element of \(S_{i}^{r}\) all belong to \(S_{i}^{r+1}\). Equation (5.4) follows from this remark and Note 5.9. As for Equation (5.5), either \[p_{0}^{x}\in X_{0}\cup\bigcup_{0\leq j\leq r}L_{0}^{j}\quad\text{and}\quad p_{ 1}^{x}\in X_{1}\cup\bigcup_{0\leq j\leq r}L_{1}^{j}\,,\] in which case \(\text{tp}^{0}_{<_{0}}(p_{0}^{x},p_{0}^{y})=\{x<y\}=\text{tp}^{0}_{<_{1}}(p_{1 }^{x},p_{1}^{y})\,,\) or \[p_{0}^{x}\in\bigcup_{0\leq j\leq r}R_{0}^{j}\quad\text{and}\quad p_{1}^{x}\in \bigcup_{0\leq j\leq r}R_{1}^{j}\,,\] in which case \(\text{tp}^{0}_{<_{0}}(p_{0}^{x},p_{0}^{y})=\{x>y\}=\text{tp}^{0}_{<_{1}}(p_{1 }^{x},p_{1}^{y})\,.\) #### 5.3.5. When the spoiler plays next to the other pebble: Case ((II)) Suppose now that the spoiler places \(p_{0}^{x}\) next to the other pebble in \(\mathcal{A}_{0}\) (_i.e._\(p_{0}^{x}\in N^{1}_{\mathcal{A}_{0}}(p_{0}^{y})\)), but not in \(S_{0}^{r}\) (for that move would fall under the jurisdiction of Case ((I))). In that case, the duplicator must place \(p_{1}^{x}\) so that the relative position of \(p_{1}^{x}\) and \(p_{1}^{y}\) is the same as that of \(p_{0}^{x}\) and \(p_{0}^{y}\). For that, we can use \((E_{r+1})\), which guarantees that \(\mbox{env-tp}^{r+1}_{(\mathcal{A}_{0},<_{0})}(p_{0}^{y})=\mbox{env-tp}^{r+1}_ {(\mathcal{A}_{1},<_{1})}(p_{1}^{y})\,.\) Thus there exists an isomorphism \(\psi\) between \(\mathcal{E}\mbox{nv}^{r+1}_{(\mathcal{A}_{0},<_{0})}(p_{0}^{y})\) and \(\mathcal{E}\mbox{nv}^{r+1}_{(\mathcal{A}_{1},<_{1})}(p_{1}^{y})\). Note that this isomorphism is unique, by virtue of \(<_{0}\) and \(<_{1}\) being linear orders. The duplicator's response is to place \(p_{1}^{x}\) on \(\psi(p_{0}^{x})\). Let us now prove that this strategy is correct with respect to our invariants \((S_{r})\), \((E_{r})\) and \((T_{r})\). **Proof of \((S_{r})\) in Case ((II)).** Because the spoiler's move does not fall under Case ((I)), we know that \(p_{0}^{x}\notin S_{0}^{r}\). Let us now show that \(p_{1}^{x}\) is not near the endpoints either: suppose that \(p_{1}^{x}\in S_{1}^{r}\). By construction, since \(p_{1}^{x}\) and \(p_{1}^{y}\) are neighbours in \(\mathcal{A}_{1}\), this entails that \(p_{1}^{y}\in S_{1}^{r+1}\). 
But then, we know by \((S_{r+1})\) that \(p_{0}^{y}=\varphi_{1}(p_{1}^{y})\); and because \(\psi\) is the unique isomorphism between \(\mathcal{E}\mbox{nv}^{r+1}_{(\mathcal{A}_{0},<_{0})}(p_{0}^{y})\) and \(\mathcal{E}\mbox{nv}^{r+1}_{(\mathcal{A}_{1},<_{1})}(p_{1}^{y})\), \(\psi\) is equal to the restriction \(\widetilde{\varphi_{0}}\) of \(\varphi_{0}\): \[\widetilde{\varphi_{0}}\ :\ \mathcal{E}\mbox{nv}^{r+1}_{(\mathcal{A}_{0},<_{0})}(p_{0}^{y})\ \to\ \mathcal{E}\mbox{nv}^{r+1}_{(\mathcal{A}_{1},<_{1})}(p_{1}^{y})\,.\] Thus \(p_{0}^{x}=\psi^{-1}(p_{1}^{x})=\widetilde{\varphi_{0}}^{-1}(p_{1}^{x})=\varphi_{1}(p_{1}^{x})\), and by definition of the segments on \((\mathcal{A}_{1},<_{1})\), which are just a transposition of the segments of \((\mathcal{A}_{0},<_{0})\) via \(\varphi_{0}\), \(p_{1}^{x}\in S_{1}^{r}\) then entails that \(p_{0}^{x}\in S_{0}^{r}\), which is clearly a contradiction. Since we have neither \(p_{0}^{x}\in S_{0}^{r}\) nor \(p_{1}^{x}\in S_{1}^{r}\), \((S_{r})\) holds - recall from Note 5.9 that the part concerning \(p_{0}^{y}\) and \(p_{1}^{y}\) is always satisfied.

**Proof of \((E_{r})\) in Case ((II)).** Recall that the duplicator placed \(p_{1}^{x}\) on the image of \(p_{0}^{x}\) by the isomorphism \[\psi:\mathcal{E}\mbox{nv}^{r+1}_{(\mathcal{A}_{0},<_{0})}(p_{0}^{y})\to\mathcal{E}\mbox{nv}^{r+1}_{(\mathcal{A}_{1},<_{1})}(p_{1}^{y})\,.\] It is easy to check that the restriction \(\widetilde{\psi}\) of \(\psi\): \(\widetilde{\psi}\ :\ \mathcal{E}\mbox{nv}^{r}_{(\mathcal{A}_{0},<_{0})}(p_{0}^{x})\ \to\ \mathcal{E}\mbox{nv}^{r}_{(\mathcal{A}_{1},<_{1})}(p_{1}^{x})\) is well-defined, and is indeed an isomorphism. This ensures that \(\mbox{env-tp}^{r}_{(\mathcal{A}_{0},<_{0})}(p_{0}^{x})=\mbox{env-tp}^{r}_{(\mathcal{A}_{1},<_{1})}(p_{1}^{x})\,,\) thus completing the proof of \((E_{r})\).

**Proof of \((T_{r})\) in Case ((II)).** This follows immediately from the fact that the isomorphism \(\psi\) maps \(p_{0}^{x}\) to \(p_{1}^{x}\) and \(p_{0}^{y}\) to \(p_{1}^{y}\): all the atomic facts about these elements are preserved.

#### 5.3.6. When the spoiler plays to the left: Cases ((III)) and ((IV))

We now treat our last case, which covers both Cases ((III)) and ((IV)), _i.e._ the instances where the spoiler places \(p_{0}^{x}\) to the left of \(p_{0}^{y}\) (formally: such that \((\mathcal{A}_{0},<_{0})\models p_{0}^{x}<p_{0}^{y}\)), which do not already fall in Cases ((I)) and ((II)). Note that the scenario in which the spoiler plays to the right of the other pebble is the exact symmetric of this one (since the \(X_{i}\) play no role in this case, left and right can be interchanged harmlessly). The idea here is very simple: since the spoiler has placed \(p_{0}^{x}\) to the left of \(p_{0}^{y}\), but neither in \(S_{0}^{r}\) nor in \(N^{1}_{\mathcal{A}_{0}}(p_{0}^{y})\), the duplicator responds by placing \(p_{1}^{x}\) on an element of \(U\!L_{1}^{r+1}\) (the leftmost universal segment not in \(S_{1}^{r}\)) sharing the same \(k\)-environment. This is possible by construction of the universal segments: if \(\tau_{l}:=\mbox{env-tp}^{k}_{(\mathcal{A}_{0},<_{0})}(p_{0}^{x})\) (which must extend a frequent \(k\)-neighbourhood type, since \(p_{0}^{x}\notin X_{0}\)), then \(\varphi_{0}(a[\tau_{l}]_{L}^{r+1})\) satisfies the requirements. There is one caveat to this strategy.
If \(p_{1}^{y}\) is itself in \(L_{1}^{r+1}\), two problems may arise: first, it is possible for \(p_{1}^{x}\) and \(p_{1}^{y}\) to be in the wrong order (_i.e._ such that \((\mathcal{A}_{1},<_{1})\models p_{1}^{x}>p_{1}^{y}\)). Second, it may be the case that \(p_{1}^{x}\) and \(p_{1}^{y}\) are neighbours in \(\mathcal{A}_{1}\), which, together with the fact that \(p_{0}^{x}\) and \(p_{0}^{y}\) are orthogonal in \(\mathcal{A}_{0}\) (_i.e._\(\operatorname{tp}^{0}_{\mathcal{A}_{0}}(p_{0}^{x},p_{0}^{y})=\operatorname{tp }^{0}_{\mathcal{A}_{0}}(p_{0}^{x})\cup\operatorname{tp}^{0}_{\mathcal{A}_{0}}(p _{0}^{y})\)), would break \((T_{r})\). This is why the duplicator's strategy depends on whether \(p_{1}^{y}\in L_{1}^{r+1}\): * if this is not the case, then the duplicator places \(p_{1}^{x}\) on \(\varphi_{0}(a[\tau_{1}]_{L}^{r+1})\). This corresponds to Case ((III)). * if \(p_{1}^{y}\in L_{1}^{r+1}\), then \((S_{r+1})\) guarantees that \(p_{0}^{y}\in L_{0}^{r+1}\). Hence \(p_{0}^{x}\), which is located to the left of \(p_{0}^{y}\), is in the domain of \(\varphi_{0}\): the duplicator moves \(p_{1}^{x}\) to \(\varphi_{0}(p_{0}^{x})\). This situation corresponds to Case ((IV)). Let us prove that \((S_{r})\), \((E_{r})\) and \((T_{r})\) hold in both of these instances. [leftmargin=0cm] **Proof of \((S_{r})\) in Case ((III)).** Since the spoiler's move does not fall under Case ((I)), we have that \(p_{0}^{x}\notin S_{0}^{r}\). By construction, \(a[\tau_{1}]_{L}^{r+1}\in L_{0}^{r+1}\), thus \(\varphi_{0}(a[\tau_{1}]_{L}^{r+1})\in L_{1}^{r+1}\), and \(p_{1}^{x}\notin S_{1}^{r}\). [leftmargin=0cm] **Proof of \((E_{r})\) in Case ((III)).** It follows from \(\operatorname{env-tp}^{k}_{(\mathcal{A}_{0},<_{0})}(a[\tau_{1}]_{L}^{r+1})= \tau_{l}\) together with Lemma 5.5 that \[\operatorname{env-tp}^{k}_{(\mathcal{A}_{0},<_{0})}(p_{0}^{x})=\operatorname{ env-tp}^{k}_{(\mathcal{A}_{1},<_{1})}(p_{1}^{x})\,.\] A fortiori, \(\operatorname{env-tp}^{r}_{(\mathcal{A}_{0},<_{0})}(p_{0}^{x})=\operatorname{ env-tp}^{r}_{(\mathcal{A}_{1},<_{1})}(p_{1}^{x})\). [leftmargin=0cm] **Proof of \((T_{r})\) in Case ((III)).** Because the spoiler's move does not fall under Case ((II)), \(p_{0}^{x}\notin N^{1}_{\mathcal{A}_{0}}(p_{0}^{y})\). In other words, \[\operatorname{tp}^{0}_{\mathcal{A}_{0}}(p_{0}^{x},p_{0}^{y})=\operatorname{ tp}^{0}_{\mathcal{A}_{0}}(p_{0}^{x})\cup\operatorname{tp}^{0}_{\mathcal{A}_{0}}(p _{0}^{y})\,.\] Recall the construction of \(U\!L_{0}^{r+1}\): the whole \(k\)-neighbourhood of \(a[\tau_{1}]_{L}^{r+1}\) was included in this segment. In particular, \(N^{1}_{\mathcal{A}_{1}}(p_{1}^{x})=N^{1}_{\mathcal{A}_{1}}(\varphi_{0}(a[\tau _{1}]_{L}^{r+1}))\subseteq U\!L_{1}^{r+1}\). By assumption, \(p_{1}^{y}\notin L_{1}^{r+1}\), which entails that \(\operatorname{tp}^{0}_{\mathcal{A}_{1}}(p_{1}^{x},p_{1}^{y})=\operatorname{tp }^{0}_{\mathcal{A}_{1}}(p_{1}^{x})\cup\operatorname{tp}^{0}_{\mathcal{A}_{1}} (p_{1}^{y})\,.\) It then follows from the last observation of Note 5.9 that \(\operatorname{tp}^{0}_{\mathcal{A}_{0}}(p_{0}^{x},p_{0}^{y})=\operatorname{tp }^{0}_{\mathcal{A}_{1}}(p_{1}^{x},p_{1}^{y})\,.\) Let us now prove that \(\operatorname{tp}^{0}_{<_{1}}(p_{1}^{x},p_{1}^{y})=\{x<y\}\). We claim that \(p_{1}^{y}\notin X_{1}\cup\bigcup_{0\leq j\leq r+1}L_{1}^{j}\). 
Suppose otherwise: \((S_{r+1})\) would entail that \(p_{0}^{y}\in X_{0}\cup\bigcup_{0\leq j\leq r+1}L_{0}^{j}\) which, together with the hypothesis \(p_{0}^{y}\notin L_{0}^{r+1}\) and \(p_{0}^{x}<p_{0}^{y}\), would result in \(p_{0}^{x}\) being in \(S_{0}^{r}\), which is absurd. Thus, \(\operatorname{tp}^{0}_{<_{1}}(p_{1}^{x},p_{1}^{y})=\{x<y\}=\operatorname{tp}^{0 }_{<_{0}}(p_{0}^{x},p_{0}^{y})\), which concludes the proof of \((T_{r})\). [leftmargin=0cm] **Proof of \((S_{r})\), \((E_{r})\) and \((T_{r})\) in Case ((IV)).** Let us now move to the case where \(p_{1}^{y}\in L_{1}^{r+1}\). Recall that under this assumption, \(p_{0}^{y}=\varphi_{1}(p_{1}^{y})\in L_{0}^{r+1}\) and since \(p_{0}^{x}<p_{0}^{y}\) and \(p_{0}^{x}\notin S_{0}^{r}\), we have that \(p_{0}^{x}\in L_{0}^{r+1}\). The duplicator places the pebble \(p_{1}^{x}\) on \(\varphi_{0}(p_{0}^{x})\); in particular, \(p_{1}^{x}\in L_{1}^{r+1}\). The proof of \((S_{r})\) follows from the simple observation that \(p_{0}^{x}\notin S_{0}^{r}\) and \(p_{1}^{x}\notin S_{1}^{r}\). As for \((E_{r})\) and \((T_{r})\), they follow readily from Lemma 5.5 and 5.6 and the fact that \(p_{1}^{x}=\varphi_{0}(p_{0}^{x})\) and \(p_{1}^{y}=\varphi_{0}(p_{0}^{y})\). ### Counting quantifiers We now consider the natural extension \(\mathrm{C}^{2}\) of \(\mathrm{FO}^{2}\), where one is allowed to use counting quantifiers of the form \(\exists^{\geq i}x\) and \(\exists^{\geq i}y\), for \(i\in\mathbb{N}\). Such a quantifier, as expected, expresses the existence of at least \(i\) elements satisfying the formula which follows it. This logic \(\mathrm{C}^{2}\) has been extensively studied. On an expressiveness standpoint, \(\mathrm{C}^{2}\) stricly extends \(\mathrm{FO}^{2}\) (which cannot count up to three), and contrary to the latter, \(\mathrm{C}^{2}\) does not enjoy the small model property (meaning that contrary to \(\mathrm{FO}^{2}\), there exist satisfiable \(\mathrm{C}^{2}\)-sentences which do not have small - or even finite - models). However, the satisfiability problem for \(\mathrm{C}^{2}\) is still decidable [1, 10, 11]. To the best of our knowledge, it is not known whether \(<\)-inv \(\mathrm{C}^{2}\) has a decidable syntax. Let us now explain how the proof of Theorem 5.1 can be adapted to show the following stronger version: **Theorem 5.10**.: _Let \(\mathit{C}\) be a class of structures of bounded degree._ _Then \(<\)-inv \(\mathit{C}^{2}\subseteq FO\) on \(\mathit{C}\)._ Proof.: The proof is very similar as to that of Theorem 5.1. The difference is that we now need to show, at the end of the construction, that the structures \((\mathcal{A}_{0},<_{0})\) and \((\mathcal{A}_{1},<_{1})\) are not only \(\mathrm{FO}^{2}\)-similar, but \(\mathrm{C}^{2}\)-similar. More precisely, we show that for every \(k\in\mathbb{N}\), there exists some \(f(k)\in\mathbb{N}\) such that if \(\mathcal{A}_{0}\equiv_{f(k)}^{\mathrm{FO}}\mathcal{A}_{1}\), then it is possible to construct two linear orders \(<_{0}\) and \(<_{1}\) such that \((\mathcal{A}_{0},<_{0})\) and \((\mathcal{A}_{1},<_{1})\) agree on all \(\mathrm{C}^{2}\)-sentences of quantifier rank at most \(k\), and with counting indexes at most \(k\), which we denote \((\mathcal{A}_{0},<_{0})\equiv_{k,k}^{\mathrm{C}^{2}}(\mathcal{A}_{1},<_{1})\). This is enough to complete the proof, as these classes of \(\mathrm{C}^{2}\)-sentences cover all the \(\mathrm{C}^{2}\)-definable properties. 
In order to prove that \((\mathcal{A}_{0},<_{0})\equiv_{k,k}^{\mathrm{C}^{2}}(\mathcal{A}_{1},<_{1})\), we need an Ehrenfeucht-Fraisse-game capturing \(\equiv_{k,k}^{\mathrm{C}^{2}}\). It is not hard to derive such a game from the Ehrenfeucht-Fraisse-game for \(\mathrm{C}^{2}\)[1]. This game only differs from the two-pebble Ehrenfeucht-Fraisse-game in that in each round, once the spoiler has chosen a structure (say \((\mathcal{A}_{0},<_{0})\)) and a pebble to move (say \(p_{0}^{x}\)), the spoiler picks not only one element of that structure, but a set \(P_{0}\) of up to \(k\) elements. Then the duplicator must respond with a set \(P_{1}\) of same cardinality in \((\mathcal{A}_{1},<_{1})\). The spoiler then places \(p_{1}^{x}\) on any element of \(P_{1}\), to which the duplicator responds by placing \(p_{0}^{x}\) on some element of \(P_{0}\). As usual, the spoiler wins after this round if \(\mathrm{tp}_{(\mathcal{A}_{0},<_{0})}^{0}(p_{0}^{x},p_{0}^{y})\neq\mathrm{tp}_{ (\mathcal{A}_{1},<_{1})}^{0}(p_{1}^{x},p_{1}^{y})\). Otherwise, the game goes on until \(k\) rounds are played. It is not hard to establish that this game indeed captures \(\equiv_{k,k}^{\mathrm{C}^{2}}\), in the sense that \((\mathcal{A}_{0},<_{0})\equiv_{k,k}^{\mathrm{C}^{2}}(\mathcal{A}_{1},<_{1})\) if and only if the duplicator has a winning strategy for \(k\) rounds of this game. The restriction on the cardinal of the set chosen by the spoiler (which is at most \(k\)) indeed corresponds to the fact that the counting indexes of the formulae are at most \(k\). As for the number of rounds (namely, \(k\)), it corresponds as usual to the quantifier rank. This can be easily derived from a proof of Theorem 5.3 in [1], and is left to the reader. Let us now explain how to modify the construction of \(<_{0}\) and \(<_{1}\) presented in Section 5.2 in order for the duplicator to maintain similarity for \(k\)-round in such a game. The only difference lies in the choice of the universal elements. Recall that in the previous construction, we chose, for each \(k\)-environment type \(\tau_{l}\) extending a frequent \(k\)-neighbourhood type and each segment \(\mathit{U\!L}_{0}^{j}\), an element \(a[\tau_{l}]_{L}^{j}\) whose \(k\)-environment type in \((\mathcal{A}_{0},<_{0})\) is destined to be \(\tau_{l}\) (and similarly for \(\mathit{U\!R}_{0}^{j}\) and \(a[\tau_{l}]_{R}^{j}\)). In the new construction, we pick \(k\) such elements, instead of just one. Just as previously, all these elements must be far enough from one another in the Gaifman graph of \(\mathcal{A}_{0}\). Once again, this condition can be met by virtue of the \(k\)-neighbourhood type \(\tau\) underlying \(\tau_{l}\) being frequent, and thus having many occurrences scattered across \(\mathcal{A}_{0}\) (remember that we have a bound on the degree of \(\mathcal{A}_{0}\), thus all the occurrences of \(\tau\) cannot be concentrated). We only need to multiply the value of \(m\) by \(k\) in (5.3). When the spoiler picks a set of elements of size at most \(k\) in one of the structures (say \(P_{0}\) in \((\mathcal{A}_{0},<_{0})\)), the duplicator responds by selecting, for each one of the elements of \(P_{0}\), an element in \((\mathcal{A}_{1},<_{1})\) along the strategy for the \(\operatorname{FO}^{2}\)-game explained in Section 5.3.3. All that remains to be shown is that it is possible for the duplicator to answer each element of \(P_{0}\) with a different element in \((\mathcal{A}_{1},<_{1})\). 
Note that if the duplicator follows the strategy from Section 5.3.3, they will never answer two moves by the spoiler falling under different cases among Cases ((I))-((VI)) with the same element. Thus we can treat separately each one of these cases; and for each case, we show that if the spoiler chooses up to \(k\) elements in \((\mathcal{A}_{0},<_{0})\) falling under this case in \(P_{0}\), then the duplicator can find the same number of elements in \((\mathcal{A}_{1},<_{1})\), following the aforementioned strategy. * For Case ((I)), this is straightforward, since the strategy is based on the isomorphism between the borders of the linear orders. The same goes for Cases ((II)), ((IV)) and ((VI)), as the strategy in these cases also relies on an isomorphism argument. * Suppose now that \(p_{0}^{y}\notin L_{0}^{r+1}\), and assume that the spoiler chooses several elements to the left of \(p_{0}^{y}\), but outside of \(S_{0}^{r}\) and not adjacent to \(p_{0}^{y}\). This corresponds to Case ((III)). Recall that our new construction guarantees, for each \(k\)-environment type extending a frequent \(k\)-neighbourhood type, the existence in \(L_{1}^{r+1}\) of \(k\) elements having this environment. This lets us choose, in \(L_{1}^{r+1}\), a distinct answer for each element in the set selected by the spoiler, sharing the same \(k\)-environment type. Case ((V)) is obviously symmetric. This concludes the proof of Theorem 5.10. ## 6. Conclusion In this paper, we made significant progress towards a better understanding of the two-variable fragment of order-invariant first-order logic: * From a complexity point of view, we established the \(\operatorname{coNExpTime}\)-completeness of the problem of deciding if a given \(\operatorname{FO}^{2}\)-sentence is order-invariant (Theorem 3.5), significantly simplifying and improving the result by Zeume and Harwath [11, Thm. 12]. * From an expressivity point of view, we addressed the question of whether every property definable in order-invariant \(\operatorname{FO}^{2}\) can also be expressed in plain FO. We failed short of fully answering the question, but provided two interesting results. The first one (namely, Theorem 4.5) establishes that under a more relaxed notion of order-invariance, the answer to the above question is "no". While this does not bring a fully-satisfactory answer to the problem, this leads us to believe that order-invariant \(\operatorname{FO}^{2}\) can indeed express properties beyond the scope of FO. The second one (Theorem 5.1) states that when the degree is bounded, every property expressible in order-invariant \(\operatorname{FO}^{2}\) is definable in FO without the use of the order. This is an important step towards resolving the conjecture that order-invariant FO over classes of structures of bounded degree cannot express properties beyond the reach of FO. Results of Section 5 also apply to the case of the two-variable logic with counting, \(\operatorname{C}^{2}\). While order-invariant \(\operatorname{C}^{2}\) has decidable satisfiability and validity problems [13, Theorem 6.20], it is open if it has a decidable syntax (_i.e_. whether the problem of determining if a given \(\operatorname{C}^{2}\)-sentence is order-invariant is decidable). Unfortunately the techniques introduced in Section 3 are of no use here, as \(\mathrm{C}^{2}\) lacks the finite model property. Finally, it might be a good idea to study order-invariant \(\mathrm{FO}^{2}\) over graph classes beyond classes of bounded-degree, _e.g_. 
planar graphs or nowhere-dense classes of graphs. ## Acknowledgements Bartosz Bednarczyk was supported by the ERC Consolidator Grant No. 771779 (DeciGUT). He would like to thank Antti Kuusisto and Anna Karykowska for many insightful discussions on the problem.
2306.09611
Multi-MeV electrons from above-threshold ionization of the neon K-shell
We present measurements of integrated electron energies produced by above-threshold ionization (ATI) of neon in a laser field with intensity exceeding 10$^{20}$ W/cm$^{2}$. We observe electrons with energy exceeding 10 MeV ejected in the laser forward direction above a threshold intensity of $2 \times 10^{20}$ W/cm$^{2}$. We compare to ATI models using both tunneling (ADK-PPT) and barrier suppression ionization and observe the onset of ATI at a higher threshold intensity than predicted by these models.
A. Yandow, T. N. Ha, C. Aniculaesei, H. L. Smith, C. G. Richmond, M. M. Spinks, H. J. Quevedo, S. Bruce, M. Darilek, C. Chang, D. A. Garcia, E. Gaul, M. E. Donovan, B. M. Hegelich, T. Ditmire
2023-06-16T04:02:52Z
http://arxiv.org/abs/2306.09611v1
# Multi-MeV electrons from above-threshold ionization of the neon K-shell ###### Abstract We present measurements of integrated electron energies produced by above-threshold ionization (ATI) of neon in a laser field with intensity exceeding 10\({}^{20}\) W/cm\({}^{2}\). We observe electrons with energy exceeding 10 MeV ejected in the laser forward direction above a threshold intensity of 2 \(\times 10^{20}\) W/cm\({}^{2}\). We compare to ATI models using both tunneling (ADK-PPT) and barrier suppression ionization and observe the onset of ATI at a higher threshold intensity than predicted by these models. Above-threshold ionization (ATI) is a fundamental response of an atomic system to an intense flux of photons. The first experimental evidence of ATI showed the absorption of seven photons in a six-photon multiphoton ionization pathway of xenon [1]. As near-infrared laser intensity increases beyond 10\({}^{14}\) W/cm\({}^{2}\), ATI can be well-described by a quasi-classical two-step model. The ionization process can be described by the Ammosov-Krainov-Delone and Perelomov-Popov-Terent'ev (ADK-PPT) model [2][3], where the photons sum coherently to create a strong electric field which liberates electrons through a quasi-static tunneling process. The ATI electron is then "born" into the laser field with initial conditions consistent with the ADK-PPT tunneling model, and the absorbed laser energy can be found by integrating the classical Lorentz force equations. The ADK-PPT ionization rate model has been well-validated by measurements of ion charge states produced in the laser focus at nonrelativistic intensities [4]. Extension of these experiments to intensity above 10\({}^{20}\) W/cm\({}^{2}\) has been elusive even with advances in laser technology due to the limited repetition rate of high-energy ultrafast lasers and the gas density limits imposed by conventional time-of-flight experiment designs [5]. Yamakawa _et al._ and Chowdhury _et al._ performed precision measurements of highly-charged noble gas ions at relativistic laser intensities exceeding 2\(\times\)10\({}^{19}\) W/cm\({}^{2}\)[6][7]. Both authors exploited the strongly nonlinear dependence of tunneling ionization probability on laser intensity to calculate a model intensity from the relative charge state yields. The intensity computed by Chowdhury _et al._ was at the estimated lower bound of their experimental intensity. Yamakawa _et al._ found unexpectedly that the laser intensity calculated from the ADK-PPT tunneling model depended on the noble gas ion species when the laser parameters were held constant, with the intensity calculated from the model decreasing systematically with increasing atomic number. Recent modeling suggests the ionization of helium-like ions is more robust to uncertainties in the ionization modeling [8][9], motivating investigation into ionization of the neon K-shell. In this Letter we explore the observation of K-shell neon charge states produced in a laser focus by detecting the high-energy ATI electrons produced by the laser-ion interaction. We select high-energy ATI electrons as our experimental observable because direct laser acceleration of the highly-charged ions will severely degrade the resolution of time-of-flight spectrometers as intensity approaches 10\({}^{21}\) W/cm\({}^{2}\)[10]. 
The behavior of relativistic ATI electrons is well-characterized by experiment [11][12][13], with higher-energy electrons confined to a smaller forward cone in the laser forward direction at an angle \[tan(\theta)=\sqrt{\frac{2}{\gamma-1}} \tag{1}\] from the laser propagation direction[13]. Modulation of the ATI electron energy spectrum and spatial distributions induced by the gaps in appearance intensity between different atomic shells has also been confirmed experimentally in argon and xenon [12]. The large gap in ionization potential between the L-shell (\(<\) 239 eV) and K-shell electrons (1196 eV and 1362 eV) of neon result in the K-shell electrons being "born" into a field nearly two orders of magnitude higher in intensity, and the K-shell ATI electrons that interact with the laser field at peak strength will be ejected in a narrower cone centered on the laser forward direction than the L-shell electrons that will be ponderomotively expelled by the leading edge of the laser pulse and will therefore attain lower energies. Figure 1 shows a simplified diagram of our experimental setup, which is described in greater detail elsewhere [14]. A multi-Joule, linearly polarized, ultrafast laser pulse produced by the rod amplifier of the Texas Petawatt laser is focused with an \(f/1.4\) off-axis parabolic mirror, focusing to a spot with central maximum full-width at half-maximum measured to be \(2.6\pm 0.2\)\(\mu\)m with optimal wavefront. The intensity at the focal plane in the target chamber is estimated using indirect measurements of the wavefront, an estimated pulse duration deconvolved from a second-order autocorrelation measurement assuming a Gaussian pulse shape, and a calibrated energy measurement. A peak intensity of \(5\times 10^{20}\) W/cm\({}^{2}\) is attained in this configuration. Intensity was scanned by decreasing the laser energy with the laser wavefront and pulse duration remaining optimized. The focused laser pulse interacts with a low-density plume of neon gas near the target chamber center that is introduced by a flow-calibrated orifice with a diameter of 100 \(\mu\)m backed with 60 torr of ultra high-purity neon gas. We estimate the maximum density of the gas in the interaction volume to be about \(3\times 10^{14}\) cm\({}^{-3}\) using Ansys Fluent[15] simulation of the steady-state gas flow into vacuum, below the threshold density for collective plasma effects. The electrons produced by the laser-ion interactions are detected by scintillating calorimeter detectors placed around the laser focus. The three detectors discussed in this Letter were oriented along the polarization plane at 30\({}^{\circ}\) from the laser forward direction, along the laser forward direction, and at a control position 110\({}^{\circ}\) from the laser forward direction and out of the polarization plane. Each detector consisted of a 50 mm diameter, 40 mm long cylinder of long-lifetime (285 ns) scintillating plastic (Eljen Technologies EJ240) coupled to a photomultiplier tube with a tapered voltage divider for optimal pulse linearity. The scintillator plastic and photomultiplier tubes (PMT) were encased in a vacuum-compatible PTFE housing that was made light-tight with colloidal graphite and aluminum foil. The relatively large solid angle (\(\sim\) 0.03 steradians) subtended by the detectors captured several hundred ATI electrons at each detector, enabling accurate calorimeter energy measurements with only a few shots at each laser intensity. 
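As a quick numerical illustration of Eq. (1), which we add here for clarity rather than as part of the original analysis, the short sketch below evaluates the ejection angle for a few kinetic energies, assuming only the standard electron rest energy of 0.511 MeV.

```python
import math

M_E_C2_MEV = 0.511  # electron rest energy (standard value, MeV)

def ejection_angle_deg(kinetic_energy_mev: float) -> float:
    """Ejection angle from Eq. (1): tan(theta) = sqrt(2 / (gamma - 1))."""
    gamma = 1.0 + kinetic_energy_mev / M_E_C2_MEV
    return math.degrees(math.atan(math.sqrt(2.0 / (gamma - 1.0))))

for ke in (2.0, 10.0):
    print(f"{ke:5.1f} MeV electron -> theta ~ {ejection_angle_deg(ke):4.1f} deg")
# ~35.6 deg at 2 MeV versus ~17.7 deg at 10 MeV: faster electrons are folded
# closer to the laser forward direction, consistent with detectors placed at
# 0 and 30 degrees from the laser axis.
```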
Figure 1: A conceptual diagram of the experimental setup. Two detectors discussed in this Letter are oriented at angles of 0\({}^{\circ}\) and 30\({}^{\circ}\) from the laser forward direction and in the laser polarization plane. The dashed lobes demonstrate angular separation of the L-shell and K-shell electrons. This drawing is not to scale. Color figures available online.

The output current pulse from each photomultiplier tube was recorded on a Tektronix TDS5054 oscilloscope and digitally filtered to eliminate ringing from on-shot electromagnetic noise. The upper and lower charge thresholds at each voltage were estimated from the breakdown of the linear relationship between current amplitude and integrated charge due to pulse saturation and residual ringing, respectively. We performed a series of control experiments to confirm our signal originated from K-shell ATI electrons. We verified the signal was not due to electromagnetic pulse effects by verifying it disappeared when the gas flow was turned off. We verified the signal from forward-directed radiation was at least 500 times greater than backward-detected radiation at the control detector position. We swapped the detectors located at the \(30^{\circ}\) and control positions to confirm the observed signal was due to radiation detected in the laser forward direction and not a detector-specific artifact. A series of control shots was also taken using helium gas as a target to simulate the laser interaction with the L-shell electrons. We observed no repeatable signal that would correspond to L-shell electrons with energy greater than 1.4 MeV at the 0\({}^{\circ}\) position and 2.8 MeV at the 30\({}^{\circ}\) position.

Figure 2: Electron energy deposited in the scintillator at the (a) 30\({}^{\circ}\) position and (b) 0\({}^{\circ}\) position. Shield thicknesses of 1 mm and 2.6 mm have 50% efficiency cutoffs at 1.4 MeV and 2.8 MeV, respectively. Predictions of the ADK-PPT model (solid curve) and Augst BSI model (dashed curve) are presented alongside, with simulation intensities at the marker points. The floors mark the maximum values and intensity ranges for measurements falling below the lowest detector charge threshold in each shielding configuration. Color figures available online.

Figures 2a and 2b show our integrated ATI electron energy yields at the \(30^{\circ}\) and \(0^{\circ}\) detector positions. Aluminum filters with thicknesses of 1 mm and 2.6 mm are inserted in front of the detectors to stop electrons with energy below 1.4 MeV and 2.8 MeV, respectively, to eliminate low-energy L-shell electrons observed scattered toward the laser forward direction [14]. We observe a similar intensity threshold effect at both positions, where the energy carried forward by ATI electrons becomes measurable above \(2\times 10^{20}\) W/cm\({}^{2}\) and increases rapidly with intensity. The minimum signal level, below which detector ringing renders our estimated uncertainty unreliable, is marked for both shielding configurations presented in Figure 2b, showing that the ATI electron signal drops off more than an order of magnitude at the K-shell ionization threshold intensity. A scaling transition characteristic of tunneling ionization [4][16] is visible at an intensity around \(3\times 10^{20}\) W/cm\({}^{2}\) in Figure 2a, where the measured ATI electron energy yield transitions to a power-law intensity dependence dominated by the volume of the focal region exceeding the ionization threshold intensity.
Below this intensity, the integrated ATI electron energy dependence is dominated by the probability of K-shell ionization in the most intense region of the laser focus, which is a highly nonlinear function of intensity. We observe the K-shell ionization threshold intensity is nearly double the barrier suppression intensity, and that both the ADK-PPT and Augst barrier suppression models described in detail elsewhere [14] underestimate the K-shell ionization intensity threshold. The primary sources of error originate from the detector energy calibration (\(\sim 20\%\)) and our method of calculating intensity (\(\sim 15-20\%\)), and are not sufficient to explain the discrepancy between the observed ATI electron energy yields and the Monte Carlo models. The limited number of laser shots and the low density of target gas prevented measurement of the electron spectrum using a magnetic spectrometer. We placed a series of aluminum filters of different thicknesses in front of the scintillating plastic and took repeated measurements to gain information about the energy spectrum and the maximum electron energy. Figure 3a shows a series of detector efficiency curves calculated using G4beamline [17] for different aluminum filter thicknesses, with thicker filters shielding the scintillating plastic from lower-energy electrons.

Figure 3: a) Detector efficiencies at different aluminum thicknesses (right axis) and a simulated K-shell ATI electron spectrum using the ADK-PPT model at intensity of \(1.06\times 10^{20}\) W/cm\({}^{2}\) (left axis) shown for comparison. b) Integrated ATI electron energy at the two positions at two different average intensities, with ADK-PPT simulation predictions (open markers) for comparison.

Although the lack of a sharp cutoff in the energy efficiency curves makes it impossible to invert our integrated measurements to obtain a unique electron energy spectrum, we can gain qualitative information on the shape of the spectrum and estimate a maximum ATI electron energy range by comparing the ratio of the measured integrated electron energy for the two thickest shields to the ratio of the respective efficiency curves. Figure 3b shows the measured energy yields at the \(30^{\circ}\) and on-axis detectors, where the modeling predicts the highest number of K-shell ATI electrons. At an average intensity of \(4.1\pm 0.4\times 10^{20}\) W/cm\({}^{2}\), we found the most favorable comparison was to the ADK-PPT ATI electron model with a model intensity of \(1.06\times 10^{20}\) W/cm\({}^{2}\). The Augst BSI model predicted a significantly higher ratio of energy yield in the laser forward direction than the ADK-PPT model at every intensity, and a model intensity range consistent with the energy yields observed at both detector positions could not be found for the Augst BSI model [14]. From the experimental measurements at \(30^{\circ}\) at two intensities (\((4.1\pm 0.4)\times 10^{20}\) W/cm\({}^{2}\) and \((2.2\pm 0.4)\times 10^{20}\) W/cm\({}^{2}\)), the ADK-PPT model falls off quicker with increasing shield thickness than the measured yields because the model significantly underestimates the proportion of electrons with energy \(>6\) MeV at both model intensities. We cannot make a conclusive observation about the K-shell electrons with energy \(<3\) MeV because our helium control shots indicate that lower-energy L-shell electrons are scattered further into the laser forward direction than predicted by modeling [14].
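As a rough cross-check of the barrier-suppression scale quoted earlier in this section, the sketch below uses the commonly cited Augst-type estimate \(I_{\mathrm{BSI}}\approx 4\times 10^{9}\,(E_{ip}/\mathrm{eV})^{4}/Z^{2}\) W/cm\({}^{2}\); this generic formula and the assumed charge states are our own assumptions, not necessarily the exact BSI model of Ref. [14].

```python
# Rough barrier-suppression (BSI) intensities for the neon K-shell using the
# commonly quoted Augst-style estimate I_BSI ~ 4e9 * (E_ip / eV)**4 / Z**2 W/cm^2.
# This generic formula is an assumption here, not the exact model of Ref. [14].
def bsi_intensity_w_cm2(ip_ev: float, charge_after: int) -> float:
    return 4.0e9 * ip_ev**4 / charge_after**2

for ip, z in ((1196.0, 9), (1362.0, 10)):   # Ne8+ -> Ne9+ and Ne9+ -> Ne10+
    print(f"E_ip = {ip:6.0f} eV, Z = {z:2d}: I_BSI ~ {bsi_intensity_w_cm2(ip, z):.1e} W/cm^2")
# ~1.0e20 and ~1.4e20 W/cm^2, i.e. roughly half of the ~2e20 W/cm^2 threshold
# observed here, consistent with the statement above that the measured threshold
# is nearly double the barrier suppression intensity.
```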
The maximum ATI electron energy ranges consistent with our measured integrated energies at these two intensities are 5.6-13 MeV and 10-16 MeV, respectively. Figure 4 compares these ranges to different analytic models of peak ATI electron energy: the ponderomotive, relativistic ponderomotive, and superponderomotive "wave-particle resonance" [18][19] models. The experimentally determined ranges fall between the relativistic and non-relativistic ponderomotive models, with the Monte Carlo model overestimating the maximum ATI electron energy by a factor of 2-3.

Figure 4: Comparison between the peak ATI electron energies predicted by different models and our experimental results. Curves are analytic models (see Refs. [18] and [19]) and the markers represent the average of the top 10% most energetic electrons in the ADK-PPT Monte-Carlo simulations. The shaded red regions represent peak ATI electron energies consistent with the data in Fig. 3b.

The Monte-Carlo modeling likely overestimates the integrated ATI electron energies because a Gaussian laser focus is assumed to make the model computationally tractable. The predicted electron energy at the \(0^{\circ}\) position will be overestimated because higher-order spatial modes will dephase an electron co-propagating with the laser field more quickly due to stronger Gouy phase shift associated with higher-order modes, decreasing the maximum electron energy along the laser direction[19]. Our experimental finding that the maximum ATI electron energy falls between the ponderomotive and relativistic ponderomotive models raises an important theoretical question about whether the superponderomotive scaling of the maximum ATI electron energy at the onset of "wave-particle resonance" predicted by D. F. Gordon _et al._[18] and demonstrated by our Monte-Carlo modeling in Figure 4 would be a feature of a non-Gaussian laser focus. We find that neither the ADK-PPT Monte-Carlo model nor the Augst BSI model provide a fully consistent quantitative description of integrated ATI electron energy measurements. While a significant threshold intensity shift is unexpected for the ionization of a simple helium-like atom, it is not inconsistent with the other ionization experiments at relativistic intensities. Yamakawa _et al._ compared direct measurements of ion charge states to the ADK-PPT model and found that the ADK-PPT model predicted a laser intensity that was a factor of 2-8 lower than the indirectly calculated laser field intensity of \(2.6\times 10^{19}\) W/cm\({}^{2}\), with heavier noble gas ions consistent with lower field intensities [7]. While laser intensity has been claimed to exceed the barrier suppression intensity of the neon K-shell states since at least 2006 and charge states as high as Kr\({}^{24+}\) have been observed [20], our work represents the first inference of neon K-shell ionization that has been reported in the literature to our knowledge and the first detection of ATI electrons exceeding 10 MeV, corresponding to the absorption of \(10^{7}\) photons during the ionization process. The detection of ATI electrons from K-shell states of argon (\(\sim 3\times 10^{21}\) W/cm\({}^{2}\)) and krypton (\(\sim 10^{23}\) W/cm\({}^{2}\)) are relatively straightforward extensions of our work made possible by development of repetition-rated multi-petawatt laser systems [21]. The interaction of Petawatt laser pulses with highly-charged ions can generate ATI electrons with energy exceeding 100 MeV [10][19], which would exit the focus in a small forward cone that could be monitored with a magnetic spectrometer or a scintillating detector array. Extension of this basic experimental design and the development of computational techniques to model the interaction of ultra-relativistic electrons with realistic spatio-temporal laser fields are active areas of research necessary to further develop a direct intensity measurement technique using ATI electrons from highly-charged ions in an intense laser field.

A. Y. acknowledges helpful conversations with E. Chowdhury regarding the design of this experiment. This work was supported by the DOE, Office of Science, Fusion Energy Sciences under Contract No. DE-SC0021125: LaserNetUS: A Proposal to Advance North America's First High Intensity Laser Research Network, the Air Force Office of Scientific Research through Awards No. FA9550-14-1-0045 and No. FA9550-17-1-0264, the National Nuclear Security Agency (NNSA) through Award No. NA0002008, and was prepared by LLNL under Contract DE-AC52-07NA27344. A. Y. gratefully acknowledges the generous support of the Jane and Michael Downer Fellowship in Laser Physics in Memory of Glenn Bryan Focht.
2305.10610
Solving Cosine Similarity Underestimation between High Frequency Words by L2 Norm Discounting
Cosine similarity between two words, computed using their contextualised token embeddings obtained from masked language models (MLMs) such as BERT has shown to underestimate the actual similarity between those words (Zhou et al., 2022). This similarity underestimation problem is particularly severe for highly frequent words. Although this problem has been noted in prior work, no solution has been proposed thus far. We observe that the L2 norm of contextualised embeddings of a word correlates with its log-frequency in the pretraining corpus. Consequently, the larger L2 norms associated with the highly frequent words reduce the cosine similarity values measured between them, thus underestimating the similarity scores. To solve this issue, we propose a method to discount the L2 norm of a contextualised word embedding by the frequency of that word in a corpus when measuring the cosine similarities between words. We show that the so called stop words behave differently from the rest of the words, which require special consideration during their discounting process. Experimental results on a contextualised word similarity dataset show that our proposed discounting method accurately solves the similarity underestimation problem.
Saeth Wannasuphoprasit, Yi Zhou, Danushka Bollegala
2023-05-17T23:41:30Z
http://arxiv.org/abs/2305.10610v1
# Solving Cosine Similarity Underestimation between ###### Abstract Cosine similarity between two words, computed using their contextualised token embeddings obtained from masked language models (MLMs) such as BERT has shown to underestimate the actual similarity between those words Zhou et al. (2022). This similarity underestimation problem is particularly severe for highly frequent words. Although this problem has been noted in prior work, no solution has been proposed thus far. We observe that the \(\ell_{2}\) norm of contextualised embeddings of a word correlates with its log-frequency in the pretraining corpus. Consequently, the larger \(\ell_{2}\) norms associated with the highly frequent words reduce the cosine similarity values measured between them, thus underestimating the similarity scores. To solve this issue, we propose a method to _discount_ the \(\ell_{2}\) norm of a contextualised word embedding by the frequency of that word in a corpus when measuring the cosine similarities between words. We show that the so called _stop_ words behave differently from the rest of the words, which require special consideration during their discounting process. Experimental results on a contextualised word similarity dataset show that our proposed discounting method accurately solves the similarity underestimation problem. ## 1 Introduction Cosine similarity is arguably the most popular word similarity measure used in numerous natural language processing (NLP) tasks, such as question answering (QA), information retrieval (IR) and machine translation (MT) Echizen-ya et al. (2019); Oniani and Wang (2020); Kim et al. (2022); Hanifi et al. (2022). First, a word is represented by a vector (aka _embedding_) and then the similarity between two words is computed as the cosine of the angle between the corresponding vectors Rahutomo et al. (2012). Despite the good performance of cosine similarity as a similarity measure in various downstream tasks, Zhou et al. (2022) showed that it systematically underestimates the true similarity between highly frequent words, when computed using contextualised word embeddings obtained from MLMs such as BERT Devlin et al. (2018). Compared to the problem of estimating similarity between highly frequent words, the opposite problem of estimating the similarity between (or involving) rare (low frequency) words has received greater attention, especially in the scope of static word embeddings Levy and Goldberg (2014); Hellrich and Hahn (2016); Mimno and Thompson (2017); Wendlandt et al. (2018). If a word is rare in a corpus, we might not have a sufficiently large number of contexts containing that word to learn an accurate embedding for it. This often leads to unreliable similarity estimations between words and has undesirable implications in downstream tasks such as the detection of analogies and social biases Ethayaraj et al. (2019, 2019). On the other hand, Zhou et al. (2022) studied the impact of frequency on contextualised word embeddings and showed that the cosine similarity between highly frequent words are systematically underestimated. Unlike in the previously discussed low frequency word scenario, we do have adequate contexts to learn an accurate semantic representation for highly frequent words. Therefore, it might appear surprising at first that cosine similarity cannot be correctly estimated even for the highly frequent words. Zhou et al. 
(2021) show that the diversity (measured by the volume of the bounding hypersphere) of the contextualised embeddings of a target word, computed from multiple contexts containing the word, increases with the frequency of that word. They provide an explanation that holds true only for 2-dimensional embeddings, which relates diversity to the underestimation of cosine similarity. Unfortunately, this explanation does not extend to the high dimensional embeddings used in practice by the NLP community (e.g. BERT token embeddings are typically more than 768 di mensional). More importantly, to the best of our knowledge, no solution has been proposed in the literature to address the cosine similarity underestimation problem associated with the highly frequent words. In prior work, the \(\ell_{2}\) norm of a static word embedding has been shown to linearly correlate with the log-frequency of that word (Arora et al., 2016; Bollegala et al., 2018). On the other hand, we empirically study the \(\ell_{2}\) norm of the contextualised embedding of a word \(w\) averaged over all of its contexts, and find that it too approximately linearly correlates with the log-frequency of \(w\) in the corpus used to pretrain the MLM. Recall that the cosine similarity is defined as the inner-product between two embeddings, divided by the \(\ell_{2}\) norm of those embeddings. Therefore, we suspect that the underestimation of cosine similarity between highly frequent words is due to the larger \(\ell_{2}\) norms associated with those words. To correct for this bias associated with the \(\ell_{2}\) norms of highly frequent words, we propose a linearly parameterised discounting scheme in the log-frequency space. Specifically, we use Monte-Carlo Bayesian Optimisation (Balandat et al., 2019) to find the optimal discounting parameters. Our proposed discounting method is shown to accurately correct the underestimation of cosine similarities between highly frequent words on the Word-in-Context (WiC) (Pilehvar and Camacho-Collados, 2019) dataset where human similarity ratings are available for the same word in two different contexts. Source code for reproducing the experiments reported in this is paper is publicly available.1 Footnote 1: [https://github.com/LivNLP/cosine-discounting](https://github.com/LivNLP/cosine-discounting) ## 2 Underestimation of Cosine Similarity Let us denote the \(d\)-dimensional contextualised word embedding produced by an MLM \(f\) for a target word \(w\) appearing in a context \(c\) by \(\mathbf{f}(w,c)(\in\mathbb{R}^{d})\). Moreover, let the set of contexts where \(w\) occurs in a given corpus be \(\mathcal{S}(w)\). We refer to \(\{\mathbf{f}(w,c)|w\in\mathcal{S}(w)\}\) as the set of _sibling embeddings_ of \(w\). To study the relationship between the cosine similarity scores and the frequency of words, we use the 768-dimensional bert-base-uncased2 as the contextualised embedding model. We use the token embedding of \(w\) from the final hidden layer of BERT as \(\mathbf{f}(w,c)\). We approximate the word frequencies in BERT pretraining corpus using the BookCorpus (Zhu et al., 2015). Let \(\psi_{w}\) be the frequency of \(w\) in this corpus. Footnote 2: [https://huggingface.co/bert-base-uncased](https://huggingface.co/bert-base-uncased) We use the WiC dataset, which contains 5428 pairs of words appearing in various contexts with annotated human similarity judgements. 
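As a minimal sketch of how the final-layer token embedding \(\mathbf{f}(w,c)\) and the plain (non-discounted) cosine similarity can be obtained from the bert-base-uncased checkpoint, consider the code below; the mean-pooling over word pieces and the WiC-style example contexts are our own simplifications, not necessarily the exact pipeline used in the paper.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
bert.eval()

def token_embedding(word: str, context: str) -> torch.Tensor:
    """Final-layer embedding f(w, c); sub-word pieces of w are mean-pooled."""
    enc = tok(context, return_tensors="pt")
    piece_ids = tok.encode(word, add_special_tokens=False)
    ids = enc["input_ids"][0].tolist()
    # locate the first occurrence of the word's piece sequence in the context
    start = next(i for i in range(len(ids) - len(piece_ids) + 1)
                 if ids[i:i + len(piece_ids)] == piece_ids)
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state[0]      # (seq_len, 768)
    return hidden[start:start + len(piece_ids)].mean(dim=0)

e1 = token_embedding("drive", "to drive sheep out of a field")
e2 = token_embedding("drive", "to drive the cows into the barn")
print(float(torch.cosine_similarity(e1, e2, dim=0)))   # plain (non-discounted) cosine
```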
WiC dataset is split into official training and development sets, while a separate hidden test set is used by the leaderboard for ranking Word Sense Disambiguation systems.3 WiC dataset contains pairs of contexts labelled as having the **same meaning** (e.g. "to _drive_ sheep out of a field" vs. "to _drive_ the cows into the barn") and **different meaning** (e.g. "the _play_ lasted two hours" vs. "they made a futile _play_ for power"). Footnote 3: [https://pilehvar.github.io/wic/](https://pilehvar.github.io/wic/) We compute the cosine similarity between the two contextualised embeddings of a target word in two of its contexts to predict a similarity score. Figure 1 shows the predicted similarity scores for both contexts in which a target word has been used in the same or different meanings for all words in the WiC dataset against \(\log(\psi_{w})\). As seen from Figure 3, \(\psi_{w}\) has a power-law distribution. Therefore, we plot its log instead of raw frequency counts in Figure 1. From Figure 1, we see that for both same as well as different meaning contexts, the predicted cosine similarities drop with the word frequencies. Moreover, the gradient of the drop for same meaning pairs (Pearson's \(r=-0.3001\)) is larger than that for the different meaning pairs (\(r=-0.2125\)), indicating that the underestimation of cosine similarity is more severe for the similar contexts of highly frequent words.

Figure 1: Cosine similarity between two instances of the same word \(w\) in two contexts in the WiC train dataset. When the log-frequency of \(w\) in the corpus increases, cosine similarities computed for both contexts that express the same meaning of \(w\) as well as its different meanings decrease.

## 3 \(\ell_{2}\) norm Discounting

To understand the possible reasons behind the cosine similarity underestimation for highly frequent words discussed in § 2, for each word \(w\) we compute its mean sibling embedding, \(\hat{\mathbf{w}}\), given by (1). \[\hat{\mathbf{w}}=\frac{1}{|\mathcal{S}(w)|}\sum_{c\in\mathcal{S}(w)}\mathbf{f}(w,c) \tag{1}\] We plot \(||\hat{\mathbf{w}}||\) against \(\log(\psi(w))\) in Figure 2 separately for a predefined set of stop words and all other words (i.e. non-stop words). For this purpose, we use the default 1466 stop words from NLTK and randomly selected 997,425 non-stop words from the BookCorpus. Pearson \(r\) values of stop words and non-stop words are respectively 0.1697 and 0.3754, while the lines of best fits for each class of words are superimposed. From Figure 2, we see that overall, \(||\hat{\mathbf{w}}||\) increases with \(\log(\psi_{w})\) for both stop and non-stop words, while the linear correlation is stronger in the latter class. Considering that stop words cover function words such as determiners and conjunctions that co-occur with a large number of words in diverse contexts, we believe that the \(\ell_{2}\) norm of stop words mostly remains independent of their frequency.

Figure 2: \(\ell_{2}\) norm of the averaged contextualised word embedding of a word against its log-frequency in the pretrain corpus. Stop words and non-stop words are shown respectively in orange and blue dots. Lines of best fits for each category are superimposed.

Figure 3: Histogram of word frequencies in the BERT pretrain corpus. We see a Zipfian (power-law) distribution, which turns out to be approximately linear in the log-frequency space.

Recall that the cosine similarity between two words is defined as the inner-product of the corresponding embeddings, divided by the product of the \(\ell_{2}\) norms of the embeddings. Therefore, even if the inner-product between two words remains relatively stable, it will be divided by increasingly larger \(\ell_{2}\) norms in the case of highly frequent words. Moreover, this bias is further amplified when both words are highly frequent due to the _product_ of \(\ell_{2}\) norms in the denominator.
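Reusing the token_embedding helper from the sketch above, the mean sibling embedding of Eq. (1) and the \(||\hat{\mathbf{w}}||\) versus \(\log\psi_{w}\) points behind Figure 2 can be produced roughly as follows; the context lists and corpus counts shown are hypothetical placeholders, not the actual BookCorpus statistics.

```python
import numpy as np

def mean_sibling_embedding(word: str, contexts: list[str]) -> np.ndarray:
    """Eq. (1): average of f(w, c) over the sibling contexts S(w).
    token_embedding is the helper defined in the sketch above."""
    vecs = [token_embedding(word, c).numpy() for c in contexts]
    return np.mean(vecs, axis=0)

# Hypothetical inputs: a few contexts per word and a made-up corpus count psi_w.
corpus_count = {"the": 120_000_000, "barn": 15_000}          # illustrative only
contexts = {"the": ["the dog barked", "she opened the door"],
            "barn": ["the cows were in the barn", "an old barn stood there"]}

for w in corpus_count:
    norm = np.linalg.norm(mean_sibling_embedding(w, contexts[w]))
    print(w, np.log(corpus_count[w]), norm)   # one point per word for a Figure-2 style plot
```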
To address this problem, we propose to discount the \(\ell_{2}\) norm of a word \(w\) by a discounting term, \(\alpha(\psi_{w})\), and propose a discounted version of the cosine similarity given by (2). \[\cos_{\alpha}(\mathbf{x},\mathbf{y})=\frac{\mathbf{x}^{\top}\mathbf{y}}{||\mathbf{x}||\,\alpha(\psi_{x})\,||\mathbf{y}||\,\alpha(\psi_{y})} \tag{2}\] Following Figure 2, we linearly parameterise \(\alpha(\psi_{w})\) separately for stop vs. non-stop words as in (3). \[\alpha(\psi_{w})=\begin{cases}1+m_{s}(b_{s}-\log(\psi_{w}))&\text{w is a stop word}\\ 1+m_{n}(b_{n}-\log(\psi_{w}))&\text{w is a non-stop word}\end{cases} \tag{3}\] The scalar parameters \(m_{s},m_{n},b_{s}\) and \(b_{n}\) are estimated as follows. First, we randomly initialise all parameters uniformly in \([0,1]\) and use (2) to predict the cosine similarity between two contexts in which a target word \(w\) occurs in the WiC train instances. We then make a binary similarity judgement (i.e. **same** or **different** meaning) for the pair of contexts in an instance depending on whether the predicted cosine similarity is greater than a threshold \(\theta\). Next, we compute the overall binary classification accuracy for the similarity predictions made on the entire WiC training dataset,
Given that the discounting parameters in (3) are learned from the WiC train data, it remains an open question as to how well the proposed discounting method generalises when predicting similarity between contextualised embeddings of unseen words. To evaluate this generalisability of the proposed method, we use (3) with its learned parameters from WiC train data, to predict the similarity between contextualised word embeddings in WiC dev data.5 Specifically, we predict binary (same vs. different meaning) similarity labels according to the similarity threshold \(\theta\) learnt in SS 3 and compare against the human judgements using binary classification accuracy. Footnote 5: Note that the test set of WiC is publicly unavailable due to being used in a leaderboard. The maximum accuracy on WiC dev split obtained using the original (non-discounted) cosine similarities is \(0.6667\), which indicates that the cosine similarity is somewhat predictive of the human binary judgements. The overall F1 is improved by \(2.4\%\) (0.68 with original cosine vs. 0.71 with the proposed discounting method) and recall is improved by \(12\%\) (0.75 with original cosine vs. 0.84 with the proposed). On the other hand, the drop Figure 4: Cosine similarity between two instances of the same word \(w\) in two contexts in the WiC train dataset, computed using the original (non-discounted) cosine similarity (shown in blue and green respectively for the same and different meaning pairs) and using the proposed \(\ell_{2}\) norm discounted ((2)) (shown in orange and red respectively for the same and different meaning pairs). We see that the gradients of the drops have _decreased_ for both same and different meaning pairs _after_ applying the discounting. Figure 5: Percentage of examples labelled as having the “same meaning”. In high frequency words, we see that the cosine similarity-based predictions (orange/middle) are systematically **underestimate** the human similarity judgements (blue/left). However, after the proposed discounting method has been applied (green/right) the underestimation has reduced. in precision is \(4.7\%\) (from 0.64 to 0.61). Therefore, the proposed method solves the cosine similarity underestimation problem associated with high-frequent words, without significantly affecting the similarity scores for low-frequent ones Figure 5 shows the average proportion of instances predicted to be the same meaning as a function of frequency, grouped into ten bins, each with the same number of examples. From Figure 5, we see that in high frequency bins (i.e. bins 8, 9 and 10), the percentage of predicted instances as having the same meaning is consistently lower than that compared to the human judgements. This shows an underestimation of the true (human judged) similarity between contextualised word embeddings. On the other hand, when we use the proposed \(\ell_{2}\) norm discounted cosine similarity (defined in (2)), in the highest frequent bin (i.e. 10) we see that the gap between human judgements vs. predicted similarities has reduced. Moreover, in the low frequency bins (i.e. 1-4), we see that the proposed discounting method does not affect the predictions made using cosine similarities. We see an overestimation of the cosine similarities in the low frequency bins as reported by Zhou et al. (2021). As discussed already in SS 1, the word embeddings learnt for low frequency words tend to be unreliable due to data sparseness. 
## 5 Conclusion

We proposed a method to solve the cosine similarity underestimation problem in highly frequent words. Specifically, we observed that the \(\ell_{2}\) norm of a contextualised word embedding increases with its frequency in the pretrain corpus and proposed a discounting scheme. Experimental results on the WiC dataset confirmed the validity of the proposed method.

## 6 Limitations

We proposed a solution to the cosine similarity underestimation problem associated with contextualised word embeddings of highly frequent words. Our evaluations used only a single contextualised embedding model (i.e. BERT) with a single dimensionality (i.e. 768). Therefore, we believe that our proposed method must be evaluated with other (more recent) MLMs to test for its generalisability. Moreover, our evaluations were conducted only on the English language, which is known to be morphologically limited. Although in our preliminary experiments we considered discounting schemes based on the part-of-speech of words (instead of considering stop words vs. non-stop words), we did not find any significant improvements despite the extra complexity. However, these outcomes might be different for morphologically richer languages.
These methods measure the cosine similarity between a gender and a set of pleasant or unpleasant set of attributes to compute a social bias evaluation score. Although originally these methods were developed for evaluating the social biases in static word embeddings, they have been later extended to contextualised word embeddings (Kaneko and Bollegala, 2022; Kaneko et al., 2022) and sentence embeddings (May et al., 2019), where cosine similarity still remains the main underlying metric. However, Ethayarajh et al. (2019) showed that inner-products to be superior over cosine similarity for social bias evaluation purposes. It remains unclear as to how the underestimation in cosine similarities discussed in our work would influence the social bias evaluations. In particular, the effect of the proposed \(\ell_{2}\) norm discounting scheme on social bias evaluation must be carefully studied in the future work. ## Acknowledgements Danushka Bollegala holds concurrent appointments as a Professor at University of Liverpool and as an Amazon Scholar. This paper describes work performed at the University of Liverpool and is not associated with Amazon.
2307.09538
Uniqueness of Steady Navier-Stokes under Large Data by Continuous Data Assimilation
We propose a continuous data assimilation (CDA) method to address the uniqueness problem for steady Navier-Stokes equations(NSE). The CDA method incorporates spatial observations into the NSE, and we prove that with sufficient observations, the CDA-NSE system is well-posed even for large data where multiple solutions may exist. This CDA idea is in general helpful to determine solution for non-uniqueness partial differential equations(PDEs).
Xuejian Li
2023-07-18T18:30:39Z
http://arxiv.org/abs/2307.09538v1
# Uniqueness of Steady Navier-Stokes under Large Data by Continuous Data Assimilation ###### Abstract We propose a continuous data assimilation (CDA) method to address the uniqueness problem for the steady Navier-Stokes equations (NSE). The CDA method incorporates spatial observations into the NSE, and we prove that with sufficient observations, the CDA-NSE system is well-posed even for large data where multiple solutions may exist. This CDA idea is in general helpful for determining a solution of partial differential equations (PDEs) that lack uniqueness. keywords: Navier-Stokes equations, Continuous data assimilation, Uniqueness. + Footnote †: journal: Journal of Mathematical Analysis and Applications ## 1 Introduction The Navier-Stokes equations (NSE) are fundamental in modeling fluid mechanics. On \(\mathbb{R}^{d},d=2,3\), the steady NSE for incompressible Newtonian fluids is given by \[\begin{cases}-\nu\Delta u+(u\cdot\nabla)u+\nabla p=f\quad\text{in }\Omega,\\ \nabla\cdot u=0\quad\text{in }\Omega,\\ u=0\quad\text{on }\partial\Omega,\end{cases} \tag{1}\] where \(u\) is the velocity of the fluid, \(p\) is the kinetic pressure, \(\nabla\cdot u=0\) indicates that the fluid is incompressible, \(f\) is the external force, and \(\nu\) is the viscosity of the fluid. The parameter \(Re=\frac{1}{\nu}\) plays the role of the Reynolds number. It is well-known that for small data, i.e. small \(Re\) and \(f\), there exists a unique solution for the system (1). However, as \(Re\) or \(f\) increases and crosses certain critical bounds, the NSE can lose uniqueness and admit multiple solutions that fall into different branches [1]. This phenomenon is often encountered in practice, and these non-unique solutions are often called isolated solutions or branches of nonsingular solutions [1; 2]. Numerically finding such solutions is especially difficult because the non-uniqueness makes nonlinear iterative solvers less effective. The main interest of this paper is showing that using continuous data assimilation (CDA) [3; 4; 5; 6; 7] can overcome the uniqueness difficulty for the steady NSE. While CDA is generally used with time-dependent problems, the type of nudging employed by CDA can also be applied to steady problems; the notion of continuity (in time) is then no longer valid, but we still refer to the method as CDA in this paper. To define the steady CDA-NSE system, let \(I_{H}u\) represent an interpolant operator (or observation operator) based on spatial observations of an NSE solution \(u\) of system (1) at a coarse resolution mesh size \(H\) (requirements for \(I_{H}\) are given in section 2). To uniquely identify the solution for system (1) associated with the measurements \(I_{H}u\), we propose the following CDA-NSE system: \[\begin{cases}-\nu\Delta w+(w\cdot\nabla)w+\nabla z+\mu(I_{H}w-I_{H}u)=f\quad \text{in }\Omega,\\ \nabla\cdot w=0\quad\text{in }\Omega,\\ w=0\quad\text{on }\partial\Omega,\end{cases} \tag{2}\] where \(\mu(I_{H}w-I_{H}u)\) is a nudging term driving the state \(w\) towards the observations, and \(\mu\) is a positive relaxation parameter that emphasizes the accuracy of the observations. In this context, we consider accurate spatial observations, and thus there are no size restrictions on \(\mu\). We show that with enough observations, i.e., when \(H\) is sufficiently small, the CDA-NSE (2) has a unique solution even for large \(Re\) and \(f\), and the CDA-NSE solution is identical to the isolated NSE solution that corresponds to the observed state.
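To give a feel for how the nudging term acts, the following minimal sketch applies the same idea to a one-dimensional steady viscous Burgers analogue rather than to the NSE (2) itself: a coarse-grid interpolant \(I_{H}\) plays the role of the observation operator, the nudging term \(\mu(I_{H}w-I_{H}u)\) is treated implicitly, and the nonlinearity is handled by damped Picard iteration. All grids, parameter values and helper names here are illustrative assumptions, not part of the paper.

```python
import numpy as np

# 1D steady analogue of the nudged system (2) (a sketch, not the NSE):
#   -nu*w'' + w*w' + mu*(I_H w - I_H u_obs) = f  on (0,1),  w(0) = w(1) = 0,
# solved by damped Picard iteration on a fine grid. I_H samples the state on a
# coarse grid of spacing H and reconstructs it piecewise linearly.
def cda_picard(nu, mu, f, u_obs, n=200, n_coarse=10, tol=1e-10, max_iter=200):
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    xH = np.linspace(0.0, 1.0, n_coarse + 1)              # coarse observation grid

    def I_H(v):                                           # coarse sample + P1 reconstruction
        return np.interp(x, xH, np.interp(xH, x, v))

    B = np.column_stack([I_H(e) for e in np.eye(n + 1)])  # matrix representation of I_H
    w = np.zeros(n + 1)
    data = f(x) + mu * I_H(u_obs)                         # right-hand side, includes mu*I_H(u)
    for _ in range(max_iter):
        A = mu * B                                        # implicit nudging term mu*I_H w
        for i in range(1, n):                             # -nu*w'' + w_old*w' (Picard linearisation)
            A[i, i - 1] += -nu / h**2 - w[i] / (2 * h)
            A[i, i]     +=  2.0 * nu / h**2
            A[i, i + 1] += -nu / h**2 + w[i] / (2 * h)
        A[0, :], A[n, :] = 0.0, 0.0                       # homogeneous Dirichlet boundary rows
        A[0, 0] = A[n, n] = 1.0
        rhs = data.copy()
        rhs[0] = rhs[n] = 0.0
        w_new = np.linalg.solve(A, rhs)
        if np.max(np.abs(w_new - w)) < tol:
            return w_new
        w = w + 0.7 * (w_new - w)                         # damping for robustness
    return w

# Observations drawn from a manufactured reference state (given on the same fine grid).
xf = np.linspace(0.0, 1.0, 201)
u_ref = np.sin(np.pi * xf)
f = lambda s: 0.01 * np.pi**2 * np.sin(np.pi * s) + np.pi * np.sin(np.pi * s) * np.cos(np.pi * s)
w = cda_picard(nu=0.01, mu=100.0, f=f, u_obs=u_ref)
print(np.max(np.abs(w - u_ref)))   # small: the nudged problem recovers the observed branch
```

Increasing \(\mu\) and refining the observation grid (decreasing \(H\)) pulls the computed state toward the observed branch, mirroring the role the conditions on \(\mu\) and \(H\) play in the analysis below.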
The analysis and results in this paper may have a positive influence on developing effective iterative solvers for the steady NSE with large Reynolds number or external forces when observations are available. While this note studies the NSE, a similar idea can lead to wellposedness for related steady multi-physics problems, such as magnetohydrodynamics or Boussinesq systems. ## 2 Uniqueness analysis Before formally presenting the main results, we briefly introduce necessary preliminaries. Consider \(\Omega\) as an open bounded domain, denote the natural function spaces by \[Q :=\{v\in L^{2}(\Omega):\int_{\Omega}vdx=0\}, \tag{3}\] \[X :=\{v\in H^{1}\left(\Omega\right):v=0\ \text{ on }\partial\Omega\},\] (4) \[V :=\{v\in X:\nabla\cdot v=0\}. \tag{5}\] Let \((\cdot,\cdot)\) denote the \(L^{2}(\Omega)\) inner product that induces the \(L^{2}\) norm \(\|\cdot\|\), \(H^{-1}\) and \(V^{*}\) denote the dual spaces of \(X\) and \(V\), respectively. In addition, let \(\langle z,v\rangle_{-1}\) denote the action of \(z\in H^{-1}\) on \(v\in X\) and \(\langle z,v\rangle_{*}\) denote the action of \(z\in V^{*}\) on \(v\in V\), respectively. Also, \[\|z\|_{-1}=\sup_{\forall v\in X}\frac{\langle z,v\rangle_{-1}}{\|\nabla v\|},\ \ \|z\|_{*}=\sup_{\forall v\in V}\frac{\langle z,v\rangle_{*}}{\|\nabla v\|}.\] The weak form of NSE (1) is to find \((u,p)\in X\times Q\) such that \[a\left(u,v\right)+b\left(u,u,v\right)+\left(p,\nabla\cdot v\right)=\left\langle f,v\right\rangle_{-1}\ \ \forall v\in X,\ \ (\nabla\cdot u,q)=0\ \ \forall q\in Q, \tag{6}\] where \(a(\cdot,\cdot)\) and \(b(\cdot,\cdot,\cdot)\) are defined as follows: \[a(u,v)=(\nu\nabla u,\nabla v)\ \ \forall u,v\in X\] \[b(u,w,v)=((u\cdot\nabla)w,v)\ \ \forall u,w,v\in X.\] Note that due to inf-sup condition holding on \(X\times Q\)[1; 8]: \[\inf_{0\neq q\in Q}\sup_{0\neq v\in X}\frac{(q,\nabla\cdot v)}{\|q\|_{Q}\left\| v\right\|_{X}}\geq\beta>0,\] the system (6) is equivalent to: Find \(u\in V\) satisfying \[a\left(u,v\right)+b\left(u,u,v\right)=\left\langle f,v\right\rangle_{*}\ \ \ \forall v\in V. \tag{7}\] For the trilinear term \(b(\cdot,\cdot,\cdot)\), the following inequalities hold [2, 9]: \[b(u,w,v) \leq M\|\nabla u\|\|\nabla w\|\|\nabla v\|\ \ \text{for}\ d=2\ \text{and}\ d=3, \tag{8}\] \[b(u,w,v) \leq M_{1}\|u\|^{\frac{1}{2}}\|\nabla u\|^{\frac{1}{2}}\|\nabla w \|\|\nabla v\|\ \text{for}\ d=2\ \text{and}\ d=3,\] (9) \[b(u,w,v) \leq M_{2}\|u\|^{\frac{1}{2}}\|\nabla u\|^{\frac{1}{2}}\|\nabla w \|\|v\|^{\frac{1}{2}}\|\nabla v\|^{\frac{1}{2}}\ \ \text{for}\ d=2. \tag{10}\] Here, \(M\), \(M_{1}\), and \(M_{2}\) are positive constants depending on \(\Omega\). We recall the classical well-posedness result for equation (7) [2, 9]: **Lemma 1**.: _Let \(\alpha=M\nu^{-2}\|f\|_{*}\). For any \(f\in V^{*}\) and \(\nu\), there exists at least one solution for NSE (7). Besides this, every solution of (7) satisfy a priori estimate_ \[\|\nabla u\|\leq\nu^{-1}\|f\|_{*}. \tag{11}\] _Furthermore, if \(\alpha<1\), the solution is unique._ The restriction \(\alpha<1\) is usually referred as the small data condition for steady NSE. In this same spirit, we refer to \(\alpha\geq 1\) as the case of large data. Given interpolated observations \(I_{H}u\), the weak form of the CDA-NSE (2) is to find \(w\in V\) such that \[a\left(w,v\right)+b\left(w,w,v\right)+\mu(I_{H}w-I_{H}u,I_{H}v)=\left\langle f,v\right\rangle_{*}\ \ \forall v\in V. 
\tag{12}\] **Remark 1**.: _Note that in (12), \(\mu(I_{H}u,v)=\mu(I_{H}u,I_{H}v)\ \forall u,v\in X\) in the case that \(I_{H}\) is the \(L^{2}\) projection onto the coarse mesh space, and for general \(I_{H}\) that all results below still hold if you used \(\mu(I_{H}u,v)\) instead of \(\mu(I_{H}u,I_{H}v)\) but there would be stronger restrictions on \(\mu\) and \(H\)._ In the remainder of the paper, we assume the interpolant \(I_{H}\) is linear and have the properties: \[\|I_{H}v-v\|\leq C_{I}H\|\nabla v\|,\ \ \|I_{H}v\|\leq C\|v\|\ \ \forall v\in X. \tag{13}\] Such interpolant generally exists in finite approximation theory, for instance the \(P_{1}\) finite element interpolation[7]: \[I_{H}v:=\sum_{j=1}^{N_{1}}v(x_{H}^{j})\phi_{j}\ \ \forall v\in X.\] Here, \(H\) can be the finite element mesh size, \(N_{1}\) is the number of finite element nodes, \(x_{H}^{j}\) is the \(j^{th}\) finite element node, and \(\{\phi_{j}\}_{j=1}^{N_{1}}\) are the degree one polynomial finite element basis. Based on Leray-Schauder fixed point theorem1, it is not difficult to prove the CDA-NSE (12) has at least one solution for any non-negative \(\mu\) and \(H\). Additionally, one can observe if \(w=u\) is a solution to (12), then the existence is established this way as well. In the following, we focus the relation between equations (12) and (7) and show the uniqueness of (12). Footnote 1: This is the only place where the inequality \(\|I_{H}v\|\leq C\|v\|\) in (13) is in need. **Theorem 1**.: _Assume \(f\in V^{*}\) and \(u\) is a solution of (7). If \(\alpha<1\), for any given \(H\) and \(\mu\), the CDA-NSE (12) is equivalent to the NSE (7) in sense that the solution \(w\) to (12) is unique and equal to \(u\). If \(\alpha\geq 1\), under the condition_ \[H\leq\frac{2M^{2}}{3\sqrt{3}C_{I}M_{1}^{2}\alpha^{2}}\ \ \text{and}\ \ \mu\geq\frac{\nu}{4C_{I}^{2}H^{2}}, \tag{14}\] _the CDA-NSE (12) has a unique solution which is exactly the isolated solution of NSE (7) that corresponds to the observed state, that is, we also have \(w=u\)._ Proof.: Subtracting equation (12) from (7), we have \[\begin{split} 0&=a\left(w,v\right)-a\left(u,v\right)+b \left(w,w,v\right)-b\left(u,u,v\right)+\mu(I_{H}w-I_{H}u,I_{H}v)\\ &=a\left(w-u,v\right)+b(w,w-u,v)+b(w-u,u,v)+\mu(I_{H}w-I_{H}u,I_ {H}v).\end{split} \tag{15}\] Taking \(v=w-u\), and using (8) and (11), we obtain \[\begin{split}&\nu\|\nabla(w-u)\|^{2}+\mu\|I_{H}w-I_{H}u\|^{2}=-b(w-u,u,w-u)\\ &\leq M\|\nabla(w-u)\|^{2}\|\nabla u\|\leq M\nu^{-1}\|f\|_{*}\| \nabla(w-u)\|^{2}.\end{split} \tag{16}\] Rearranging (16) gives us \[\nu(1-M\nu^{-2}\|f\|_{*})\|\nabla(w-u)\|^{2}+\mu\|I_{H}w-I_{H}u\|^{2}\leq 0. \tag{17}\] If \(\alpha<1\), it is clear to see \(\|\nabla(w-u)\|=0\) is always true, i.e., \(w=u\). Thus with \(\alpha<1\), the NSE (7) has a unique solution, and so \(w=u\) is the unique CDA-NSE solution. Next, we consider the case \(\alpha\geq 1\). 
Continuing from the equality in (16), using inequalities (9) and (11) and generalized Young's inequality, we have \[\begin{split}&\nu\|\nabla(w-u)\|^{2}+\mu\|I_{H}w-I_{H}u\|^{2}=-b(w-u,u,w-u)\\ &\leq M_{1}\|w-u\|^{\frac{1}{2}}\|\nabla(w-u)\|^{\frac{1}{2}}\| \nabla u\|\|\nabla(w-u)\|\\ &\leq M_{1}\nu^{-1}\|f\|_{*}\|\nabla(w-u)\|^{\frac{3}{2}}\|w-u\|^ {\frac{1}{2}}\\ &\leq\frac{M_{1}}{M}\nu\alpha\|\nabla(w-u)\|^{\frac{3}{2}}\|w-u \|^{\frac{1}{2}}\\ &\leq\frac{\nu}{2}\|\nabla(w-u)\|^{2}+\frac{27M_{1}^{4}\nu\alpha ^{4}}{32M^{4}}\|w-u\|^{2}.\end{split} \tag{18}\] Applying inequality (13) and the norm inequality \(\frac{\|a-b\|^{2}}{2}\leq\|a-c\|^{2}+\|c-b\|^{2}\), we bound the left side of (18) from below as \[\begin{split}&\nu\|\nabla(w-u)\|^{2}+\mu\|I_{H}w-I_{H}u\|^{2}\\ &\geq\frac{3\nu}{4}\|\nabla(w-u)\|^{2}+\frac{\nu}{4C_{I}^{2}H^{ 2}}\|(w-u)-I_{H}(w-u)\|^{2}+\mu\|I_{H}w-I_{H}u\|^{2}\\ &\geq\frac{3\nu}{4}\|\nabla(w-u)\|^{2}+\frac{\lambda}{2}\|w-u\|^ {2},\end{split} \tag{19}\] where \(\lambda=\min\{\frac{\nu}{4C_{I}^{2}H^{2}},\mu\}\). Combining (18) and (19) leads to \[\frac{\nu}{4}\|\nabla(w-u)\|^{2}+\left(\frac{\lambda}{2}-\frac{27M_{1}^{4}\nu \alpha^{4}}{32M^{4}}\right)\|w-u\|^{2}\leq 0. \tag{20}\] Recall that \(\mu\) can be large and there is no upper bound on \(\mu\) that arises in our analysis, we thus consider \(\mu\) large enough so that \(\lambda=\frac{\nu}{4C_{I}^{2}H^{2}}\). If \(\frac{\lambda}{2}-\frac{27M_{1}^{4}\nu\alpha^{4}}{32M^{4}}\geq 0\) is satisfied, i.e., \[H\leq\frac{2M^{2}}{3\sqrt{3}C_{I}M_{1}^{2}\alpha^{2}}, \tag{21}\] then \(\|\nabla(w-u)\|=0\) holds. Finally, since the solutions of the steady NSE are isolated, then \(w\) must be the observed isolated solution of equation (7) and thus is unique to (12) as well. This completes the proof. **Remark 2**.: _The condition on \(H\) in (14) is less restrictive for \(d=2\). Continuing from (16), using inequality (10), (11), and Young's inequality, we have_ \[\begin{split}&\nu\|\nabla(w-u)\|^{2}+\mu\|I_{H}w-I_{H}u\|^{2}=-b(w-u,u,w-u)\\ &\leq M_{2}\|w-u\|^{\frac{1}{2}}\|\nabla(w-u)\|^{\frac{1}{2}}\| \nabla u\|\|\nabla(w-u)\|^{\frac{1}{2}}\|w-u\|^{\frac{1}{2}}\\ &\leq\frac{M_{2}}{M}\nu\alpha\|\nabla(w-u)\|\|w-u\|\\ &\leq\frac{\nu}{2}\|\nabla(w-u)\|^{2}+\frac{M_{2}^{2}\nu\alpha^{ 2}}{2M^{2}}\|w-u\|^{2}.\end{split} \tag{22}\] _Combining (22) and (19) leads to_ \[\frac{\nu}{4}\|\nabla(w-u)\|^{2}+\left(\frac{\lambda}{2}-\frac{M_{2}^{2}\nu \alpha^{2}}{2M^{2}}\right)\|w-u\|^{2}\leq 0. \tag{23}\] _Consider \(\lambda=\frac{\nu}{4C_{I}^{2}H^{2}}\). If \(\frac{\lambda}{2}-\frac{M_{2}^{2}\nu\alpha^{2}}{2M^{2}}=\frac{\nu}{8C_{I}^{2} H^{2}}-\frac{M_{2}^{2}\nu\alpha^{2}}{2M^{2}}\geq 0\) is satisfied, i.e.,_ \[H\leq\frac{M}{2C_{I}M_{2}\alpha}, \tag{24}\] _then \(\|\nabla(w-u)\|=0\) must hold. Similarly, condition (24) is also sufficient for the uniqueness. Note that, compared to inequality (14), this is a significantly less restriction on \(H\)._ ## 3 Conclusion We proposed a CDA-NSE alteration of the steady NSE system that incorporates observables through the CDA nudging process, and proved that with enough observables the system is well-posed for any data. We showed a sufficient condition for how much observables is needed for well-posedness, and the amount scales with the size of the data. The analysis and results in this paper provides a mathematical foundation for incorporating CDA into iterative nonlinear solvers for the steady NSE, which is a subject of ongoing research by the author. 
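As a side note on the interpolation property (13) that drives the analysis, the following small sketch checks numerically, for a one-dimensional piecewise-linear (\(P_{1}\)) interpolant, that \(\|I_{H}v-v\|\) is controlled by \(H\|\nabla v\|\). The test function and grids are arbitrary choices for illustration only.

```python
import numpy as np

# Numerical sanity check of the interpolation property (13) in 1D:
# ||I_H v - v|| <= C_I * H * ||v'|| for a piecewise-linear interpolant.
def interpolation_ratio(v, dv, H, n_fine=4096):
    x = np.linspace(0.0, 1.0, n_fine + 1)
    dx = x[1] - x[0]
    xH = np.linspace(0.0, 1.0, int(round(1.0 / H)) + 1)   # coarse nodes of spacing H
    IHv = np.interp(x, xH, v(xH))                          # piecewise-linear interpolant I_H v
    err = np.sqrt(np.sum((IHv - v(x)) ** 2) * dx)          # ||I_H v - v||
    grad = np.sqrt(np.sum(dv(x) ** 2) * dx)                # ||v'||
    return err / grad

v = lambda x: np.sin(3 * np.pi * x)
dv = lambda x: 3 * np.pi * np.cos(3 * np.pi * x)
for H in [0.2, 0.1, 0.05, 0.025]:
    r = interpolation_ratio(v, dv, H)
    print(f"H={H:<6} ratio={r:.4e}  ratio/H={r / H:.3f}")  # ratio/H stays bounded, as in (13)
```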
## Acknowledgments This work is partially supported by NSF Grant DMS 2152623.
2308.10834
SRSS: A New Chaos-Based Single-Round Single S-Box Image Encryption Scheme for Highly Auto-Correlated Data
With the advent of digital communication, securing digital images during transmission and storage has become a critical concern. Traditional S-box substitution methods often fail to effectively conceal the information within highly auto-correlated regions of an image. This paper addresses the security issues presented by three prevalent S-box substitution methods, i.e., single S-box, multiple S-boxes, and multiple rounds with multiple S-boxes, especially when handling images with highly auto-correlated pixels. To resolve these security issues, this paper proposes a new scheme, SRSS, the Single Round Single S-Box encryption scheme. SRSS uses a single S-box for substitution in just one round to break the pixel correlations and encrypt the plaintext image effectively. Additionally, this paper introduces a new Chaos-based Random Operation Selection System, CROSS, which nullifies the requirement for multiple S-boxes, thus reducing the encryption scheme's complexity. By randomly selecting the operation to be performed on each pixel, driven by a chaotic sequence, the proposed scheme effectively scrambles even high auto-correlation areas. When compared to the substitution methods mentioned above, the proposed encryption scheme performed exceptionally well in just a single round with a single S-box. The close-to-ideal statistical security analysis results, i.e., an entropy of 7.89 and a correlation coefficient of 0.007, validate the effectiveness of the proposed scheme. This research offers an innovative path forward for securing images in applications requiring low computational complexity and fast encryption and decryption speeds.
Muhammad Shahbaz Khan, Jawad Ahmad, Hisham Ali, Nikolaos Pitropakis, Ahmed Al-Dubai, Baraq Ghaleb, William J. Buchanan
2023-08-21T16:32:11Z
http://arxiv.org/abs/2308.10834v1
SRSS: A New Chaos-Based Single-Round Single S-Box Image Encryption Scheme for Highly Auto-Correlated Data ###### Abstract With the advent of digital communication, securing digital images during transmission and storage has become a critical concern. Traditional S-box substitution methods often fail to effectively conceal the information within highly auto-correlated regions of an image. This paper addresses the security issues presented by three prevalent S-box substitution methods, i.e., single S-box, multiple S-boxes, and multiple rounds with multiple S-boxes, especially when handling images with highly auto-correlated pixels. To resolve these security issues, this paper proposes a new scheme, SRSS, the Single Round Single S-Box encryption scheme. SRSS uses a single S-box for substitution in just one round to break the pixel correlations and encrypt the plaintext image effectively. Additionally, this paper introduces a new Chaos-based Random Operation Selection System, CROSS, which nullifies the requirement for multiple S-boxes, thus reducing the encryption scheme's complexity. By randomly selecting the operation to be performed on each pixel, driven by a chaotic sequence, the proposed scheme effectively scrambles even high auto-correlation areas. When compared to the substitution methods mentioned above, the proposed encryption scheme performed exceptionally well in just a single round with a single S-box. The close-to-ideal statistical security analysis results, i.e., an entropy of 7.89 and a correlation coefficient of 0.007, validate the effectiveness of the proposed scheme. This research offers an innovative path forward for securing images in applications requiring low computational complexity and fast encryption and decryption speeds. S-Box, chaos, image encryption, correlation, single round, single S-Box ## I Introduction With the rapid development of digital communication, social media, telemedicine (to transmit or store clinical images), online biometric systems (to store and transmit face portraits or fingerprints), and the Internet of Things, a large number of digital images are transmitted over the internet and stored in cloud storage [1]. The information in these digital images may be illegally intercepted, destroyed, or tampered with during transmission or storage [2, 3]. Therefore, digital images need a high level of security. Image encryption plays an indispensable role in securing digital images. Image encryption involves two basic processes, i.e., confusion and diffusion. According to Claude Shannon [4], confusion refers to changing the values of the pixels based on a key and is usually achieved by substituting one value for another. Diffusion, on the other hand, refers to changing the position of the pixels based on a key. This is usually achieved through mechanisms like permutation. The basic workflow of image encryption using confusion and diffusion processes is given in Fig. 1 and is mathematically expressed as follows [5]: \[C=\delta^{n}\left(\gamma^{m}(P,K_{\delta}),K_{\gamma}\right) \tag{1}\] where \(P\) is the plaintext image, \(C\) is the ciphertext image, \(\delta\) and \(\gamma\) represent the confusion and diffusion processes, respectively, \(K_{\delta}\) and \(K_{\gamma}\) are the confusion and diffusion secret keys, and \(n\) and \(m\) are the number of rounds for confusion and diffusion. A secure image encryption algorithm should be sensitive to the cipher key and have a large key space to be effective against brute force and other attacks.
The key space for a general image encryption system can be computed by Equation 2 [5]. \[KS=(KS_{\delta}^{n}\cdot KS_{\gamma})^{m} \tag{2}\] where \(KS_{\delta}\) and \(KS_{\gamma}\) represent the key spaces of the confusion and diffusion processes, respectively. Recently, chaos theory has proven to be an effective and efficient tool in image encryption, owing to its high sensitivity to initial conditions [6], randomness [7], unpredictability [8], and ergodicity [9]. When combined with the confusion and diffusion processes in image encryption, it induces non-linearity in the encrypted image and significantly enhances the security of the encryption algorithm. Fig. 1: Image encryption basic workflow with confusion and diffusion processes. Fig. 2 depicts where chaos enters an image encryption algorithm. In most cryptographic systems, the fundamental non-linear component of the confusion process is the S-Box (substitution box) [10, 11, 12]. The S-box substitution method transforms inputs into altered outputs. Usually, three common types of S-box substitution methods are utilized: single S-box using bijective mapping [13, 14, 15], multiple S-boxes [16, 17, 18, 19, 20, 21], and multiple rounds of encryption with multiple S-boxes [22, 23, 24]. However, a common drawback of these methods is their inability to handle images with high auto-correlation, where sections of similar pixel values simply transform into different brightness levels rather than becoming adequately encrypted. This issue is addressed and analyzed in detail in Section II. To address these concerns, this paper aims at proposing a new image encryption scheme that effectively scrambles the image and also mitigates the computational and latency problems in existing schemes. The proposed scheme utilizes a single S-box and only a single round of substitution and breaks the correlations in the image, even in areas of high auto-correlation. The main contributions of this paper are: 1. A new image encryption scheme 'SRSS - Single Round Single S-Box' is proposed to resolve the security, complexity, and latency issues identified in traditional S-box substitution methods. This scheme breaks the correlations in the pixels and encrypts the image by utilizing a single S-box for substitution in only a single round. 2. A new chaos-based random operation selection system - CROSS - is introduced, which eliminates the need for multiple S-boxes and hence, reduces the complexity of the encryption scheme. 3. Three types of substitution methods, i.e., single S-box, multiple S-boxes, and multiple S-boxes with multiple rounds, have been implemented and analyzed to highlight the security issues, especially for images with highly auto-correlated pixels and lower gray scales. ## II Problem Formulation Three types of S-box substitution methods have been implemented and analyzed in detail to highlight their security issues. ### _Single S-Box Substitution Method_ The substitution mapping used in single S-box substitution methods is called bijective mapping. In bijective mapping, pixels are replaced with only one unique S-box value, and the S-box is considered as a bijective function \(f(x)\). The substitution algorithm that utilizes bijective mapping is given in Fig. 3 (a) and the S-box bijective substitution function is given in Fig. 3 (b).
This function can be realized mathematically as: \[S:\text{GF}(2^{p})\rightarrow\text{GF}(2^{q}) \tag{3}\] if \(x_{1}=x_{2}\), then \[f(x_{1})=f(x_{2}) \tag{4}\] In such an S-box substitution function, the image is encrypted with only one unique element of the utilized S-box for each pixel value. Pixels having identical values will be replaced with the same unique number from the S-box and hence will result in a change in the brightness level of the region only. Fig. 3: (a) Single S-box substitution algorithm, (b) bijective mapping. Fig. 2: Image encryption with chaos-based confusion and diffusion processes. The results of the single S-box substitution algorithm given in Fig. 4 show that the Coins image is not scrambled efficiently and all edges are visible. ### _Multiple S-Box Substitution Method_ The most commonly used multiple S-box substitution method is shown in Fig. 5. Here, chaos is used in conjunction with multiple S-boxes. Chaotic sequences are generated by using the logistic map, which is given in Equation (5). \[x_{n+1}=\mu\cdot x_{n}\cdot(1-x_{n}) \tag{5}\] where \(\mu\in(0,4)\) and \(x_{0}\in(0,1)\). This scheme partially resolves the issues of single S-box substitution, but the problem of visible edges continues to exist and is evident from the results shown in Fig. 6. ### _Multiple S-Boxes and Multiple Rounds of Encryption_ In addition to single S-box and multiple S-box substitution methods, methods based on multiple rounds with multiple S-boxes are also utilized. We analyzed this method for 5 rounds of substitution and used three different S-boxes for substitution. It can be seen from the results in Fig. 7 that this method also fails to scramble the pixels effectively. Furthermore, the statistical security analysis in Table 1 also shows that there is no change in the entropy of the encrypted images after every substitution round. The results of the GLCM (Gray Level Co-occurrence Matrix) parameters, i.e., correlation, contrast, energy, and homogeneity, are almost the same after all rounds. ### _Problem Statement_ In traditional S-box substitution methods, information within highly auto-correlated regions is not adequately concealed, i.e., in areas where pixel values are identical, such as sharp edges in an image. The fact that edges remain highly visible raises significant security concerns about the effectiveness of such substitution methods. This paper focuses on creating a substitution method based on a single round and a single S-box that effectively scrambles the pixels of a plaintext image, eliminating the need for multiple rounds and S-boxes. Such methods are advantageous in applications demanding low computational complexity and faster encryption and decryption speeds. Fig. 4: Single s-box substitution results; (a-b) Coins image with its histogram, (c-d) Encrypted Coins image with its histogram. Fig. 5: Multiple S-Box Chaotic Substitution Algorithm. Fig. 6: Multiple s-box substitution results; (a-b) Coins image with its histogram, (c-d) Encrypted Coins image with its histogram. ## III The Proposed Encryption Scheme The proposed encryption scheme utilizes a single S-box and only a single round of substitution. Each pixel value is replaced by a value from the S-box, but before substitution, it undergoes a randomly selected operation. The proposed scheme is explained in two parts: (a) SRSS - Single Round Single S-box Encryption Scheme, and (b) CROSS - Chaos-based Random Operation Selection System. The SRSS represents the entire encryption scheme.
The CROSS, on the other hand, entails the random operation selection component of the scheme. ### _SRSS - Single-Round Single S-box Encryption Scheme_ The complete steps involved in the proposed SRSS encryption scheme, depicted in Fig. 8, are as follows. * **Step 1:** Input the plaintext image \(P^{(M\times N)}\), with \(M\times N\) denoting the dimension of the plaintext image. Also, initialize the secret keys for the chaotic map \((\mu,x_{0})\), i.e., the control parameter and the initial condition, analyzed in Section 2.2. * **Step 2:** Iterate the chaotic map equation '\((M\times N)+I\)' times to generate a chaotic sequence \(C=\{x_{1},x_{2},\cdots,x_{(M\times N)+I}\}\). Here, \(I\) is the number of initial iterations to be discarded to avoid transients. * **Step 3:** Discard the initial \(I\) iterations to avoid transients and keep the last \(M\times N\) values, i.e., \(C^{\prime}=\{x_{I+1},\cdots,x_{(M\times N)+I}\}\). * **Step 4:** The generated chaotic sequence \(C^{\prime}\) has fractional values between 0 and 1. Apply a finite digital format to convert these fractional values to a sequence of integers \(D\), i.e., \(D=\text{mod}(\text{round}(C^{\prime}\times 10^{3}),3)\). * **Step 5:** The modulus 3 operation in Step 4 makes sure that the chaotic sequence contains the values 0, 1, and 2. This gives an operation selection sequence of dimension \(M\times N\), i.e., \(O=D(1:M\times N)\to O=\{o_{1},\cdots,o_{M\times N}\}\) such that \(o\in\{0,1,2\}\). * **Step 6:** Convert each pixel of the plaintext image \(P_{i,j}\) into 8-bit binary and split the 8-bit binary into two equal parts, making the first 4 bits the Most Significant Bits (MSBs) and the last 4 bits the Least Significant Bits (LSBs). * **Step 7:** To find the indices of the S-box values, which will replace the pixel of the plaintext image, convert the MSBs to decimal \(p\) and the LSBs to decimal \(q\). \(p\) corresponds to the row number of the S-box and \(q\) corresponds to the column number of the S-box, locating the S-box value \(S_{p,q}\). * **Step 8:** The operation selection sequence \(O\), containing values 0, 1, and 2, selects one of the three operations to be performed on the selected S-box value \(S_{p,q}\). 0 selects Operation 1, 1 selects Operation 2, and 2 selects Operation 3. This selection is random, based on the value in the operation selection sequence \(O\). * **Step 9:** The selected operation is performed on the selected S-box value \(S_{p,q}\) and converts it into a new transformed value \(T_{p,q}\). This transformed value then replaces the original pixel \(P_{i,j}\) in the plaintext image. Fig. 7: Results of multiple rounds substitution showing no significant improvement; (a) Plaintext image, (b-c) Rounds 1 to 5. Fig. 8: SRSS – The proposed single-round single S-box encryption scheme. ### _CROSS - Chaos-based Random Operation Selection System_ The Chaos-based Random Operation Selection System ensures that, for each pixel, a random operation is selected from the three operations. The operation selection sequence \(O\) is generated via a chaotic logistic map and contains random values of \(0\), \(1\), and \(2\). \(0\) corresponds to operation 1, \(1\) corresponds to operation 2, and \(2\) corresponds to operation 3. For the sake of simplicity, the operation chosen for all three operations is bit XOR. Three modifier constants, or CROSS secret keys, i.e., \(M_{1}\), \(M_{2}\), and \(M_{3}\), are chosen, with \(M_{1}\), \(M_{2}\), \(M_{3}\in\{0,\ldots,255\}\).
In operation 1, the selected S-box value is first Bit XORed with \(M_{1}\) before replacing the original pixel value of the plaintext image, similarly, in operations 2 and 3, the selected S-box value is bit XORed with \(M_{2}\) and \(M_{3}\), respectively. The designed chaos-based random operation selection system is given in Fig. 9. ## IV Results of the proposed SRSS scheme ### _Encryption Results of the Proposed Encryption Scheme_ The proposed SRSS exhibited effective confusion of the plaintext image in just one round. The random selection of operations performed on the selected S-box values ensured that no edges are visible and all pixels have been replaced with several distinct values. The SRSS encrypted image with its histogram is given in Fig. 10. Furthermore, the results of the statistical security analysis given in Table 2 showing close to ideal values of entropy and correlation also validated the effectiveness of the proposed encryption scheme. ### _Comparison with Multiple S-boxes and Multiple Rounds Algorithm_ When compared with the results of substitution methods under study, the proposed scheme exhibited considerably good security performance. It is evident from Fig. 11 that the proposed SRSS encryption scheme encrypts the plaintext image more effectively as compared to the round 5 encrypted image of the multiple s-box system. ## V Conclusion To resolve the security, latency, and computational concerns associated with traditional S-box substitution methods, this paper addressed some inherent security vulnerabilities in three types of S-box substitution methods, especially when dealing with images that have highly auto-correlated pixels and lower gray scales. Furthermore, to resolve the highlighted security concerns, this paper proposed a robust Single Round Single S-Box (SRSS) encryption scheme that simplifies the encryption process while enhancing its security efficacy. In addition to the proposed SRSS, this paper introduced a new Chaos-based Random Operation Selection System (CROSS), a mechanism designed to reduce the complexity of the encryption scheme by negating the need for multiple S-boxes. The new methods demonstrated their potency by outperforming the existing substitution methods in terms of statistical security analysis. \begin{table} \begin{tabular}{|l|l|l|} \hline \multicolumn{2}{|c|}{**Security Parameter**} & \multicolumn{1}{c|}{**Single Round**} \\ \hline \multicolumn{2}{|c|}{Entropy} & 7.989 \\ \hline \multirow{4}{*}{GLCM} & Contrast & 10.45 \\ \cline{2-3} & Correlation & 0.0007 \\ \cline{2-3} & Energy & 0.015 \\ \cline{2-3} & Homogeneity & 0.389 \\ \hline \end{tabular} \end{table} TABLE II: Statistical Security Analysis of the Proposed SRSS encryption scheme \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{**Security Parameter**} & \multicolumn{1}{c|}{**Round 1**} & \multicolumn{1}{c|}{**Round 2**} & \multicolumn{1}{c|}{**Round 3**} & \multicolumn{1}{c|}{**Round 4**} & \multicolumn{1}{c|}{**Round 5**} \\ \hline \multirow{4}{*}{GLCM} & Entropy & 6.316 & 6.316 & 6.316 & 6.316 & 6.316 \\ \cline{2-7} & Contrast & 10.57 & 7.89 & 10.63 & 8.41 & 9.32 \\ \cline{2-7} & Correlation & 0.144 & 0.199 & 0.126 & 0.194 & 0.250 \\ \cline{2-7} & Energy & 0.025 & 0.025 & 0.037 & 0.025 & 0.032 \\ \cline{2-7} & Homogeneity & 0.48 & 0.51 & 0.51 & 0.51 & 0.52 \\ \hline \end{tabular} \end{table} TABLE I: Statistical Security Analysis of Multiple Rounds Substitution Fig. 9: CROSS – The proposed chaos-based random operation selection system. 
The SRSS and CROSS collectively achieved near-ideal results with an entropy of 7.89 and a correlation coefficient of 0.007, thus substantiating their effectiveness.
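For readers who want to trace Steps 1-9 and the CROSS selector end to end, the following compact sketch follows the description above. The 16x16 S-box, the logistic-map keys, and the modifier constants \(M_{1}\)-\(M_{3}\) used here are arbitrary placeholders rather than values from the paper.

```python
import numpy as np

# Minimal sketch of the SRSS pipeline with the CROSS operation selector.
def logistic_sequence(mu, x0, length, discard=1000):
    x, out = x0, []
    for _ in range(length + discard):
        x = mu * x * (1.0 - x)                    # Eq. (5): x_{n+1} = mu*x_n*(1-x_n)
        out.append(x)
    return np.array(out[discard:])                # drop transients (Step 3)

def srss_encrypt(img, mu=3.99, x0=0.37, M=(0x3C, 0xA5, 0x5A), seed=1):
    rng = np.random.default_rng(seed)
    sbox = rng.permutation(256).astype(np.uint8).reshape(16, 16)   # placeholder S-box
    flat = img.astype(np.uint8).ravel()
    chaos = logistic_sequence(mu, x0, flat.size)
    ops = np.mod(np.round(chaos * 1e3).astype(np.int64), 3)        # Step 4: mod-3 selector
    out = np.empty_like(flat)
    for k, pix in enumerate(flat):
        p, q = pix >> 4, pix & 0x0F               # Steps 6-7: MSB/LSB nibbles index the S-box
        s = sbox[p, q]
        out[k] = s ^ M[ops[k]]                    # Steps 8-9: CROSS picks which XOR modifier
    return out.reshape(img.shape)

# Toy usage on a flat (highly auto-correlated) 8x8 block of identical pixels: the CROSS
# selection spreads the identical S-box outputs over three chaotically chosen values.
block = np.full((8, 8), 200, dtype=np.uint8)
print(srss_encrypt(block))
```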
2304.08376
Zero sum subsequences and hidden subgroups
We propose a method for solving the hidden subgroup problem in nilpotent groups. The main idea is iteratively transforming the hidden subgroup to its images in the quotient groups by the members of a central series, eventually to its image in the commutative quotient of the original group; and then using an abelian hidden subgroup algorithm to determine this image. Knowing this image allows one to descend to a proper subgroup unless the hidden subgroup is the full group. The transformation relies on finding zero sum subsequences of sufficiently large sequences of vectors over finite prime fields. We present a new deterministic polynomial time algorithm for the latter problem in the case when the size of the field is constant. The consequence is a polynomial time exact quantum algorithm for the hidden subgroup problem in nilpotent groups having constant nilpotency class and whose order has only prime factors that are also bounded by a constant.
Muhammad Imran, Gabor Ivanyos
2023-04-17T15:40:30Z
http://arxiv.org/abs/2304.08376v1
# Zero sum subsequences and hidden subgroups ###### Abstract We propose a method for solving the hidden subgroup problem in nilpotent groups. The main idea is iteratively transforming the hidden subgroup to its images in the quotient groups by the members of a central series, eventually to its image in the commutative quotient of the original group; and then using an abelian hidden subgroup algorithm to determine this image. Knowing this image allows one to descend to a proper subgroup unless the hidden subgroup is the full group. The transformation relies on finding zero sum subsequences of sufficiently large sequences of vectors over finite prime fields. We present a new deterministic polynomial time algorithm for the latter problem in the case when the size of the field is constant. The consequence is a polynomial time exact quantum algorithm for the hidden subgroup problem in nilpotent groups having constant nilpotency class and whose order has only prime factors that are also bounded by a constant. Keywords: hidden subgroup problem, nilpotent group, zero sum subsequence, exact quantum algorithm. Acknowledgments. The research of the second author was supported by the Hungarian Ministry of Innovation and Technology NRDI Office within the framework of the Artificial Intelligence National Laboratory Program. Introduction The standard version of the hidden subgroup problem (HSP for short) is the following. Given a function \(f:G\to\{0,1\}^{r}\) on a finite group \(G\) with the property that there is a subgroup \(H\) such that \(f(x)=f(y)\) if and only if \(x\) and \(y\) are in the same left coset of \(H\), find the subgroup \(H\). Perhaps Kitaev was the first to observe that Shor's factoring and discrete logarithm algorithms can be generalized to solve the HSP in finite abelian groups (and also in certain infinite commutative groups) in polynomial time. Much less is known about the complexity of the problem in non-commutative groups. The most general result is due to Ettinger, Hoyer and Knill. They showed in [1] that the _query complexity_ of the problem in finite not necessarily abelian groups is polynomial. Regarding the time complexity, Kuperberg's subexponential time quantum algorithm [13] for the HSP in dihedral and very similar groups is perhaps the best known result. It has a remarkable extension by Alagic, Moore and Russell [1] to a special HSP in a class including non-solvable groups. There are some classes of groups in which the HSP can be solved in polynomial time. See the survey papers by Lomont [14] and by Wang [15] for early results of this kind. The paper [16] by Lomonaco and Kauffman proposes interesting derivatives and generalizations of the Shor-Kitaev algorithm. The paper [17] by Horan and Kahrobaei discusses cryptographic aspects of the HSP and also reports on more recent results. The hidden shift problem in abelian groups (and hence the HSP in the related semidirect product groups) appears to be quite popular in post-quantum cryptography, see, e.g., [18] by Castryck and Vander Meeren and [1] by Alagic and Russell. In [1], Bae and Lee propose a polynomial time solution to a continuous version of the hidden shift problem. A quantum procedure is exact if it returns a correct output (after a final measurement) with probability one. Besides the fact that exact quantum algorithms can be considered as counterparts of deterministic classical methods, their measurement-free versions can serve as ingredients of larger unitary procedures.
The method of [1] has an exact version, so it is natural to ask in which classes of groups the HSP can be solved by an exact quantum algorithm in polynomial time. Brassard and Hoyer [2] presented a polynomial time exact method that works in \(\mathbb{Z}_{2}^{n}\). In [19], Cai and Qiu proposed a simpler efficient exact method for Simon's problem (a special, though arguably the hardest, instance of the HSP in \(\mathbb{Z}_{2}^{n}\)). Efficient exact algorithms with optimal query complexity for the HSP in \(\mathbb{Z}_{2}^{n}\) appeared independently in [10] by Bonnetain and in [20] by Wu et al. Mosca and Zalka in [21] proposed an efficient exact solution of the discrete logarithm problem in cyclic groups of known order. An exact quantum algorithm for the HSP in \(\mathbb{Z}_{m^{k}}^{n}\) for general \(m\) was presented recently in [12], settling the case of abelian groups under the assumption that a multiple of the prime factors of the order of the group is known. In this paper we present an approach to solving the hidden subgroup problem in nilpotent groups that have nilpotency class \(O(1)\). Our main result is a polynomial time exact quantum algorithm for the HSP in such groups whose order has only prime factors of size \(O(1)\). We assume that the group \(G\) is given as a black-box group with unique encoding. The main strategy of our algorithm is essentially a reduction to instances of the hidden subgroup problem in quotient groups of subgroups of \(G\). We choose an input model suitable for such a reduction. In the standard version, the input is given by an oracle which is a unitary map computing \(|x\rangle|f(x)\rangle\) from \(|x\rangle|0\rangle\). The usual hidden subgroup algorithms start with computing the superposition \(\frac{1}{\sqrt{|G|}}\sum_{x\in G}\lvert x\rangle|f(x)\rangle\) using the oracle; most of them then ignore the second register that holds the value of \(f\) and work with the coset superpositions \(\lvert xH\rangle=\frac{1}{\sqrt{|H|}}\sum_{y\in H}\lvert xy\rangle\) in the sequel, see e.g., [10]. These methods, as noted in [11], remain applicable in the context where the oracle is assumed to generate copies of a mixture of the coset superpositions. This holds in particular in the case of the exact abelian hidden subgroup algorithm of [12]. Specifically, we consider the state \(\frac{1}{\sqrt{|G|}}\sum_{x\in G}\lvert x\rangle|f(x)\rangle\) as a purification of the mixed state \(\Xi_{G,H}=\frac{1}{|G:H|}\sum_{x\in X}\lvert xH\rangle\langle xH\rvert\), where \(X\) is any left transversal of \(H\) in \(G\), in order to have a unitary oracle. The state \(\Xi_{G,H}\) is referred to as a (hidden) subgroup state. We assume that our hidden subgroup \(H\) is given by a unitary map (referred to as an oracle) that, on zero input, returns a copy of an _arbitrary_ (though fixed) purification of the subgroup state \(\Xi_{G,H}\). It will be convenient to introduce a subtask of the HSP, namely computing the hidden subgroup modulo the commutator subgroup of \(G\), that is, the subgroup \(HG^{\prime}\) where \(H\) is the hidden subgroup. We use the shorthand HSMC for this problem. To illustrate the power of HSMC in nilpotent groups, note that it naturally includes the commutative case of the HSP, and that once the subgroup \(HG^{\prime}\) has been computed, we can descend to it to compute \(H\) whenever it is a proper subgroup of \(G\), while if \(HG^{\prime}=G\) then \(H=G\) because in a nilpotent group every maximal subgroup contains the commutator subgroup.
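To keep the central object concrete, the following toy numerical check builds the subgroup state \(\Xi_{G,H}\) for an illustrative abelian choice of our own, \(G=\mathbb{Z}_{12}\) and \(H=\langle 4\rangle\), and confirms that it is an even mixture of the \(|G:H|\) coset superpositions, i.e. a rank-\(|G:H|\) density matrix whose nonzero eigenvalues all equal \(1/|G:H|\). The example and helper names are not from the paper.

```python
import numpy as np

# Toy check of the subgroup state Xi_{G,H} for G = Z_12, H = <4> = {0, 4, 8}.
def subgroup_state(n, h_gen):
    H = sorted({(h_gen * j) % n for j in range(n)})           # cyclic subgroup <h_gen>
    k = n // len(H)                                            # the index |G:H|
    cosets = {tuple(sorted((a + h) % n for h in H)) for a in range(n)}
    rho = np.zeros((n, n))
    for coset in cosets:
        v = np.zeros(n)
        v[list(coset)] = 1.0 / np.sqrt(len(H))                 # coset superposition |aH>
        rho += np.outer(v, v) / k                              # (1/|G:H|) * sum_a |aH><aH|
    return rho, k

rho, k = subgroup_state(12, 4)
evals = np.sort(np.linalg.eigvalsh(rho))[::-1]
print(np.isclose(np.trace(rho), 1.0), k, evals[:k + 1])       # trace 1; k eigenvalues 1/k, rest 0
```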
We give a high-level description of a strategy for solving the problem HSMC in a class of nilpotent groups. We call a group \(G\)_semi-elementary_ if \(G\) is a \(p\)-group for some prime \(p\) such that \(G/G^{\prime}\) is elementary abelian. In a semi-elementary group \(G\), our strategy for computing the hidden subgroup modulo the commutator is based on iterating the following procedure. Assume that \(L\) is an elementary abelian subgroup contained in the center of \(G\). Then we create a copy of the subgroup state corresponding to \(HL/L\) in the quotient group \(G/L\) from sufficiently many copies of the subgroup state for \(H\) in \(G\). We refer to this procedure (as well as some simpler ones) as _subgroup state conversion_. This conversion is based on finding zero sum subsequences of sufficiently long sequences of elements of \(L\). Eventually, in \(c-1\) rounds of iteration, where \(c\) is the nilpotency class of \(G\), we compute a copy of the subgroup state corresponding to \(HG^{\prime}/G^{\prime}\) in \(G\). (The semi-elementary property ensures the existence of a standard central series of length \(c\) with elementary abelian factors.) Finally, from sufficiently many copies of such subgroup states we compute \(HG^{\prime}/G^{\prime}\) using the exact abelian hidden subgroup algorithm of [12]. Fortunately, semi-elementary groups occur as factor groups of subgroups of nilpotent groups frequently enough to make a reduction from the HSP to the special case of HSMC possible, see Proposition 1 for details. The main result we obtain is the following. **Theorem 1**.: _Suppose that \(G\) is a nilpotent group of class bounded by a constant and that the prime factors of \(|G|\) are also bounded by a constant. We assume that \(G\) is a black-box group with unique encoding of elements by \(\ell\)-bit strings. Then there is an exact quantum algorithm that solves the hidden subgroup problem in \(G\) using \(\operatorname{poly}(\ell)\) operations and \(\operatorname{poly}(\log[G])\) calls to the subgroup state creating oracle and its inverse._ _Related results._ In the exact setting, [12] efficiently solves the abelian case without the restriction on the prime factors of \(|G|\). There are quite a few related non-exact polynomial-time algorithms. Among them, the result of [14], which solves the HSP in _solvable_ groups that have derived series and exponent bounded by constants, is perhaps the closest to Theorem 1. This class of groups covers the groups for which our result is applicable, except those that have exponent divisible by large powers of small primes. Note however, that the case of these groups could be efficiently treated by a combination of the reduction of Proposition 1 with the algorithm of [14]. We remark that the semidirect product group in which the HSP is equivalent to the hidden shift problem over \(\mathbb{Z}_{2^{k}}^{n}\) is a nilpotent group of class \(k\). Bonnetain and Naya-Plasencia [1] propose a non-exact method whose main ingredient can be considered as a combination of Kuperberg's sieve with finding zero sum subsequences in \(\mathbb{Z}_{2}^{n}\) using linear algebra. The case of nilpotency class at most two is efficiently treated by the non-exact method of [15], without any restriction on the size of the prime factors of \(|G|\). It is worth mentioning that by technical content, [15] can be considered as the closest relative of the present paper. The idea of reducing the HSP to HSMC stems from there and many ingredients of the reduction appeared in that paper. 
Also, the key tool of [15], using several coset superpositions and the quantum Fourier transform of a central subgroup, can be considered as some (though less transparent) form of subgroup state conversion. In the class two case, however, there is a more powerful tool to cancel out characters of the subgroup: one can also apply twists with certain nice automorphisms of the group that do not change the hidden subgroup too much. Unfortunately, such automorphisms do not exist in general nilpotent groups of class greater than two. The methods of [16, 15] offer efficient solutions to the HSP in certain nilpotent groups of higher class, again with potentially large prime factors in their orders. These groups have a normal subgroup with an abelian factor group of a restricted kind (e.g., cyclic). These methods, as well as that of [14], are of a highly non-exact nature. Probably, the technique of [16] can be made exact with some effort. The _Davenport constant_ \(S(A)\) of a finite abelian group \(A\) is the smallest number \(s\) such that any sequence of \(s\) elements of \(A\) contains a nonempty subsequence adding up to the zero element of \(A\). The name comes from the fact that H. Davenport proposed determining \(S(A)\), in the case when \(A\) is the ideal class group of a number field, as a measure of the non-uniqueness of factorization of the integers of the field. The general problem has become a famous question of additive combinatorics. Olson [10] determined the exact value of the Davenport constant of \(p\)-groups; in particular for \(\mathbb{Z}_{p}^{n}\) it is \(1+n(p-1)\). What we are looking for is an "effective" Davenport constant: what is the smallest number \(S^{\prime}=S^{\mathcal{B}}(A)\) such that from any sequence of \(S^{\prime}\) elements of \(A\), algorithm \(\mathcal{B}\) finds a non-empty zero sum subsequence in time polynomial in \(S^{\prime}\log\lvert A\rvert\) (roughly the bit size of the input sequence)? In this paper we give a deterministic algorithm \(\mathcal{B}\), running in time \(\operatorname{poly}(n)\), that for \(p=O(1)\), given a sequence of \(S^{\mathcal{B}}(\mathbb{Z}_{p}^{n})=\operatorname{poly}(n)\) vectors from \(\mathbb{Z}_{p}^{n}\), returns a zero sum subsequence. The structure of the rest of the paper is the following. In Section 2, we give some background material on exact quantum procedures, on nilpotent black-box groups and on computations with them, and on (hidden) subgroup states and their purifications, and present methods to convert subgroup states in the entire group to those in subgroups and, in certain very easy cases, in factor groups. Proposition 1, the existence of an exact polynomial time reduction from the HSP in general nilpotent groups to the problem HSMC in semi-elementary groups, is proved in Section 3. Section 4 is devoted to converting several copies of a subgroup state in a semi-elementary group to a copy of a subgroup state in the abelian factor of the group. As an application of the technique, we prove Proposition 2, which tells us that the problem HSMC can be solved by a polynomial time exact quantum algorithm in a semi-elementary \(p\)-group of constant nilpotency class, provided that we can find zero sum subsequences of sequences consisting of \(\operatorname{poly}(n\log p)\) vectors from \(\mathbb{Z}_{p}^{n}\) in time \(\operatorname{poly}(n\log p)\). In Section 5, we prove Theorem 2 on the efficient solvability of the latter task in the case when \(p\) is bounded by a constant.
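To make the zero sum subsequence task concrete, here is a minimal sketch of the easy \(p=2\) case, where plain Gaussian elimination over GF(2) already finds a nonempty zero sum (i.e. XOR-zero) subsequence among any \(n+1\) vectors of \(\mathbb{Z}_{2}^{n}\). This is only the linear-algebra special case recalled above in connection with [1], not the algorithm behind Theorem 2 for general constant \(p\).

```python
# Zero-sum subsequence over GF(2) via Gaussian elimination with tracked supports.
def zero_sum_subset_gf2(vectors):
    """vectors: list of n-bit integers over GF(2); returns indices of a zero-XOR subset,
    or None if the given vectors happen to be linearly independent."""
    pivots = {}                                   # leading bit -> (reduced vector, index set)
    for i, v in enumerate(vectors):
        r, support = v, {i}
        while r:
            lead = r.bit_length() - 1
            if lead not in pivots:
                pivots[lead] = (r, support)
                break
            pv, ps = pivots[lead]
            r ^= pv                               # eliminate the leading bit
            support ^= ps                         # track which inputs were combined
        else:
            return sorted(support)                # XOR of these input vectors is zero
    return None

# Example: 5 vectors in Z_2^4 (one more than the dimension), so a zero-sum subset exists.
vecs = [0b1011, 0b0110, 0b1101, 0b0011, 0b1110]
idx = zero_sum_subset_gf2(vecs)
check = 0
for j in idx:
    check ^= vecs[j]
print(idx, check)                                 # check == 0
```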
Propositions 1 and 2, together with Theorem 2, immediately imply Theorem 1. Section 6 is devoted to concluding remarks. ## 2 Preliminaries ### On exact quantum computations To obtain sufficiently general intermediate results, we use the model of uniform circuit families described by Nishimura and Ozawa [11]. This is because some of the exact methods of [2] as well as our main conversion technique work under the assumption that the quantum Fourier transforms and their inverses modulo the prime factors of \(\lvert G\rvert\) can be exactly implemented. As it is pointed out in [11], this task cannot be accomplished using a fixed finite gate set. For the sake of transparency, we state our intermediate result using assumptions on availability of the quantum Fourier transforms rather than on gates required by the exact implementations of them. (See the implementation of the Fourier transform modulo general numbers proposed by Mosca and Zalka [12].) Note however, that for the case of our Theorem 1, where these primes are assumed to be bounded by a constant, a constant number of gates are sufficient and hence, by [11], the theorem remains valid in the quantum Turing machine model of Bernstein and Vazirani [1]. ### Groups For standard notations and concepts from group theory such as subgroups, normal subgroups, cosets, conjugates, commutators, commutator subgroup, center, etc., we refer the reader to the textbooks, e.g., to [10]. For subsets \(U\) and \(V\) of \(G\) we denote by \(UV\) the set \(\{uv:u\in U,v\in V\}\). If both \(U\) and \(V\) are subgroups and either \(U\) or \(V\) is normal in \(G\) then \(UV\) is a subgroup. For subgroups \(U,V\), by \([U,V]\) we denote the _subgroup generated by_ the commutators \([u,v]\) (\(u\in U,v\in V\)). Recall that the lower central series of a finite group \(G\) is the sequence \(G=G_{0}>G_{1}>\ldots>G_{c}\) of normal subgroups \(G_{i}\lhd G\) recursively defined as \(G_{i}=[G,G_{i-1}]\). Here we assume that \(c\) is the smallest index \(i\) such that \(G_{i}=[G,G_{i}]\). The group \(G\) is nilpotent if \(G_{c}=\{1\}\) and then \(c\) is called the (nilpotency) class of \(G\). A finite group is nilpotent if and only if it is the direct product of its Sylow subgroups. To obtain sufficiently general results, we work over _black-box_ groups with unique encoding of elements. The concept captures various "real" groups such as permutation groups and matrix groups over finite fields. Elements of a black-box group are represented by binary strings of a certain length \(\ell\) and the group operations are given by oracles and as input, a generating set for the group is given. Subgroups will also be given by sets of generators. One can use the exact polynomial time quantum membership test of [13] to reduce the size of generating sets to at most \(\log\lvert G\rvert\). During the rest of this part, we assume that \(G\) is a nilpotent black-box group of class \(c\) and the prime factors of \(\lvert G\rvert\) are known. For a normal subgroup \(N\) of \(G\), the subgroup \([G,N]\) is a normal subgroup of \(G\) contained in \(N\). If \(\Gamma\) and \(\Delta\) are sets of generators for \(G\) and \(N\), respectively, a generating set for \([G,N]\) can be obtained by taking the commutators \([x,y]\) for \(x\in\Gamma,y\in\Delta\) and then adding iterated commutators with elements of \(\Gamma\) until the subgroup generated by the elements stabilizes. For testing stabilization, one can use the exact quantum subgroup membership algorithm of [13]. 
This gives a polynomial time exact method in particular to compute the lower central series. Below we describe efficient solutions to some further group theoretic tasks that we use in our hidden subgroup algorithm. The \(p\)-Sylow subgroup of \(G\) can be computed as follows. Let \(\Gamma\) be a generating set for \(G\). Then for each \(g\in\Gamma\) we compute the order \(o_{g}\) of \(g\) and decompose \(o_{g}\) as the product \(p^{\alpha}{o_{g}}^{\prime}\) where \({o_{g}}^{\prime}\) is coprime with \(p\). Then the \({g^{o_{g}}}^{{}^{\prime}}\) (\(g\in\Gamma\)) generate the (unique) \(p\)-Sylow subgroup of \(G\). We shall compute hidden subgroups in \(G\) by computing the intersections with the Sylow subgroups. The normalizer of a subgroup of \(G\) can be computed using the deterministic polynomial method of Kantor and Luks [12]. It was originally described for nilpotent permutation groups but it also finds normalizers in any nilpotent black box group of order having small prime factors only. Assume that \(L\) is a subgroup of \(G\). It will be useful to decompose elements \(x\) of \(G\) as products of the form \(\alpha_{L}(x)\beta_{L}(x)\) where \(\beta_{L}(x)\in L\) and \(\alpha_{L}(x)\) depend only on the coset \(xL\). (Thus the range of \(\alpha_{L}\) is a transversal of \(L\) in \(G\).) To this end, compute a chief series (a series of normal subgroups with cyclic factors of prime order) \(G=K_{0}>K_{1}>\ldots>K_{r}=1\). Perhaps the easiest way to obtain such series is taking a refinement of the lower central series. By taking the subgroups \(K_{i}L\), and removing repeated elements, we obtain a subnormal series \(G=M_{0}>M_{1}>\ldots>M_{s}=L\) with cyclic factors of prime order. Also take elements \(a_{i}\in M_{i-1}\setminus M_{i}\) and denote by \(p_{i}\) the order of \(M_{i-1}/M_{i}\) (\(i=1,\ldots,s\)). Then the elements \(a_{1}^{\gamma_{1}}a_{2}^{\gamma_{2}}\ldots a_{s}^{\gamma_{s}}\) (\((\gamma_{1},\ldots\gamma_{s})\in\prod_{i=1}^{s}\mathbb{Z}_{p_{i}}\)) are a left transversal of \(L\) in \(G\). For an element \(x\in G\), the representative of the coset in this transversal can be computed as follows. First we find the smallest non-negative integer \(\gamma_{1}\) such that \(xa_{1}^{-\gamma_{1}}\in M_{1}\) by computing the base \(a_{1}\) discrete logarithm of \(x\) modulo \(M_{1}\). This can be done by solving an instance of the hidden subgroup problem in \(\mathbb{Z}_{p_{1}}^{2}\). Specifically, we define the function \((\beta,\gamma)\mapsto|x^{\gamma}\rangle|a_{1}^{-\beta}M_{1}\rangle\). The function can be evaluated with the aid of computing the uniform superposition \(|M_{1}\rangle\) using the exact version [11] of Watrous's method [20]. The values are \(p\) pairwise orthogonal states and the hidden subgroup is \(\{(\delta,\gamma):x^{\delta}a_{1}^{-\gamma}\in M_{1}\}\). We use the exact hidden subgroup algorithm of [11] to find a generator of this group. From this, \(\gamma_{1}\) can be obtained in an obvious way. Now we proceed with \(xa_{1}^{-\gamma_{1}}\) to compute \(\gamma_{2}\), and so on. We set \(\alpha_{L}(x)=a_{1}^{\gamma_{1}}\ldots a_{r}^{\gamma_{r}}\) and \(\beta_{L}(x)=\alpha_{L}(x)^{-1}x\). If \(L\) is a normal subgroup of \(G\), we can encode the coset \(xL\) by \(\alpha_{L}(x)\). 
This makes the factor group \(G/L\) a black-box group: the elements are encoded by the elements of the transversal \(\{\alpha_{L}(x):x\in G\}\) and the multiplication oracle is obtained as a composition of the multiplication oracle for \(G\) with the computation of the function \(\alpha_{L}\). ### Subgroup states and purifications Let \(G\) be a finite group and let \(H\) be a subgroup of \(G\). We consider elements of the group algebra \(\mathbb{C}G\) as pure quantum states. (The "natural" scalar product \((\sum_{x}\alpha_{x}|x\rangle,\sum_{y}\beta_{y}|y\rangle)=\sum_{x}\alpha_{x}\overline{\beta_{x}}\) makes \(\mathbb{C}G\) a Hilbert space where the group elements form an orthonormal basis.) A (left) coset superposition of \(H\) in \(G\) is the uniform superposition \(|aH\rangle=\frac{1}{\sqrt{|H|}}\sum_{h\in H}|ah\rangle\) where \(a\in G\). The (left) subgroup state of \(H\) in \(G\) is the mixed state with the density matrix \[\Xi_{G,H}=\frac{1}{|G|}\sum_{a\in G}|aH\rangle\langle aH|=\frac{1}{|G:H|} \sum_{a\in X}|aH\rangle\langle aH|,\] where \(X\) is any left transversal (a set of representatives of the left cosets) of \(H\) in \(G\). A _purification_ of \(\Xi_{G,H}\) is any pure state \(|\psi\rangle\in\mathbb{C}G\otimes V\) for some Hilbert space \(V\) such that \(\Xi_{G,H}\) is the relative trace of \(|\psi\rangle\langle\psi|\) with respect to the second subsystem. For general facts about purification of mixed states, in particular for the connection with Schmidt decompositions, we refer the reader to Section 2.5 of [10]. The following lemma gives a characterization of purifications of subgroup states. **Lemma 1**.: _The pure state \(|\psi\rangle\in\mathbb{C}G\otimes V\) is a purification of the subgroup state \(\Xi_{G,H}\) if and only if it can be written as_ \[|\psi\rangle=\frac{1}{\sqrt{|G|}}\sum_{x\in G}|x\rangle|v(x)\rangle,\] _where the states \(|v(x)\rangle\) and \(|v(y)\rangle\) are equal if \(x\) and \(y\) are in the same left coset of \(H\) and orthogonal otherwise._ Proof.: The "if" part follows easily from the fact that the conditions on \(|v(\cdot)\rangle\) imply \(|\psi\rangle=\frac{1}{\sqrt{k}}\sum_{a\in X}|aH\rangle|v(a)\rangle\), where \(k=|G:H|\). To see the "only if" part, recall that a Schmidt decomposition of a state \(|\psi\rangle\in\mathbb{C}G\otimes V\) is of the form \(|\psi\rangle=\sum_{i=1}^{m}\lambda_{i}|u_{i}\rangle|v_{i}\rangle\) where \(m=|G|\), \(|u_{1}\rangle,\ldots,|u_{m}\rangle\) is an _arbitrary_ orthonormal basis of \(\mathbb{C}G\) in which the relative trace of \(|\psi\rangle\langle\psi|\) w.r.t. the second subsystem is diagonal (with entries \(\lambda_{1},\ldots,\lambda_{m}\)) and the system of the vectors \(v_{i}\) corresponding to nonzero eigenvalues \(\lambda_{i}\) is an orthonormal system of vectors in \(V\). The vectors \(v_{i}\) depend on the choice of the basis \(|u_{i}\rangle\) (\(i=1,\ldots,m\)). Notice that the only nonzero eigenvalue of \(\Xi_{G,H}\) is \(\frac{1}{k}\) with multiplicity \(k\), where \(k=|G:H|\). The coset superpositions give an orthonormal basis of the corresponding eigenspace. Thus, if \(|\psi\rangle\) is a purification of \(\Xi_{G,H}\) then a Schmidt decomposition of \(|\psi\rangle\) is of the form \(|\psi\rangle=\frac{1}{\sqrt{k}}\sum_{i=1}^{k}|u_{i}\rangle|v_{i}\rangle\) where \(|u_{1}\rangle,\ldots,|u_{k}\rangle\) is an arbitrary orthonormal basis of the \(\frac{1}{k}\)-eigenspace of \(\Xi_{G,H}\) and \(|v_{1}\rangle,\ldots,|v_{k}\rangle\) is an orthonormal system of \(V\).
In particular, if \(X=\{a_{1},\ldots,a_{k}\}\) then by taking \(|u_{i}\rangle=|a_{i}H\rangle\) and by defining \(|v(x)\rangle=|v_{i}\rangle\) for \(x\in a_{i}H\), we obtain \(|\psi\rangle=\frac{1}{\sqrt{k}}\sum_{a\in X}|aH\rangle|v(a)\rangle=\frac{1}{ \sqrt{|G|}}\sum_{x\in G}|x\rangle|v(x)\rangle\).

### Basic subgroup state conversions

Given a subgroup \(L\) of \(G\), a copy of (a purification of) the subgroup state \(\Xi_{G,H}\) can be "converted" to a copy of (a purification of) \(\Xi_{L,H\cap L}\) by replacing \(|x\rangle\) with the decomposition \(|\beta_{L}(x)\rangle|\alpha_{L}(x)\rangle\) obtained by the method outlined in Subsection 2.2 for \(x\in G\), and "ignoring" \(|\alpha_{L}(x)\rangle\) (passing this part to the purifying subsystem). To see this, let \(\frac{1}{\sqrt{|G|}}\sum_{x\in G}|x\rangle|\psi(x)\rangle\) be a purification of \(\Xi_{G,H}\), with \(|\psi(x)\rangle\) and \(|\psi(y)\rangle\) equal if and only if \(y^{-1}x\in H\), and orthogonal otherwise. Then the substitution gives the state \(\frac{1}{\sqrt{|L|}}\sum_{x\in L}|x\rangle\frac{1}{\sqrt{|G:L|}}\sum_{y\in Y} |y\rangle|\psi(yx)\rangle\) where \(Y=\{\alpha_{L}(z):z\in G\}\). Now if \(x_{1},x_{2}\in L\) are from the same left coset of \(H\cap L\) then \(|\psi(yx_{1})\rangle=|\psi(yx_{2})\rangle\) for every \(y\in Y\) and hence the states \(\frac{1}{\sqrt{|G:L|}}\sum_{y\in Y}|y\rangle|\psi(yx_{i})\rangle\) are equal (\(i=1,2\)), while otherwise they are orthogonal, since for \(y_{1},y_{2}\in Y\) either \(|y_{1}\rangle\) and \(|y_{2}\rangle\) are orthogonal or (for \(y_{1}=y_{2}\)) \(|\psi(y_{1}x_{1})\rangle\) and \(|\psi(y_{1}x_{2})\rangle\) are orthogonal. We shall refer to this procedure as _restriction_. The term is justified by the fact that, in the standard version of the HSP, one could obtain an instance of the HSP in the subgroup \(L\) by restricting the "hiding function" to \(L\). Similarly, assume that \(L\) is a normal subgroup of \(G\) contained in \(H\). Then a copy of (a purification of) the subgroup state \(\Xi_{G,H}\) can be converted to a copy of (a purification of) \(\Xi_{G/L,H/L}\) by replacing \(|x\rangle\) with \(|\alpha_{L}(x)\rangle|\beta_{L}(x)\rangle\) and passing \(|\beta_{L}(x)\rangle\) to the purifying subsystem. This corresponds to the technique called "pushing" in [10, 11].

## 3 A group-theoretic reduction

In this section we prove the following. **Proposition 1**.: _Let \(G\) be a nilpotent black-box group of class at most \(c\) and assume that the prime factors of \(|G|\) are given as part of the input and that for each such prime \(p\) the quantum Fourier transform modulo a multiple of \(p\) and its inverse can be implemented by an efficient exact quantum procedure. Then, the HSP in \(G\) can be reduced by an exact procedure in time \(\operatorname{poly}(\log\ell)\) to \(\operatorname{poly}(\log|G|)\) instances of the problem HSMC in semi-elementary quotient groups of subgroups of \(G\). (The elements of \(G\) are assumed to be uniquely encoded by strings of length \(\ell\).)_ Proof.: A finite nilpotent group \(G\) is the direct product of its Sylow subgroups. Therefore any subgroup \(H\) is the product of its Sylow subgroups. The \(p\)-Sylow subgroup of \(H\) is \(P\cap H\) where \(P\) is the \(p\)-Sylow subgroup of \(G\). The Sylow subgroups of \(G\) can be computed using the method outlined in Subsection 2.2. One can convert subgroup states in \(G\) to subgroup states in \(P\) using restriction, see Subsection 2.4. 
In the rest of the description of the reduction we assume that \(G\) is a \(p\)-group. We maintain a subgroup \(H_{0}\) of \(H\). Initially \(H_{0}=\{1_{G}\}\). In each round of an outer loop of the algorithm \(H_{0}\) will be increased if \(H_{0}<H\). If \(H_{0}\) is already \(G\) then we can obviously stop. We will also maintain a subgroup \(K\) of \(G\) such that if \(H_{0}<H\) then even \(H_{0}<K\cap H\). Initially \(K=N_{G}(H_{0})\). This is a good choice because in a nilpotent group every proper subgroup has a strictly larger normalizer, therefore if \(H_{0}<H\) then \(H_{0}<N_{H}(H_{0})=H\cap N_{G}(H_{0})\). In an inner loop \(K\) will be decreased until either \(H_{0}\) is increased or \(K\) becomes identical with \(H_{0}\). In the latter case we can conclude that \(H=H_{0}\) and stop the whole procedure. If the abelian factor \(K/(K^{\prime}H_{0})\) is not elementary then we can replace \(K\) with a proper subgroup as follows. Let \(L>K^{\prime}H_{0}\) be the subgroup of \(K\) such that \(L/(K^{\prime}H_{0})\) contains all the elements of order \(p\) of \(K/(K^{\prime}H_{0})\). To compute \(L\), first compute \(K^{\prime}\) and \(K^{\prime}H_{0}\). Then take the set of generators \(\Gamma\) for \(K\) and for each element \(g\in\Gamma\), compute the smallest _positive_ integer \(\alpha_{g}\) such that \(g^{p^{\alpha_{g}}}\in K^{\prime}H_{0}\). The elements \(g^{p^{\alpha_{g}-1}}\) (\(g\in\Gamma\)) generate \(L\). If \(L\) is a proper subgroup of \(K\) then we replace \(K\) with \(L\) and repeat the step above. (Correctness of this is justified by observing that \(L/H_{0}\) contains all the elements of order \(p\) of \(K/H_{0}\), whence if \(H\cap K>H_{0}\) then also \(H\cap L>H_{0}\).) Otherwise we have achieved that \(K/(K^{\prime}H_{0})\) is elementary abelian. Then we compute \((H\cap K)K^{\prime}/H_{0}\) using HSMC. If \((H\cap K)K^{\prime}=K\) then \(H\cap K=K\) because \(K^{\prime}\) is contained in every maximal subgroup of \(K\). Then we can increase \(H_{0}\) by replacing \(H_{0}\) with \(K\) and continue the outer loop. If \((H\cap K)K^{\prime}<K\) we can replace \(K\) with \((H\cap K)K^{\prime}\) and continue the inner loop. Based on the descriptions above, we summarize the exact algorithm in the pseudocode below. ``` 1:Initialize:\(H_{0}\gets 1_{G}\); 2:while\(H_{0}<G\)do 3:\(K\gets N_{G}(H_{0})\); 4:\(Found\leftarrow\) False; 5:while\(Found=\) False do 6:if\(K/(K^{\prime}H_{0})\) is elementary then 7: Use HSMC to compute \((H\cap K)K^{\prime}/H_{0}\); 8:if\((H\cap K)K^{\prime}=K\)then 9:\(H_{0}\gets K\); 10:\(Found\leftarrow\) True; 11:else 12:\(K\leftarrow(H\cap K)K^{\prime}\); 13:if\(K=H_{0}\)then 14:return\(H=K\). 15:endif 16:endif 17:else 18: For each \(g\in\Gamma_{K}\) compute the smallest positive integer \(\alpha_{g}\) with \(g^{p^{\alpha_{g}}}\in K^{\prime}H_{0}\); 19: Compute \(L=\langle g^{p^{\alpha_{g}-1}}\mid g\in\Gamma_{K}\rangle\); 20:\(K\gets L\); 21:endif 22:endwhile 23:endwhile ``` **Algorithm 1** Reduction to HSMC If \(|G|=p^{n}\) then the outer loop is executed at most \(n\) times while within each round of the outer loop the inner loop has at most \(n\) rounds. Thus we need at most \(n^{2}\) calls to the HSMC procedure for factors of subgroups of \(G\) and further \(n^{2}\operatorname{poly}(\ell)\) group and other operations. Note that all the groups we need to apply the HSMC procedure are of class at most \(c\) because the family of nilpotent groups of class at most \(c\) is closed under taking subgroups and factor groups. 
## 4 The main conversion

Let \(L\) be a subgroup of the center of \(G\) isomorphic to \(\mathbb{Z}_{p}^{n}\) where \(p\) is a prime. Then \(L\) is a normal subgroup of \(G\). Our aim is to convert a copy of the subgroup state \(\Xi_{G,H}\) to a copy of \(\Xi_{G/L,HL/L}\). In the light of the second conversion ("pushing") described in Subsection 2.4, one could do it by converting first to a copy of \(\Xi_{G,HL}\). To this end, it would be desirable to have a procedure that converts the coset superposition \(|aH\rangle\) to \(|aHL\rangle\). A possible approach would be computing \(|L\rangle=\frac{1}{\sqrt{|L|}}\sum_{z\in L}|z\rangle\) in a new register, multiplying \(|aH\rangle\) with it to obtain \(\frac{1}{\sqrt{|HL|}}\sum_{z\in L}\sum_{x\in H}|azx\rangle|z\rangle\), and then trying to "disentangle" \(|z\rangle\) from \(|azx\rangle\). The quantum Fourier transform of \(L\) almost does this job: if we apply it to the second register, we obtain the state \[\frac{1}{\sqrt{|L|}}\sum_{y\in L}\frac{1}{\sqrt{|HL|}}\sum_{z\in L}\sum_{x\in H}\omega^{(y,z)}\lvert azx\rangle\lvert y\rangle,\] where \(\omega=e^{\frac{2\pi i}{p}}\) and by \((,)\) we denote the standard scalar product of \(L\) modulo \(p\). For \(y\in L\), let us denote by \(P_{y}\) the linear transformation of \(\mathbb{C}G\) mapping \(\lvert x\rangle\) to \(\frac{1}{\sqrt{|L|}}\sum_{z\in L}\omega^{(y,z)}\lvert xz\rangle\). With this notation, the state we have can be rewritten as \[\frac{1}{\sqrt{|L|}}\sum_{y\in L}\lvert P_{y}(aH)\rangle\lvert y\rangle.\] Using the assumption that \(L\) is in the center of \(G\), a direct calculation shows that for every \(x_{1},x_{2}\in G\), we have \[\lvert P_{y}(x_{1}x_{2})\rangle=\lvert x_{1}P_{y}(x_{2})\rangle=\lvert(P_{y}(x_ {1})x_{2})\rangle. \tag{1}\] It is also straightforward to see that for every \(x\in G\) and for every \(w\in L\), we have \[\lvert wP_{y}(x)\rangle=\lvert P_{y}(x)w\rangle=\omega^{-(y,w)}\lvert P_{y}(x)\rangle. \tag{2}\] We define the support of an element \(\lvert u\rangle\) of \(\mathbb{C}G\) as the set of elements appearing with nonzero coefficient in the decomposition of \(\lvert u\rangle\) as a linear combination of group elements. Using equality (1), one can show that if \(x_{1}\) and \(x_{2}\) are not in the same left coset of \(LH\) then the states \(\lvert P_{y}(aH)x_{1}^{-1}\rangle\) and \(\lvert P_{y}(aH)x_{2}^{-1}\rangle\) are orthogonal. This is because the support of \(\lvert P_{y}(aH)x_{i}^{-1}\rangle\) is contained in \(aLHx_{i}^{-1}=a(x_{i}LH)^{-1}\) (\(i=1,2\)), and these sets are disjoint when \(x_{1}LH\neq x_{2}LH\). On the other hand, if \(x_{1}H=x_{2}H\), then \(Hx_{1}^{-1}=Hx_{2}^{-1}\) and the two states are equal. By the characterization given in Lemma 1, it follows that for any left coset \(aH\), the state \(\frac{1}{\sqrt{|G|}}\sum_{x\in G}\lvert x\rangle\lvert P_{0}(aH)x^{-1}\rangle\) is a purification of \(\Xi_{G,LH}\). Of course it is hopeless to enforce \(y=0\) in \(\lvert P_{y}(aH)x_{1}^{-1}\rangle\). However, we can compute a state with essentially the same effect using several copies of the subgroup state and by applying an algorithm that finds zero sum subsequences of sufficiently long sequences of elements of \(L\). Assume that we have a procedure that, for some \(S=S(p,n)\), given an element \(\underline{y}=(y_{1},\ldots,y_{S})\in L^{S}\) computes a non-empty subset \(J(\underline{y})\) of \(\{1,\ldots,S\}\) such that \(\sum_{j\in J(\underline{y})}y_{j}=0\). 
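Before this zero-sum procedure is put to use in the argument that follows, relation (2) above can be sanity-checked numerically. The sketch below does so in the toy abelian group \(G=\mathbb{Z}_{p}\times\mathbb{Z}_{p}\) with central subgroup \(L=\{0\}\times\mathbb{Z}_{p}\); the group, parameters, and function names are our illustrative choices.

```python
import numpy as np

# Check of relation (2): |w P_y(x)> = omega^{-(y,w)} |P_y(x)> for w in L,
# in the toy group G = Z_p x Z_p (written additively) with L = {0} x Z_p.

p = 5
omega = np.exp(2j * np.pi / p)
dim = p * p
idx = lambda a, b: a * p + b           # basis index of the group element (a, b)

def P_y(y, x):
    """Vector |P_y(x)> = p^{-1/2} sum_{z in L} omega^{(y,z)} |x z>."""
    a, b = x
    v = np.zeros(dim, dtype=complex)
    for z in range(p):                 # L = {(0, z)}
        v[idx(a, (b + z) % p)] += omega ** ((y * z) % p)
    return v / np.sqrt(p)

def translate_by(w, v):
    """Multiplication by w = (0, w) in L, i.e. |g> -> |wg>."""
    out = np.zeros(dim, dtype=complex)
    for a in range(p):
        for b in range(p):
            out[idx(a, (b + w) % p)] += v[idx(a, b)]
    return out

x, y, w = (2, 3), 4, 2
lhs = translate_by(w, P_y(y, x))
rhs = omega ** (-(y * w)) * P_y(y, x)
assert np.allclose(lhs, rhs)
print("relation (2) verified: translation by w gives the phase omega^{-(y,w)}")
```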
Then, for a sequence \(\lvert a_{1}H\rangle\ldots\lvert a_{S}H\rangle\) we first compute \[\lvert L\rvert^{-S/2}\sum_{\underline{y}\in L^{S}}\lvert\underline{y}\rangle \lvert P_{y_{1}}(a_{1}H)\rangle\ldots\lvert P_{y_{S}}(a_{S}H)\rangle\] by applying the Fourier method outlined above component-wise. We next compute \(\frac{1}{\sqrt{|G|}}\sum_{x\in G}\lvert x\rangle\) in a fresh register and multiply by \(x^{-1}\) the \(j\)th component of \(\lvert P_{y_{1}}(a_{1}H)\rangle\ldots\lvert P_{y_{S}}(a_{S}H)\rangle\) if \(j\in J(\underline{y})\). Let \(\chi_{\underline{y}}:\{1,\ldots,S\}\to\{0,1\}\) denote the characteristic function of \(J(\underline{y})\). Then the state we obtained is \(\frac{1}{\sqrt{|G|}}\sum_{x\in G}\lvert x\rangle\lvert\psi(x)\rangle\) where \[|\psi(x)\rangle=|L|^{-S/2}\sum_{\underline{y}\in L^{S}}|\underline{y}\rangle|P_{y_{1}}(a_{1}H)x^{-\chi_{\underline{y}}(1)}\rangle\ldots|P_{y_{S}}(a_{S}H)x^{-\chi_{\underline{y}}(S)}\rangle.\] Consider the term of \(|\psi(x)\rangle\) corresponding to any \(\underline{y}\). As \(J(\underline{y})\) is non-empty, we have that for \(x_{1},x_{2}\) not in the same left coset of \(LH\), the corresponding terms of \(|\psi(x_{1})\rangle\) and \(|\psi(x_{2})\rangle\) are orthogonal. As \(|\underline{y}\rangle\) also appears in the corresponding term, we have that the states \(|\psi(x_{i})\rangle\) are also orthogonal. On the other hand, if \(x_{1},x_{2}\) are in the same left coset of \(H\) then these states are equal term by term. Finally, for \(x\in L\), by (2), the term for \(\underline{y}\) only gets a phase change by \(\prod_{j\in J(\underline{y})}\omega^{-(y_{j},x)}=\omega^{-\sum_{j\in J(\underline{y})}(y_{j},x)}=\omega^{0}=1\) by the choice of \(J(\underline{y})\). It follows that if \(x_{1}\) and \(x_{2}\) are in the same left coset of \(LH\), then \(|\psi(x_{1})\rangle=|\psi(x_{2})\rangle\). Thus our state is a purification of \(\Xi_{G,LH}\). As this holds for any fixed \(S\)-tuple of left cosets of \(H\), by linearity we also obtain a purification of \(\Xi_{G,LH}\) if we apply the procedure to copies of a purification of \(\Xi_{G,H}\). We have obtained the following. **Lemma 2**.: _Assume that we have an exact quantum procedure (e.g., a deterministic polynomial time algorithm) that, given any sequence \(y_{1},\ldots,y_{S}\) of \(S=S(p,n)\) elements of \(\mathbb{Z}_{p}^{n}\), in time \(T(p,n)\geq S(p,n)\) finds a non-empty subset \(J\) of \(\{1,\ldots,S\}\) such that \(\sum_{j\in J}y_{j}=0\). Then we have an exact quantum procedure, using \(n\) quantum Fourier transforms modulo \(p\), that converts \(S(p,n)\) copies of (a purification of) \(\Xi_{G,H}\) to a copy of (a purification of) \(\Xi_{G/L,HL/L}\), where \(L\) is a subgroup of the center of \(G\) isomorphic to \(\mathbb{Z}_{p}^{n}\), in time \(T(p,n)\operatorname{poly}(\log|G|)\)._ Using the lemma in iteration and applying the exact abelian hidden subgroup algorithm of [12], we can derive the following. **Proposition 2**.: _Let \(G\) be a semi-elementary black-box group with unique encoding of order \(p^{n}\). Assume that the quantum Fourier transform modulo \(p\) and its inverse can be implemented by an efficient exact algorithm and that, like in Lemma 2, we have an exact method to find zero sum subsequences of sequences of \(S(p,n)\) elements of \(\mathbb{Z}_{p}^{n}\) in time \(T(p,n)\geq S(p,n)\). 
Then the problem HSMC can be solved by an exact quantum algorithm that uses \(\operatorname{poly}(T(p,n)^{O(c)}\ell)\) elementary operations, \(\operatorname{poly}(T(p,n)^{O(c)}\log|G|)\) applications of the group oracle, calls to the oracle computing the purification of the subgroup state; and the inverses of these. (The elements of \(G\) are assumed to be uniquely encoded by strings of length \(\ell\).)_ Proof.: We compute the lower central series \(G=G_{0}>G_{1}>\ldots>G_{c}=\{1\}\) using the method presented in Subsection 2.2. As \(G/G^{\prime}\) is elementary abelian, so are the factors \(G_{i-1}/G_{i}\) (\(i=1,\ldots,c\)). This is because the factor groups \(G_{i-1}/G_{i}\) are homomorphic images of tensor powers (as \(\mathbb{Z}\)-modules) of the \(G/G^{\prime}\), see Theorem 5.2.5 of [10]. Also, isomorphisms of \(G_{i-1}/G_{i}\) with \(\mathbb{Z}_{p}^{n_{i}}\) can be efficiently computed using the method of [12]. Iteration of Lemma 2 gives a procedure to convert \(\prod_{i=2}^{c}S(n_{i})\) copies of a purification of \(\Xi_{G,H}\) to a copy of a purification of \(\Xi_{G/G^{\prime},HG^{\prime}/G^{\prime}}\). The composition of instances of the original subgroup state creating procedure (the calls to the oracle) with the conversion gives a procedure for creating a purification of \(\Xi_{G/G^{\prime},HG^{\prime}/G^{\prime}}\). We can use this as the oracle input for the exact hidden subgroup algorithm of [12] in \(\mathbb{Z}_{p}^{n_{1}}\). For \(i=1,\ldots,c\), we have \(S(p,n_{i})\leq S(p,n)\) and \(T(p,n_{i})\leq T(p,n)\) because \(\mathbb{Z}_{p}^{n_{i}}\) can be embedded in \(\mathbb{Z}_{p}^{n}\) as a subgroup. In the non-exact setting, essentially the same proof gives the following. **Proposition 3**.: _Let \(G\) be a semi-elementary black-box group with unique encoding of order \(p^{n}\). Assume that there exists a quantum (or a randomized) algorithm that finds zero sum subsequences of sequences of \(S(p,n)\) elements of \(\mathbb{Z}_{p^{n}}\) in time \(T(p,n)\geq S(p,n)\) with high probability. Then the problem HSMC can be solved by a quantum algorithm that uses \(\operatorname{poly}(T(p,n)^{O(c)}\ell)\) elementary operations, \(\operatorname{poly}(T(p,n)^{O(c)}\log|G|)\) applications of the group oracle and calls to the oracle computing the purification of the subgroup state._ ## 5 Zero sum subsequences in \(\mathbb{Z}_{p}^{n}\) In this section, we assume that our input is a sequence of vectors from \(\mathbb{Z}_{p}^{n}\). We also assume that \(p\) is an odd prime as for \(p=2\) a zero sum subsequence can be obtained from \(n+1\) vectors in the form of a zero linear combination. As subsequences can be represented as subsets of the index set, it will not be too misleading to use the term (sub)set for a (sub)sequence. Our strategy will be finding \(p\) pairwise disjoint subsets of input vectors having equal sums. We will achieve this goal by designing a method for finding a nontrivial pair of subsets having equal sum and then, like in [13], applying the algorithm recursively to obtain 4, 8, 16, etc. disjoint subsets with equal sum. Note that a pair of disjoint subsets with equal sum can be interpreted as a representation of the zero vector by a linear combination of the input vectors with nonzero coefficients \(1\) or \(-1\) only. Based on this, it will be convenient to use the term _signed subsets_ and signed subset sums. A signed subset of a set \(S\) of vectors is formally a function from \(S\) to the set \(\{0,1,-1\}\). 
The support of such a signed subset is the set of elements on which the function takes nonzero values. With some sloppiness, we use the term _signed subset sum_ to refer both to the signed subset and to the value of the signed sum. (Technically, a signed subset sum could be a data structure consisting of the description of the signed subset and the value.) We call two or more subset sums disjoint if their supports are pairwise disjoint. Based on the observation that a signed subset sum of vectors that are results of pairwise disjoint subset sums is again a signed subset sum of the original vectors, one can build signed subset sums hierarchically from smaller disjoint signed subset sums. The trivial subset sum corresponds to the empty set with the zero vector as value. A _linear relation_ (or just relation for short) among a collection of vectors is an array of coefficients such that the corresponding linear combination is the zero vector. It is often useful to omit the vectors to which coefficient zero are assigned. By taking the signed subsets of the vectors having the same or the opposite coefficient in a linear relation, we obtain a linear relation among pairwise disjoint signed subset sums in which the coefficients are form \(\{1,\ldots,\frac{p-1}{2}\}\) and each coefficient appears at most once. We call such a relation of signed subset sums _standard_. We shall build standard linear relations among signed subset sums with smaller and smaller coefficients (among increasingly larger subset sums). The key idea is constructing first \(\frac{(p-1)^{2}}{4}\) pairwise signed subset sums arranged in a square matrix having a relation in each row as well as in each column and subtracting the sum of higher half of "horizontal" relations from the sum of the higher half of the "vertical" relations to obtain a relation with coefficients between \(1\) and \(\frac{p-1}{4}\), and iterating the construction. We give the details in the following lemma and its proof. (We present a version that even saves up maintaining the first half of vertical relations.) **Lemma 3**.: _Let \(d\) be a positive integer. Assume that there is a deterministic procedure \(\mathcal{A}\) that, given \(h(d,n)\) vectors from \(\mathbb{Z}_{p}^{n}\), in time \(\operatorname{poly}(h(d,n)\log p)\) finds \(d\) pairwise disjoint signed subset sums \(v_{1},\ldots,v_{d}\) of the input vectors, not all empty, such that \(\sum_{i=1}^{d}iv_{i}=0\). Then there also exists a deterministic procedure that, given \(h(d,n)h(d,\lceil d/2\rceil n)\) vectors, in time \(\operatorname{poly}(h(d,n)h(d,\lceil d/2\rceil n)\log p)\) finds pairwise disjoint signed subset sums \(w^{\prime}_{1},\ldots,w^{\prime}_{\lfloor d/2\rfloor}\), not all empty, such that \(\sum_{i=1}^{\lfloor d/2\rfloor}iw^{\prime}_{i}=0\)._ Proof.: We divide the input set into \(h(d,\lceil d/2\rceil n)\) pairwise disjoint parts of size \(h(d,n)\). We apply procedure \(\mathcal{A}\) within each part. This way for each \(k=1,\ldots,h(d,\lceil d/2\rceil n)\), we get \(d\) pairwise disjoint subset sums \(u_{k1},\ldots,u_{kd}\), not all empty, such that \(\sum_{j=1}^{d}ju_{kj}=0\). For each \(k\) we consider the concatenation \(u_{k}\) of the vectors \(u_{kj}\) (\(j=\lfloor d/2\rfloor+1,\ldots,d\)). These are vectors of dimension \(\lceil d/2\rceil n\). 
We apply procedure \(\mathcal{A}\) to find pairwise disjoint signed subsets \(M_{1},\ldots,M_{d}\) such that \(\sum_{i=1}^{d}iu^{\prime}_{i}=0\), where \(u^{\prime}_{i}\) is the signed sum of the \(u_{k}\)s corresponding to the signed subset \(M_{i}\). Now for each \(1\leq i\leq d\), \(u^{\prime}_{i}\) is the concatenation of vectors \(w_{ij}\) (\(j=\lfloor d/2\rfloor+1,\ldots,d\)). Here, for \(1\leq i,j\leq d\), \(w_{ij}\) stands for the signed subset sum obtained by joining the signed subset sums \(u_{kj}\) according to the signed subset \(M_{i}\). The signed subset sums \(w_{ij}\) are pairwise disjoint, not all of them are empty and they satisfy the relations \[\sum_{i=1}^{d}iw_{ij}=0\ \left(j=\lfloor d/2\rfloor+1,\ldots,d\right)\] and \[\sum_{j=1}^{d}jw_{ij}=0\ \left(i=1,\ldots,d\right).\] We subtract the sum of the last \(\lceil d/2\rceil\) ("horizontal") relations of the second kind from the sum the \(\lceil d/2\rceil\) ("vertical") relations of the first kind and obtain the relation \[\sum_{i=1}^{\lfloor d/2\rfloor}\sum_{j=\lfloor d/2\rfloor+1}^{d}iw_{ij}-\sum_{i= \lfloor d/2\rfloor+1}^{d}\sum_{j=1}^{\lfloor d/2\rfloor}jw_{ij}+\sum_{i,j= \lfloor d/2\rfloor+1}^{d}(i-j)w_{ij}=0.\] Notice that for \(\lfloor d/2\rfloor+1\leq i,j\leq d\), we have \(\lfloor i-j\rfloor\leq\lfloor d/2\rfloor\). Therefore, by flipping signs where appropriate and then joining the signed subsets with equal coefficients, we obtain pairwise disjoint subset sums \(w_{1}^{\prime},\ldots,w_{\lfloor d/2\rfloor}^{\prime}\) with \(\sum_{i=1}^{\lfloor d/2\rfloor}iw_{i}^{\prime}=0\). The subset sums \(w_{i}^{\prime}\) can all be empty only if each \(w_{ij}\) is empty when \(i\neq j\) and at least one of \(i\) and \(j\) is greater than \(\lfloor d/2\rfloor\). Assume that this is the case. Then, if there is an index \(i>\lfloor d/2\rfloor\) such that \(w_{ii}\) is non-empty then \(w_{ii}\) must be itself a nontrivial zero subset sum and gives a one-term solution. Otherwise not all \(w_{ij}\) are empty for \(i,j\leq\lfloor d/2\rfloor\) and \(\sum_{i,j=1}^{\lfloor d/2\rfloor}iw_{ij}=0\). Iterated application of the method of Lemma 3 gives the following result. **Proposition 4**.: _Given \(S_{\pm}(p,n)=p^{O(p\log p)}n^{O(p)}\) vectors from \(\mathbb{Z}_{p}^{n}\), a nontrivial signed subset sum representing the zero vector can be found in deterministic time \(\operatorname{poly}(S_{\pm}(p,n))\)._ Proof.: Put \(d_{0}=\frac{p-1}{2}\), \(h_{0}(n)=n+1\) and define \(d_{i}=\lfloor d_{i-1}/2\rfloor\) and \(h_{i}(n)=h_{i-1}(n)h_{i-1}(\lceil d_{i-1}/2\rceil n)\) recursively for \(i=1,\ldots,\lfloor\log d_{0}\rfloor\). As among any \(h_{0}(n)=n+1\) vectors from \(\mathbb{Z}_{p}^{n}\) a nontrivial linear relation can be found in time \(\operatorname{poly}(n\log p)\), recursive applications of Lemma 3 gives that among \(h_{\lfloor\log d_{0}\rfloor}(n)\) vectors a single nontrivial signed subset sum (that is, a linear relation with nonzero coefficients \(\pm 1\) only) can be found in time \(\operatorname{poly}(h_{\lfloor\log d_{0}\rfloor}(n)\log p)\). We show by induction that \[h_{i}(n)\leq\left(\prod_{j=0}^{i-1}\lceil d_{j}/2\rceil\right)^{2^{i-1}}(n+1 )^{2^{i}}. \tag{3}\] For \(i=0\), both sides are equal to \(n+1\). Assume that the inequality holds for \(0\leq i<\lfloor\log d_{0}\rfloor\). 
Then we also have \[h_{i}(\lceil d_{i}/2\rceil n)\leq\left(\prod_{j=0}^{i-1}\lceil d_{j}/2\rceil \right)^{2^{i-1}}(\lceil d_{i}/2\rceil n+1)^{2^{i}}.\] Using \(\lceil d_{i}/2\rceil n+1\leq\lceil d_{i}/2\rceil(n+1)\), we obtain \[h_{i}(\lceil d_{i}/2\rceil n)\leq\lceil d_{i}/2\rceil^{2^{i-1}}\left(\prod_{j =0}^{i}\lceil d_{j}/2\rceil\right)^{2^{i-1}}(n+1)^{2^{i}}. \tag{4}\] Multiplying inequalities (3) and (4) and using \(h_{i+1}(n)=h_{i}(n)h_{i}(\lceil d_{i}/2\rceil n)\), we obtain \[h_{i+1}(n)\leq\left(\prod_{j=0}^{i}\lceil d_{j}/2\rceil\right)^{2^{i}}(n+1)^ {2^{i+1}},\] which is inequality (3) for \(i+1\) in place of \(i\). Using \(d_{j}\leq d_{0}/2^{j}\leq d_{0}=\frac{p-1}{2}\), inequalities (3) for \(i=\lfloor\log d_{0}\rfloor\) gives \[h_{\lfloor\log d_{0}\rfloor} \leq\left(\prod_{j=0}^{\lceil\log\frac{p-1}{2}\rceil-1}\left\lceil \frac{p-1}{4}\right\rceil\right)^{2^{\left\lceil\log\frac{p-1}{2}\right\rceil-1}} (n+1)^{2^{\left\lceil\log\frac{p-1}{2}\right\rceil}}\] \[=\left\lceil\frac{p-1}{4}\right\rceil^{\left\lceil\frac{p-1}{4} \right\rceil\left\lceil\log\frac{p-1}{2}\right\rceil}(n+1)^{\left\lceil\frac{p -1}{2}\right\rceil}.\] Therefore, we have \[h_{\lfloor\log d_{0}\rfloor}(n)=p^{O(p\log p)}n^{O(p)}.\] We interpret a non-empty zero sum signed subset as a non-trivial collision between two disjoint subset sums. (Non-trivial means that at most one of the subsets can be empty.) We use the short term _collision_ for such a pair. We have the following. **Proposition 5**.: _Suppose that there is an algorithm \(\mathcal{B}\) that, given a set of vectors from \(\mathbb{Z}_{p}^{n}\) of size \(S_{\pm}(p,n)\) finds a collision. Then there is a deterministic procedure that, given \(S_{\pm}(p,n)^{\left\lceil\log p\right\rceil}\) vectors, finds a nontrivial zero sum subset using less than \(S_{\pm}^{\left\lceil\log p\right\rceil}\) applications of algorithm \(\mathcal{B}\) and \(\operatorname{poly}((S_{\pm}(p,n))^{\left\lceil\log p\right\rceil})\) other operations._ Proof.: Put \(S=S_{\pm}(p,n)\) and \(\ell=\left\lceil\log p\right\rceil\). We start with finding a collision \((H_{1}^{+},H_{1}^{-})\) among the first \(S\) vectors with common sum \(w_{1}\) using algorithm \(\mathcal{B}\). We continue with the next \(S\) input vectors and find a collision \((H_{2}^{+},H_{2}^{-})\) with sum \(w_{2}\), and so on. We then take the first \(S\) subset sums \(w_{1},\ldots,w_{S}\) and find a pair of disjoint subsets \((K^{+},K^{-})\) of \(\{1,\ldots,S\}\), not both empty, such that \(\sum_{i\in K^{+}}w_{i}=\sum_{i\in K^{-}}w_{i}=w\). The four subsets \(L^{++}=\bigcup_{i\in K^{+}}H_{i}^{+}\), \(L^{+-}=\bigcup_{i\in K^{+}}H_{i}^{-}\), \(L^{-+}=\bigcup_{i\in K^{-}}H_{i}^{+}\), and \(L^{--}=\bigcup_{i\in K^{-}}H_{i}^{-}\) of input vectors are pairwise disjoint, not all empty and have common sum \(w\). Iterating this we end up with at least \(p\) pairwise disjoint subsets (not all empty) with equal sum. If one of these sets is empty then the common sum is zero and we can take any of the non-empty subsets. Otherwise the union of the first \(p\) of the subsets has zero sum. The total number of applications of the collision finding algorithm \(\mathcal{B}\) is \(S^{\ell-1}+\ldots+S+1<S^{\ell}\). Propositions 4 and 5, together with the remark on the case \(p=2\) immediately give the following. 
**Theorem 2**.: _There is a deterministic algorithm that, given a sequence of \(S(p,n)=p^{O(p\log^{2}p)}n^{O(p\log p)}\) vectors from \(\mathbb{Z}_{p}^{n}\), finds a non-trivial zero sum subsequence in time \(\operatorname{poly}(S(p,n))\)._ We remark that in [17], an algorithm for a more general task is given. This task is finding a nontrivial representation of the zero vector as a linear combination of the input vectors with \(d\)th power coefficients. This includes our problem as the special case \(d=p-1\). The algorithm of [17] for \(d=p-1\) would give \(S(p,n)=p^{O(p^{2}\log p)}n^{O(p\log p)}\), a parameter somewhat worse than that we have in Theorem 2, though would be still polynomial in \(n\) for \(p=O(1)\). The method of [17] for finding a collision is more complicated than the present one: it is based on collecting relations organized in a \(d\)-dimensional hypercube rather than a square. (The method of doubling collisions is essentially identical with that described here in Proposition 5.) ## 6 Concluding remarks We have shown that the hidden subgroup problem in a nilpotent group \(G\) of class bounded by a constant can be solved in polynomial time by an exact quantum algorithm provided that there is a polynomial time (that is, time \(\operatorname{poly}(n\log p)\)) exact method that finds zero sum subsequences in sequences consisting of polynomially many elements of \(\mathbb{Z}_{p}^{n}\) for prime divisors \(p\) of \(|G|\). We have such a method for \(p=O(1)\). By Olson's theorem [10], the shortest length for which a not necessarily polynomial time zero sum subsequence finding algorithm exists is around \(np\). We propose the question of existence of a \(\operatorname{poly}(np)\)-time algorithm for finding zero sum subsequences from sequences of length \((np)^{d}\) for a sufficiently large constant \(d\) as a problem for further research. A positive answer would imply existence of an exact polynomial time quantum algorithm for the case when \(|G|\) is smooth, that is, the prime factors of \(|G|\) are of size bounded by a polynomial in \(\log|G|\). Even a non-exact method (e.g., a randomized algorithm) would be of great interest as, by Proposition 3, it would give a new result in the non-exact setting: existence of an efficient "probabilistic" quantum hidden subgroup algorithm for nilpotent groups of smooth order having \(O(1)\)-bounded nilpotency class. Even somewhat worse results would potentially lead to quantum hidden subgroup algorithms faster than the known ones. For the purposes of "probabilistic" quantum hidden subgroup algorithms even a method that finds a zero sum subsequence "on average", that is for at least a \(1/\operatorname{poly}(np)\) proportion of the possible sequences would be sufficient. However, as the following simple worst-case to average-case reduction shows, at least in the randomized setting, the gain cannot be better than polynomial. Assume that the classical randomized algorithm \(\mathcal{A}\) finds in time \(T=T(p,n)\) with probability at least \(\delta\) a subsequence of a random sequence of length \(S=S(p,n)\) of vectors from \(\mathbb{Z}_{p}^{n}\). Here, probability is taken for the uniform distribution of the array of the vectors together with the random bits of \(\mathcal{A}\). Then we can do the following. We start with an arbitrary sequence of \(\frac{1}{\delta}\cdot S^{2}\) input vectors, we draw \(\frac{1}{\delta}\cdot S^{2}\) uniformly random vectors, one for each input vector. 
Then we divide the input sequence into groups of length \(S\) and to each input vector we add the corresponding random vector. Within each group, we apply procedure \(\mathcal{A}\). As the sums are random vectors, in each group, procedure \(\mathcal{A}\) succeeds with probability at least \(\delta\) and, with probability at least \(\frac{1}{2}\), \(\mathcal{A}\) will succeed in at least \(S\) groups. If this is the case then we choose \(S\) "lucky" groups, in each group take the sum of the random vectors corresponding to the members of the zero sum subsequences. We apply algorithm \(\mathcal{A}\) for these \(S\) sums. It finds a nontrivial zero sum subsequence with probability at least \(\delta\). Finally, we take the union of the corresponding subsequences. This way we obtain a procedure that finds a nontrivial zero sum subsequence of _every_ sequence of length \(\frac{1}{\delta}\cdot S^{2}\) in time \(\operatorname{poly}(T+\frac{1}{\delta}\cdot ST)\) with probability at least \(\delta/2\).
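As a concrete footnote to Section 5: the recursion of Lemma 3 bottoms out at the fact, used in the proof of Proposition 4, that among any \(n+1\) vectors of \(\mathbb{Z}_{p}^{n}\) a nontrivial linear relation can be found in time \(\operatorname{poly}(n\log p)\). A minimal sketch of this base case via Gaussian elimination modulo \(p\) is given below; the function names are ours and the implementation is only meant to illustrate the primitive.

```python
import random

# Base case of Proposition 4: among any n+1 vectors of Z_p^n, find coefficients
# (c_1,...,c_m), not all zero, with sum c_i v_i = 0 (mod p), by row reduction.

def nontrivial_relation(vectors, p):
    n, m = len(vectors[0]), len(vectors)
    A = [list(v) for v in vectors]                             # rows are the vectors
    T = [[int(i == j) for j in range(m)] for i in range(m)]    # records the row operations
    row = 0
    for col in range(n):
        piv = next((r for r in range(row, m) if A[r][col] % p), None)
        if piv is None:
            continue
        A[row], A[piv] = A[piv], A[row]
        T[row], T[piv] = T[piv], T[row]
        inv = pow(A[row][col], -1, p)
        A[row] = [a * inv % p for a in A[row]]
        T[row] = [t * inv % p for t in T[row]]
        for r in range(m):
            if r != row and A[r][col] % p:
                f = A[r][col]
                A[r] = [(a - f * b) % p for a, b in zip(A[r], A[row])]
                T[r] = [(t - f * s) % p for t, s in zip(T[r], T[row])]
        row += 1
    # A zero row of A corresponds to a relation; with m = n+1 rows one must exist.
    for r in range(m):
        if all(a % p == 0 for a in A[r]) and any(T[r]):
            return T[r]
    return None

p, n = 7, 4
vecs = [[random.randrange(p) for _ in range(n)] for _ in range(n + 1)]
c = nontrivial_relation(vecs, p)
assert c and all(sum(ci * v[j] for ci, v in zip(c, vecs)) % p == 0 for j in range(n))
print("relation coefficients:", c)
```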
2307.01164
High-performance ultrafast pulse compression in the visible spectral range for extreme nonlinear optics at kHz-MHz repetition rates
We demonstrate a remarkably effective single-stage compression technique for ultrafast pulses in the visible electromagnetic spectrum using second-harmonic pulses at 515 nm derived from a 1030 nm Yb-based femtosecond regenerative amplifier. By employing an advanced multi-plate scheme, we achieve more than fourfold compression from 180 fs to 40 fs with an extremely high spectral broadening efficiency of over 95%, and a temporal compression efficiency exceeding 75%. In addition, our method leverages a low nonlinearity medium to attain the shortest pulse durations for a single compressor while maintaining a superb spatial beam quality with 97% of the energy confined in the main lobe of the Airy disk. Moreover, our technique enhances the temporal pulse quality at 515 nm without generating substantial femtosecond-to-picosecond pulse pedestals. The resulting intense visible laser pulses with excellent spatio-temporal parameters and high repetition rates of 100 kHz to 1 MHz open up new frontiers for extreme nonlinear optics and ultrabright EUV and X-ray high-harmonic generation using short VIS wavelengths.
Siyang Wang, Jieyu Yan, Sirius Song, Alexander Atanassov, Zhihan Wu, Will Brunner, Dimitar Popmintchev, Tenio Popmintchev
2023-07-03T17:16:47Z
http://arxiv.org/abs/2307.01164v1
High-performance ultrafast pulse compression in the visible spectral range for extreme nonlinear optics at kHz - MHz repetition rates ###### Abstract We demonstrate a remarkably effective single-stage compression technique for ultrafast pulses in the visible electromagnetic spectrum using second-harmonic pulses at \(515~{}nm\)derived from a \(1030~{}nm\) Yb-based femtosecond regenerative amplifier. By employing an advanced multi-plate scheme, we achieve more than fourfold compression from \(180~{}fs\) to \(40~{}fs\) with an extremely high spectral broadening efficiency of over \(95\%\), and a temporal compression efficiency exceeding \(75\%\). In addition, our method leverages a low nonlinearity medium to attain the shortest pulse durations for a single compressor while maintaining a superb spatial beam quality with \(97\%\) of the energy confined in the main lobe of the Arie disk. Moreover, our technique enhances the temporal pulse quality at \(515~{}nm\) without generating substantial femtosecond-to-picosecond pulse pedestals. The resulting intense visible laser pulses with excellent spatio-temporal parameters and high repetition rate of \(100~{}kHz\) to \(1~{}MHz\) open up new frontiers for extreme nonlinear optics and ultrabright EUV and X-ray high-harmonic generation using short VIS wavelength. 1University of California, San Diego, La Jolla, CA 92093, USA 2Photonics Institute, TU Wien, Vienna A-1040, Austria [email protected] ## 1 Introduction Fully spatially and temporally coherent light with ultrashort wavelengths and pulse durations is essential for studies of ultrafast dynamics in atomic and molecular systems, advanced nanomaterials, plasmas, and bio systems [1-12]. One of the most promising techniques for producing such light is the process of high-order harmonic generation (HHG), which involves the upconversion of UV-VIS-IR pulses from femtosecond lasers or optical parametric amplifiers (OPAs) to extreme ultraviolet (EUV) or soft X-ray frequencies [1-4, 6, 8, 9, 13-16]. While in general, high harmonic generation typically requires high peak-power pulses with optimal \(3-10\) laser cycles to reach high cutoff photon energy and record conversion efficiency greater than \(10^{-3}\) - \(10^{-7}\) using UV - to - mid-IR drivers [8, 9, 15, 16], simple post-compression techniques to reduce the laser pulses durations further, at shorter UV - VIS driving laser wavelengths, are in strong demand. Several methods have been shown to compress femtosecond pulses at near-infrared wavelengths to few-cycle durations, however, a practical scheme for spectral broadening and compression of UV - VIS pulses at high peak and high average power has not been reported yet [17, 18]. Moreover, most laser systems typically used for HHG generation operate at low repetition rates of up to a few kHz. At the same time, many EUV - X-ray imaging and spectroscopic applications would benefit from a high repetition rate high-flux tabletop source. In this paper, we present a technique for generating high-power UV-VIS pulses with several cycle pulse duration optimal for EUV - soft X-ray applications with high photon energy and efficiency. Specifically, we demonstrate that a kHz-to-MHz Yb:KGW sub-picosecond amplifier at \(1030~{}\mathrm{nm}\) can be used for highly efficient high harmonic generation in the EUV region by spectrally broadening and compressing its second harmonic at \(515~{}\mathrm{nm}\). 
The HHG process in gases driven by short-wavelength VIS lasers combines several advantages - very high single-atom efficiency due to low quantum diffusion of the rescattering electron, enhanced phase and group-delay matching due to high linear and nonlinear indices of refraction of atoms and ions, ultra-narrow linewidths of the harmonics and additional boost of the macroscopic efficiency due to broader temporal phase-matching window of 10s of laser cycles [9]. Moreover, the excellent spatial coherence and the extended soft X-ray cutoff with intrinsically-compressed near-transform limited attosecond pulses make this technique very attractive for high-resolution dynamic imaging and angle-resolved photoemission spectroscopies [7]. Finally, using enhanced UV-VIS laser beam parameters optimized for high harmonic generation in gas-filled capillaries could benefit from self-confinement of the driving pulses in both space and time. Here, we demonstrate a near spatio-temporal solitary propagation mode in periodic thin-plate media at VIS laser wavelengths and achieve spectral broadening via self-phase modulation (SPM) while maintaining an excellent spatial profile of nearly-identical intensity and similar pulse durations at each plate. This eliminates the substantial conical emission loss and enhances the efficiency of the pulse broadening geometry to above 93%, resulting in compression from \(180\,fs\) to \(40\,fs\) FWHM at \(515\,nm\), or 23 laser cycles, with \(42\)\(\upmu\)] post-compression pulse energy when using a prism-pair compressor. This pulse duration is optimal for efficient, fully phase-matched, high-order harmonic generation using shorter-wavelength UV - VIS drivers due to the extended phase matching window in this regime. The pulse compression in a prism pair is designed to minimize the higher order dispersion of the spectrally broadened pulses using an analytical Lah-Laguerre optimization method and can be further improved to optimize both the transmission and compression of the pulses by using chirped mirrors with custom designed \(2^{nd}\) - \(4^{th}\) dispersion orders based on advanced dispersion calculations. ## 2 Spectral Broadening in Low Nonlinearity Multi-Plate Geometry Self-phase modulation, arises from the laser-induced third-order Kerr nonlinearity. It modifies the spectral content of a pulse by changing the refractive index of the material \(n(t)\cong n_{0}+n_{2}I(t)\) based on the intensity of the laser \(I(t)\), and the nonlinear refractive index \(n_{2}\). Here, we choose Calcium Fluoride (\(CaF_{2}\)) as a nonlinear medium because of its large bandgap and relatively low linear and nonlinear refractive indices [19]. The nonlinear accumulated phase, often referred to as B-integral, can be evaluated as: \[B(t)=\frac{\omega_{0}}{c}\int_{0}^{\ell}n_{2}\,I(z,t)dz\approx\frac{2\pi}{ \lambda}\ell n_{2}I(t) \tag{1}\] , where \(\omega_{0}\) is the central pulse frequency, \(c\) is the speed of light, and \(\ell\) is the thickness of the media. This nonlinear phase results in a frequency shift in the laser spectrum: \(\Delta\omega(t)=-\frac{\partial}{\partial t}B(t)\). For a Gaussian pulse, this results to a maximum spectral broadening \(\Delta\lambda\) due to self-phase modulation of approximately \(\Delta\lambda=\Delta\lambda_{0}\sqrt{1+(0.88B)^{2}}\)[20]. 
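A quick numerical reading of this broadening estimate is sketched below, treating the per-plate nonlinear phases as simply additive in the way the design discussion later in the text does. The scanned B values (about 1.43 rad per plate) are illustrative; since the time-bandwidth product of a transform-limited pulse is fixed, the transform limit shortens by the same factor as the bandwidth grows.

```python
# Rule-of-thumb SPM broadening, assuming the nonlinear phase simply accumulates
# plate by plate (about 1.43 rad per plate, as in the design estimate below).

tau0 = 180.0                                   # input FWHM duration in fs at 515 nm
for plates in (1, 2, 3, 4):
    B = 1.43 * plates                          # cumulative nonlinear phase in rad
    factor = (1.0 + (0.88 * B) ** 2) ** 0.5    # spectral broadening factor
    print(f"{plates} plate(s): B = {B:4.2f} rad, bandwidth x{factor:4.2f}, "
          f"transform limit ~ {tau0 / factor:5.1f} fs")
```

For four plates this gives a transform limit of roughly 35 fs, consistent with the minimum-plate-number design estimate quoted later in this section.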
Correspondingly, as a rule of thumb, a compression to a shorter pulse duration \(\tau\) on the order of \(\tau=\tau_{0}/\sqrt{1+(0.88B)^{2}}\) can be easily achieved with a second order phase compensation of \(GDD\cong 0.88\tau_{0}^{2}B/4\ln(2)\). The intensity slope of the laser field predetermines the maximum frequency change. In general, tightly focused beams with intensities lower than the damage critical intensity of the media can produce considerable spectral broadening with some high order dispersion being hard to be compensated. However, using spatial-temporal solitons at moderate intensities is more beneficial for self-phase modulation and pulse compression. Although there is an accumulation of nonlinear and linear phases due to the properties of the media, an external compressor can yield pulses close to the Fourier transform limit with a small amount of unbalanced higher dispersion orders. Nevertheless, the spectral broadening process can also cause temporal splitting of the laser pulse, or the media can significantly reshape the temporal and spatial profile of the laser, making it difficult to efficiently compress using standard chirp compensation techniques. In our design, we use thin plates of Calcium Fluoride cut in the [111] direction to minimize temporal splitting and substantial spatial deformation. For a broad range of laser wavelengths and relatively high dispersion orders, CaF\({}_{2}\) provides smaller dispersion compared to most commonly used materials, such as sapphire, fused silica, etc. Group delay dispersion (GDD) and third-order dispersion (TOD) are the most significant pulse-reshaping factors in this spectral range, and can be easily evaluated using [21-23]: \[POD(n)=\frac{\partial^{p}}{\partial\omega^{p}}\ k(\omega)=(-1)^{p}\frac{1}{c} \left(\frac{\lambda}{2\pi c}\right)^{p-1}\sum_{m=0}^{p}\mathcal{B}(p,m)\ \lambda^{m}\frac{\partial^{m}}{\partial\lambda^{m}}n(\lambda) \tag{2}\] The GDD and TOD at central wavelength 515nm are relatively small: \(48.619\frac{fs^{2}}{mm}\) and \(16.744\ \frac{fs^{3}}{mm}\), respectively. A single thin plate can provide a significant spectral broadening by tight focusing, however, accumulating a substantial nonlinear phase usually leads to a conical emission with an Arie ring pattern causing extensive energy losses. In addition, any formation of a single or multiple filaments inside the solid material leads to severe beam distortions or beam splitting. Alternatively, a larger beam waist can reduce the fast nonlinear phase accumulation per plate with a weaker self-refocusing. Thus, a considerable spectral broadening should preferably be realized using a soliton-like propagation in a set of thin plates. In our experiments, a 1030 nm \(Yb\):\(KGW\) laser amplifier produces \(250\ fs\) laser pulses with 8 W average power and a tunable repetition rate of 100 \(kHz\) - 1 \(MHz\). These pulses are focused into a second-harmonic Type I BBO crystal with a high conversion efficiency of \(\sim\)70% (see Fig. 1). 
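The quoted CaF\({}_{2}\) dispersion values can be cross-checked with a short script. The sketch below differentiates \(k(\omega)=n(\omega)\,\omega/c\) numerically instead of evaluating the analytic Lah-Laguerre expansion of Eq. (2), and it assumes standard literature Sellmeier coefficients for CaF\({}_{2}\), which are not given in the text.

```python
import numpy as np

# Cross-check of the CaF2 dispersion at 515 nm by numerical differentiation of
# k(omega) = n(omega) * omega / c.  Sellmeier coefficients are assumed literature values.

c = 299792458.0                                    # speed of light, m/s

def n_CaF2(lam_um):
    B = (0.5675888, 0.4710914, 3.8484723)
    C = (0.050263605, 0.1003909, 34.649040)        # resonance wavelengths, um
    lam2 = lam_um ** 2
    return np.sqrt(1.0 + sum(b * lam2 / (lam2 - cc ** 2) for b, cc in zip(B, C)))

def k_of_omega(omega):                             # propagation constant, rad/m
    lam_um = 2.0 * np.pi * c / omega * 1e6
    return n_CaF2(lam_um) * omega / c

w0 = 2.0 * np.pi * c / 515e-9
dw = 1e13                                          # rad/s step for central differences
k = lambda m: k_of_omega(w0 + m * dw)

gvd = (k(1) - 2 * k(0) + k(-1)) / dw**2                     # d^2k/dw^2, s^2/m
tod = (k(2) - 2 * k(1) + 2 * k(-1) - k(-2)) / (2 * dw**3)   # d^3k/dw^3, s^3/m

print(f"GDD = {gvd * 1e27:.1f} fs^2/mm")   # compare with the ~48.6 fs^2/mm quoted above
print(f"TOD = {tod * 1e42:.1f} fs^3/mm")   # compare with the ~16.7 fs^3/mm quoted above
```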
The \(515\ nm\) beam, with \(56\ \mu J\) energy per pulse at 100 \(kHz\) and a shorter \(180\ fs\) pulse duration due to the nonlinear intensity dependence of the upconversion, is then focused by a long focal length lens of \(F=1\ m\), ensuring a long Rayleigh range of interaction of more than 18 \(cm\) [24].

Figure 1: **Schematic of high performance, single-stage, spectral broadening, and compression of ultrafast VIS 515 nm laser pulses using thin plates with low nonlinearity.** High power \(8\ W\), \(250\ fs\), near-IR laser pulses from a KGW regenerative laser amplifier are upconverted with 70% efficiency to second-harmonic pulses of \(5.6\ W\), \(180\ fs\) at \(515\ nm\) in a BBO crystal. The VIS pulses are spectrally broadened in an array of thin CaF\({}_{2}\) plates with low nonlinearity in alternating Brewster angle geometry, with a high 93% efficiency. Then the pulses are compressed using a prism-pair compressor to sub-40 \(fs\) with an excellent Gaussian beam profile with 78% efficiency. The overall efficiency is 75%. The inset shows the generated input SHG beam and the output SHG beam after the prism-pair compressor.

While our 1030 nm amplifier has a negligible femtosecond-to-picosecond pedestal in the time domain, the second harmonic generation process cleans any weak temporal intensity structure due to the nonlinear dependence on the intensity. Eight \(CaF_{2}\) thin plates of 1 \(mm\) thickness are used as a spectral broadening Kerr medium, equally spaced by \(L_{s}~{}=~{}50~{}mm\), with the first plate placed right at the flat wavefront of the beam focus. The plates are aligned at alternating Brewster angles to minimize reflection loss and to compensate for any wavefront distortion. Additionally, the precise [111] \(CaF_{2}\) crystal cut eliminates any polarization degradation and energy loss, since \(CaF_{2}\) is known to possess wavelength-dependent depolarization in white-light generation for other orientations. One fascinating occurrence in self-confinement pulse propagation is the refocusing cycle of the laser. During laser self-confinement or filamentation, the beam rapidly shrinks down and expands, leading to a continuous series of divergence and refocusing that can persist over long distances. The formation of self-confinement generates plasma that transforms a narrowband laser pulse into a broadband pulse. An intriguing feature of this plasma is its ability to restrict the density of electrons, thereby averting optical breakdown. In a medium with Kerr nonlinearity, the focal length of self-focusing, to a second order approximation, scales with the square of the initial beam waist \(w_{0}\), and is inversely proportional to the nonlinear refractive index of the medium \(n_{2}\), its length \(\ell\), and the laser peak intensity \(I\): \(f_{c}\approx\frac{w_{0}^{2}}{4n_{2}I\ell}=\frac{z_{R}}{2B}\). In experiments where tight focusing is needed, it can be beneficial to set the distance between the plates to approximately \(f_{c}\) to \(2f_{c}\), where \(f_{c}\) could vary on each plate, and to fine tune the laser intensity. The nonlinear accumulated phase in Eq. 
(1) sets a restriction on the intensity-length product for a desired nonlinear phase accumulation of \(I\ell\left[\frac{W}{cm^{2}}\,mm\right]\cong 1.6\cdot 10^{-7}\,\frac{B[rad]\cdot\lambda[nm]}{n_{2}[cm^{2}/W]}\), while the damage threshold for \(CaF_{2}\) of near \(P\sim 1~GPa\) sets a limit to the intensity of approximately \(I_{dam}=Pc\cong 29.9~\frac{TW}{cm^{2}}\), where the speed of light is \(c\cong 0.299~m/ns\). In our SPM design, the intensity on the first plate is \(I\cong 601.2\,\frac{GW}{cm^{2}}\) (peak power \(P_{p}\cong 292.3~MW\)) with a nonlinear phase \(B=1.43~rad\), and a Kerr lens focal length \(f_{c}\cong 65.9~mm\). The critical power for self-focusing of \(CaF_{2}\) at the laser wavelength is \(P_{cr}=\frac{\pi\,(0.61)^{2}\lambda^{2}}{8n_{0}n_{2}}\cong 1.3~MW\). In our setup, we require a plate separation \(L_{s}\) much smaller than the Rayleigh range to ensure a better control of the propagation in a near plane-wave geometry: \(z_{R}=\frac{\pi w_{0}^{2}}{\lambda}>L_{s}\). Our theoretical design based on constraints of the nonlinear phase per plate suggests compression to a transform-limited pulse of \(35\,fs\) when using a minimum of four thin plates, assuming a dispersion compensation of the second order only, with an expected total value of \(GDD\cong 2300~fs^{2}\).

## 3 High-Performance VIS Laser Pulse Compression with Enhanced Spatial and Temporal Quality

A large number of nonlinear phenomena related to third-order susceptibility affect the spatio-temporal propagation of intense laser light in media, making it challenging to optimize spectral broadening. In this work, we focus on experimentally maximizing the SPM and self-focusing with minimal high-order phase or wavefront distortions, and on enhancing the spatio-temporal laser properties for applications in extreme nonlinear optics. To maintain a spatial soliton-like propagation, each plate must contribute to the SPM-induced spectral broadening and provide similar self-focusing. As the laser spot size on each plate, monitored using \(2f-2f\) imaging, reaches \(350\pm 10~\mu m\) and remains unchanged, and the pulse duration after each plate measured using an autocorrelator and a scanning FROG apparatus (a self-diffraction and second harmonic FROG) stays approximately \(200\pm 10~fs\), the VIS laser pulse enters a spatio-temporal soliton-like mode that supports spectral broadening without splitting. This contrasts with many SPM regimes where strong asymmetric spectral modulations are observed experimentally. To minimize the spatio-temporal distortions, the B-integral for each plate must be nearly identical and low, such that each plate contributes to the spectral broadening while minimizing any distortions. Note that the gas environment also plays a role despite its nonlinearity being substantially smaller than that of the solid materials. A soliton mode of propagation in multiple plates in vacuum, in combination with controlled pressure and gas species, is expected to fine-tune the dispersion of the SPM continuously, in a non-discrete way. In addition, atomic or molecular gases with different polarization properties or different nonlinearity can be used to adjust the spectral broadening and the blue and red shifts of the VIS spectrum, and to enhance the spatial-spectral-temporal beam quality with real time feedback from the extreme nonlinear optics experiments. 
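An order-of-magnitude calculator for the single-plate design quantities just quoted is sketched below. The inputs follow the 515 nm numbers in the text; the exact outputs depend on the adopted \(n_{2}\) value and on the beam- and pulse-averaging conventions, so small differences from the quoted \(B\) and \(f_{c}\) values are expected.

```python
import numpy as np

# Single-plate design quantities for the 515 nm / CaF2 parameters quoted above.
# Results are approximate; the text quotes B ~ 1.43 rad and f_c ~ 66 mm, and the
# small differences reflect the intensity-averaging convention and exact n2 used.

lam  = 515e-9          # wavelength, m
n0   = 1.436           # linear index of CaF2 at 515 nm (assumed literature value)
n2   = 2.3e-20         # nonlinear index, m^2/W  (2.3e-16 cm^2/W, as in Table 1)
ell  = 1e-3            # plate thickness, m
w0   = 176e-6          # beam waist radius on the first plate, m
E    = 56e-6           # pulse energy, J
tau  = 180e-15         # pulse duration (FWHM), s

P_peak = 0.94 * E / tau                        # Gaussian-pulse peak power, W
I_peak = 2.0 * P_peak / (np.pi * w0 ** 2)      # Gaussian-beam peak intensity, W/m^2
B      = 2.0 * np.pi / lam * n2 * I_peak * ell # nonlinear phase per plate, rad
z_R    = np.pi * w0 ** 2 / lam                 # Rayleigh range, m
f_c    = z_R / (2.0 * B)                       # Kerr self-focusing focal length, m
P_cr   = np.pi * 0.61 ** 2 * lam ** 2 / (8.0 * n0 * n2)

print(f"peak intensity ~ {I_peak * 1e-13:.0f} GW/cm^2")
print(f"B per plate    ~ {B:.2f} rad")
print(f"Rayleigh range ~ {z_R * 1e3:.0f} mm,  f_c ~ {f_c * 1e3:.0f} mm")
print(f"P_peak ~ {P_peak * 1e-6:.0f} MW,  P_cr ~ {P_cr * 1e-6:.1f} MW")
```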
Figure 2: **Experimental self-phase-modulation induced broadening of the spectrum of VIS femtosecond pulses in thin plates with fine spectral modulations.** A. Spectral broadening after each \(CaF_{2}\)plate in the wavelength and frequency domain. The broadening is symmetric at low B integral and becomes slight asymmetric for higher cumulative nonlinear phase originates due to the initial dispersion of the second harmonic pulse. B. Rapid decrease of the Fourier Transform Limited (FTL) pulse duration after each plate with asymptotic decrease of the slope after \(6^{\mathrm{th}}\) - \(8^{\mathrm{th}}\) plates. The broadest VIS spectrum which supports a \(35~{}fs\) transform-limited pulse duration (20 laser cycles) is compressed by a prism-pair compressor to sub-\(40\) fs (23 laser cycles). The calculated and experimental nonlinear phase and corresponding B-integral after each plate are shown in green. C. Design of the prism-pair compressor using Lah Laguerre formalism for \(GDD\)\(\sim\)\(2000-3000~{}fs^{2}\) and relatively low values of the high orders of dispersion (up to the tenth order) based on the numerically evaluated phase. Dispersion orders: \(\mathrm{GDD}\) - \(2^{\mathrm{nd}}\), \(\mathrm{TOD}\) - \(3^{\mathrm{rd}}\), \(\mathrm{FDD}\) - \(4^{\mathrm{th}}\), \(\mathrm{FiOD}\) - \(5^{\mathrm{th}}\), \(\mathrm{SiOD}\) - \(6^{\mathrm{th}}\), \(\mathrm{SOD}\) - \(7^{\mathrm{th}}\), \(\mathrm{EOD}\) - \(8^{\mathrm{th}}\), \(\mathrm{NOD}\) - \(9^{\mathrm{th}}\), \(\mathrm{TeOD}\) - \(10^{\mathrm{th}}\). FROG trace measurements can retrieve the spectral phase information as a direct experimental characterization of the amount of phase compensation needed at the different dispersion orders. Here, we start with an analytical calculation to estimate the dispersion required to cancel the second order while minimizing the third-order dispersion and potentially higher orders [21-23]. The Group Delay Dispersion (GDD) and next immediate higher-orders of dispersion (TOD, FOD, etc.) of the prism-pair compressor are easily evaluated [21-23], Fig. 2C: \[POD(n)=\frac{\partial^{p}}{\partial\omega^{p}}\ \varphi(\omega)=(-1)^{p}\frac{1}{c} \bigg{(}\frac{\lambda}{2\pi c}\bigg{)}^{p-1}\sum\nolimits_{m=0}^{p}\mathcal{B }(p,m)\ \lambda^{m}\ \frac{\partial^{m}}{\partial\lambda^{m}}\ OP(\lambda) \tag{3}\] where \(OP\) is the optical path of the prism-pair compressor, evaluated here for a pulse with finite spectral content [25]. A fused silica prism pair compressor with a 70 cm tip-to-tip distance and with the amount of insertion material as shown in Fig. 2C, can compensate for the low orders of dispersion (predominantly GDD with partial minimization of the immediate higher orders) and yield a sub-40 \(fs\) pulse with superb spatial and temporal quality (Fig. 3). In our experiments, the compressed pulse durations are measured using both self-diffraction FROG and second harmonic generation FROG showing some uncompensated phase stretching the pulse beyond the transform limit of 35 fs. Such multicycle pulses laser pulses (\(>\)20 cycle) in the VIS range are ideal for high harmonic generation since the phase matching window Figure 3: **FROG measurements of the compressed sub-40 fs VIS pulse.** A and B) Experimental traces from second harmonic and self-diffraction FROGs. C and D) Corresponding retrieved 515 nm pulse intensity in the time domain. The pulse shapes show negligible femtosecond-to-picosecond pedestals. 
increases for short-wavelength drivers compared to mid-IR laser where under full phase matching conditions this window closes to a sub-single laser cycle. The extremely low losses in focusing and near spatio-temporal solitary mode of propagation offers a fortunate possibility for post-laser compression at high laser energy. Scaling to higher energy and shorter-wavelength VIS-to-UV ultrafast pulses would involve using a collimated beam at the right intensity using a pair of down-scoping mirrors instead of a single lens with a long focal length. Since the beam size on the plates and the periodic spacing between plates depend on the critical phase on each plate, the soliton propagation is scalable. To estimate the parameters for spectral broadening at the fundamental and its perturbative harmonic (SHG, THG, FHG) of ultrafast Ti:Sapphire and Yb-based lasers at high energy, we assume a critical spectral phase of \(B=1.4\ rad\). We choose \(CaF_{2}\) thin plates with a nominal thickness of \(500\ \mu m\) and \(1000\ \mu m\) as the Kerr material and a plate spacing of 50 mm for scaling purposes [26]. The results are summarized in Table 1. In all estimates, we note that the dispersion lengths are much longer than the plate thicknesses \(L_{D}\gg\ell\), indicating that the pulse propagation is mainly affected by nonlinearity in the positive GDD dispersion medium as the nonlinear lengths are much smaller than the plate thicknesses \(L_{NL}{<}\ell\). \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Wavelength [nm] & 800 & 400 & 266 & 1030 & 515 & 515 & 258 \\ \hline \(n_{2}\) & 1.80 & 2.50 & 10 & 1.70 & 2.30 & 2.30 & 10 \\ \(10^{-16}\ [cm^{2}/W]\) & & & & & & \\ \hline Input Pulse & 30 & 30 & 30 & 250 & 180 & 200 & 240 \\ Duration [\(f\)s] & & & & & & \\ \hline Pulse Energy [\(m\)] & 7 & 2.4 & 1 & 10 & 0.056 & 1.6 & 1.1 \\ \hline Beam radius on & 2425.55 & 2354.75 & 3693.67 & 1217.46 & 176.03 & 631.18 & 1391.68 \\ plates \(\omega\ [\mu m]\) & 426.453 & 422.015 & 414.311 & 854.366 & 848.65 & 424.325 & 413.273 \\ \hline \(L_{s}\) & 50 & 50 & 50 & 50 & 50 & 50 & 50 \\ \([mm]\) & & & & & & \\ \hline Laser Intensity on & 2.37 & 0.86 & 0.15 & 1.61 & 0.60 & 1.20 & 0.142 \\ \(cm2\) & & & & & & \\ \hline \(P\) & 219202 & 75154.98 & 31314.58 & 37577.49 & 292.2694 & 7515.498 & 4305.754 \\ \([MW]\) & & & & & & \\ \hline \(P_{cr}\) & 4.00 & 7.19E-1 & 7.95E-2 & 7.01 & 1.30 & 1.30 & 7.42E-2 \\ \([MW]\) & & & & & & \\ \hline \(\frac{P}{p_{cr}}\) & 5.48E4 & 1.04E5 & 3.94E5 & 5.36E3 & 2.25E2 & 5.803 & 5.80E4 \\ \hline \(\frac{\omega^{2}}{\lambda L_{s}}\) & 147.0829 & 277.2424 & 1025.802 & 28.78062 & 1.203315 & 15.47119 & 150.7211 \\ \hline \(L_{D}\) & 3.23E1 & 1.33E1 & 7.04 & 3.383 & 6.65E2 & 8.22E2 & 4.22E2 \\ \([mm]\) & & & & & & \\ \hline \(B\) & 1.43 & 1.43 & 1.43 & 1.43 & 1.43E & 1.43 & 1.43 \\ \([rad]\) & & & & & & \\ \hline \end{tabular} \end{table} Table 1: **Parameters for UV-VIS-IR pulse broadening using Ti:Sapphire and Yb-based lasers and their perturbative harmonics (SHG, THG, FHG).** Finally, we perform a VIS-UV pulse propagation simulation through 8 thin plates using 515 nm and 258 nm laser pulses (Hussar) [27]. The contribution of the plates to spectral broadening after the fourth plate is decreasing as observed in the experiments, with some distortions in the time domain. Nevertheless, these simulations allow us to extract the phase, which is then decomposed into high chromatic dispersion orders (see Table 2). 
These values are used as the basis for the simulation and design of prism-pair compressors and custom chirp mirrors for fine compensation of the first several orders of dispersion, Fig. 2C [21-23]. ## 4 Ultrabright High Harmonic Generation Using Short-Pulse Short-Wavelength UV-VIS Lasers Femtosecond Yb-based lasers have experienced a surge in popularity within academic research and industry over the past decade. Their scalability in terms of high repetition rates, power output, and stability has made them particularly suitable for extreme nonlinear processes, \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Plates & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline GDD [\(fs^{2}\)] & 1965.30 & 2177.43 & 1908.10 & 1906.33 & 1984.99 & 2076.44 & 2171.23 & 2267.30 \\ \hline TOD [\(fs^{3}\)] & 4983.74 & 7028.24 & 3572.69 & 1377.66 & 737.89 & 607.20 & 579.98 & 590.04 \\ \hline FOD [\(fs^{4}\)] & -21641.88 & -28921.41 & -16860.46 & -9654.81 & -7768.13 & -7585.83 & -7751.99 & -8043.90 \\ \hline FiOD [\(fs^{5}\)] & 6.399E+04 & 8.442E+04 & 5.078E+04 & 3.108E+04 & 2.611E+04 & 2.584E+04 & 2.653E+04 & 2.756E+04 \\ \hline SiOD [\(fs^{6}\)] & -1.816E+05 & -2.381E+05 & -1.452E+05 & -9.137E+04 & -7.806E+04 & -7.762E+04 & -7.982E+04 & -8.297E+04 \\ \hline SeOD [\(fs^{7}\)] & 5.326E+05 & 6.961E+05 & 4.280E+05 & 2.733E+05 & 2.356E+05 & 2.349E+05 & 2.417E+05 & 2.513E+05 \\ \hline EOD [\(fs^{8}\)] & -1.654E+06 & -2.157E+06 & -1.333E+06 & -8.595E+05 & -7.449E+05 & -7.437E+05 & -7.658E+05 & -7.964E+05 \\ \hline NOD [\(fs^{9}\)] & 5.481E+06 & 7.138E+06 & 4.427E+06 & 2.873E+06 & 2.499E+06 & 2.497E+06 & 2.573E+06 & 2.676E+06 \\ \hline TeOD [\(fs^{10}\)] & -1.941E+07 & -2.525E+07 & -1.570E+07 & -1.024E+07 & -8.929E+06 & -8.931E+06 & -9.202E+06 & -9.571E+06 \\ \hline \end{tabular} \end{table} Table 2: **Numerically extracted dispersion orders from the simulated VIS spectral broadening at 515 nm.** Figure 4: **Pulse broadening simulation in multiple thin plates using 515 nm and 258 nm laser pulses.** A and B) Eight plates separated by 5 cm are illuminated by VIS-UV beams with an intensity of near \(\sim\)6 \(10^{11}W/cm^{2}\). The spectral distribution after each plate is shown in for 515 nm (A) and for 258 nm (B). C and D). The temporal spread after each plate is shown in matching colors in for 515nm (C) and for 258 nm (D). including secondary light sources based on high harmonic generation. Recent studies have revealed that employing shorter wavelength laser drivers can significantly enhance the efficiency of high-order harmonic generation [9, 15, 16]. The single atom yield of high-order harmonics varies depending on the experimental configuration, ranging from \(\lambda_{LASER}^{-5.5}\) when using constant intensity, or \(\lambda_{LASER}^{(-7.5,-9)}\) for phase-matched cases where the driving laser intensity changes with the laser wavelength. By reducing the driver wavelength by a factor of 2, the single atom yield can increase by one to three orders of magnitude depending on the driving laser wavelength. Additionally, the plasma dispersion, which hampers the phase matching for long infrared drivers can be effectively compensated by leveraging the dispersion of ions for drivers in the UV-VIS spectral range. This compensation expands the temporal phase matching window for UV-VIS drivers, enabling the highly efficient generation of high-order harmonics using lasers with multiple cycles (10-to-50 pulse duration). 
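These scaling arguments can be made concrete with the standard semiclassical three-step cutoff law \(E_{cut}\approx I_{p}+3.17\,U_{p}\), which is our assumption here (the text quotes only the resulting photon energies); a minimal estimate for a 515 nm driver at the peak intensity considered below is sketched here.

```python
# Back-of-the-envelope HHG cutoff for a 515 nm driver, using the semiclassical
# cutoff law E_cut ~ Ip + 3.17 Up with Up[eV] = 9.33e-14 * I[W/cm^2] * (lambda[um])^2.
# The cutoff law and the ionization potentials are standard values, assumed here.

def ponderomotive_eV(intensity_W_cm2, lam_um):
    return 9.33e-14 * intensity_W_cm2 * lam_um ** 2

lam_um = 0.515
I = 1.0e15                           # W/cm^2, peak intensity quoted in the text
Ip = {"He": 24.59, "Ar+": 27.63}     # ionization potentials, eV

Up = ponderomotive_eV(I, lam_um)
for species, ip in Ip.items():
    E_cut = ip + 3.17 * Up
    print(f"{species}: cutoff ~ {E_cut:.0f} eV (~{1239.84 / E_cut:.1f} nm)")
```

Both estimates land above 100 eV and close to the 13.5 nm region, in line with the single-atom cutoffs plotted in Fig. 5.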
Notably, compressing a long 180 fs second harmonic pulse in a single self-phase modulation (SPM) stage, rather than compressing the fundamental laser beam, can yield higher second harmonic conversion efficiency while significantly improving laser quality, especially in the time domain. This improvement is vital for efficient high harmonic generation. We have performed calculations to estimate the tunnel ionization induced by the 515 nm laser with the compressed pulses using the Ammosov-Delone-Krainov (ADK) approach. Phase matching of high harmonic generation using UV-VIS lasers favors longer, multi-cycle pulse durations, since the temporal phase-matching window for efficient upconversion in the EUV - X-ray regime increases with the decrease of the laser wavelength. The demonstrated straightforward compression scheme provides a driver of 23-cycle pulse duration and a feasible peak intensity of \(>1.0\times 10^{15}\ W/cm^{2}\). This compression setup is expected to enable emission at \(>100\ eV\) in the EUV - soft X-ray range, at \(100\ kHz\) and higher. These repetition rates are two orders of magnitude greater than those achieved by most conventional laser amplifiers. Theoretically, with the use of Ar ions or neutral He (as shown in Fig. 5), the emission can reach the technologically relevant \(13.5\ nm\) EUV wavelength with very narrow line widths. Figure 5: **Single atom high harmonic generation at \(>\)100 eV and 100 kHz using a 23-cycle VIS driver.** Theoretical high harmonic cutoffs near the technologically relevant 13.5 nm EUV wavelength (91.7 \(eV\)) for a 515 \(nm\) driver with a 40 \(fs\) pulse duration at an experimentally feasible peak intensity of \(1.0\times 10^{15}\ W/cm^{2}\) (ionization of \(Ar\) atoms and ions in blue and neutral \(He\) gas in red). ## 5 Conclusion In summary, we present a robust single-stage pulse compression technique in the visible spectrum that achieves unprecedented performance in terms of spectral broadening efficiency, temporal compression efficiency, spatial beam quality, and temporal pulse quality. Our technique utilizes second-harmonic pulses at \(515\:nm\) generated by a \(1030\:nm\) femtosecond Yb-based regenerative amplifier and a simple multi-plate scheme with low nonlinearity that enables compression from \(180\:fs\) to \(40\:fs\) with minimal energy loss and distortion. Our technique offers a simple and robust solution for generating high-quality VIS pulses with durations close to the transform limit and peak powers exceeding \(10\:\mathrm{MW}\). Such pulses are ideal for driving extreme nonlinear optics processes and generating ultrabright EUV and X-ray high-harmonic radiation at high repetition rates of 10-100 kHz and above. Furthermore, this technique can be extended to shorter VIS and UV wavelengths and pulse durations by adjusting the plate number and thickness, opening up new possibilities for ultrafast science and technology. These features make the single-stage scheme an appealing frontend for ultrabright attosecond high harmonic generation using VIS-UV driving lasers. **Funding.** H2020 European Research Council (100010663), XSTREAM-716950; Alfred P. Sloan Foundation (100000879), FG-2018-10892. **Acknowledgments.** TP acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement XSTREAM-716950), and from the Alfred P. Sloan Foundation (FG-2018-10892). **Disclosures.** The authors declare no conflict of interest. 
**Data availability.** Data underlying the results are presented in the plotted graphs.
2308.01939
Numerical Uncertainty of Convolutional Neural Networks Inference for Structural Brain MRI Analysis
This paper investigates the numerical uncertainty of Convolutional Neural Networks (CNNs) inference for structural brain MRI analysis. It applies Random Rounding -- a stochastic arithmetic technique -- to CNN models employed in non-linear registration (SynthMorph) and whole-brain segmentation (FastSurfer), and compares the resulting numerical uncertainty to the one measured in a reference image-processing pipeline (FreeSurfer recon-all). Results obtained on 32 representative subjects show that CNN predictions are substantially more accurate numerically than traditional image-processing results (non-linear registration: 19 vs 13 significant bits on average; whole-brain segmentation: 0.99 vs 0.92 Sørensen-Dice score on average), which suggests a better reproducibility of CNN results across execution environments.
Inés Gonzalez Pepe, Vinuyan Sivakolunthu, Hae Lang Park, Yohan Chatelain, Tristan Glatard
2023-08-03T02:17:07Z
http://arxiv.org/abs/2308.01939v1
# Numerical Uncertainty of Convolutional Neural Networks Inference for Structural Brain MRI Analysis ###### Abstract This paper investigates the numerical uncertainty of Convolutional Neural Networks (CNNs) inference for structural brain MRI analysis. It applies Random Rounding--a stochastic arithmetic technique--to CNN models employed in non-linear registration (SynthMorph) and whole-brain segmentation (FastSurfer), and compares the resulting numerical uncertainty to the one measured in a reference image-processing pipeline (FreeSurfer recon-all). Results obtained on 32 representative subjects show that CNN predictions are substantially more accurate numerically than traditional image-processing results (non-linear registration: 19 vs 13 significant bits on average; whole-brain segmentation: 0.99 vs 0.92 Sørensen-Dice score on average), which suggests a better reproducibility of CNN results across execution environments. Keywords: Numerical Stability · Convolutional Neural Networks · Nonlinear Registration · Whole-Brain Segmentation ## 1 Introduction A motivating factor to study numerical uncertainty in neuroimaging is to establish measures of reliability in the tools observed, particularly in light of the reproducibility crisis [1, 2, 3]. Numerical uncertainty is key to the robustness of neuroimaging analyses. Small computational perturbations introduced in execution environments--including operating systems, hardware architecture, and parallelization--may amplify throughout analytical pipelines and result in substantial differences in the final outcome of analyses [4, 5]. Such instabilities have been observed across many different tools and imaging modalities [6, 7], and are likely to impact the reproducibility and robustness of analyses. Convolutional Neural Networks (CNNs) are increasingly adopted for registration [8, 9, 10] and segmentation [11, 12, 13, 14] of structural MRIs. Once trained, CNNs are orders of magnitude faster than traditional image-processing methods, achieve comparable accuracy, and seem to exhibit better generalizability across image modalities and orientations. However, the numerical uncertainty associated with CNN predictions in neuroimaging remains largely unexplored. While previous works suggested that CNNs might be subject to numerical instability [15, 16, 17], it is unclear how such instabilities manifest in specific CNN architectures used in structural brain MRI, and how the resulting numerical uncertainty compares to that of traditional methods. This paper measures the numerical uncertainty associated with CNN inference in neuroimaging, focusing specifically on non-linear registration and whole-brain segmentation of structural MRIs. To do so, it applies Random Rounding (RR) [18, 19]--a practical stochastic arithmetic technique to estimate numerical uncertainty--to state-of-the-art CNN models SynthMorph [8] and FastSurfer [12], and compares their numerical uncertainty to the one measured from the FreeSurfer [20] "recon-all" reference neuroimaging tool. ## 2 Materials & Methods We measured the numerical uncertainty of CNN models SynthMorph (non-linear registration) and FastSurfer (whole-brain segmentation) using RR. We applied these models to 35 subjects randomly selected from the CoRR dataset, using the FreeSurfer recon-all pipeline as a baseline for numerical uncertainty comparison. 
### Random Rounding Random Rounding (RR) [18] is a form of Monte-Carlo Arithmetic (MCA) [21] that simulates rounding errors by applying the following perturbation to all floating-point (FP) operations of an application: \[random\_rounding(x\circ y)=round(inexact(x\circ y))\] where \(x\) and \(y\) are FP numbers, \(\circ\) is an arithmetic operation, and \(inexact\) is a random perturbation defined at a given virtual precision: \[inexact(x)=x+2^{e_{x}-t}\xi\] where \(e_{x}\) is the exponent in the FP representation of \(x\), \(t\) is the virtual precision, and \(\xi\) is a uniform random variable on \((-\frac{1}{2},\frac{1}{2})\). To measure numerical uncertainty, we applied a perturbation of 1 ulp (unit of least precision, a.k.a. the spacing between two consecutive FP numbers), which corresponds to a virtual precision of \(t=24\) bits for single-precision and \(t=53\) bits for double-precision. We applied RR to the CNN models using Verrou [19, 22], a tool that implements MCA through dynamic binary instrumentation with Valgrind [23], without needing to modify or recompile the source code. We instrumented the entire executables with RR, additionally using Verrou's custom libmath implementation named Interlibmath to avoid incorrect random perturbations in mathematical functions. We applied RR to FreeSurfer using "fuzzy libmath" [6], a version of the GNU mathematical library instrumented with the Verificarlo [24] compiler following the same principle as Verrou's Interlibmath instrumentation. ### Numerical Uncertainty Metrics We quantified numerical uncertainty by calculating the number of significant bits across multiple independent RR samples. The number of significant bits is informally defined as the number of bits in common between RR samples for a given FP value. We estimated the number of significant bits using the general non-parametric method described in [25] and implemented in the significant_digits package [26]. Given an RR sample \(X_{i}\) (\(i\leq n\)), this method computes the significance \(S_{i}^{k}\) of the \(k^{th}\) bit in the mantissa of \(X_{i}\) as: \[S_{i}^{k}=\mathbb{1}_{|Z_{i}|<2^{-k}}\] where \(Z_{i}=X_{i}-x_{\text{IEEE}}\) and \(x_{\text{IEEE}}\) is the unperturbed result computed with IEEE arithmetic. The \(k^{th}\) bit in the mantissa is considered significant if the absolute value of \(Z_{i}\) is less than \(2^{-k}\). The number of significant bits across samples, \(\hat{s_{b}}\), is then obtained as the maximal bit index that is significant for all samples: \[\hat{s_{b}}=\max\left\{k\in\llbracket 1,m\rrbracket\text{ such that }\forall i\in \llbracket 1,n\rrbracket,\ S_{i}^{k}=1\right\}\] where \(m\) is the size of the mantissa, i.e., 53 bits for double-precision numbers and 24 bits for single-precision numbers. A value of 0 significant bits means that \(X_{i}\) bears no information while a value of \(m\) means that it has maximal information given the FP format used. The difference between the maximal and achieved values quantifies the information loss resulting from numerical uncertainty. The number of significant bits is a versatile metric that applies to any program that produces results encoded as FP values. This is, however, not the case for segmentation tools, which generally produce categorical variables encoded as integers representing segmentation labels despite the use of intermediate FP operations. 
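As a rough illustration of how the RR perturbation and the significant-bits estimate fit together, the following toy NumPy sketch (ours; the study itself relies on Verrou, Verificarlo, and the significant_digits package, and perturbs every FP operation rather than only the inputs) applies the \(inexact\) perturbation at virtual precision \(t\) and recovers \(\hat{s_{b}}\) from a set of perturbed samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def inexact(x, t=53):
    """Toy RR perturbation: add 2^(e_x - t) * xi, xi ~ U(-1/2, 1/2).

    Simplification: only the inputs are perturbed here; the real tools
    perturb every FP operation via binary instrumentation. x assumed nonzero.
    """
    e_x = np.floor(np.log2(np.abs(x)))            # exponent of each value
    xi = rng.uniform(-0.5, 0.5, size=np.shape(x))
    return x + 2.0 ** (e_x - t) * xi

def significant_bits(samples, x_ieee, m=53):
    """Largest k such that |X_i - x_ieee| < 2^-k holds for all RR samples."""
    z = np.abs(np.asarray(samples) - x_ieee)
    for k in range(m, 0, -1):
        if np.all(z < 2.0 ** (-k)):
            return k
    return 0

# Example: perturb a dot product 10 times and estimate its significant bits.
a, b = rng.normal(size=1000), rng.normal(size=1000)
x_ieee = np.dot(a, b)
samples = [np.dot(inexact(a), inexact(b)) for _ in range(10)]
print(significant_bits(samples, x_ieee))
```

In the actual instrumented runs the perturbation is injected by dynamic binary instrumentation, so no change to the model or pipeline code is needed.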
Therefore, in order to assess the impact of stochastic rounding in these intermediate FP operations, we used the minimum Sørensen-Dice scores computed pairwise across RR samples as the uncertainty metric for segmentations. In addition, to have a more local uncertainty metric for segmentation results, we defined an entropy metric at each voxel: \[E=-\sum_{i=1}^{r}p_{i}\ln p_{i} \tag{1}\] where \(r\) is the number of segmented regions, and \(p_{i}\) is the probability of region \(i\) at the voxel, computed across \(n\) Random Rounding samples. The entropy is 0 when the voxel is labeled with the same region across all RR samples, and it is maximal when the voxel is labeled differently for each RR sample. ### Non-Linear Registration SynthMorph [8] is a 3D convolutional U-Net [27] that performs non-linear image registration robustly across MRI contrasts. The encoding section of the U-Net consists of 4 convolutional blocks, while the decoding section consists of 3 blocks with a skip connection to its respective encoding block. SynthMorph was trained from synthetic label maps created from geometric shapes, from which images were generated with various contrasts, deformations, and artifacts. A contrast-invariant loss that measures label overlap was used to optimize model performance. SynthMorph's registration accuracy was shown to outperform state-of-the-art registration methods both within and across contrasts. We applied the SynthMorph "sm-brains" pre-trained model available on GitHub [28] to the linearly registered image produced by FreeSurfer recon-all (output of recon-all's -canorm step), using the MNI305 atlas as reference image. The subject and atlas images were cropped through visual inspection to match the 160 x 192 x 224 dimension required by the model, and intensities were min-max scaled to [0-1]. We applied Verrou directly to the sm-brains model, by instrumenting the entirety of the model and using Verrou's Interlibmath implementation. SynthMorph takes a couple of minutes to run, but when instrumented with Verrou, the runtime increased to a span of 2-3 days. FreeSurfer "recon-all" [20] is a widely-used neuroimaging pipeline that implements spatial normalization to a brain template, whole-brain segmentation, and cortical surface extraction. The non-linear registration algorithm (mri_ca_register tool) minimizes through gradient descent an error functional that includes an intensity term, a topology constraining one, a metric preservation term, a smoothness term, and a label term [29]. We first ran recon-all on all the subjects with steps --motioncor --talairach --nuintensitycor --normalization --skullstrip --gcareg --canorm without RR, to obtain the linear registration used as input of SynthMorph. Then we ran the FreeSurfer recon-all step --careg that implements non-linear registration using our FreeSurfer version instrumented with fuzzy libmath. This command typically takes 3 hours to run, but when instrumented with fuzzy libmath, the runtime increased to a span of 6 hours. ### Whole-Brain Segmentation FastSurfer [12] is a CNN model that performs whole-brain segmentation, cortical surface reconstruction, fast spherical mapping, and cortical thickness analysis. The FastSurfer CNN is inspired by the QuickNAT model [11] and is composed of three 2D fully convolutional neural networks--each associated with a different 2D slice orientation--that each have the same encoder/decoder U-net architecture with skip connections, unpooling layers, and dense connections as QuickNAT. 
The FastSurfer segmentations were shown to surpass state-of-the-art methods, as well as being generalizable to unseen datasets and having better test-retest reliability. We used the pre-trained model from FastSurfer available on GitHub [30] and we applied Verrou directly to this model, in the same way it was applied to SynthMorph. FastSurfer typically takes 30 minutes to run, but when instrumented with Verrou, the runtime increased to a span of 16-17 days. FreeSurfer recon-all also implements whole-brain segmentation [20], through a maximum a-posteriori estimation of the segmentation based on the non-linear registration of the subject image to an atlas. Due to time constraints, only the subcortical structures and brain tissues were segmented by FreeSurfer, whereas FastSurfer also segmented cortical structures; therefore, a mask was applied to FastSurfer's cortical labels to identify them by the super classes "Left/Right Cerebral Cortex". Only regions common to both were further analysed (see list in Fig. 2a). Similar to FreeSurfer recon-all's non-linear registration, RR was applied to FreeSurfer recon-all's whole-brain segmentation through Verificarlo's fuzzy libmath. The FreeSurfer recon-all commands from --motioncor up to --calabel, the command that specifically performs subcortical segmentation, were run. Typically, the segmentation takes around 4 hours to complete, but with FreeSurfer instrumented with Verificarlo, the runtime increased to 10-12 hours. ### Dataset and processing We used the Consortium for Reliability and Reproducibility (CoRR) dataset [31], a multi-centric, open resource aimed at evaluating test-retest reliability and reproducibility. We randomly selected 35 T1-weighted MRIs from 35 different subjects, one from each CoRR acquisition site, and accessed them through Datalad [32, 33]. The selected images included a range of image dimensions, voxel resolutions and data types (Appendix A). We excluded 2 subjects that failed linear registration with FreeSurfer recon-all and a third subject that failed segmentation with FastSurfer. Each pipeline or model was run in Singularity containers over 10 RR samples from which we measured numerical uncertainty. Due to long computing times induced by Verrou instrumentation (\(\approx 17\) days per subject), we were only able to get 4 RR samples for FastSurfer, which we complemented with an IEEE (non-RR) sample conceptually identical to an RR sample. We processed the data with SynthMorph and FreeSurfer recon-all on the Narval cluster from Ecole de Technologie Superieure (ETS, Montreal), managed by Calcul Quebec and The Digital Alliance of Canada, which includes AMD Rome 7502, AMD Rome 7532, and AMD Milan 7413 CPUs with 48 to 64 physical cores, 249 GB to 4000 GB of RAM, and Linux kernel 3.10. We executed FastSurfer on the slashbin cluster at Concordia University with 8 compute nodes, each with an Intel Xeon Gold 6130 CPU, 250 GB of RAM, and Linux kernel 4.18.0-240.1.1.el8_lustre.x86_64. We used FreeSurfer v7.3.1, SynthMorph v0.2, FastSurfer v2.1.1, Fuzzy v0.9.1, and Singularity/Apptainer v1.1. Verrou v3.21.0 was used for FastSurfer, while Verrou v3.20.0 with a special fix available on GitHub [34] was used for SynthMorph due to compatibility issues between the model and Verrou's Interlibmath. The scripts and Dockerfiles for this experiment can be found on GitHub [35]. ## 3 Results The numerical uncertainty measured for the SynthMorph CNN model was lower than for FreeSurfer recon-all (Fig. 
1a), as measured both in the resampled images (\(p<10^{-6}\), two-tailed paired t-test) and in the warp fields (\(p<10^{-5}\)), despite only the libmath libraries in FreeSurfer being instrumented, in contrast to the entirety of SynthMorph. The number of significant bits in warp fields was computed as the average number of significant bits across the x, y and z components. On average, out of 24 bits available, the SynthMorph resampled image had 19.56 significant bits while FreeSurfer recon-all's had only 13.43 significant bits; the SynthMorph warp field had 18.55 significant bits while FreeSurfer recon-all's had only 14.12 significant bits. These important differences show a clear superiority of the CNN model compared to FreeSurfer recon-all in terms of numerical uncertainty. Moreover, we also observed a larger variability of the numerical uncertainty across subjects in FreeSurfer recon-all compared to SynthMorph. The differences in average numerical uncertainty observed between FreeSurfer and SynthMorph were confirmed by visual inspection of the non-linearly registered images and warp fields (Fig. 1b). The numerical uncertainty in registered images was structurally consistent, with higher uncertainty in the gray matter and at the border of the brain than in the white matter, both for SynthMorph and for FreeSurfer recon-all. The numerical uncertainty in warp fields exhibited interesting structural patterns that would benefit from further investigation. The numerical uncertainty of FastSurfer segmentations was significantly lower than for FreeSurfer recon-all in 31 out of 35 brain regions (Fig. 2a), with very substantial differences in some regions. Here again, a larger variability was observed in FreeSurfer recon-all segmentations than in FastSurfer segmentations. Overall, FastSurfer averages a Sørensen-Dice score of 0.99 across all regions, while FreeSurfer is at 0.92. The differences in Sørensen-Dice scores observed between FreeSurfer recon-all and FastSurfer were confirmed in local entropy maps (Fig. 2b), where we visually noted a substantial discrepancy between both methods. For FreeSurfer recon-all, clusters of non-zero entropy values were observed across the brain, whereas for FastSurfer non-zero entropy values were limited to scattered voxels. The entropy maps, in addition to visual inspection, confirm that, despite the relatively high average Sørensen-Dice scores, FreeSurfer recon-all exhibited variability in identifying the edges of subcortical structures, while FastSurfer remained certain in its segmentations. ## 4 Conclusion The numerical uncertainty measured in CNN models SynthMorph and FastSurfer was substantially lower than in FreeSurfer recon-all, amounting to differences on the order of 4 to 6 significant bits in non-linearly registered images, and of up to 0.4 Sørensen-Dice score values in segmentations. We believe that the high numerical uncertainty observed in FreeSurfer recon-all compared to CNN models results from the use of numerical optimization techniques in FreeSurfer recon-all, while CNN models only involve low-dimensional convolutions, max-pooling operators, and simple activation functions. The low numerical uncertainty found in CNN models is consistent with previous observations in the very different task of protein function prediction [36]. The numerical uncertainty found in FreeSurfer recon-all is also consistent with previous observations on FreeSurfer recon-all non-linear registration [6] and segmentation [37]. 
Our results suggest that neuroimaging CNN models are significantly more robust to small numerical perturbations than traditional image processing approaches. Therefore, we expect CNN results to be more reproducible across execution environments than traditional image processing approaches, implying better portability across software and hardware systems. Our results report on the numerical uncertainty resulting from CNN _inference_, which is a relevant proxy for the uncertainty experienced by model end-users across different execution environments. However, the numerical uncertainty resulting from CNN _training_ was not measured in our experiments. We speculate that some of the numerical uncertainty observed in FreeSurfer recon-all results is intrinsic to the problems of subject-to-template non-linear registration and whole-brain segmentation, and should therefore manifest during CNN training. Mathematically, training CNN models involves numerical optimization in high-dimensional spaces, which we expect to be less numerically stable than CNN inference, and comparably stable to FreeSurfer recon-all. Should this assumption be accurate, the numerical uncertainty of predictions made by a sample of CNN models trained with Random Rounding should be substantial, which we plan to leverage in our future work by building efficient ensemble models capturing the numerical variability associated with non-linear registration or segmentation, possibly resulting in improved predictions. Figure 1: Numerical uncertainty measured in the non-linearly registered images and warp fields produced by FreeSurfer recon-all and the SynthMorph CNN model. Figure 2: Numerical uncertainty measured in the segmentations produced by FreeSurfer recon-all and the FastSurfer CNN model. ## 5 Acknowledgements Computations were made on the Narval and Beluga supercomputers from Ecole de Technologie Superieure (ETS, Montreal), managed by Calcul Quebec and The Digital Alliance of Canada. The operation of these supercomputers is funded by the Canada Foundation for Innovation (CFI), le Ministere de l'Economie, des Sciences et de l'Innovation du Quebec (MESI) and le Fonds de recherche du Quebec - Nature et technologies (FRQ-NT).
2307.07515
Artificial intelligence is algorithmic mimicry: why artificial "agents" are not (and won't be) proper agents
What is the prospect of developing artificial general intelligence (AGI)? I investigate this question by systematically comparing living and algorithmic systems, with a special focus on the notion of "agency." There are three fundamental differences to consider: (1) Living systems are autopoietic, that is, self-manufacturing, and therefore able to set their own intrinsic goals, while algorithms exist in a computational environment with target functions that are both provided by an external agent. (2) Living systems are embodied in the sense that there is no separation between their symbolic and physical aspects, while algorithms run on computational architectures that maximally isolate software from hardware. (3) Living systems experience a large world, in which most problems are ill-defined (and not all definable), while algorithms exist in a small world, in which all problems are well-defined. These three differences imply that living and algorithmic systems have very different capabilities and limitations. In particular, it is extremely unlikely that true AGI (beyond mere mimicry) can be developed in the current algorithmic framework of AI research. Consequently, discussions about the proper development and deployment of algorithmic tools should be shaped around the dangers and opportunities of current narrow AI, not the extremely unlikely prospect of the emergence of true agency in artificial systems.
Johannes Jaeger
2023-06-27T19:25:09Z
http://arxiv.org/abs/2307.07515v4
## Artificial Intelligence is Algorithmic Mimicry ## Abstract What is the prospect of developing artificial general intelligence (AGI)? I investigate this question by systematically comparing living and algorithmic systems, with a special focus on the notion of "agency." There are three fundamental differences to consider: (1) Living systems are autopoietic, that is, self-manufacturing, and therefore able to set their own intrinsic goals, while algorithms exist in a computational environment with target functions that are both provided by an external agent. (2) Living systems are embodied in the sense that there is no separation between their symbolic and physical aspects, while algorithms run on computational architectures that maximally isolate software from hardware. (3) Living systems experience a large world, in which most problems are ill-defined (and not all definable), while algorithms exist in a small world, in which all problems are well-defined. These three differences imply that living and algorithmic systems have very different capabilities and limitations. In particular, it is extremely unlikely that true AGI (beyond mere mimicry) can be developed in the current algorithmic framework of AI research. Consequently, discussions about the proper development and deployment of algorithmic tools should be shaped around the dangers and opportunities of current narrow AI, not the extremely unlikely prospect of the emergence of true agency in artificial systems. **Keywords:** algorithmic mimicry, artificial general intelligence, natural agency, autopoiesis, embodiment, large/small world, relevance realization ## 1 Introduction There has been much discussion about the prospect of artificial general intelligence (AGI). This debate was triggered by recent advances in the field of artificial intelligence (AI), including the invention of _transformer models_, derived from multi-layered (deep) recurrent neural networks, and a massive scaling-up in _machine-learning (ML)_ research. This has resulted in the arrival of _large language models (LLMs)_, trained on enormous datasets of very broad range (Vaswani et al., 2017; Qiu et al., 2020; Bender and Koller, 2020; Bender et al., 2021; Han et al., 2021; Shanahan, 2023). Such models (e.g. BERT, Devlin et al., 2019; the GPT series, Radford et al., 2019; or LaMDA, Thoppilan et al., 2022) are capable of astonishing feats of inference in the domain of language, exhibiting text-generation and conversational capabilities that have impressed (and even overwhelmed) many a limited human observer, expert or otherwise. Further progress is expected from multimodal versions of such models, capable of handling not just language, but images, video, and music (see, for example, Takagi and Nishimoto, 2022; Huang et al., 2023). Opinions on the potential of these algorithms range from them being mere "stochastic parrots" "haphazardly stitching together sequences of linguistic forms" obtained from training data "without any reference to meaning" (Bender et al., 2021) to current LLMs already exhibiting some "sparks of AGI" (Bubeck et al., 2023). The latter view originates with corporate AI engineers directly working on these models, who justify their claims with the fact that these large inference machines can seemingly "solve novel and difficult tasks" in diverse areas "without needing any special prompting" (ibid.). 
In another, heavily publicized example, ChatGPT has been alleged to be proficient at chemistry, despite "nobody having programmed it to learn chemistry." Similar anecdotes abound, illustrating the widespread impression among researchers and the general public alike that LLMs may have acquired some capabilities akin to general intelligence. One common speculation is that the extremely high-dimensional nature of the weight matrix (and thus the configuration space) of an LLM allows for the emergence of "higher-level" abilities, such as the detection of meaning without external referents (see, for example, Piantadosi and Hill, 2022). It is a short step from there to claim that such higher-level phenomena could potentially include things such as agency, intelligence, or even consciousness. Due to our own innate cognitive limitations, no human being is able to understand the detailed mechanisms by which a model with billions of weights detects correlations that are of much higher dimensionality than the ones our brains could possibly pick up or process. This leads to predictive capabilities in LLMs that are mind-boggling and seemingly miraculous to any human observer. Humans have evolved to see agency everywhere in nature (Kuhmen & Kitayama, 2020). We are easily impressed (and fooled) that way. But does this imply that true higher-order effects could arise inside such a complicated algorithmic system? Could LLMs really become true agents, or even conscious, one day? Here, I focus on the concept of "agency," and conclude that the probability of an LLM ever achieving \(-\) as opposed to merely simulating, emulating, or mimicking \(-\) something analogous to natural or organismic agency is almost infinitesimally small. Achieving true artificial agency, and thus artificial cognition or consciousness, would require major conceptual breakthroughs in algorithm design as well as radical innovations in materials and computational architecture of a kind that are not currently on the horizon. In fact, I argue that no purely algorithmic (syntactic) computational system is capable of true embodied agency even if it is embedded in robotic hardware. It simply makes no sense to say that algorithms "act" in the world like living beings do. As a consequence, they do not "think" like humans either. In fact, they do not possess anything like animal or human cognition, meaning that they cannot become conscious (as humans are) no matter what improvements in computing power and training data await us in the future. Although they may emulate specific cognitive tasks to a degree where they outdo human performance, they will not surpass us in general intelligence, whatever that may mean. Artificial superintelligence is an unrealistic myth. Algorithms are and remain what they have always been: machines \(-\) automated tools for computation. We had better treat them as such. My argument places me very firmly at the "stochastic parrots" end of the debate. The truth does not always lie in the middle. To defend this seemingly extreme position, I will use arguments from biology that are surprisingly absent from the current discussion about artificial intelligence (AI). They complement powerful arguments against AGI based on (computational) linguistics, which show that LLMs cannot extract meaning from natural language despite appearing to emulate it with surprising accuracy and flexibility (Bender & Koller, 2020; Bender et al., 2021). In my argument, I have to deal with two fundamental problems. 
The first is rooted in the fact that concepts have very different meanings in different disciplines. Here, I focus on why the term "AI agent" is a gross misnomer, as algorithms have no agency in the biological sense of the term. Similar arguments can be made about the term "artificial intelligence" itself, which describes a research discipline which is not concerned with "intelligence" at all. We need better terminology to describe what is going on. In particular, I suggest supplanting "artificial intelligence" with "_algorithmic mimicry_" which circumscribes much better what the field is doing. The term "AI agent" should simply be avoided. "Algorithm" suffices completely. The second problem is that the current debate ignores the fact that organismic and algorithmic systems are built on architectures that are radically and fundamentally different. While computers are designed on the principle of a maximum _separation_ or _fractionability_ of hardware and software, organismic agency requires a completely different organization based on the maximum _integration_ of physical and symbolic2 (i.e., code-related) aspects (see section 4, and Rosen, 1991; Pattee & Raczaszek-Leonardi, 2012; Barbieri, 2015). In this particular sense, organisms and computers seem to be exact opposites of each other. This means that algorithms can only imitate (i.e., simulate, emulate, or mimic), but not truly reproduce or represent higher-level phenomena such as agency, which is exclusive to living matter and its peculiar organization (Rosen, 1991; Kauffman, 2000; Moreno & Mossio, 2015). This is the main reason I think that AGI is not possible in the current algorithmic framework of AI research: the architecture of this framework is too flat -- it does not allow for the kind of hierarchical circularity required for natural agency -- no matter how large or sophisticated its models. There is no spark of AGI anywhere to be seen. In fact, I predict that, if AGI is ever created, it will come out of a biological laboratory (see section 5). Footnote 2: It is hard to avoid the terms “symbol” and “symbolic” here. We mostly use “symbol” in a broad sense as “something that stands for something else by reason of a relation” (Pattee & Raczaszek-Leonardi, 2012). This is not to be confused with the more specific use of the term in AI research, where “symbolic” indicates a method based on high-level (human-readable) problem representations (in contrast to subsymbolic methods such as connectionist networks). I proceed by giving a brief introduction to what I mean by "general intelligence" and "agency." I then use these concepts to show three fundamental differences between organisms and algorithms (which are a kind of machine; see section 4, for a detailed explanation). The first difference is _autopoiesis_, the ability of organisms to self-manufacture, which essentially grounds their natural agency. The second is _embodiment_, the tight integration of "hardware" and "software," which governs an organism's interactions with its environment and renders the quality of those interactions fundamentally different from those of a machine. The third is the fact that organisms live in _a large world_, while algorithms exist in a small world, no matter how high-dimensional their parameter space or how enormous the dataset they have been trained on (Savage, 1954). In a large world, information is typically scarce, ambiguous, and often misleading, which makes it difficult to identify relevant problems and characterize them with any precision. 
In contrast, a small world is entirely made of data that are explicit and well-defined. Based on these three fundamental differences, I argue that it is unreasonable and dangerous to mistake the virtual for the real world. Unfortunately, this is exactly what is obscuring our debates about algorithmic mimicry and its real-world implications at the moment. ## 2 General Intelligence and Agency To rigorously assess the prospect of AGI, I need to first define what I mean by "_general intelligence_." Debates about this topic all too often reflect differences in operational definitions of concepts. One particular problem is that researchers in the field of algorithmic mimicry tend to have an overly simplistic conception of "intelligence." They consider it merely a matter of problem solving, often formalized as optimization in the context of some kind of general problem-solving framework, such as the one originally proposed by Newell and Simon (1972). This kind of approach is problematic in itself, since it comes up against the problem of relevance, _i.e._, how to formally define a real-world problem in the first place (see section5, and Vervaeke et al., 2012; Vervaeke & Ferraro, 2013). More importantly, it is far too narrow to embrace the everyday meaning of "general intelligence." To capture this broader conception, I take the definition to include the following minimal set of characteristics (Roitblat, 2020): * reasoning (especially inference-making), * problem solving, * learning, * using common-sense knowledge, * autonomously defining and adjusting goals, * dealing with ambiguity and ill-defined situations, and * creating new representations of the knowledge acquired. I have argued elsewhere that all of these characteristics of general intelligence are essential for an organism to get to know its world (Roli et al., 2022). But only the first three can be properly formalized (and only explicitly propositional forms of learning at that; see, for example, Polanyi, 1958). Algorithmic mimicry is exclusively concerned with these formalizable characteristics, but cannot deal with the other four that are equally important. This generates a number of largely intractable problems. I note that there has been very little progress on these matters in the 70 years since the goal of general intelligence was first posited at the Dartmouth Summer Research Project in 1956 (e.g., McCarthy & Hayes, 1969; Dreyfus, 1978; Dennett, 1984; Dreyfus, 1992; Cantwell Smith, 2019; Roitblat, 2020; Roli et al., 2022), and I see no realistic prospect of this state of affairs changing any time soon. AGI is not an imminent possibility, if it is achievable at all. Not enough people realize that the problems preventing AGI are of a philosophical rather than technological or practical nature (Roli et al., 2022): they all have to do with the fact that algorithms and the environment they exist in, by definition, are purely syntactic constructs without the possibility of acquiring semantic referents in the physical "outside" world (section4). Consider, for instance, that "common sense" cannot be precisely defined in any purely syntactic and formal (and hence general) sense, as it applies only in the contingent, real-world context of the semantics of a living social organism. Similarly, algorithms cannot freely set their own intrinsic goals because they remain bound by their instructions, input data, and computational environment which are provided externally (see below and section3). 
Neither can they deal with ambiguous, ill-defined problems (section5). There are no double-entendres in a purely syntactic construct. Last but not least, algorithms do not have the capacity to create new frames or representations of their "knowledge" since they exist in a completely formalized (small) world, where everything is already well-defined (section5). This world, in its entirety, consists of the algorithm's instructions and computational environment, plus training (and later input) data that have to be properly formatted with regard to some predefined objective (no matter how broadly defined). In other words, the algorithm's model of the world is its world. The algorithm cannot switch frames because there is only one frame: its complete _digital ontology_. Nothing and everything is relevant at the same time in such a situation. The digital ontology of an algorithm includes all correlations read out of data, no matter how high-dimensional and subtle. These correlations are not emergent in the same physical sense a candle flame or a living being is. They are not really higher-level phenomena, because they are fully precoded in the data, although perhaps hidden and undetectable for limited human observers. Furthermore, such correlations cannot be true semantic representations, because they have no real meaning about anything beyond the algorithm's limited digital ontology. Instead, they are implicit in purely syntactic programs and data structures (Bender & Koller, 2020; Piantadosi & Hill, 2022). If semantics are inferred, they are only derived from internal relations between symbols, as when LLMs haphazardly manage to arrive at the "meaning" of a term from clustering of terms into word classes in their underlying vector space (ibid.). According to the definition of "general intelligence" given above, whether or not an algorithm can acquire AGI hinges crucially (among other things) on whether or not it can set its own goals (Roitblat, 2020). Being able to set intrinsic goals, in turn, is a basic and essential property of what I call the "_natural agency_" of living beings (see, for example, Barandiaran et al., 2009; Kauffman, 2000; Moreno & Mossio, 2015; Walsh, 2015; Roli et al., 2022). Organisms, from bacteria to humans, are natural agents because they can define and pursue their own goals. There is nothing mysterious or unscientific about natural agency, and it does not require any cognition or intention, as we shall see (Barandiaran et al., 2009). Instead, natural agency is simply grounded in the ability of all organisms to self-manufacture (Hofmeyr, 2021) -- in the fact that they are _embodied autopoietic systems_ (see section3). An autopoietic system is organized in a way that enables it to produce and maintain itself by continuously fabricating and assembling its own physical components (Varela et al., 1974; Varela, 1979; Maturana & Varela, 1980). For example, a cell self-manufactures by producing macromolecular components (proteins, nucleic acids, lipids) through metabolism, by enabling the functional folding and assembly of those components (into protein complexes or membranes, for instance) through the maintenance of a specific cellular milieu, and by sustaining this milieu via the regulated transmembrane transport of nutrients, electrolytes, and waste products (Hofmeyr, 2021). More generally, the primary intrinsic goal of any organism is to continue existing, to go on self-manufacturing, to keep alive3. It is what it does, naturally, and nobody told it to do so. 
This is the constitutive dimension of being an autonomous living agent (Moreno & Mossio, 2015). Footnote 3: As always in biology, there are exceptions to this rule. Most cases where organisms sacrifice themselves as individuals can be explained by the higher-level perspective of inclusive fitness: they do it for the benefit of their offspring, relatives, or conspecifics. A curious case where this does not apply is the tragic phenomenon of human suicide. But even here, it takes considerable effort (and the very peculiar nature of the human condition) to subvert the innate survival instinct. But it is not sufficient. To stay alive, an organism also needs to be able to initiate actions that are aligned with its particular environment (Moreno & Mossio, 2015; Walsh, 2015). In other words, survival requires well-adapted, goal-oriented behavior. Such behavior can (and often does) result from evolution by natural selection. It requires internal predictive models of the world, understood very broadly as processes or structures within the organism that function in relation to projected outcomes of its actions (Rosen, 1985; Louie, 2017). These "models" need not be representational, nor do they need to be based on cognition or intention. Often, they are actualized by simple evolved mechanisms. Think of a bacterium able to discern toxins from nutrients and "going for" food by swimming up a nutrient gradient. Even the simplest organism is an anticipatory system (ibid.). All such systems use predictive models to pursue their intrinsic goals. This is what enables organisms to act on their own behalf (Kauffman, 2000). Or refuse to act, for that matter. It represents the interactive dimension of being an autonomous living agent (Moreno & Mossio, 2015). Algorithms are only superficially like that. Granted, they can have purposes, can even possess a certain degree of autonomy (the ability to select from a range of tasks or objective functions without direct human intervention), and can implement models of the physical world. Accordingly, the Encyclopedia Britannica defines a software "agent" as "a computer program that performs various actions continuously and autonomously on behalf of an individual or an organization."4 Think of self-driving cars, or any other autonomous robot, for example. The important part of the definition in the present context is that the algorithm's "actions" and "autonomy" always manifest themselves "on behalf of an individual or organization," which means they ultimately come from a true natural agent, a human programmer, systems designer, or data scientist, who provides instructions, objectives, labeled data, and computational environment. An algorithm's goals are ultimately always imposed from outside. An algorithm always acts on behalf of an agent, or it does not act at all. It has no natural agency. Sometimes, we are deceived into thinking this is not the case, because the coding of the tasks and objectives of an algorithm happens in a very implicit and indirect manner, as in current ML research with its large datasets and diversified broad-range "unsupervised" learning strategies. And yet, even the "smartest" LLM model cannot set its own goals. In fact, it never will. LLMs, by definition, are programmed to do a specific task, even if it is as abstract and general as "finding high-dimensional correlations in very large and intricate datasets in order to complete phrases formulated in some language." 
An LLM's autonomy is inflexible, predetermined externally, and thus remains much more circumscribed than that of a true natural agent. All of this means that an algorithm builds its model of the world (its "knowledge" if you will) in a very different way from a living being (Roli et al., 2022). In fact, the two have almost nothing in common. First, an algorithm's model of the world is always built with regard to an outside agent's goals and, second, the algorithm does not really have a model of the world but rather is some kind of model of something. For example, LLMs are models of natural language (as their name implies), while the software of a self-driving car is a model of that part of physical reality that represents traffic. It bears repeating: algorithms are (and will always remain) tools for real agents. They are not agents themselves. To better understand why this is the case, let us now look at the three fundamental differences between organisms and algorithms. ## 3 Autopoiesis As we have seen, the primary goal of an organismic agent is to remain alive, while machines have no such intrinsic drive towards self-preservation (section 2). Without this drive, there can be no natural agency, and without agency no true general intelligence. But what about the likelihood of such an intrinsic drive arising in an algorithmic system in the future? Could an algorithm eventually become a true agent, say, above a certain level of computational complexity? Would that mean it has become alive? To evaluate these questions, we must first better understand what life is or, more precisely, what distinguishes living from non-living matter. The difference is one of organization rather than composition. Organisms are composed of chemical elements that are also common in non-living systems and, like everything else in the universe, they must obey the fundamental laws of physics. What really sets living matter apart is not what it is made of, but the way in which the physico-chemical processes that are its components influence and constrain each other's dynamical behavior (Rosen, 1991; Juarrero, 1999; Kauffman, 2000; Deacon, 2011; Pattee & Raczaszek-Leonardi, 2012; Moreno & Mossio, 2015; Montevil & Mossio, 2015; Juarrero, 2023). In mathematical terms, such _constraints_ are called boundary conditions on the underlying flow. They limit the degrees of freedom that a particular component process can actualize. They narrow its range of possible dynamical behaviors. They restrict what it can do. Interestingly, life is all about constraints. In the words of Terrence Deacon (2011): living systems are _less_ than the sum of their parts! Life can only exist far from chemical equilibrium. As far-from-equilibrium systems, organisms must be thermodynamically open, constantly exchanging materials and energy with their environment. More specifically, organisms are _dissipative systems_(Prigogine & Lefever, 1973; Nicolis & Prigogine, 1977; Prigogine & Stengers, 1984): they deplete naturally occurring gradients of free energy (i.e., entropy) at the maximal possible rate. While dissipative systems include non-living self-organizing processes, such as hurricanes, eddies, and candle flames, organisms go one step further. They use physical work driven by a free energy gradient to generate internal constraints which, in turn, channel the underlying physico-chemical processes in specific directions that keep the whole process going (Kauffman, 2000; Deacon, 2011; Montevil & Mossio, 2015). 
Put simply, organisms keep themselves alive by using constraints to transform and build further constraints. The constructive dynamic of building constraints upon constraints is what leads to _autopoiesis_, the ability of the organism to self-manufacture (cf. section 2). Autopoiesis arises when each constraint within the system is not only generated by but also generates at least one other constraint (Montevil & Mossio, 2015; Mossio et al., 2016). The constraints that constitute the autopoietic _organization_ of an organism collectively produce each other. Think of the set of enzymes present in some cell. They collectively produce themselves through their role in metabolism and gene regulation. This defining property of a living system is called _organizational closure_(Piaget, 1967; Moreno & Mossio, 2015). Let us emphasize again that organizational closure requires thermodynamic openness. It is only possible in a system that remains far from equilibrium. As an illustrative example, consider the self-manufacturing organization of a free-living cell (Hofmey, 2021): metabolism produces nucleotides, proteins, and lipids (among many other things) through macromolecular synthesis. The resulting macromolecules, in turn, acquire their specific functional conformation (or assemble into higher-level structures such as ribosomes, protein complexes, and membranes) given the tightly regulated internal milieu of the cell. Finally, this milieu itself needs to be constantly monitored and maintained through the regulation of transmembrane transport, which requires an assembled membrane system and functional protein transporters. It is easy to see how all three aspects of this process constrain, but also rely on each other for their own existence. This dynamic is what is meant by "constraints building upon constraints." Through the dialectic interrelation of synthesis, milieu, and transport, the cell is able to self-manufacture, and ultimately reproduce by cell division. In other words: to stay alive (to be an autopoietic system), an organism must maintain organizational closure over time, throughout its entire life cycle. On top of that, the organism must pass on its organization to its offspring, across generations, if it is to be evolvable (Saborido et al., 2011; Mossio and Pontarotti, 2020). This is called _organizational continuity_(DiFrisco and Mossio, 2020). It is what underlies the individuality of living systems (ibid.). In addition, organizational continuity is an essential requirement for (open-ended) evolution of physical (embodied) systems by natural selection (Roli et al., 2022; Jaeger, 2023). Finally, and most importantly in the current context: it is what gives the organism a certain degree of _autonomy_ from its environment, based on its capacity for _self-determination_(Deacon, 2011; Moreno and Mossio, 2015; Mossio and Bich, 2017). Self-determination is possible because the particular direction that the constructive dynamic of autopoietic constraint generation is taking is not exhaustively determined by the laws of physics that govern the underlying processes that are being constrained (Deacon, 2011; Pattee and Raczaszek-Leonardi, 2012). Instead, constraint generation follows its own inner logic grounded in the hierarchical and self-referential interrelations of processes in a living system, and how they support each other and, at the same time, restrict each other's range of possible behaviors. This dynamic at the level of constraints is historically contingent. 
It is evolutionary because it not only depends on the particular environmental context of an organism (section 4), but is also _dynamically presupposed_ by earlier organized states _within_ the living system (and its ancestors; Bickhard, 2000; Mossio and Bich, 2017; DiFrisco and Mossio, 2020). In this sense, the process of constraint generation is fundamentally unpredictable, that is, _radically emergent_, to any outside observer (Kauffman, 2000; Roli et al., 2022). Living beings can behave and evolve in ways that are not foreseeable, even in principle. This is what it means to have _autonomy_. Moreover, the behavior of an organism originates within its own organization. This is what it means to have _agency_. Autonomous agents (and systems that contain them) do not break any laws of physics, but are not reducible to physics and chemistry either, since their behavior is not predictable by physical law. Can we reproduce autopoietic organization in an algorithmic system? Can we generate _artificial autopoiesis_? At some level, definitely yes: there is no reason to assume it is impossible to emulate autopoietic processes with algorithmic mimicry. In fact, we have known for a long time that self-producing dynamics can be generated in cellular automata (Von Neumann, 1966; Hofmeyr, 2018). Similarly, the hierarchically circular dynamics of closure and autopoiesis can be captured with _abstract rewrite systems_, for instance, computational models that are based on \(\lambda\)-calculus or related formalisms in which operations are allowed to redefine the rules of the program (Fontana & Buss, 1994, 1996; Mossio et al., 2009). Thus, at first sight, there is no reason to assume that the dynamics of living systems cannot be fully captured by algorithmic simulation. It is worth noting, however, that none of the formalisms or hardware implementations currently employed in the field of algorithmic mimicry are based on the autopoietic principles described here. The hierarchical self-referentiality of closure represents a kind of _organizational complexity_ that is not the same as the _computational complexity_ of cybernetic feedback and recursive computing (Rosen, 1991; Louie, 2009). The former has to do with the way the physical component processes of a living system interrelate to mutually support and restrict each other (constraints building upon constraints), while the latter is about the reducibility (i.e., compressibility) of the underlying processes themselves. An algorithm is irreducible, if we cannot skip any of its steps and still obtain the same behavior. It can produce surprising outcomes since we cannot follow each individual step of the calculation, but its boundary conditions (instructions and computational environment) remain predefined and fixed. An organism, in contrast, is irreducible in terms of its organization: it produces unpredictable behavior because of the way it constructs its own constraints. Autopoiesis is irreducible because it is the property of a whole, intact physical system. It should not be difficult to see that the two kinds of complexity are fundamentally different. Contemporary approaches to algorithmic mimicry, such as LLMs and other models based on recurrent neural nets (section 1), are rich in computational complexity, but lack the organizational complexity required for autopoiesis. To put it another way: even if "deep" or multi-layered, their architecture is too flat to truly capture the kind of hierarchical circularity required for autopoiesis. 
Since autopoiesis is a prerequisite for self-determination, it is safe to conclude that current AI algorithms will not exhibit true agency (as in "natural agency") any time soon, no matter how many parameters a model contains, how many network levels it features, or how large and complex the datasets used to train it. Agency is not a matter of size or scale. It is a matter of organization. But what if artificial autopoiesis _will_ be developed? Will an autopoietic simulation truly represent a living system in the sense of capturing all its essential characteristics, as well as its full behavioral and evolutionary potential? Will such a system qualify as being alive in some sense? I argue that this is extremely unlikely. First of all, there are convincing arguments - based on the mathematical theory of categories - which suggest that any algorithmic simulation of a living system must necessarily remain incomplete (Rosen, 1991; Louie, 2009; Hofmey, 2021). It cannot fully capture the behavioral and evolutionary potential of an organism due to the collectively impredicative nature of the latter (ibid.). I shall not go into that rather technical discussion here (but see Roli et al., 2022). Instead, I will focus on another limitation of algorithms that is often overlooked: it lies in the very different ways in which syntactic codes relate to the physical world in machines and living systems. This is the problem of embodiment, which is, in essence, a generalized variant of the symbol grounding problem in cognitive science (Harnad, 1990). ## 4 Embodiment Digital computers are designed with the cleanest possible separation between hardware and software in mind (Rosen, 1991; Pattee & Raczaszek-Leonardi, 2012). The rationale for this is practical: you want to be able to perform as many different automated computation tasks as possible on the same hardware. This is why a large majority of modern computers share the same basic architecture, which is derived from a general abstract model of computation: a _universal Turing machine_(Turing, 1937; Davis, 2001). This mathematical machine is a maximally flexible calculating device. It is conjectured to provide a general model of effective computation, that is, a general model of the kind of computation a human can perform by rote (Church, 1936; Turing, 1937). In this sense, Turing's machine can perform _any_ kind of automated calculation, run _any_ kind of software, implement _any_ possible algorithm5. Footnote 5: There is one important caveat here: Turing machines have no clocks that measure real time, only state transitions. This is why many real-time computing systems are not strictly Turing machines. But this fact does not affect our argument. I define an algorithm as a sequence of precisely defined logical or mathematical operations that reliably performs some computation, typically, to solve a specific problem. Because I am not concerned with the practical aspects of problem-solving here, I interpret "algorithm" in a broader sense than is usually done in the theory of computation. I do not require the computation to halt, that is, the number of steps involved to be finite. Also: "sequence" does not imply strict sequentiality. Computational threads can run concurrently, as long as the order of their interactions is well-defined. It is even possible to add a non-deterministic element by allowing for a set of possible operations to be applied probabilistically at each step. 
No matter how we specify the term, an algorithm represents a purely automatic procedure that requires no agency (Rosen, 1991). A Turing machine consists of an infinite tape with symbols drawn from a finite alphabet, and a reading head that can be in a finite number of different states (Turing, 1937). The head reads the symbol at the present position on the tape, then performs an operation that is determined by this symbol and the current state the head is in (or, if the machine is non-deterministic, it draws an operation probabilistically from a given set). The machine then either halts, or writes an output symbol to the tape while remaining where it is or moving to the left or right to continue its operation. The symbols on the tape are the data the machine is operating on. The transition table that determines (a set of) operations for each symbol and each state defines the algorithm it implements. This table can be stored inside the head, but it can also be read from the tape. The latter results in what is called a _stored-program computer_: not only the data, but also the algorithm it executes are on the tape (Davis, 2001). Apart from the infinite sequential tape, which is replaced by finite random-access memory, this is the basic architecture of today's digital computers. Thus, computers are imperfect real-world approximations of a (stored-program) Turing machine. A Turing machine with an infinite tape can implement any arbitrary sequence of symbols, and hence the complete set of codable algorithms. These algorithms are its software, while the reading head is the hardware. It is in this sense that this idealized model of a real-world computer is a _universal_ machine. Turing's model gives us a precise definition of what is meant by "computation." A process is computable if it can run on a universal Turing machine (Turing, 1937; Copeland, 2020). The model also yields a technical definition of a "machine," which does not coincide with the way we use the term in everyday language. Instead, a machine in the sense of Turing is a formal or natural system (a mathematical or physical automaton) that corresponds to a universal Turing machine (Rosen, 1991). Some physicists have speculated that _all_ natural processes must be machines in this sense (Gandy, 1980; Deutsch, 1985, 1997; Lloyd, 2007; recently reviewed in Piccinini & Maley, 2021). In this view, the whole world literally _is_ an automaton: any process that is real must be representable in algorithmic terms, including all living processes. If we subscribe to this pancomputationalist stance, Al algorithms _must_ have the capacity to become true agents, to become alive, to become conscious, if only we manage to capture the right set of computational properties of a living system. Pancomputationalism is behind the enthusiastic claims about AGI cited in section 1. Unfortunately, pancomputationalism is fundamentally missing the point. Turing's model of computation is concerned with the utterly human cognitive activity of "efficiently performing a calculation," not with physics (Copeland, 2020). Mistaking the two is so common it has its own name: _the equivalence fallacy_ (ibid.). What Turing's model _does_ is enable engineers to build powerful computers, which automate the rote activity of calculating. Our digital computers are tools that approximate the functionality of universal machines, capable of performing any kind of effective calculation. 
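To make the mechanics just described concrete, here is a minimal sketch of such a machine as a computer program. The choice of Python, the finite tape, and the little bit-flipping transition table are my own illustrative assumptions, not part of the original argument; the point is only to show the tape, the reading head, and the transition table as data.

```python
# A minimal, purely illustrative Turing machine simulator (invented example).
# The transition table maps (state, symbol) -> (symbol to write, head move, next state).
def run_turing_machine(tape, transitions, state="scan", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))            # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += {"L": -1, "R": 1, "N": 0}[move]
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)), state

# Example machine: flip every bit until the first blank, then halt.
flip_bits = {
    ("scan", "0"): ("1", "R", "scan"),
    ("scan", "1"): ("0", "R", "scan"),
    ("scan", "_"): ("_", "N", "halt"),
}
print(run_turing_machine("0110", flip_bits))  # -> ('1001_', 'halt')
```

The point of the sketch is simply that the entire machine (data, transition table, and reading head) is exhaustively specified in advance; nothing in it sets its own goals.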
This is only possible _because_ of the strict software-hardware separation in the model and in real-world digital computers. However, this separation also means that algorithms exist strictly in the symbolic realm of software, isolated from the physical details of the hardware they run on. In this sense, running a simulation on a digital computer is an almost completely "physics-free" activity (Pattee & Raczaszek-Leonardi, 2012). Let us step back and reflect once more on the claim that all physical mechanisms must be Turing-computable. This claim goes far beyond the trivially true statement that we can _approximate_ physical processes by simulating them in a computer (Pattee & Raczaszek-Leonardi, 2012). It implies that every physical process must have some symbolic content (otherwise, it would not be strictly equivalent to an algorithm). But this is problematic because of the "aboutness" of symbols: they are about something else; they must have a referent outside their own strictly syntactic domain (Deacon, 2011). Furthermore, the meaning of a word in a natural language crucially depends on the communicative intent of the speaker and the interpretation by the listener (Bender & Koller, 2020). Similarly, the purpose of a calculation is that of the natural agent performing it, no matter whether it is performed in the agent's head or automated on a digital computer (section 2). There is no way around it: symbols with meaning are tightly tethered to the existence of living agents in the physical world. Physical mechanisms that are not machines (that are not designed by an agent for some purpose) have no intrinsic symbolic content. We can impute meaning onto them by simulating them, but they are _not_ machines in the sense of being the same as a Turing machine. Most physical mechanisms do not perform any computation, only our simulations of them do. Thus, machines (in the technical sense used here) are the small subset of mechanisms which have a purpose imposed on them by an external agent. Most of them are _our_ machines, machines that humans have designed, constructed, or programmed. Once we know its purpose, it is possible to describe a machine in algorithmic terms, that is, in terms of its function (Kauffman, 1971; Rosen, 1991). Functional (symbolic) and mechanistic (physical) descriptions complement each other and, in the case of a machine, map onto each other in a straightforward manner: it is generally possible to localize specific functions to particular physical parts of the machine's mechanism. All algorithms are mechanisms if actualized in the context of a physical automaton: their operation is precisely defined by orchestrated sequences of accurately localizable cause-and-effect relations between the parts that constitute the machine (Bechtel, 2011; Nicholson, 2012; Glennan, 2017). Unlike digital computers, most mechanical machines only actualize a small set of algorithms that serve a specific function, e.g., solving a particular problem. A lawn mower implements the rote procedure of mowing a lawn. A car serves our need for transportation. Like machines, organisms are natural systems with a purpose (see sections 2 & 3). This means they are amenable to functional descriptions, i.e., we can differentiate them into symbolic and physical aspects, which can (if one is so inclined) be analogized to the software and the hardware of a computer (e.g., Rosen, 1991; Pattee & Raczaszek-Leonardi, 2012; Barbieri, 2015). 
Unlike machines, though, the goals of an organism originate _within_ the organization of the living system itself, and are not imposed from outside (Nicholson, 2013, 2014). This is reflected in a fundamental difference between biological organization and computational architecture: organisms lack any separation of "hardware" and "software". Symbolic and physical aspects of a living system are intimately intermingled. They may be distinguishable conceptually, but cannot be disentangled from each other like they can in a computer (Pattee & Raczaszek-Leonardi, 2012). Take, for example, the central relation between genome and protein sequence in a living cell. This relation can be viewed as symbolic in the sense that it is mediated by the genetic code, where specific DNA codons "stand for" particular amino acids6. Yet, the meaning of a genomic sequence is only acquired through the physical processes that express a gene to produce a functional protein in a given context. This requires a set of appropriate enzymes to be present, whose primary structure is also coded in genomic sequence. However, expressed proteins only attain their functional three-dimensional conformation after folding occurs in the tightly regulated biochemical milieu of the cell (Pattee & Raczaszek-Leonardi, 2012; Barbieri, 2015; Hofmeyr, 2018, 2021). This last step is _not_ symbolically coded in the genome. Folding is purely physical, requiring a cellular milieu that is tightly regulated at the level of the whole cell (Hofmeyr, 2021). Footnote 6: I understand concepts like “code” and “symbol” to be explicitly context-dependent and historically contingent here, not suggesting any universal (but arbitrarily coded) “book of life” that “writes out” (and thus determines) the living form (see also Kay, 2000; Fox Keller, 2010, for criticisms of language metaphors in genetics and molecular biology). Here, the difference to machines should become immediately obvious: the coding process (the "software" - often mislabeled "genetic program") produces enzymes that (through folding in a particular milieu) constitute the physical "hardware" of the cell, which are in turn required to replicate and activate the "software." While genomic sequences are often analogized with the tape of a Turing machine, and transcription/translation enzymes with the reading head interpreting those symbolic sequences, the analogy breaks down when we consider that the reading head itself is generated by the coding process, and that its components require whole-cell regulatory processes outside the genome (plus the laws of thermodynamics) to take on their functional forms. In other words, in a cell, the "hardware" _is_ the "software" _is_ the "hardware," and so on. They are distinct but inseparable aspects of the _same_ overall process, similar to the difference between constraints and underlying flux described in section 3. A symbolic system that works in this integrated way is no longer formally equivalent to a Turing machine (Rosen, 1991; Louie, 2009; Hofmeyr, 2021). It literally is what it _does_. The continual mutual interconversion between the symbolic and the physical is a fundamental organizational principle of living systems, and a type of embodiment that results in a deep and direct embedding of the symbolic aspects of the organism in its physical context. Howard Pattee has called this interconnectedness between symbol and matter _semantic closure_7 (Pattee & Raczaszek-Leonardi, 2012). 
It is basically a version of von Neumann's (1966) self-producing automaton that is not isolated from, but compatible with, the laws of physics. It is what enables autopoiesis, self-manufacture, in the real world. Compare this to the almost complete separation of software and hardware in "physics-free" computational architectures. In this sense, organisms are the exact opposite of machines like digital computers and the algorithms they run. Footnote 7: The use of this term by Pattee is a bit confusing, since it is not the same concept as that first proposed by Alfred Tarski to denote a theory or language that can adequately express its own semantic concepts (Tarski, 1956; Priest, 1984). While a living system literally is what it does in the physical world, the relationship between an algorithm isolated in its purely syntactic domain (section2) and its physical surroundings is much more complicated and much less direct. This difference in the quality of embodiment does not depend on the details of the computation: it applies to _anything_ that runs on a digital stored-program computer, no matter what kind of algorithmic mimicry is performed, and no matter how complicated the algorithm or how comprehensive the dataset it was trained on. Not even embedding software in robotic hardware removes the disconnect, irrespective of what kind of peripherals are involved, as long as the algorithm is still bound by the principles of digital computer architecture. An algorithm has no possibility of transcending its digital ontology. By definition, it exists in a closed world that it cannot escape (see section5). If it has an effect on the physical world (beyond its consumption of electricity and taking up space), it either achieves it through influencing the behavior of the external agent that is using it, or (in robotics) through effectors that are part of the hardware it runs on. Both of these ways of interacting require additional components and interfaces based on rules that must be provided externally. They are not the direct product of the software, as enzymes (and other "hardware" components) are in living systems. To overcome this fundamental limitation would require an algorithm to generate its own hardware, according to its own intrinsic goals and specifications. This poses several rather serious challenges. First of all, intrinsic goals would require a computational architecture that enables true (natural) agency. We do not know at this point what such an architecture would look like (see section3). Second, direct embodiment necessitates a completely different computational paradigm, which achieves the greatest possible universality without any strict separation of hardware and software. Efforts towards neuromorphic computing can be seen as a tiny first step in this direction, but we are still very far from accurately emulating the development and function of nervous systems in animals, not even to mention self-manufacture (see, for example,, Schuman et al., 2017, 2022). And finally, autonomous hardware evolution will only be possible with much more flexible and configurable materials than we currently have available8 (cf. Moreno & Etxeberria, 2005; Nicholson, 2019). None of these daunting challenges are likely to be overcome any time soon, if they can be overcome at all. Therefore, Al algorithms will remain confined to the symbolic realm for the foreseeable future, relying on human beings to mediate their effect on the world. 
Taken together, this suggests that unaligned autonomous AGI, running free outside its preconfigured computational environment, is an extremely unlikely scenario at this point. Rather than worrying about the potential rise of superintelligent machines, we had better pay close attention to the pernicious effects narrow Al can have on our own, very human, behavior. Footnote 8: The best we currently have in terms of configurable hardware are the above-mentioned neuromorphic nets, and field-programmable gate arrays. The latter are electronic circuits that can be dynamically reconfigured by software (within predefined limits) but are nowhere near being _generated_ by the algorithm according to its own requirements. ## 5 Large Worlds In the previous two sections, we have seen that organisms are organized and embodied in a way that is very different from machines, including all current attempts at algorithmic mimicry. As a consequence, organisms and algorithms exist in vastly different worlds. Following Savage (1954), I shall call these worlds "large" and "small." The _small world_ of an algorithm is purely syntactic and symbolic (just like the algorithm itself), maximally isolated from the messy, ambiguous, and often misleading semantics of physical reality (section4). It is a formal construct encompassing the algorithm's own code, its formatted data (training as well as input), and the computational architecture it is embedded in (hardware design, operating system, and language environment). On the one hand, the algorithm has access to its world in its entirety -- nothing ever remains obscure or hidden. On the other hand, it is enclosed in its small world for good -- there is no way for it to go beyond its given digital ontology, to change its frame of reference. If the algorithm "perceives" the physical world, it is through an encoded interface with hardware sensors; if it is embedded in robotic hardware, it acts through a decoding interface with hardware effectors. We have seen in section4 that the interaction of an organism with its surroundings is much more immediate. Unlike the algorithm, whose interfaces must be provided by an external (human) agent, a living being generates the structures it needs to interact with the physical world from within its own organization. In an algorithm's closed small world, there is only one frame of reference that includes its entire "universe," a finite (though potentially astronomically vast) set of variables and rules defining their relations, all explicitly expressed in terms of a specific formalism. All accessible features of such a world are predetermined (however implicitly and indirectly) by this formalism, and get prioritized in regard to a target function (or a range of target functions). Again, both formalism and target function(s) are externally imposed by some (human) agent. In such a small world (no matter how vast), everything and nothing is relevant at the same time. Every problem the algorithm could possibly encounter is clearly defined: it can be assigned an initial program state, a target state (producing some desired output), and a discrete and finite search space containing possible computations that connect the two (Newell & Simon, 1972). Note that being well-defined does not mean all these problems are solvable, as some will turn out to be computationally complex, and hence intractable (as defined in section3). Often, we consider organisms to function in a similar way. But this is what philosophers call a _category error_. 
Computationalist approaches to cognition, for example, tend to see the mind as operating in a manner that is analogous to the software of a computer -- isolated from physical reality through the "hardware" making up our bodies, in particular, our channels of perception. In this view, sensory organs and nervous systems work like hardware sensors that encode input we receive from the physical world. The human mind, like an algorithm, appears to exist in a completely symbolic and syntactic (and thus small) world. However, the differences in organization and embodiment discussed in sections3 and 4 suggest an alternative scenario. Since organisms are self-manufacturing and have no software-hardware distinction, cognition itself must be treated as an autopoietic and embodied phenomenon (see, for example, Varela et al., 1991; Thompson, 2010). Even though there may be some encoding going on in sensory perception, and there may be abstract symbolic representations involved, at heart, our interaction with the physical world is much more intimate and immediate than that of an algorithm. This is because our hardware is made out of software and vice versa (section4). You may have sensors and effectors, but unlike the robot, these are made directly from your "software," that is, the coding processes involved in macromolecular synthesis. It all becomes clearer when we focus on simpler organisms without nervous systems, where the connection to the physical world is much less convoluted than in cognitive agents such as humans. Consider, again, a single free-living cell. Its autopoietic nature is characterized by two defining properties. First and foremost, it is a true agent, able to set and pursue its own goals (section 3). It needs no target function. Its primary goal is to remain alive. To achieve this goal, it must self-manufacture, which means it is constantly forced to invest physical work into maintaining a set of constraints that keeps it far from equilibrium. Hans Jonas (1966) called this the _thermodynamic predicament_. On the one hand, life is precarious that way. On the other, its autonomy gives the cell a certain degree of self-determination (Mossio & Bich, 2017). Jonas (1966) calls this flip side of the coin our _needful freedom_. In contrast, an algorithm does not (in fact, cannot) expend energy to persist. It can be stored, erased, and reloaded indefinitely without affecting its performance. It has no control - nor any kind of subjective awareness - of its own existence. Second, any living being is an open but bounded system, with its boundaries one of the vital constraints that it must continue generating from within itself (section 3). These boundaries mediate the interactions the cell can have with its physical surroundings, which make up its experience of the world. Note that "experience" in this sense does not imply any cognitive capabilities or mental representations, which a single cell does not have. What it does imply, however, is that a cell has some kind of "point of view," a peculiar frame of reference, which it creates itself from within its own organization and through which it experiences the world. In contrast, an algorithm always "sees" everything in its small world. It has a view "from nowhere." The reference frame of an organism is shaped by evolution: its interactions (and the structures that mediate them) must be reasonably well adapted to the environment to ensure the continued existence of the cell. 
Taken together, all this means that the experience of any living being is limited by necessity. This is exacerbated by the fact that organisms are tiny (in fact, almost infinitesimal in size) compared to their environment. For a living being, there is no "god's eye view" (Giere, 2006; Wimsatt, 2007; Massimi, 2022). Its experience of the world will always be biased and partial. In conclusion, living beings, unlike algorithms, live in a _large world_, a world far beyond their limited grasp (Stanford, 2010). This is not only a vast and ancient world (small worlds can be like that too), but a world in which most problems are _not_ well-defined in the sense introduced above. Thus, by definition, a large world is not formalized. And it is probably not formalizable, meaning that a limited observer would never be able to express every possible problem in terms of specific initial and end states, plus a discrete and finite search space. There is always a _semantic residual_ of phenomena that remain vague and mysterious. This is the main difference between the worlds of organisms and algorithms. Information, in a large world, is always scarce, often ambiguous, and sometimes outright misleading. This means that an organism, if it is to solve problems at all, must first define them, turn cryptic semantics into clear-cut syntax, which involves the dilemma of having to identify and tackle those aspects of the world that are relevant to survival. This is a dilemma, because it is generally not solvable algorithmically. There is no precisely circumscribed search space and, if there were such a thing, this would only lead us into an infinite definitional regress: in order to delimit a search space precisely we have to identify the relevant aspects of the problem, which defines an optimization problem that requires a well-defined search space, and so on. To avoid this regress means to overcome the _problem of relevance_ (Vervaeke et al., 2012), which is a generalized version of what researchers in the field of algorithmic mimicry know as the _frame problem_ (reviewed in Shanahan, 2016). Even the simplest organism can deal with it. A bacterium, for example, has evolved mechanisms to distinguish between chemicals that are nutrients versus those that are toxins in its environment. In contrast, it is a notoriously intractable problem for AI (McCarthy & Hayes, 1969; Dreyfus, 1972; Dennett, 1984; Dreyfus, 1992; Cantwell Smith, 2019; Roitblat, 2020). And here is why: apart from the fact that nothing is intrinsically relevant to an algorithm because it is not an autopoietic agent (section 3), there is only one frame in a small world, and thus no frame problem to be solved. In other words, the problem of relevance simply does not exist in the world of algorithmic mimicry. This is not a technological problem, but a philosophical one. There will be no technological solution to it in the current framework of algorithmic mimicry, no matter how complicated and powerful our methods for inference, our training data, and our hardware architecture will get. Expressed in colloquial terms, an algorithm cannot want or need anything, because it is not alive. In contrast, organisms are essentially driven by desire and impermanence. These are the foundations for meaning and relevance. Life is where the transition from matter to mattering occurs (Roli et al., 2022). The precariousness of life is what motivates the actions of a living being, which are carried out with the primary goal of staying alive. 
As long as we don't generate "artificial intelligences" that exist in a large world, there will be no AGI. As already stated in section 1, such a system is more likely to emerge from a biological laboratory than any current effort in algorithmic mimicry. ## 6 Conclusions In this paper, I highlight three basic differences between living and algorithmic systems. First, organisms are self-manufacturing (autopoietic) physical systems, which can identify, set, and pursue their own intrinsic goals, while algorithms are purely symbolic machines that are dependent on a suitable computational environment and target functions that must be provided by some external agent. Second, the prevalent computational architecture of today maximizes the isolation of software from hardware, meaning that interactions with the physical world require externally provided sensors and effectors, while no software-hardware distinction exists in organisms, which are embodied in a way that enables more immediate exchange with their physical surroundings. Finally, algorithms exist in a small world, in which all possible problems are well-defined, whereas organisms live in a large world, where most problems are ill-defined (and some are probably not properly definable). All of this goes to show just how different living systems are from algorithms. This means that algorithms and living beings have very different capabilities and limitations. Surprisingly, this fact is often overlooked in discussions that compare algorithmic mimicry with natural general intelligence. It is true that algorithms outperform humans in many tasks. Those tasks are typically well-defined but hard for us to solve, especially if they involve large amounts of calculation, require substantial working memory, and/or involve high-dimensional search spaces. Familiar examples are strategic games like chess or go, complicated scheduling or planning tasks, intricate mathematical proofs that require a large number of steps, or (in the case of LLMs) filling in the blanks of a text based on massive amounts of correlations between words and/or phrases in a training dataset. Because humans are cognitively rather limited in these areas, never having evolved the ability to solve complicated problems of this kind, we are easily impressed by the algorithms' performance. Sometimes, this causes us to lose perspective, to attribute capabilities to these machines that they cannot possibly have because of their architectural limitations. To paraphrase Robert Rosen (1991), algorithms represent automatic procedures that require no thought, no interpretation, no improvisation, and no intrinsic agency or creativity. And this, as explained in section 4, is exactly the purpose of the Church-Turing theory of computation: it is a model of what humans can achieve by rote calculation. Only much later, after the computer came into widespread use -- after it became the defining high-tech of our age -- did it become widely accepted to apply the theory of computation as a general model of physics or cognition. However, most physical processes have no intrinsic symbolic content, and are therefore not "computation" in Church and Turing's sense of the term. It takes a living agent to impute symbolic meaning onto reality, either by simulating physical systems (to understand or control them), or by building machine-artifacts that materialize certain algorithmic processes. 
Similarly, in the domain of cognition, processes of rote calculation are only a tiny fraction of what an animal or human brain does. Other neural and cognitive processes may be meaningfully emulated by computation, but they are not necessarily computational in nature. In fact, brains did not evolve to be calculating machines in the sense of Turing at all. Instead, they arose as a means of coordinating an animal's embodied sensorimotor control in the context of a large and complex world. In the words of Paul Cisek (2019, p. 2270): "[T]he evolutionary history of the nervous system is essentially a history of the continuous extension of such control further and further into the world." Brains enable an organism to tackle the problem of relevance in ever more involved situations. I have established in section 5 that this problem is not tractable algorithmically. To summarize: my argument shows that, even though computation can and does occur in physical and cognitive systems, it is a category error to consider such systems purely in terms of Turing-style computation. Computationalism is not wrong per se; it is a useful tool for simulating certain cognitive processes, but we must learn to recognize its limited domain of application. Ignoring these limitations turns a blind eye to the fact that algorithmic systems may outperform humans at complicated tasks of learning, inference-making, and problem-solving in well-defined situations, while they cannot compete with us in situations that demand choosing and setting intrinsic goals, using situated common sense, dealing with ill-defined problems and ambiguous or misleading information, and/or changing frame of reference. I, and others, have argued elsewhere that _all_ of these abilities constitute essential aspects of general intelligence (Roitblat, 2020; Roli et al., 2022). Therefore, thinking of algorithmic mimicry (including LLMs) in terms of "agency," "cognition," "thought," "understanding," or "intelligence" means using the wrong categories9, because none of these capabilities, as originally defined in biology, are achievable in today's algorithmic framework of AI research. This also implies that AGI, or conscious algorithmic "agents," are not a realistic imminent possibility. Footnote 9: Alison Gopnik has written an excellent and very accessible column on precisely this topic: [https://www.wsi.com/articles/what-ai-still-doesnt-know-how-to-do-11657891316](https://www.wsi.com/articles/what-ai-still-doesnt-know-how-to-do-11657891316). At the heart of this conceptual confusion lies a distinction that is rarely recognized. When we talk about the emergence of novel capabilities in algorithmic and living systems, we use two fundamentally different notions of "emergence," based on two completely distinct definitions of "complexity" (see section 3). In an algorithmic context, we are dealing with _computational complexity_, which captures the irreducibility of certain computational processes, and the effort it takes to calculate them in a realistic time frame (e.g., Aaronson, 2013; Dean, 2021). Roughly, we have to perform certain computations step-by-step to arrive at the desired target state, but this may not be possible to do faster than in real time. In other words, there is no short-cut, for instance, through general dynamical laws that allow predicting the outcome without performing the entire calculation. In such systems, emergent properties are defined as features that are computationally irreducible. 
They are unexpected (i.e., emergent) only in the sense that we cannot easily predict them in advance. Philosophers call this _weak emergence_(Bedau, 1997; Humphreys, 2015). In contrast, the kind of complexity that characterizes living systems is not computational but _organizational complexity_. Autopoietic systems contain hierarchical cycles, which lie at the heart of their ability to self-manufacture. These cycles also implement organizational closure and underlie the tight integration of hardware (physical) and software (symbolic aspects) in such systems (Rosen, 1991; Deacon, 2011; Pattee & Raczaszek-Leonardi, 2012; Montevil & Mossio, 2015). The kind of emergence we get in this case is the emergence of new rules that govern the dynamics of the system. This is how new levels of organization arise (Wimsatt, 2007; Deacon, 2011). "Agency," "cognition," "thought," "understanding," and "intelligence" inhabit these higher levels of organization. The computational architecture of contemporary algorithmic mimicry has no way to allow such new levels to emerge. Its complexity, as breathtaking as it may be, remains strictly confined to the computational kind. I repeat once more: the emergence of AGI is not a matter of size or scale, it is a matter of organization. To conclude, I will state again how important it is to use appropriate language when talking about algorithms such as LLMs. These systems do not "conceptualize," "conceive," or "create." They "count," "calculate," and "compute." The term "artificial intelligence" itself is a gross misnomer: the work in this field, as it currently stands, has nothing to do with natural intelligence. I suggest calling it _algorithmic mimicry_ instead, which makes its nature explicit and helps to avoid category errors such as the ones described above. Or, when mimicry is well done and useful to human agents (which it often is), we could call it IA: _intelligence augmentation_. Algorithms are tools, admittedly more complex than a hammer, but still tools that we can use, if we choose to, for boosting our own cognitive capabilities. An AI "agent" is never an agent on its own. My argument supports those who see the dangers of AI not in the possibility of AGI (which is not a real possibility right now), but in dangerous and deceptive applications of narrow algorithmic mimicry. LLMs are and remain in this narrow category, no matter how astonishing their feats of imitation. The _problem of alignment_ is not one of adjusting ourselves to the presence of superior entities. Quite the contrary, we must recognize these algorithms for what they are: powerful tools to be adjusted to _our_ human needs (see, for example Werthner et al., 2022). This is an urgent and tremendous practical problem that will have to be solved using societal and political means. It is also, to a large degree, a problem of design: algorithms must be clearly distinguishable from real agents because the two are not at all similar in kind. As Daniel Dennett puts it: "[c]ounterfeit money has been seen as vandalism against society ever since money has existed. Punishments included the death penalty and being drawn and quartered. Counterfeit people is at least as serious10." I agree. Regulation is urgent and indispensable. Responsibility, in the end, lies with us human agents. _We_ are the ones with the agency. Why voluntarily delegate it to a machine that has none? 
Footnote 10: Dennett was quoted in a feature that New York magazine ran on computational linguist Emily Bender called “You are not a parrot:” [https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbotts-emily-m-bender.html](https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbotts-emily-m-bender.html). ## Acknowledgments Paul Poledna introduced me to the term "IA (Intelligence Augmentation)." The incentive to write this paper arose from discussions within our project "_Pushing the Boundaries_," which is led by myself and Tarja Knuuttila, and is hosted by the Department of Philosophy of the University of Vienna. I thank all participants for their support and inspiration. Specifically, I would like to thank Erich Prem, Tarja Knuuttila, Andrea Loettgers, Paul Poledna, Kevin Purkhauser, and Mortz Kriegleder for comments on the manuscript. Last but not least, I would like to extend a very special "thank you" to the late Brian Goodwin, and my good friend Jannie Hofmeyr, who are the ones who got me thinking about these kinds of topics in the first place. ## Conflicts of Interest The author declares no conflicts of interest. ## Funding The author is currently funded by the John Templeton Foundation (grant ID: 62581), "_Pushing the Boundaries: Agency, Evolution, and the Dynamic Emergence of Expanding Possibilities._" The funder had no role in designing or shaping this argument, or the decision to submit the work for publication.
2306.10340
Ultrahigh-fidelity composite quantum phase gates
A number of composite pulse (CP) sequences for four basic quantum phase gates -- the Z, S, T and general phase gates -- are presented. The CP sequences contain up to 18 pulses and can compensate up to eight orders of experimental errors in the pulse amplitude and duration. The short CP sequences (up to 8 pulses) are calculated analytically and the longer ones numerically. The results demonstrate the remarkable flexibility of CPs accompanied by extreme accuracy and robustness to errors -- three features that cannot be simultaneously achieved by any other coherent control technique. These CP sequences, in particular the Z, S and T gates, can be very useful quantum control tools in quantum information applications, because they provide a variety of options to find the optimal balance between ultrahigh fidelity, error range and speed, which may be different in different physical systems.
Hayk L. Gevorgyan, Nikolay V. Vitanov
2023-06-17T12:51:17Z
http://arxiv.org/abs/2306.10340v2
# Ultrahigh-fidelity composite quantum phase gates ###### Abstract A number of CP sequences for four basic quantum phase gates -- the Z, S, T and general phase gates -- are presented. The CP sequences contain up to 18 pulses and can compensate up to eight orders of experimental errors in the pulse amplitude and duration. The short CP sequences (up to 8 pulses) are calculated analytically and the longer ones numerically. The results presented in this article demonstrate the remarkable flexibility of CPs accompanied by extreme accuracy and robustness to errors -- three features that cannot be simultaneously achieved by any other coherent control technique. These CP sequences, in particular the Z, S and T gates, can be very useful quantum control tools in quantum information applications, because they provide a variety of options to find the optimal balance between ultrahigh fidelity, error range and speed, which may be different in different physical systems. ## I Introduction Phase coherence is of paramount importance in modern quantum information technologies and it is one of the most significant differences between classical and quantum computing [1; 2; 3; 4]. Phase coherence is created and controlled by quantum phase gates, such as the Z, S and T gates, which are key elements in any quantum circuit. Because of the vast number of such gates involved even in moderate quantum circuits their fidelity is of crucial significance for the success of any quantum algorithm. Among the existing quantum control techniques capable of efficient manipulation of quantum systems, composite pulse (CP) sequences [5; 6] stand out as a very powerful tool which offers a unique combination of accuracy of operations, robustness to experimental errors, flexibility and versatility as it can be adopted and applied to essentially any qubit control task -- a set of features that can only be found in composite pulses. A composite pulse is actually a train of pulses with well defined relative phases which are used as control parameters in order to shape the excitation profile, and generally, the propagator, in a desired manner. The vast majority of composite pulses are designed to produce complete and partial rotations on the Bloch sphere [5; 6; 7; 8; 9; 10; 11; 12]. Among these, a clear distinction exists between the so-called variable and constant rotations. Variable rotations start on one of the poles of the Bloch sphere and move the Bloch vector at a particular latitude, i.e. on a particular parallel, without controlling the longitude. Constant rotations do not require a specific initial condition and produce the desired rotation starting at any point on the Bloch sphere. In quantum control language, the variable rotations are characterized by well-defined absolute values (i.e. populations) of the propagator elements but not well-defined phases. Constant rotations (or phase-distortionless rotations) are characterized by both well-defined populations and phases of the propagator, i.e. the quantum gate. Obviously, constant rotations are much more demanding to generate, but they are exactly what is required for reliable and scalable quantum computing circuits. 
Over the years, variable and constant composite rotations have been demonstrated on multiple occasions in NMR [5; 6; 7; 8; 9; 14; 15; 16; 17], trapped ions [18; 19; 20; 21; 22; 23; 24; 25], neutral atoms [26; 27; 28; 29; 30], quantum dots [31; 32; 33; 34; 35; 36], doped solids [38; 39; 40; 41], superconducting qubits [42; 43], etc., featuring remarkable accuracy and robustness. A variation of the composite idea, with the detuning rather than the phase of each constituent pulse in the composite sequence used as the control parameter, has also been proposed and experimentally demonstrated [44]. Very few proposals exist for composite phase gates [45]. In the present paper, we make a step toward filling this gap: we supplement the library of composite pulses with sequences which produce arbitrary quantum phase gates, with a focus on the most important ones for quantum information processing: the S, T and Z gates. An arbitrary phase shift by an angle \(\phi\), being a rotation around the \(z\) axis, can be implemented by two resonant \(\pi\) pulses up to an undetectable global phase. However, resonant driving is prone to errors in the experimental parameters, e.g. the pulse amplitude, duration, and detuning. Here the phase gates are implemented as sequences of \(\pi\) rotations with specific phases. Hence, the various quantum control techniques and proposals that make rotation gates error-resilient are applicable in this context. Application of composite pulses to produce well-defined phase shifts of the two states of a qubit is presented in [45]. Here, we use analytic approaches and brute-force numerics to derive composite sequences for Z, S, T and general phase gates, which achieve error compensation of up to 8th order. Compared to Ref. [45], we go a step further: by compensating all elements in a general phase-gate matrix, we also ensure that these composite pulses are phase-distortionless. This paper is organized as follows. In Sec. II we explain the derivation method. Design and performance of phase gates are presented in Sec. III. Finally, Sec. IV presents the conclusions. ## II SU(2) approach Our objective in this article is to construct the quantum phase-shift gate \(\mathbf{F}(\phi)=e^{-i(\phi/2)\hat{\sigma}_{z}}\), or in matrix form, \[\mathbf{F}(\phi)=\mathbf{R}_{z}(\phi)=\left[\begin{array}{cc}e^{-i\phi/2}&0\\ 0&e^{i\phi/2}\end{array}\right]:=\left[\begin{array}{cc}1&0\\ 0&e^{i\phi}\end{array}\right], \tag{1}\] which is equal to the pure phase gate up to the irrelevant global phase factor \(e^{-i\phi/2}\). The derivation of robust ultrahigh-fidelity quantum phase gates via composite pulses is similar to that of the rotation gates in our previous work [12]. Starting from the time-dependent Schrödinger equation for a two-level system, one arrives at the evolution operator for a single qubit, which is called a Rabi rotation gate in AMO (atomic, molecular and optical) implementations of experimental quantum computing [1; 2], or a theta pulse in nuclear magnetic resonance [9], \[\mathbf{U}(\mathcal{A},\phi)=\left[\begin{array}{cc}\cos(\mathcal{A}/2)&-ie^{i\phi}\sin(\mathcal{A}/2)\\ -ie^{-i\phi}\sin(\mathcal{A}/2)&\cos(\mathcal{A}/2)\end{array}\right], \tag{2}\] where \(\mathcal{A}\) is the temporal pulse area \(\mathcal{A}=\int_{t_{i}}^{t_{f}}\Omega(t)\mathrm{d}t\) and \(\phi\) stands for the phase of the coupling. 
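For readers who want to experiment numerically, the target gate of Eq. (1) and the single-pulse propagator of Eq. (2) are straightforward to code. The following sketch uses Python/NumPy (my own choice of language; the helper names are invented here) and also checks the two-resonant-π-pulse implementation of the phase gate mentioned above, with the relative phase chosen as π − φ/2 (cf. Eq. (8) below):

```python
import numpy as np

def U(A, phi):
    """Single resonant pulse of Eq. (2): temporal pulse area A, phase phi."""
    c, s = np.cos(A / 2), np.sin(A / 2)
    return np.array([[c, -1j * np.exp(1j * phi) * s],
                     [-1j * np.exp(-1j * phi) * s, c]])

def F(phi):
    """Target phase gate of Eq. (1): diag(exp(-i phi/2), exp(+i phi/2))."""
    return np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])

# Two resonant pi pulses with relative phase pi - phi/2 reproduce F(phi),
# as stated above; here for the S gate (phi = pi/2).
phi = np.pi / 2
two_pulse = U(np.pi, np.pi - phi / 2) @ U(np.pi, 0.0)   # later pulse multiplies from the left
print(np.allclose(two_pulse, F(phi)))                    # -> True
```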
A train of \(N\) theta pulses, each resonant according to (2), produces the propagator \[\mathcal{U}=\mathbf{U}_{\phi_{N}}(\mathcal{A}_{N})\cdots\mathbf{U}_{\phi_{3}}(\mathcal{A}_{3})\mathbf{U}_{\phi_{2}}(\mathcal{A}_{2})\mathbf{U}_{\phi_{1}}(\mathcal{A}_{1}). \tag{3}\] An equivalent representation of (3) is the so-called design, or structure, of the composite pulse sequence, \[(\mathcal{A}_{1})_{\phi_{1}}(\mathcal{A}_{2})_{\phi_{2}}(\mathcal{A}_{3})_{\phi_{3}}\cdots(\mathcal{A}_{N})_{\phi_{N}}, \tag{4}\] where each pulse has area \(\mathcal{A}_{k}\) and phase \(\phi_{k}\). In (3) the evolution matrices \(\mathbf{U}_{\phi_{k}}(\mathcal{A}_{k})\) act chronologically, from right to left, while in (4) the pulses \((\mathcal{A}_{k})_{\phi_{k}}\) are applied from left to right. Under the assumption of a single systematic pulse area error \(\epsilon\) (each pulse is replaced by the errant one, i.e. \(\mathcal{A}_{k}\rightarrow\mathcal{A}_{k}(1+\epsilon)\)), we can expand the errant composite propagator \[\mathcal{U}(\epsilon)=\left[\begin{array}{cc}\mathcal{U}_{11}(\epsilon)&\mathcal{U}_{12}(\epsilon)\\ -\mathcal{U}_{12}^{*}(\epsilon)&\mathcal{U}_{11}^{*}(\epsilon)\end{array}\right] \tag{5}\] in a Taylor series in \(\epsilon\). Because of the SU(2) symmetry of the errant overall propagator, it suffices to expand only two of its elements, say \(\mathcal{U}_{11}(\epsilon)\) and \(\mathcal{U}_{12}(\epsilon)\). We set their zero-error values to the target values, \[\mathcal{U}_{11}(0)=e^{-i\phi/2},\quad\mathcal{U}_{12}(0)=0, \tag{6}\] and we set to zero as many of their derivatives with respect to \(\epsilon\), in increasing order, as possible, \[\mathcal{U}_{11}^{(m)}(0)=0,\quad\mathcal{U}_{12}^{(m)}(0)=0,\quad(m=1,2,\ldots,n), \tag{7}\] where \(\mathcal{U}_{jl}^{(m)}=\partial_{\epsilon}^{m}\mathcal{U}_{jl}\) denotes the \(m\)th derivative of \(\mathcal{U}_{jl}\) with respect to \(\epsilon\). The largest derivative order \(n\) satisfying Eqs. (7) gives the order of the error compensation \(O(\epsilon^{n})\). Equations (6) and (7) generate a system of \(2(n+1)\) algebraic equations for the nominal pulse areas \(\mathcal{A}_{k}\) and the composite phases \(\phi_{k}\) (\(k=1,2,\ldots,N\)). The equations are complex-valued, so generally we have to solve \(4(n+1)\) equations with the \(2N\) free parameters (nominal pulse areas and phases). Equation (6) alone can be satisfied by as few as two \(\pi\) pulses, \[\mathbf{F}(\phi)=\mathbf{U}(\pi,\nu+\pi-\phi/2)\mathbf{U}(\pi,\nu)=\mathbf{U}(\pi,\nu)\mathbf{U}(\pi,\nu+\pi+\phi/2). \tag{8}\] Taking into account this fact, and because of the normalization condition \(|\mathcal{U}_{11}|^{2}+|\mathcal{U}_{12}|^{2}=1\), an error compensation of order \(n\) requires a CP sequence of \(N=2(n+1)\) \(\pi\) pulses. As stated above, the derivation of the CP sequences requires the solution of Eqs. (6) and (7). For a small number of pulses (up to eight \(\pi\) pulses), the set of equations can be solved analytically. For longer sequences, Eqs. (6) and the first three pairs of equations (\(n=3\)) in Eqs. (7) can be solved analytically, but the higher orders in Eqs. (7) are solved numerically. We do this by using standard routines in Mathematica\({}^{\copyright}\). ### Quantum gate fidelity If Eqs. (6) and (7) are satisfied, then the overall propagator can be written as \[\mathcal{U}(\epsilon)=\mathbf{F}(\phi)+O(\epsilon^{n+1}), \tag{9}\] with \(\mathbf{F}(\phi)=\mathcal{U}(0)\). 
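A small numerical counterpart of this derivation procedure is sketched below. It reuses the U helper from the previous sketch; the finite-difference estimate of the derivatives is my own illustrative stand-in for the analytic Taylor expansion used in the paper.

```python
import numpy as np

def composite(areas, phases, eps=0.0):
    """Errant composite propagator of Eq. (3), with every area A_k -> A_k (1 + eps)."""
    prop = np.eye(2, dtype=complex)
    for A, p in zip(areas, phases):          # pulses applied in chronological order
        prop = U(A * (1 + eps), p) @ prop    # later pulses multiply from the left
    return prop

def d_deps(areas, phases, h=1e-6):
    """Central finite-difference estimate of dU/d(eps) at eps = 0, cf. Eqs. (6)-(7)."""
    return (composite(areas, phases, +h) - composite(areas, phases, -h)) / (2 * h)

# For the bare two-pulse gate of Eq. (8) (with nu = 0) the first derivative of the
# diagonal elements vanishes, but that of the off-diagonal elements does not, so
# this gate has no first-order error compensation.
phi = np.pi / 2
print(np.round(d_deps([np.pi, np.pi], [0.0, np.pi - phi / 2]), 6))
```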
Then the _Frobenius distance fidelity_, \[\mathcal{F}=1-\|\mathcal{U}(\epsilon)-\mathbf{F}(\phi)\|=1-\sqrt{\tfrac{1}{4} \sum_{j,k=1}^{2}|\mathcal{U}_{jk}-F_{jk}|^{2}}, \tag{10}\] is of the same error order \(O(\epsilon^{n})\) as the propagator, \(\mathcal{F}=1-O(\epsilon^{n+1})\). It is common in NMR QC community to use the _trace fidelity_, \[\mathcal{F}_{\mathrm{T}}=\tfrac{1}{2}\mathrm{Tr}\,[\mathcal{U}(\epsilon) \mathbf{F}(\phi)^{\dagger}]. \tag{11}\] As in our previous work [12], we will use Frobenius distance fidelity, because it explicitly includes information about both major and minor diagonal elements, hence shows performance of phase-distortionless phase gates. ## III Broadband composite phase gates ### Design for composite phase gates Based on numerical evidence, we consider symmetric type (in pulse areas) of CP sequences, designed by \(\pi\) pulses. Each symmetric sequence consists of a sequence of \(2(n+1)\) nominal \(\pi\) pulses, with asymmetrically ordered phases, \[R_{n+1}(\nu)\cdot R_{n+1}(\nu+\pi-\frac{1}{2}\phi), \tag{12a}\] \[R_{n+1}(\nu)\stackrel{{\Delta}}{{=}}\pi_{\nu}\pi_{\nu+\phi_{1}} \pi_{\nu+\phi_{2}}\cdots\pi_{\nu+\phi_{n}},\] (12b) equivalent to \[R_{n+1}(\nu+\pi+\frac{1}{2}\phi)\cdot R_{n+1}(\nu), \tag{13}\] where the structure of composite rotation (12b) is denoted as \(R_{n+1}(\nu)\) and the symbol "-" stands for the order of operations from left to right. These sequences generalize the initial two-pulse sequence (see (8)) and have similar design. Due to this specific structure of composite phases, the equations (6) are satisfied, all odd-order derivatives \(\mathcal{U}_{11}^{(2k+1)}(0)\) of the major-diagonal elements in Eq. (7) vanish, and so do all even-order derivatives \(\mathcal{U}_{12}^{(2k)}(0)\) of the minor-diagonal elements. Despite this fact, we call the compensation order \(n\) the maximum number \(m\) for which all major-diagonal and minor-diagonal elements are optimized simultaneously from \(1\) to \(n\). This can be obtained with the precise choice of the available composite phases in (12). From an infinite number of solutions, we choose solutions of the type (12) and with a free parameter \(\nu=0\), since the choice of relative phases \(\phi_{1},\phi_{2},\ldots,\phi_{n}\) is of importance. Henceforth, we target and use a design \[R_{n+1}\cdot R_{n+1}(\pi-\frac{1}{2}\phi), \tag{14a}\] \[R_{n+1}\stackrel{{\Delta}}{{=}}\pi_{0}\pi_{\phi_{1 }}\pi_{\phi_{2}}\cdots\pi_{\phi_{n}},\] \[R_{n+1}(\pi-\frac{1}{2}\phi)\stackrel{{\Delta}}{{=} }\pi_{\pi-\frac{1}{2}\phi}\pi_{\phi_{1}+\pi-\frac{1}{2}\phi}\cdots\pi_{\phi_{ n}+\pi-\frac{1}{2}\phi}, \tag{14b}\] and other possible solutions can be obtained by choosing an arbitrary parameter \(\nu\) in (12) or/and by passing to the type (13). ### General phase-shift gate As it is well known, such a gate can be produced by two resonant pulses of total temporal area \(2\pi\) (see (8) with \(\nu=0\)). The propagator of two pulses reads \[\left[\begin{array}{c}\mathcal{U}_{11}(\epsilon)\\ \mathcal{U}_{12}(\epsilon)\end{array}\right]=\left[\begin{array}{c}e^{-i\phi /2}\cos^{2}(\pi\epsilon/2)+\sin^{2}(\pi\epsilon/2)\\ \frac{1}{2}i(1-e^{-i\phi/2})\sin(\pi\epsilon),\end{array}\right] \tag{15}\] where \(\epsilon\) is the pulse area error. The Frobenius distance fidelity (10) reads for phase-shift gate \(\mathbf{F}(\phi)=\mathcal{U}(0)\) \[\mathcal{F}=1-\sqrt{2}\left|\sin\frac{\pi\epsilon}{2}\right|\left|\sin\frac{ \phi}{4}\right|. 
\tag{16}\] For comparison, the trace fidelity is \[\mathcal{F}_{T}=1-2\sin^{2}\frac{\pi\epsilon}{2}\sin^{2}\frac{\phi}{4}. \tag{17}\] Obviously the error stemming from the Frobenius distance fidelity (16), which is of order \(O(\epsilon)\), is far greater than the value of the error stemming from the trace fidelity (17), which is of order \(O(\epsilon^{2})\) (as for the rotation gate). Longer pulses have a higher order of compensation, which is noticeable in fidelity frames. Below we consider these sequences, in the increasing order of error compensation. #### ii.2.1 First-order error compensation The careful analysis of Eqs. (6) and (7) shows that the shortest possible CP which can compensate first-order errors (both in major and minor diagonal elements) consists of four pulses, each with a pulse area of \(\pi\), and asymmetric phases, with the structure similar to the two pulses, \[\pi_{0}\pi_{\phi_{1}}\pi_{\pi-\frac{1}{2}\phi}\pi_{\phi_{1}+\pi-\frac{1}{2} \phi}. \tag{18}\] Figure 1: Frobenius distance fidelity \(F\) (top) and infidelity (bottom) of composite Z gates. The infidelity is in logarithmic scale in order to better visualize the high-fidelity (low-infidelity) range. The numbers \(N\) on the curves refer to CP sequences \(ZN\) listed in the Table 1. Solving Eq. (6) along with Eq. (7) for the first derivatives gives two solutions for the phases, \[\pi_{0}\pi_{-\frac{1}{4}\phi}\pi_{-\frac{1}{2}\phi}\pi_{\frac{3}{4} \pi-\frac{1}{2}\phi}, \tag{19a}\] \[\pi_{0}\pi_{\frac{3}{4}\phi}\pi_{-\frac{1}{2}\phi}\pi_{\frac{7}{4} \pi-\frac{1}{2}\phi}. \tag{19b}\] These two sequences generate the same propagator and hence the same fidelity. The Frobenius distance and trace distance fidelities read \[\mathcal{F}=1-\sqrt{2}\left|\sin^{2}\frac{\pi\epsilon}{2}\right| \left|\sin\frac{\phi}{4}\right|, \tag{20a}\] \[\mathcal{F}_{T}=1-2\sin^{4}\frac{\pi\epsilon}{2}\sin^{2}\frac{ \phi}{4}. \tag{20b}\] Obviously, the Frobenius distance infidelity for four sequences is of order \(O(\epsilon^{2})\) and it is much larger than the trace distance infidelity, which is of order \(O(\epsilon^{4})\). The trace distance fidelity is much higher than the Frobenius distance fidelity, similar to rotation gates. With respect to the quantum computation benchmark fidelity value of \(1-10^{-4}\), the Frobenius distance fidelity (20a) for the four-pulse composite Z4 gates of Eqs. (19) remains above this value in the pulse area interval \((0.9936\pi,1.0064\pi)\), i.e. for relative errors up to \(|\epsilon|<0.0064\) to be more precise. For comparison, the trace distance fidelity (20b) remains above this value in the pulse area interval \((0.936\pi,1.064\pi)\), i.e. for relative errors up to \(|\epsilon|<0.064\), a factor of 10 larger. Again we notice that the Frobenius distance fidelity is a much more stringent measure of quality. For four-pulse composite S4 gate, the Frobenius interval is \((0.9913\pi,1.0087\pi)\) with the relative errors up to \(|\epsilon|<0.0087\), and the trace interval is \((0.913\pi,1.087\pi)\) with the relative errors up to \(|\epsilon|<0.087\), a factor of 10 larger. For four-pulse composite T4 gate, the Frobenius interval is \((0.9879\pi,1.0121\pi)\) with the relative errors up to \(|\epsilon|<0.0121\), and the trace interval is \((0.878\pi,1.122\pi)\) with the relative errors up to \(|\epsilon|<0.122\), a factor of 10 larger. Both the Frobenius and the trace distance fidelities depend on the phase flip angle \(\phi\). 
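Both fidelity measures are easy to evaluate numerically. The sketch below (building on the U, F and composite helpers introduced earlier; the sampled error values are arbitrary choices of mine) compares the computed Frobenius infidelity of the bare two-pulse S gate with the analytic expression (16); the same scan, applied to the four-pulse sequences of Eqs. (19), can be used to probe the scaling of Eq. (20a).

```python
import numpy as np

def frobenius_fidelity(Ue, Ft):
    """Frobenius distance fidelity, Eq. (10)."""
    return 1 - np.sqrt(0.25 * np.sum(np.abs(Ue - Ft) ** 2))

def trace_fidelity(Ue, Ft):
    """Trace fidelity, Eq. (11); the real part is taken to discard numerical noise."""
    return float(np.real(0.5 * np.trace(Ue @ Ft.conj().T)))

phi = np.pi / 2                                          # S gate
areas, phases = [np.pi, np.pi], [0.0, np.pi - phi / 2]   # bare two-pulse gate, Eq. (8)
for eps in (1e-3, 1e-2, 1e-1):
    Ue = composite(areas, phases, eps)
    infid = 1 - frobenius_fidelity(Ue, F(phi))
    model = np.sqrt(2) * abs(np.sin(np.pi * eps / 2) * np.sin(phi / 4))   # Eq. (16)
    print(f"eps = {eps:0.3f}:  1 - F = {infid:.3e}  analytic = {model:.3e}"
          f"  1 - F_T = {1 - trace_fidelity(Ue, F(phi)):.3e}")
```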
The pulse area intervals for the four-pulse composite S4 gates are larger than for the four-pulse composite Z4 gates and smaller than for the four-pulse composite T4 gates. This monotonic pattern persists for longer sequences as well. #### ii.2.2 Second-order error compensation For sequences of six-\(\pi\) pulses, it becomes possible to annul also the second-order derivatives in Eq. (7). The design of this asymmetric sequence makes it possible to derive analytic solutions \[\pi_{\phi_{0}}\pi_{\phi_{1}}\pi_{\phi_{2}}\pi_{\phi_{0}+\pi-\frac{1}{2}\phi} \pi_{\phi_{1}+\pi-\frac{1}{2}\phi}\pi_{\phi_{2}+\pi-\frac{1}{2}\phi}. \tag{21}\] A careful analysis of this type of sequence shows that they can be written in a compact form as \[\pi_{\chi}(2\pi)_{0}\pi_{\chi+\pi-\frac{1}{2}\phi}(2\pi)_{\pi- \frac{1}{2}\phi}, \tag{22a}\] \[\pi_{\pi+\frac{1}{2}\phi-\chi}(2\pi)_{0}\pi_{-\chi}(2\pi)_{\pi- \frac{1}{2}\phi},\] (22b) \[(2\pi)_{0}\pi_{-\frac{1}{2}\phi+\chi}(2\pi)_{\pi-\frac{1}{2}\phi }\pi_{-\phi+\chi},\] (22c) \[(2\pi)_{0}\pi_{-\chi}(2\pi)_{\pi-\frac{1}{2}\phi}\pi_{-\chi+\pi- \frac{1}{2}\phi}, \tag{22d}\] where \(\chi=\frac{1}{4}\phi+\arcsin\left(\frac{1}{2}\sin\left(\frac{1}{4}\phi\right)\right)\). The Frobenius distance and trace distance fidelities for these second-order sequences read \[\mathcal{F}=1-\sqrt{2}\left|\sin^{3}\frac{\pi\epsilon}{2}\right| \left|\sin\frac{\phi}{4}\right|, \tag{23a}\] \[\mathcal{F}_{T}=1-2\sin^{6}\frac{\pi\epsilon}{2}\sin^{2}\frac{\phi }{4}. \tag{23b}\] #### ii.2.3 Third-order error compensation Nullification of the third-order derivatives in Eq. (7) as well requires eight-\(\pi\) pulses. Figure 2: Frobenius distance fidelity \(F\) (top) and infidelity (bottom) of composite S gates. The infidelity is in logarithmic scale in order to better visualize the high-fidelity (low-infidelity) range. The numbers \(N\) on the curves refer to CP sequences S\(N\) listed in the Table 2. Here, in contrast to the rotation gates, the composite phase gates with eight pulses \[\pi_{\phi_{0}}\pi_{\phi_{1}}\pi_{\phi_{2}}\pi_{\phi_{3}}\pi_{\phi_{0}+\pi-\frac{1 }{2}\phi}\pi_{\phi_{1}+\pi-\frac{1}{2}\phi}\pi_{\phi_{2}+\pi-\frac{1}{2}\phi} \pi_{\phi_{3}+\pi-\frac{1}{2}\phi}, \tag{24}\] can be simplified, giving analytic solutions. A careful analysis of this type of sequence shows that they can be written in a compact form as \[\pi_{\chi}(2\pi)_{0}\pi_{\chi+\pi-\frac{1}{2}\phi}\pi_{\chi+\pi- \frac{1}{2}\phi}(2\pi)_{\pi-\frac{1}{2}\phi}\pi_{\chi-\frac{3}{2}\phi}, \tag{25a}\] \[\pi_{-\chi+\pi+\frac{1}{2}\phi}(2\pi)_{0}\pi_{-\chi}\pi_{-\chi- \frac{1}{2}\phi}(2\pi)_{\pi-\frac{1}{2}\phi}\pi_{-\chi+\pi-\frac{1}{2}\phi},\] (25b) \[(2\pi)_{0}\pi_{\chi+\pi-\frac{1}{2}\phi}\pi_{\chi+\pi-\frac{1}{2} \phi}(2\pi)_{\pi-\frac{1}{2}\phi}\pi_{\chi-\frac{1}{2}\phi}\pi_{\chi-\phi},\] (25c) \[(2\pi)_{0}\pi_{-\chi}\pi_{-\chi-\frac{1}{2}\phi}(2\pi)_{\pi-\frac {1}{2}\phi}\pi_{-\chi+\pi-\frac{1}{2}\phi}\pi_{-\chi+\pi-\frac{3}{2}\phi},\] (25d) \[\pi_{\chi+\frac{1}{2}\phi}\pi_{\chi}(2\pi)_{0}\pi_{\chi+\pi-\frac {1}{2}\phi}\pi_{\chi+\pi-\frac{1}{2}\phi}(2\pi)_{\pi-\frac{1}{2}\phi},\] (25e) \[\pi_{-\chi+\pi+\frac{1}{2}\phi}\pi_{-\chi+\pi+\frac{1}{2}\phi}(2 \pi)_{0}\pi_{-\chi}\pi_{-\chi-\frac{1}{2}\phi}(2\pi)_{\pi-\frac{1}{2}\phi}, \tag{25f}\] where \(\chi=\frac{1}{8}\phi+\arcsin\left(\frac{1}{2}\sin\left(\frac{1}{8}\phi\right)\right)\). 
The Frobenius distance and trace distance fidelities for these third-order sequences read \[\mathcal{F}=1-\sqrt{2}\left|\sin^{4}\frac{\pi\epsilon}{2}\right| \left|\sin\frac{\phi}{4}\right|, \tag{26a}\] \[\mathcal{F}_{T}=1-2\sin^{8}\frac{\pi\epsilon}{2}\sin^{2}\frac{ \phi}{4}. \tag{26b}\] #### iii.2.4 Higher-order error compensation For CP sequences of more than eight-\(\pi\) pulses, the equations for the composite phases quickly become very bulky and cannot be guessed analytically. The general form of these sequences is (14). Nevertheless, they can be written in a concise form. They reiterate the pattern of the sequences of four, six and eight pulses above: the CP sequences of \(2(n+1)\) pulses have a total pulse area of \((2n+2)\pi\), with all pulses in the sequence being nominal \(\pi\) pulses. Sequences of \(2(n+1)\) pulses produce error compensation of the order \(O(\epsilon^{n})\) and fidelity profiles \[\mathcal{F}\cong 1-\sqrt{2}\left|\sin^{n+1}\frac{\pi\epsilon}{2} \right|\left|\sin\frac{\phi}{4}\right|, \tag{27a}\] \[\mathcal{F}_{T}\cong 1-2\sin^{2n+2}\frac{\pi\epsilon}{2}\sin^{2} \frac{\phi}{4}, \tag{27b}\] where the actual fidelities are sensitive to the choice of the composite phases and are only approximately equal to these expressions. The precision of the first type of composite phase gate presented below deviates from the theoretically optimal accuracy (27), and the deviation becomes significant for the twelve-\(\pi\) and fourteen-\(\pi\) sequences, but this type has the most concise form and exhibits a common structure for arbitrary phase-shift angles. The design of these sequences is given next. We have derived numerically the composite phases for higher-order phase gates. The fourth-order compensating ten-\(\pi\) sequences can be written in a compact form \[(3\pi)_{0}\pi_{\phi_{3}}\pi_{\phi_{4}}(3\pi)_{\pi-\frac{1}{2}\phi}\pi_{\phi_{3 }+\pi-\frac{1}{2}\phi}\pi_{\phi_{4}+\pi-\frac{1}{2}\phi}. \tag{28}\] For brevity, we omit other configurations with different arrangements of the \(3\pi\) pulse and the \(\pi\) pulses, because all these designs have equal total pulse area, i.e., operation runtime. The reader can obtain such solutions by interchanging pulses in the sequence, similarly to (19), (22) and (25). The fifth-order compensating twelve-\(\pi\) sequences can be written in a compact form \[(3\pi)_{0}\pi_{\phi_{3}}\pi_{\phi_{4}}\pi_{\phi_{4}}\pi_{\phi_{4} -\phi_{3}-\frac{1}{4}\phi}\cdot \tag{29}\] \[\cdot(3\pi)_{\pi-\frac{1}{2}\phi}\pi_{\phi_{3}+\pi-\frac{1}{2} \phi}\pi_{\phi_{4}+\pi-\frac{1}{2}\phi}\pi_{\phi_{4}-\phi_{3}+\pi-\frac{3}{2} \phi},\] and the sixth-order compensating fourteen-\(\pi\) sequences can be written in a compact form \[(4\pi)_{0}\pi_{\phi_{4}}\pi_{\phi_{5}}\pi_{\phi_{6}}(4\pi)_{\pi- \frac{1}{2}\phi}\pi_{\phi_{3}+\pi-\frac{1}{2}\phi}\pi_{\phi_{4}+\pi-\frac{1}{2 }\phi}\pi_{\phi_{5}+\pi-\frac{3}{2}\phi}. \tag{30}\] The composite phases of this type of composite phase gate for arbitrary phase flip angles are presented in Table 4. The structure of these sequences corresponds to (12) with \(\nu=0\) and zero first phases for long sequences, i.e., in accordance with (19a), (22d), (25d), (28), (29) and (30). Note that the \(3\pi\) and \(4\pi\) pulses in the CP sequence are poor candidates for designing longer phase gates with a higher order of compensation. We have derived numerically another type of sequence consisting of only \(\pi\) and \(2\pi\) pulses. Their precision exactly matches the theoretically optimal accuracy (27). 
The fourth-order compensating ten-\(\pi\) sequences can be written in a form \[\pi_{0}(2\pi)_{\phi_{1}}\pi_{\phi_{3}}\pi_{\phi_{4}}\pi_{\pi-\frac{1}{2}\phi}( 2\pi)_{\phi_{1}+\pi-\frac{1}{2}\phi}\pi_{\phi_{3}+\pi-\frac{1}{2}\phi}\pi_{ \phi_{4}+\pi-\frac{1}{2}\phi}. \tag{31}\] For brevity, we release other configurations with arrangements between \(2\pi\) pulse and \(\pi\) pulses, because all these designs have equal total pulse area, i.e. operation run-time. The reader can obtain such solutions by interchanging pulses in the sequence similar to (19), (22) and (25). The fifth-order compensating twelve-\(\pi\) sequences can be written in a form \[(2\pi)_{0}(2\pi)_{\phi_{2}}\pi_{\phi_{4}}\pi_{\phi_{4}}\pi_{\phi_{ 4}-\frac{1}{2}\phi}\cdot \tag{32}\] \[\cdot(2\pi)_{\pi-\frac{1}{2}\phi}(2\pi)_{\phi_{2}+\pi-\frac{1}{2} \phi}\pi_{\phi_{4}+\pi-\frac{1}{2}\phi}\pi_{\phi_{4}+\pi-\frac{3}{2}\phi},\] and the sixth-order compensating fourteen-\(\pi\) sequences can be written in a form \[(2\pi)_{\phi_{1}}(2\pi)_{\phi_{3}}\pi_{\phi_{5}}\pi_{\phi_{6}}\cdot \tag{33}\] \[\cdot\pi_{\pi-\frac{1}{2}\phi}(2\pi)_{\phi_{1}+\pi-\frac{1}{2}\phi}( 2\pi)_{\phi_{3}+\pi-\frac{1}{2}\phi}\pi_{\phi_{5}+\pi-\frac{1}{2}\phi}\pi_{ \phi_{6}+\pi-\frac{1}{2}\phi}.\] Similarly, the seventh-order compensating sixteen-\(\pi\) sequences can be written in a form \[(2\pi)_{0}(2\pi)_{\phi_{2}}(2\pi)_{\phi_{4}}\pi_{\phi_{6}}\pi_{\phi_{6}-\frac {1}{4}\phi}\cdot \tag{34}\] \[\cdot(2\pi)_{\pi-\frac{1}{2}\phi}(2\pi)_{\phi_{2}+\pi-\frac{1}{2} \phi}(2\pi)_{\phi_{4}+\pi-\frac{1}{2}\phi}\pi_{\phi_{6}+\pi-\frac{1}{2}\phi} \pi_{\phi_{6}+\pi-\frac{3}{2}\phi},\] and the eighth-order compensating eighteen-\(\pi\) sequences can be written in a form \[\begin{split}& R_{9}\cdot R_{9}(\pi-\frac{1}{2}\phi),\\ & R_{9}=\pi_{0}(2\pi)_{\phi_{1}}(2\pi)_{\phi_{3}}(2\pi)_{\phi_{5}} \pi_{\phi_{7}}\pi_{\phi_{8}}.\end{split} \tag{35}\] We have derived numerically the composite phases of this type of sequences of an even number of pulses. They are presented in Tables 1, 2 and 3 for Z, S and T gates correspondingly. The fidelities of these composite Z, S and T gates are plotted in Figures 1, 2, 3 respectively. It can be seen from the tables and figures that two pulses have very little room for error, since high-fidelity Z, S and T gates allow pulse area errors of less than 0.01%, about 0.01%, about 0.02%, respectively. The four-pulse composite phase gate offers some leeway, with the admissible error of 0.6%, 0.9% and 1.2% for Z, S and T cases. The significant pulse area error correction effect is achieved with the CP sequences of 6 to 10 pulses, for which the high-fidelity range of admissible errors increases from 3% to 10.1% for Z, from 3.6% to 11.5% for S, and from 4.5% to 13.1% for T. Quite notably, errors of up to 23.4%, 25.1% and 27.1% can be eliminated for Z, S and T, and ultrahigh fidelity maintained, with the 18-pulse composite phase gate. Note that these error ranges are calculated by using the rather tough Frobenius distance fidelity (10). Again, had we used the much more relaxed trace distance fidelity (11), these ranges would be much broader. Table 4 presents composite pulse parameters of general phase gates for different phase angles. Hereby, very long sequences are barely practical because the gate is much slower. The quantum computer is not required to operate with a pulse area error of 23% or more. Thereby, the CP sequences of 6, 8 and 10 pulses seems to offer the best fidelity-to-speed ratio. 
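The admissible-error ranges quoted above follow directly from the closed-form profile (27a), which the \(\pi\)/\(2\pi\)-type sequences match exactly. The sketch below is an illustration of that formula alone (the numerically optimized phases of Tables 1-3 are not required for this estimate, and sequences whose fidelity only approximately follows (27a) may deviate slightly):

```python
import numpy as np

def error_window(n, phi, benchmark=1e-4):
    """Largest |eps| with F >= 1 - benchmark for a 2(n+1)-pulse sequence, from Eq. (27a):
    F = 1 - sqrt(2) |sin^{n+1}(pi*eps/2)| |sin(phi/4)|."""
    s = (benchmark / (np.sqrt(2) * abs(np.sin(phi / 4)))) ** (1.0 / (n + 1))
    return (2 / np.pi) * np.arcsin(s)

gates = {"Z": np.pi, "S": np.pi / 2, "T": np.pi / 4}
for name, phi in gates.items():
    # n = 0..8 corresponds to 2, 4, ..., 18 pulses
    print(name, [f"{100 * error_window(n, phi):.2f}%" for n in range(9)])
# Reproduces, e.g., ~0.64% (Z4), ~3.0% (Z6), ~10.1% (Z10), ~23.4% (Z18),
# ~25.1% (S18), and ~27% (T18), in line with the ranges quoted in the text.
```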
## IV Comments and conclusions In this paper we presented a number of CP sequences for four basic quantum gates -- the Z gate, the S gate, the T gate and general phase gates. The CP sequences contain up to 18 pulses and can compensate up to eight orders of experimental errors in the pulse amplitude and duration. The short CP sequences (up to 8 pulses) are calculated analytically and the longer ones numerically. A single class of asymmetric CP sequences, consisting of nominal \(\pi\) pulses with asymmetric phases (cf. (12) and (13)), suffices to provide the composite phase gates. Although the longer composite phase gates are derived numerically, their fidelity profiles depend analytically on the pulse area error, following (27a) and (27b), and depend trigonometrically on the phase-shift angle. A similar class of asymmetric CP sequences for phase gates is derived in [45], where they are built from the \(\theta\) rotation gates and have twice their total pulse area (similar to a nesting approach). For this reason, the four-, eight-, twelve-, and sixteen-pulse CPs are missing there, while the six-, ten-, fourteen-, and eighteen-pulse CPs are given by a simple analytic formula (and are thus more convenient to apply) and have performance equal to the composite gates shown in this work. This does not apply to the composite phase gates constructed from the universal CPs [39] in [45]. The target matrix there differs from our (1) by the phase change \(\phi\rightarrow-\phi\); hence, to compare the results from [45] with ours, it is necessary to change the sign of all phases, viz., the parameters \(\phi_{k}\rightarrow-\phi_{k}\) and \(\chi\rightarrow\chi=-\phi_{k}+\pi-\phi/2\) in that article. In this way, we design CP sequences with a twice-even number of pulses for composite phase gates, complementing the already existing sequences with a twice-odd number of pulses. For the general phase gates, we have presented another type of asymmetric sequence in Table 4 for the sake of brevity. The results presented in this article demonstrate the remarkable flexibility of CPs accompanied by extreme accuracy and robustness to errors -- three features that cannot be achieved together by any other coherent control technique. We expect these CP sequences, in particular the Z, the S and the T gates, to be very useful quantum control tools in QI applications, because they provide a variety of options to find the optimal balance between ultrahigh fidelity, error range and speed, which may be different in different physical systems. Figure 3: Frobenius distance fidelity \(F\) (top) and infidelity (bottom) of composite T gates. The infidelity is in logarithmic scale in order to better visualize the high-fidelity (low-infidelity) range. The numbers \(N\) on the curves refer to CP sequences T\(N\) listed in the Table 3. Beyond this, the results presented in this paper can be applied in PO to obtain broadband polarization rotators using stacked single polarization half-wave plates with the optical axes rotated by precisely chosen rotation angles (composite phases). This is possible due to the quantum-classical analogy between composite rotations on the Bloch and the Poincare spheres. In this way, we demonstrate the possibility of designing broadband polarization rotators with \(\pi/2\), \(\pi/4\), \(\pi/8\) and arbitrary phase shift angles, using CP sequences of up to 18 pulses. The composite phases in the rotation gate matrix and in the Jones matrix are related. Let us compare the results in Table 1 with those in the article [46]. 
Using the asymmetric Z\(N\) sequences with pulse areas \(\varphi_{i}\rightarrow\mathcal{A}_{i}=\pi\) and composite phases \(\theta_{i}\rightarrow\phi_{i}/2\), the broadband \(\alpha=\pi/2\) rotator (\(\alpha\) being the target rotation angle as denoted in that article) is designed. The Z6 sequence is equivalent to the six-element sequence in Table 2 of [46], as the absolute trace fidelities in both cases have an equally broad range. Z10 outperforms the ten-element sequence at ultrahigh precision and even at the 99.9% trace-fidelity level, but at the precision level relevant for PO (90%) the ten-element sequence is comparable with our Z14 sequence. The fourteen-element sequence is slightly worse than Z12. The eighteen- and fourteen-element sequences have equal performance (possibly due to over-approximation of the composite phases). Nevertheless, to obtain the results for \(\alpha=\pi/4\) (S gate), \(\alpha=\pi/8\) (T gate) and arbitrary polarization rotators, it is necessary to apply the structure \((\theta_{1},\theta_{2},\ldots,\theta_{n+1},\theta_{1}+\alpha/2,\theta_{2}+ \alpha/2,\ldots,\theta_{n+1}+\alpha/2)\) of the composite phases (where the first few rotation angles can be taken as zero and \(n\) is the order of broadband compensation) and to compute the phases. Note that, in this way, we demonstrate the possibility of designing broadband arbitrary rotators with CP sequences of up to 18 pulses. Additionally, the ultrabroadband and ultranarrowband subclasses of variable rotations [13] can be used to design phase gates and polarization rotators by reducing the fidelity benchmark. ###### Acknowledgements. HLG acknowledges support from the EU Horizon-2020 ITN project LIMQUET (Contract No. 765075), and also from the RA Science Committee in the frames of the research project 20TTATQTc004. NVV acknowledges support from the Bulgarian national plan for recovery and resilience, contract BG-RRP-2.004-0008-C01 (SUMMIT), project number 3.1.4. ## Appendix A Composite phases Here we present the complete sets of phases of the composite pulse sequences generating phase gates with various orders of error compensation.
2304.11627
Lorenz Energy Cycle: Another Way to Understand the Atmospheric Circulation on Tidally Locked Terrestrial Planets
In this study, we employ and modify the Lorenz energy cycle (LEC) framework as another way to understand the atmospheric circulation on tidally locked terrestrial planets. It describes the atmospheric general circulation well from the perspective of energy transformation, involving several dynamical processes. We find that on rapidly rotating, tidally locked terrestrial planets, mean potential energy (P$_{\rm M}$) and eddy potential energy (P$_{\rm E}$) are comparable to those on Earth, as they have similar steep meridional temperature gradients. Mean kinetic energy (K$_{\rm M}$) and eddy kinetic energy (K$_{\rm E}$) are larger than those on Earth, related to stronger winds. The two conversion paths, P$_{\rm M}\rightarrow$P$_{\rm E}\rightarrow$K$_{\rm E}$ and P$_{\rm M}\rightarrow$K$_{\rm M}\rightarrow$K$_{\rm E}$, are both efficient. The former is associated with strong baroclinic instabilities, and the latter is associated with Hadley cells. On slowly rotating, tidally locked terrestrial planets, weak temperature gradients in the free atmosphere and a strong nightside temperature inversion make P$_{\rm M}$ and P$_{\rm E}$ much smaller than those on Earth. Meanwhile, the large day--night surface temperature contrast and the small rotation rate make the overturning circulation extend over the whole globe, so that the main conversion path is P$_{\rm M}\rightarrow$K$_{\rm M}\rightarrow$K$_{\rm E}$. This study shows that the LEC analyses improve the understanding of the atmospheric circulation on tidally locked terrestrial planets.
Shuang Wang, Jun Yang
2023-04-23T12:00:21Z
http://arxiv.org/abs/2304.11627v1
Lorenz Energy Cycle: Another Way to Understand the Atmospheric Circulation on Tidally Locked Terrestrial Planets ###### Abstract In this study, we employ and modify the Lorenz energy cycle (LEC) framework as another way to understand the atmospheric circulation on tidally locked terrestrial planets. It describes the atmospheric general circulation well from the perspective of energy transformation, involving several dynamical processes. We find that on rapidly rotating, tidally locked terrestrial planets, mean potential energy (\(\rm P_{M}\)) and eddy potential energy (\(\rm P_{E}\)) are comparable to those on Earth, as they have similar steep meridional temperature gradients. Mean kinetic energy (\(\rm K_{M}\)) and eddy kinetic energy (\(\rm K_{E}\)) are larger than those on Earth, related to stronger winds. The two conversion paths, \(\rm P_{M}\rightarrow\)\(\rm P_{E}\rightarrow\)\(\rm K_{E}\) and \(\rm P_{M}\rightarrow\)\(\rm K_{M}\rightarrow\)\(\rm K_{E}\), are both efficient. The former is associated with strong baroclinic instabilities, and the latter is associated with Hadley cells. On slowly rotating, tidally locked terrestrial planets, weak temperature gradients in the free atmosphere and a strong nightside temperature inversion make \(\rm P_{M}\) and \(\rm P_{E}\) much smaller than those on Earth. Meanwhile, the large day-night surface temperature contrast and the small rotation rate make the overturning circulation extend over the whole globe, so that the main conversion path is \(\rm P_{M}\rightarrow\)\(\rm K_{M}\rightarrow\)\(\rm K_{E}\). This study shows that the LEC analyses improve the understanding of the atmospheric circulation on tidally locked terrestrial planets. Shuang Wang ## 1 Introduction The substellar point of a 1:1 tidally locked (or synchronously rotating) terrestrial planet is fixed in time. Such a state can drive an atmospheric circulation different from that of Earth, which has been simulated by general circulation models (GCMs) in previous studies (e.g., Joshi et al., 1997; Merlis and Schneider, 2010; Edson et al., 2011; Leconte et al., 2013; Wordsworth, 2015; Koll and Abbot, 2016; Noda et al., 2017; Haqq-Misra et al., 2018; Pierrehumbert and Hammond, 2019; Hammond and Lewis, 2021; Sergeev et al., 2022; Turbet et al., 2022; Wang and Yang, 2022). These simulations showed that the atmospheric circulation on tidally locked terrestrial planets is dominated by a global-scale overturning circulation, consisting of winds with upwelling in the substellar region, horizontal flow from the dayside to the nightside in the upper troposphere, downwelling in the region away from the substellar point, and return flow from the nightside to the dayside near the surface. In addition, there are a westerly jet over the equator (equatorial superrotation) and planet-sized wavenumber-1 stationary Rossby and Kelvin waves. Various methods have been used to understand the atmospheric circulation on tidally locked terrestrial planets. Momentum budgets were widely used to explore the interactions between zonal jets and planetary waves (e.g., Showman and Polvani, 2010, 2011; Perez-Becker and Showman, 2013; Tsai et al., 2014; Hammond and Pierrehumbert, 2018; Mendonca, 2019; Debras et al., 2020; Hammond et al., 2020; Wang and Yang, 2021). These budgets clearly demonstrated that up-gradient momentum transports by the stationary waves maintain the equatorial superrotation against friction. This method was also useful for predicting the equatorial jet speed (Hammond et al., 2020). 
Besides, a Helmholtz decomposition technique, which was suggested by Hammond and Lewis (2021), was also used in several recent studies (Ding and Wordsworth, 2021; Sergeev et al., 2022; Turbet et al., 2022). This technique separated the total circulation into divergent (overturning circulation) and rotational components (zonal jets and stationary waves). It was helpful to classify the dynamical regimes and to quantify the transports of energy and tracers. For example, Hammond and Lewis (2021) found that the global overturning circulation could dominate the day-night heat transport even when the zonal jet is strong. Another method is describing the atmospheric circulation from the perspective of energy and energy transformation. Atmospheric heat engine framework depicts the Earth's atmospheric circulation as an ideal Carnot's heat engine (Peixoto & Oort, 1992). Koll & Abbot (2016) applied this framework on the tidally locked terrestrial planet: the atmosphere absorbs heat on the dayside at a dayside surface temperature of \(T_{d}\) and emits it to space at a planet's equilibrium temperature of \(T_{eq}\), which allows the atmosphere to maintain the global overturning circulation against friction in the boundary layer, following a Carnot's efficiency of \(\eta=(T_{d}-T_{eq})/T_{d}\). By using this framework, they estimated the surface wind speed and developed an upper limit on the strength of the overturning circulation. In this study, the Lorenz energy cycle (LEC, Lorenz, 1955) is employed as another way to understand the atmospheric circulation on tidally locked terrestrial planets. It describes the general circulation from the perspective of energy transformation, and has been widely applied to Earth. For example, the LEC has been used to understand the atmospheric circulation and dynamical processes, such as waves, zonal jets, and their interactions (e.g., Peixoto & Oort, 1974, 1992; Ulbrich & Speth, 1991; Duan & Wu, 2005). Moreover, the ability to simulate LEC calculated from observational data is a useful diagnostic for climate models (e.g., Hernandez-Deckers & von Storch, 2010; Marques et al., 2011). Comparison of the LEC calculated from various reanalysis data is also beneficial to evaluate these data (e.g., Ulbrich & Speth, 1991; Li et al., 2007; Marques et al., 2009, 2010; Kim & Kim, 2013). Recently, the LEC has been applied to show the variability of the energy cycle in response to climate change over the last 40 years (e.g., Kim & Choi, 2017; Pan et al., 2017). The same framework also has been used to predict the future variability of the energy cycle in the Coupled Model Intercomparison Project (e.g., Michaelides, 2021; Kanno & Iwasaki, 2022). Briefly, the net incoming solar radiation, latent heat release in the tropics, and net infrared cooling in mid- and high-latitudes together generate mean potential energy (P\({}_{\rm M}\)) in Earth's atmosphere. The growing baroclinic eddies convert P\({}_{\rm M}\) to eddy available potential energy (P\({}_{\rm E}\)), and then convert P\({}_{\rm E}\) to eddy kinetic energy (K\({}_{\rm E}\)). A portion of K\({}_{\rm E}\) is converted to the mean kinetic energy (K\({}_{\rm M}\)) through wave-mean flow interactions. The bulk of K\({}_{\rm M}\) and K\({}_{\rm E}\) is ultimately dissipated through small-scale turbulence and surface friction (Figure 1). The main pathway of energy conversion on Earth follows P\({}_{\rm M}\rightarrow\)P\({}_{\rm E}\rightarrow\)K\({}_{\rm E}\rightarrow\)K\({}_{\rm M}\). 
In addition, the LEC has also been used to understand the oceanic circulation on Earth (e.g., Peixoto & Oort, 1992; Olbers et al., 2012; von Storch et al., 2012) and the atmospheric circulation on other planets such as Venus and Titan (e.g., Del Genio et al., 1993; Yamamoto & Takahashi, 2006; Lee & Richardson, 2010). A key point of this study is that we apply the LEC to tidally locked terrestrial planets. We evaluate the LEC on Earth-like tidally locked planets and compare it with that on Earth. We find that P\({}_{\rm M}\) is very small on slowly rotating tidally locked planets, and the main path of the energy conversion is P\({}_{\rm M}\rightarrow\)K\({}_{\rm M}\rightarrow\)K\({}_{\rm E}\). We also find that the LEC on rapidly rotating planets is like a combination of those on Earth and on slowly rotating planets. The structure of this paper is as follows. Section 2 describes the methodology and data. Section 3 shows the thermal structure, atmospheric circulation, and LECs on three different types of planets. Section 4 is the summary and discussions. ## 2 Methodology and Data ### LEC in tidally locked coordinates LEC is more regarded as a fundamental property of the Eulerian mean system rather than of the real atmosphere, because the Eulerian means are employed to define the mean energy, and the departures from the Eulerian means are employed to define the eddy energy (Chapter 10.4 in Holton & Hakim, 2013). Thus, it is important to employ a suitable Eulerian mean. For Earth, zonal means are always employed, because solar radiation, temperature, and winds are almost zonally homogeneous for long-term averages (Figure 2(c)). For tidally locked terrestrial planets, especially slowly rotating ones, the Eulerian mean should match the monotonically decreasing stellar radiation and temperature along an arbitrary direction from dayside to nightside (Figure 2(d)). So for slowly rotating tidally locked terrestrial planets, the tidally locked coordinates are employed in this study, similar to that used in previous studies (e.g., Koll & Abbot, 2015, 2016; Ding & Wordsworth, 2020; Ding & Pierrehumbert, 2020; Hammond & Lewis, 2021; Sergeev et al., 2022; Turbet et al., 2022; Wang & Yang, 2022). In the tidally locked coordinates, the nominal "North/South Pole" is the substellar/antistellar point, the tidally locked latitude lines are a series of concentric circles around the substellar point, and the tidally locked longitude lines are the great circles linking the substellar and antistellar points (Figure 2(b)). The transformation relations between the standard and the tidally locked coordinates are given in Appendix A. By the transformation, the overturning circulation (Figure 2(d)) become a zonal-mean component, i.e., homogeneous along the tidally locked longitudes (Figure 2(e)). That is, the Eulerian mean on a tidally locked planet could be defined as the zonal averages in the tidally locked coordinates. Note that a zonal-mean zonal jet in the standard coordinates would be transformed into an eddy component in the tidally locked coordinates. For example, a uniform zonal-mean zonal jet in the standard coordinates \(U_{0}\), as shown in Figure 3(a), would be transformed into corresponding winds in the tidally locked coordinates following the relations Figure 1: (a) Framework of the Lorenz energy cycle (LEC) on Earth: boxes represent the energy reservoirs; arrows represent generation (G), transformation (C), and dissipation (D) of energy. 
(b) Generation of \(\rm P_{M}\): uneven heating and cooling tilt the zonal-mean isotherms (black curves) departed from the global-mean isotherms (grey dashed lines), which generates \(\rm P_{M}\). (c) Heat transport by baroclinic eddies, converting \(\rm P_{M}\) to \(\rm P_{E}\): the zonal-mean isotherm (grey dashed line) is warped to the isotherm (black curve) by baroclinic eddies, which decreases zonal-mean meridional difference, i.e., \(\rm P_{M}\), but leads zonal variance, i.e., \(\rm P_{E}\). (d) Cross-isobaric motions, converting potential energy and kinetic energy with each other. (e) Wave–mean flow interactions, converting \(\rm K_{E}\) and \(\rm K_{M}\) with each other: under a positive \(\beta\)-plane, arbitrary waves with group velocity directed away from the source region in the y-direction (\(c_{gy}\)) can transport momentum back, along with conversion from \(\rm K_{E}\) to \(\rm K_{M}\); shear instabilities of zonal jets generate eddies, along with conversion from \(\rm K_{M}\) to \(\rm K_{E}\). of \[u_{TL} = \frac{\cos\lambda_{TL}\tan\phi_{TL}}{\sqrt{\sin^{2}\lambda_{TL}+ \tan^{2}\phi_{TL}}}U_{0}, \tag{1}\] \[v_{TL} = -\frac{\sin\lambda_{TL}\sec\phi_{TL}}{\sqrt{\sin^{2}\lambda_{TL}+ \tan^{2}\phi_{TL}}}U_{0}, \tag{2}\] where \(u_{TL}\) is tidally locked zonal wind, \(v_{TL}\) is tidally locked meridional wind, \(\lambda_{TL}\) is tidally locked longitude, and \(\phi_{TL}\) is tidally locked latitude (see Equation (A8) in Appendix A). Figure 3(b) shows the transformed winds from Equations (1) and (2), and suggests that the zonal-mean zonal jet becomes an eddy component. A critical step is deriving the governing equations in the tidally locked coordinates. We start this step from the primitive equations, including a momentum vector equation, \[\frac{D\mathbf{v}}{Dt}+2\mathbf{\Omega}\times\mathbf{v}=-\frac{1}{\rho}\nabla p +\mathbf{g}+\mathbf{F}, \tag{3}\] a mass continuity equation, \[\frac{D\rho}{Dt}+\rho\nabla\cdot\mathbf{v}=0, \tag{4}\] Figure 2: (a) Frame of standard (ST) coordinates. The substellar point is located at latitude/longitude \((\phi,\lambda)=(0^{\circ},180^{\circ})\). Three black arrows are unit vectors of the local Cartesian coordinates \((\mathbf{\hat{e}}_{1},\mathbf{\hat{e}}_{3},\mathbf{\hat{e}}_{k})\). Red vector is the rotational angular velocity, and its projections onto the local axes are shown at the top-right corner. (b) Frame of tidally locked (TL) coordinates. The substellar point is located at tidally locked latitude \(\phi_{TL}=90^{\circ}\). The projections of the rotational angular velocity are shown at the top-right corner, and the solid line represents the terminator, and dots represent the substellar points. The frames of the tidally locked latitude and longitude lines are shown at the bottom-right corner. (c) Annual-mean surface temperature (contours) and the near-surface winds (vectors) on Earth. (d) Same as (c) but on a tidally locked terrestrial planet with an orbit period of 60 Earth days. (e) Same as (d) but shown in the tidally locked coordinates. SP and AP are the substellar point and the antistellar point, respectively. 
and a thermodynamic equation, \[\frac{DT}{Dt}+\frac{p}{\rho c_{v}}\nabla\cdot\mathbf{v}=\frac{\dot{Q}}{c_{v}}, \tag{5}\] where \(D/Dt=\partial/\partial t+\mathbf{v}\cdot\nabla\), \(\mathbf{v}\cdot\nabla\) is advection operator, \(\mathbf{v}\) is velocity of flows, \(\mathbf{\Omega}\) is planetary rotation rate, \(\rho\) is air density, \(p\) is pressure, \(\mathbf{g}\) is effective gravity vector, \(\mathbf{F}\) is friction force, \(T\) is air temperature, \(c_{v}\) is specific heat at constant volume, and \(\dot{Q}\) is heating rate (Chapter 1 in Vallis, 2019). Mathematically, the transformation of coordinates does not change the values and forms of scalars and scalar operators, i.e., temperature, density, divergence of velocity, heating rate, and advection operator (Chapter 2 in Kundu et al., 2016). Thus, the two scalar equations, Equations (4) and (5), will not change. However, the transformation of coordinates will change the projections of vectors onto new axes, i.e., wind velocity, rotation rate, and pressure gradient, so that the projections of Equation (3) will change. Considering that the directions of unit vectors of axes change as these vectors move with the atmosphere, it would introduce an effective rotation rate, so that Equation (3) in the tidally locked coordinates is written as \[\frac{Du_{TL}}{Dt}\mathbf{\hat{e}_{i}}+\frac{Dv_{TL}}{Dt}\mathbf{\hat{e}_{j}}+ \frac{Dw_{TL}}{Dt}\mathbf{\hat{e}_{k}}=-\mathbf{\Omega}_{flow}\times\mathbf{v} -2\mathbf{\Omega}\times\mathbf{v}-\frac{1}{\rho}\nabla p-g\mathbf{\hat{e}_{k} }+\mathbf{F}, \tag{6}\] where \(u_{TL}\), \(v_{TL}\), and \(w_{TL}\) are the tidally locked zonal, meridional, and vertical winds, respectively. \(\mathbf{\hat{e}_{i}}\), \(\mathbf{\hat{e}_{j}}\), and \(\mathbf{\hat{e}_{k}}\) are the unit vectors of the tidally locked axes, respectively. \(\mathbf{\Omega}_{flow}\) is the effective rotation rate (Equation 2.31 in Vallis, 2019), with formula of \[\mathbf{\Omega}_{flow}=-\frac{v_{TL}}{r}\mathbf{\hat{e}_{i}}+\frac{u_{TL}}{r} \mathbf{\hat{e}_{j}}+\frac{u_{TL}\tan\phi_{TL}}{r}\mathbf{\hat{e}_{k}}, \tag{7}\] \(r\) is the radial distance from the center of the planet, and \(\phi_{TL}\) is the tidally locked latitude. From Figure 2(b), \(\mathbf{\Omega}\) can be written as \[\mathbf{\Omega}=\Omega\sin\lambda_{TL}\mathbf{\hat{e}_{i}}+\Omega\cos\lambda _{TL}\sin\phi_{TL}\mathbf{\hat{e}_{j}}-\Omega\cos\lambda_{TL}\cos\phi_{TL} \mathbf{\hat{e}_{k}}, \tag{8}\] where \(\Omega\) represents the magnitude of planetary rotation rate, and \(\lambda_{TL}\) is the tidally locked longitude. Combining Equations (6)-(8) yields \[\frac{Du_{TL}}{Dt}=-\frac{u_{TL}w_{TL}}{r}+\frac{\tan\phi_{TL}}{r}u_{TL}v_{TL }-\frac{1}{\rho r\cos\phi_{TL}}\frac{\partial p}{\partial\lambda_{TL}}-2 \Omega v_{TL}\cos\lambda_{TL}\cos\phi_{TL}-2\Omega w_{TL}\cos\lambda_{TL} \sin\phi_{TL}+F_{\lambda}, \tag{9}\] \[\frac{Dv_{TL}}{Dt}=-\frac{v_{TL}w_{TL}}{r}-\frac{\tan\phi_{TL}}{r}u_{TL}^{2} -\frac{1}{\rho r}\frac{\partial p}{\partial\phi_{TL}}+2\Omega u_{TL}\cos \lambda_{TL}\cos\phi_{TL}+2\Omega w_{TL}\sin\lambda_{TL}+F_{\phi}, \tag{10}\] \[\frac{Dw_{TL}}{Dt}=\frac{u_{TL}^{2}+v_{TL}^{2}}{r}-\frac{1}{\rho}\frac{ \partial p}{\partial r}-g+2\Omega u_{TL}\cos\lambda_{TL}\sin\phi_{TL}-2 \Omega v_{TL}\sin\lambda_{TL}+F_{r}, \tag{11}\] where \(F_{\lambda}\), \(F_{\phi}\), and \(F_{r}\) are the projections of the friction force on axes. 
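Before these equations are simplified, the kinematic content of the coordinate change can be made concrete. The sketch below is an illustration of Eqs. (1) and (2) only (the grid, resolution, and jet speed are our own choices, not taken from the paper): it transforms a uniform zonal-mean jet into tidally locked wind components and confirms that the wind magnitude is unchanged, i.e., the jet becomes a purely eddy field in the tidally locked coordinates, as in Figure 3.

```python
import numpy as np

def zonal_jet_to_tl(U0, lam_tl_deg, phi_tl_deg, eps=1e-12):
    """Map a uniform zonal-mean zonal jet U0 (standard coordinates) into tidally
    locked wind components, following Eqs. (1) and (2)."""
    lam = np.deg2rad(lam_tl_deg)
    phi = np.deg2rad(phi_tl_deg)
    denom = np.sqrt(np.sin(lam)**2 + np.tan(phi)**2) + eps   # eps guards the isolated 0/0 points
    u_tl = U0 * np.cos(lam) * np.tan(phi) / denom
    v_tl = -U0 * np.sin(lam) / (np.cos(phi) * denom)
    return u_tl, v_tl

# Example: a 30 m/s jet on a coarse tidally locked grid (poles and singular points avoided)
lam2d, phi2d = np.meshgrid(np.arange(0.0, 360.0, 5.0), np.arange(-87.5, 90.0, 5.0))
u_tl, v_tl = zonal_jet_to_tl(30.0, lam2d, phi2d)
assert np.allclose(np.hypot(u_tl, v_tl), 30.0)   # only the direction changes, not the speed
```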
Figure 3: A hypothetical uniform zonal-mean zonal jet in the standard coordinates (a) is transformed into the winds in the tidally locked coordinates (b) using Equations (1) and (2). Blue dashed lines represent the equator (EQ) in the standard coordinates; red dashed lines represent the terminators (TM); black dot in panel (a) represents the substellar point (SP); black triangle in panel (b) represents the North Pole (NP) in the standard coordinates. On a typical terrestrial planet, the thickness of the atmosphere is usually negligible compared to its horizontal scale, and so is the vertical motion (\(\omega\)). Thus, the shallow atmosphere approximation (\(r=a+z\approx a\), \(a\) is planetary radius, and \(z\) is height above surface; \(\partial/\partial r\approx\partial/\partial z\); \(\left|w_{TL}\right|\ll\left|u_{TL}\right|,\left|v_{TL}\right|\)) is employed to simplify Equations (9)-(11), shown as \[\frac{Du_{TL}}{Dt}=f_{TL}v_{TL}+\frac{\tan\phi_{TL}}{a}u_{TL}v_{TL}-\frac{1}{ \rho a\cos\phi_{TL}}\frac{\partial p}{\partial\lambda_{TL}}+F_{\lambda}, \tag{12}\] \[\frac{Dv_{TL}}{Dt}=-f_{TL}u_{TL}-\frac{\tan\phi_{TL}}{a}u_{TL}^{2}-\frac{1}{ \rho a}\frac{\partial p}{\partial\phi_{TL}}+F_{\phi}, \tag{13}\] \[\frac{\partial p}{\partial z}=-\rho g, \tag{14}\] where \(f_{TL}\equiv-2\Omega\cos\lambda_{TL}\cos\phi_{TL}\) is the Coriolis parameter in the tidally locked coordinates. The three momentum equations have the same forms as those in the standard spherical coordinates (Equation 2.41 in Vallis, 2019). Furthermore, Equations (12)-(14), along with the unchanged Equations (4) and (5), show that the projected forms of the primitive equations in the two coordinates are the same, except that the formulas for the Coriolis parameter and the velocities are different. Therefore, the procedure used to obtain the LEC in the standard coordinates remains valid in the tidally locked coordinates. Following section 14.3 in Peixoto and Oort (1992), we combine Equations (4), (5), and (12)-(14), and then integrate over the whole atmosphere so that all boundary terms can be disregarded. 
This process yields \[\frac{\partial\mathrm{P_{M}}}{\partial t}=-\mathrm{C}\left(\mathrm{P_{M}}, \mathrm{P_{E}}\right)-\mathrm{C}\left(\mathrm{P_{M}},\mathrm{K_{M}}\right)+ \mathrm{G}\left(\mathrm{P_{M}}\right), \tag{15}\] \begin{table} \begin{tabular}{l l} \hline \hline Symbols & Description \\ \hline \(X\) & Arbitrary quantity \\ \(\bar{X}\) & Temporal-mean of \(X\) \\ \(\left[X\right]\) & Zonal-mean of \(X\) \\ \(\bar{X}\) & Global-mean of \(X\) \\ \(X^{{}^{\prime}}\) & Deviation from the temporal-mean of \(X\), equal to \(X-\bar{X}\) \\ \(X^{{}^{\prime\prime}}\) & Deviation from the zonal-mean of \(X\), equal to \(X-\left[X\right]\) \\ \(C\left(X_{1},X_{2}\right)\) & Conversion rate from \(X_{1}\) to \(X_{2}\) \\ \(G\left(X\right)\) & Generation rate of \(X\) \\ \(D\left(X\right)\) & Dissipation rate of \(X\) \\ \(\lambda\) & Longitude \\ \(\phi\) & Latitude \\ \(Z\) & Geopotential height \\ \(u\) & Zonal wind \\ \(v\) & Meridional wind \\ \(\omega\) & Vertical pressure velocity \\ \(a\) & Planetary radius (solid part) \\ \(g\) & Gravity \\ \(T\) & Air temperature \\ \(\theta\) & Air potential temperature \\ \(R\) & Gas constant for dry air \\ \(c_{p}\) & Specific heat at constant pressure \\ \(\kappa\) & \(R/c_{p}\) \\ \(\gamma\) & Stability factor, equal to \(-\frac{\kappa\theta}{p^{2}}\left(\frac{\partial\theta}{\partial p}\right)^{-1}\) \\ \(dm\) & Mass element, equal to \(a^{2}\cos\phi d\lambda d\phi dp/g\) \\ \hline \end{tabular} \end{table} Table 1: General descriptions of symbols used in the LEC. Note that zonal mean is defined as the average along the standard longitudes in the standard coordinates, but along the tidally locked longitudes in the tidally locked coordinates. So does the deviation from the zonal mean. \[\frac{\partial\mathrm{P_{E}}}{\partial t}=\mathrm{C}\left(\mathrm{P_{M}},\mathrm{P_ {E}}\right)-\mathrm{C}\left(\mathrm{P_{E}},\mathrm{K_{E}}\right)+\mathrm{G} \left(\mathrm{P_{E}}\right), \tag{16}\] \[\frac{\partial\mathrm{K_{M}}}{\partial t}=\mathrm{C}\left(\mathrm{K_{E}}, \mathrm{K_{M}}\right)+\mathrm{C}\left(\mathrm{P_{M}},\mathrm{K_{M}}\right)- \mathrm{D}\left(\mathrm{K_{M}}\right), \tag{17}\] \[\frac{\partial\mathrm{K_{E}}}{\partial t}=-\mathrm{C}\left(\mathrm{K_{E}}, \mathrm{K_{M}}\right)+\mathrm{C}\left(\mathrm{P_{E}},\mathrm{K_{E}}\right)- \mathrm{D}\left(\mathrm{K_{E}}\right), \tag{18}\] where \(\mathrm{P_{M}}\) is mean potential energy, \(\mathrm{P_{E}}\) is eddy potential energy, \(\mathrm{K_{M}}\) is mean kinetic energy, \(\mathrm{K_{E}}\) is eddy kinetic energy, \(\mathrm{C}\left(\mathrm{X_{1}},\mathrm{X_{2}}\right)\) is conversion rate from \(\mathrm{X_{1}}\) to \(\mathrm{X_{2}}\), \(\mathrm{G}\left(\mathrm{X}\right)\) is generation rate of X, and \(\mathrm{D}\left(\mathrm{X}\right)\) is dissipation rate of X (Figure 1). The general descriptions of symbols we used are shown in Table 1. The detailed descriptions of these terms are shown in Appendix B. ### Data Daily-mean data for the atmosphere on Earth is from the National Center for Environmental Prediction and the Department of Energy reanalysis datasets (NCEP R2). These datasets are produced by an advanced data assimilation method combining numerical models and observational data. The spatial resolution is \(2.5^{\circ}\times 2.5^{\circ}\) in latitude and longitude with 17 levels from surface to 10 hPa. The period covered is the whole year of 1979. The data is grouped by 12 months and the LEC is calculated in each month and then averaged to get the annual-mean LEC. 
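For clarity, the averaging operators of Table 1, which enter all of the energy and conversion terms above, can be sketched for a gridded daily field. This is a minimal illustration only: the array layout and variable names are our own, and the mass weighting by \(dm\) is omitted for brevity.

```python
import numpy as np

def lec_operators(x, lat_deg):
    """Apply the operators of Table 1 to a field x[time, lev, lat, lon]
    (a hypothetical layout; 'lon' is the tidally locked longitude when
    working in the tidally locked coordinates)."""
    x_tmean = x.mean(axis=0)                              # temporal mean of X
    x_trans = x - x_tmean                                 # transient deviation, X'
    x_zmean = x_tmean.mean(axis=-1, keepdims=True)        # zonal mean, [X]
    x_eddy = x_tmean - x_zmean                            # stationary-eddy deviation, X''
    w = np.cos(np.deg2rad(lat_deg))[None, :, None]        # area weights ~ cos(latitude)
    x_gmean = (x_zmean * w).sum(axis=1, keepdims=True) / w.sum()   # global mean
    return x_tmean, x_trans, x_zmean, x_eddy, x_gmean
```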
This evaluation is based on the fact that there are strong seasonal cycles and that synoptic eddies have lifetimes of mostly several days and less than a month. Daily-mean data for the atmosphere on tidally locked terrestrial planets is from simulations by the Exoplanet Community Atmosphere Model (ExoCAM, Wolf & Toon, 2014, 2015; Wolf et al., 2017, 2022). One simulation is for a rapidly rotating, tidally locked planet with a rotation period (= orbital period) of 5 Earth days, and the other simulation is for a slowly rotating, tidally locked planet with a rotation period (= orbital period) of 60 Earth days. In the two simulations, the solar constant is 1360 W m\({}^{-2}\), planets are Earth-sized aquaplanets with terrestrial atmospheres (1 bar N\({}_{2}\), 400 ppmv CO\({}_{2}\), and flexible water vapor) and the same gravity as Earth. The spatial resolution is \(4^{\circ}\times 5^{\circ}\) in latitude and longitude with 40 pressure levels from surface to 10 Pa. The period covered is the last 300 model days. We directly calculate the LEC over all 300 days instead of month by month, based on that there is no seasonal or diurnal cycle on the 1:1 tidally locked planets. In our calculations, all GCMs' data over 100 hPa are excluded. This is because temperatures and winds are strongly model-dependent near the top of the atmosphere (Sergeev et al., 2022; Turbet et al., 2022). For consistency, the NCEP R2 data over 100 hPa are also excluded. These treatments do not affect the main conclusions in this work. ## 3 Results ### Thermal structure and atmospheric circulation For Earth, the rapidly rotating tidally locked planet, and the slowly rotating tidally locked planet, the solar constant is 1360 W m\({}^{-2}\). It corresponds to a global mean of 340 W m\({}^{-2}\) received stellar radiation at the top of the atmosphere, but the absorbed values are different for the three planets, suggesting different efficiencies of energy input to the planets. On Earth, the absorbed stellar radiation is 238 W m\({}^{-2}\), including 80 W m\({}^{-2}\) absorbed by the atmosphere and 158 W m\({}^{-2}\) absorbed by the surface. The global-mean surface temperature is 288 K. The absorbed stellar radiation on the tidally locked planets is smaller than that on Earth, i.e. 187 W m\({}^{-2}\) on the rapidly rotating planet (57 W m\({}^{-2}\) absorbed by the atmosphere and 130 W m\({}^{-2}\) absorbed by the surface) and 175 W m\({}^{-2}\) on the slowly rotating planet (57 W m\({}^{-2}\) absorbed by the atmosphere and 118 W m\({}^{-2}\) absorbed by the surface), due to the reflection by thick clouds near the substellar location (Yang et al., 2013). Naturally, the global-mean surface temperatures on the two tidally locked planets are lower than that on Earth, i.e. 257 K on the rapidly rotating planet and 247 K on the slowly rotating planet. The air temperature, geopotential height, and atmospheric circulation on the three planets are shown in Figure 4. In the free atmosphere, these climatic elements are zonally homogeneous on the rapidly rotating tidally locked terrestrial planet and axisymmetric on the slowly rotating tidally locked terrestrial planet, so that the former is shown in the standard coordinates and the latter is shown in the tidally locked coordinates. The temperature structure on Earth and on the rapidly rotating tidally locked terrestrial planet are analogous, as they show steep meridional (south-north) temperature gradients in the middle latitudes (Figures 4(a) and (b)). 
On the slowly rotating tidally locked terrestrial planet, in contrast, the global free atmosphere is in a weak temperature gradient (WTG) regime (Pierrehumbert, 2010; Pierrehumbert and Hammond, 2019), in which the air temperature is almost horizontally homogeneous everywhere (Figure 4(c)). This is because the weak Coriolis effect cannot maintain large pressure or temperature gradients. The WTG regime is seen more clearly in the horizontal temperature structure. For example, the equator-pole temperature difference at 500 hPa is about 40 K on Earth and about 60 K on the rapidly rotating tidally locked terrestrial planet (Figures 5(a) and (b)). However, the temperature difference on the slowly rotating tidally locked terrestrial planet is no more than 5 K (Figure 5(c)). The WTG regime breaks down only very close to the surface, due to the effect of surface friction. Note that in the WTG regime, the difference of geopotential height is also small, e.g., no more than 500 m on the slowly rotating planet, while this value is about 3000 m on Earth and on the rapidly rotating planet (Figures 4(d)-(f)). Figure 4(c) shows a strong temperature inversion on the slowly rotating tidally locked planet, which is mainly on the nightside and extends to the dayside. It is caused by the uneven distribution of the stellar radiation and the effective energy transport from dayside to nightside in the free atmosphere (Joshi et al., 2020). The strong inversion stabilizes the atmosphere and inhibits the growth of eddies, which influences the LEC, as will be shown in the subsequent sections. On Earth, the annual- and zonal-mean mass stream functions clearly show the Hadley cells in the tropics and the Ferrel cells in the middle latitudes (Figure 4(d)). On the rapidly rotating tidally locked terrestrial planet, the Hadley cells expand and become dominant while the Ferrel cells almost disappear (Figure 4(e)). This is because the planetary rotation rate is 1/5 of that on Earth. The strength of the Hadley cells on this rapidly rotating planet is about \(25\times 10^{10}\) kg s\({}^{-1}\), somewhat larger than the value of \(10\times 10^{10}\) kg s\({}^{-1}\) on Earth. In contrast, the circulation on 
The total available energy stored in the atmosphere is about \(62.0\times 10^{5}\) J m\({}^{-2}\), including \(44.1\times 10^{5}\) J m\({}^{-2}\) of P\({}_{\rm M}\), \(5.0\times 10^{5}\) J m\({}^{-2}\) of P\({}_{\rm E}\), \(6.9\times 10^{5}\) J m\({}^{-2}\) of K\({}_{\rm M}\), and \(6.0\times 10^{5}\) J m\({}^{-2}\) of K\({}_{\rm E}\). The main conversion path is P\({}_{\rm M}\rightarrow\)P\({}_{\rm E}\rightarrow\)K\({}_{\rm E}\rightarrow\)K\({}_{\rm M}\). P\({}_{\rm M}\) is converted to P\({}_{\rm E}\) at a rate of 1.70 W m\({}^{-2}\) through heat transport by baroclinic eddies. P\({}_{\rm E}\) is converted to K\({}_{\rm E}\) at a rate of 2.37 W m\({}^{-2}\) through cross-isobaric motions in baroclinic eddies. Some portion of K\({}_{\rm E}\) is converted to K\({}_{\rm M}\) at a rate of 0.48 W m\({}^{-2}\) through wave-mean flow interactions. In addition, P\({}_{\rm M}\) is ultimately converted to K\({}_{\rm M}\) at a relatively inefficient rate of 0.24 W m\({}^{-2}\) through cross-isobaric motions in the Hadley and Ferrel cells. Our results are consistent with previous estimations (e.g., Peixoto & Oort 1974; Li et al. 2007; Kim & Kim 2013). We also recalculate the LEC using the data simulated by ExoCAM, and obtain analogous results to those based on reanalysis data (figures not shown). Figure 5: Deviations of temporal-mean temperature from the global means at 500 hPa on (a) Earth, (b) one rapidly rotating tidally locked terrestrial planet, and (c) one slowly rotating tidally locked terrestrial planet. The temporal- and global-mean temperatures are shown in the top of each panel. The black dots represent substellar points. Note that the four arrows in Figure 6 without specific values represent the generation rates of P\({}_{\rm M}\) and P\({}_{\rm E}\) and the dissipation rates of K\({}_{\rm M}\) and K\({}_{\rm E}\), respectively. It is difficult to calculate directly from the reanalysis data, so we omit their values here. However, their values can be estimated by assuming an equilibrium energy cycle. That is, P\({}_{\rm M}\) is converted to P\({}_{\rm E}\) and K\({}_{\rm M}\) with 1.70 and 0.24 W m\({}^{-2}\), respectively, and therefore a generation rate of P\({}_{\rm M}\) with 1.94 W m\({}^{-2}\) is required. Similarly, the generation rate of P\({}_{\rm E}\) is 0.67 W m\({}^{-2}\), the dissipation rate of K\({}_{\rm M}\) is 0.72 W m\({}^{-2}\), and the dissipation rate of K\({}_{\rm E}\) is 1.89 W m\({}^{-2}\). We omit the corresponding results on the other two planets, and discuss them in Summary and Discussions. The majority of P\({}_{\rm M}\) lies in the range of 1000-300 hPa over the polar regions, especially over the Antarctic continent, where the temperature departure from the global mean is largest (Figures 4(a) and 7(a)). There is also a secondary maximum near the tropopause in the tropics, as the temperature there also deviates significantly from the global mean, although the meridional temperature gradient is weak. P\({}_{\rm E}\) is mainly in 30-90\({}^{\circ}\)S/N (Figure 7(b)), resulting from temperature anomalies induced by baroclinic eddies, forced orographic waves, and land-sea surface temperature contrast (Li et al., 2007). Most of K\({}_{\rm M}\) is in the troposphere over the middle latitudes of 30-60\({}^{\circ}\)S/N, as it reflects the subtropical and mid-latitude jets (Figures 4(a) and 7(c)). In addition, there is an extra maximum extending into the stratosphere in the southern hemisphere, as it reflects the stratospheric jet (over 100 hPa, not entirely shown). 
K\({}_{\rm E}\) is mainly centered in the troposphere over 50-60\({}^{\circ}\)S/N (Figure 7(d)), associated with the storm tracks over the Pacific, Atlantic, and Southern oceans (Trenberth, 1991; Harnik & Chang, 2003). The structure of P\({}_{\rm E}\) is analogous to that of P\({}_{\rm M}\) except for a slight shift toward the equator, since P\({}_{\rm E}\) is converted from P\({}_{\rm M}\) through baroclinic eddies and is therefore affected by P\({}_{\rm M}\). Likewise, the structure of K\({}_{\rm E}\) is analogous to that of K\({}_{\rm M}\). Besides, K\({}_{\rm E}\) is also affected by P\({}_{\rm E}\), as it is converted from P\({}_{\rm E}\). Thus, the maxima of K\({}_{\rm E}\) are between the maxima of P\({}_{\rm E}\) and K\({}_{\rm M}\), suggesting that K\({}_{\rm E}\) results from a balance between P\({}_{\rm E}\) and K\({}_{\rm M}\). The conversion from P\({}_{\rm M}\) to P\({}_{\rm E}\) occurs mainly in the middle troposphere over the middle latitudes of 30-60\({}^{\circ}\)S/N, where the baroclinic eddies are strong (Figure 7(e)). The growing baroclinic eddies transport heat poleward and reduce the south-north temperature gradients, but produce additional east-west temperature variance, i.e., they reduce P\({}_{\rm M}\) but generate P\({}_{\rm E}\). Inside these eddies, the cross-isobaric motion converts a small portion of P\({}_{\rm E}\) to K\({}_{\rm E}\) (Figure 7(f)). P\({}_{\rm E}\) is converted to K\({}_{\rm E}\) mainly near the surface over mid-to-high latitudes, especially over the Antarctic, and this conversion is associated with the heat-driven rising and sinking motions (Li et al., 2007). In the middle troposphere over the middle latitudes, eddies are generated and then propagate out of this region, but they transport momentum back to accelerate the jet, and these wave-mean flow interactions convert K\({}_{\rm E}\) to K\({}_{\rm M}\) (Figure 7(g)). P\({}_{\rm M}\) is converted to K\({}_{\rm M}\) in the tropics and subtropics by the motion along the pressure gradient in the Hadley cells nearly following angular momentum conservation, but K\({}_{\rm M}\) is converted back to P\({}_{\rm M}\) in the middle latitudes by the motion against the pressure gradient in the Ferrel cells (Figures 4(d) and 7(h)). The combined action of the conversion between P\({}_{\rm M}\) and K\({}_{\rm M}\) makes the global-mean conversion from P\({}_{\rm M}\) to K\({}_{\rm M}\) relatively inefficient, and sometimes even negative (e.g., Figure 14.8 in Peixoto & Oort, 1992). Figure 6: The global-mean vertical integrals of energy components (in boxes) and conversion rates (close to arrows) on Earth. Units are 10\({}^{5}\) J m\({}^{-2}\) for energy and W m\({}^{-2}\) for conversion rates. Arrows without specific values represent generation rates and dissipation rates, same as those in Figure 1(a). Directions of arrows represent the directions of energy conversion, and the main conversion paths are highlighted by blue color. Values are calculated in the standard coordinates. 
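The equilibrium bookkeeping used above to infer the generation and dissipation rates can be written out explicitly. The sketch below is a minimal illustration (the function and variable names are ours): it takes the four conversion rates quoted for Earth and requires each reservoir budget in Eqs. (15)-(18) to be in steady state.

```python
# Conversion rates (W m^-2) for Earth, as quoted in the text and in Figure 6
C = {("PM", "PE"): 1.70, ("PE", "KE"): 2.37, ("KE", "KM"): 0.48, ("PM", "KM"): 0.24}

def steady_state_sources(C):
    """Net outflow of each reservoir; in equilibrium a positive value must be
    balanced by generation and a negative value by dissipation."""
    net_out = {}
    for (src, dst), rate in C.items():
        net_out[src] = net_out.get(src, 0.0) + rate
        net_out[dst] = net_out.get(dst, 0.0) - rate
    return {box: round(val, 2) for box, val in net_out.items()}

print(steady_state_sources(C))
# -> {'PM': 1.94, 'PE': 0.67, 'KE': -1.89, 'KM': -0.72}
```

The positive entries recover the generation rates of P\({}_{\rm M}\) (1.94 W m\({}^{-2}\)) and P\({}_{\rm E}\) (0.67 W m\({}^{-2}\)), and the negative entries recover the dissipation rates of K\({}_{\rm E}\) (1.89 W m\({}^{-2}\)) and K\({}_{\rm M}\) (0.72 W m\({}^{-2}\)), as quoted above.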
Likewise, the maxima of \(\rm K_{M}\) in the subtropics coincide with the regions where both \(\rm P_{M}\) and \(\rm K_{E}\) are converted to \(\rm K_{M}\), indicating that the poleward angular momentum transport by meridional circulation and eddies together maintain the subtropical jets. ### LEC on rapidly rotating, tidally locked terrestrial planet Figure 8(a) displays the global-mean vertical integrals of the LEC on the rapidly rotating tidally locked terrestrial planet. In our estimations, \(\rm P_{M}\) is about \(37.9\times 10^{5}\) J m\({}^{-2}\), \(\rm P_{E}\) is about \(6.9\times 10^{5}\) J m\({}^{-2}\), \(\rm K_{M}\) is about \(66.3\times 10^{5}\) J m\({}^{-2}\), and \(\rm K_{E}\) is about \(12.1\times 10^{5}\) J m\({}^{-2}\). There are two main conversion paths, \(\rm P_{M}\rightarrow\)\(\rm P_{E}\rightarrow\)\(\rm K_{E}\) and \(\rm P_{M}\rightarrow\)\(\rm K_{M}\rightarrow\)\(\rm K_{E}\). On the one hand, \(\rm P_{M}\) is converted to \(\rm P_{E}\) at a rate of 0.45 W m\({}^{-2}\), and then into \(\rm K_{E}\) at a rate of 2.34 W m\({}^{-2}\). On the other hand, \(\rm P_{M}\) is converted to \(\rm K_{M}\) at a rate of 1.90 W m\({}^{-2}\), and then into \(\rm K_{E}\) at a rate of 0.83 W m\({}^{-2}\). The stellar flux received by the planet has an equator-pole contrast, which can generate \(\rm P_{M}\) in the atmosphere. The majority of \(\rm P_{M}\) is in the middle troposphere over the polar regions, and a portion is in the tropics (Figure 9(a)). The structure is similar to that on Earth, because they both have similar south-north temperature gradient (Figures 4(a), 4(b), 5(a), and 5(b)). \(\rm P_{E}\) is much smaller than \(\rm P_{M}\) and mostly over the polar regions, with contributions from the extratropical stationary Rossby waves (Figure 9(b)). The cold lobes of the Rossby waves are visible in the horizontal temperature structure (Figure 5(b)). \(\rm K_{M}\) is located in the mid-to-upper troposphere over the middle latitudes, coinciding with the two westerly jets (Figures 4(b) and 9(c)). Moreover, \(\rm K_{M}\) over the equator is also non-negligible due to the equatorial superrotation. \(\rm K_{E}\) is concentrated in the polar regions, associated with the extratropical stationary Rossby waves (Figure 9(d)). On the rapidly rotating tidally locked planets, although the received stellar radiation and the underlying surface are invariant, the atmosphere still has large variability, i.e., transient eddies, which can be seen in the instantaneous pressure field, temperature, winds, water vapor, and clouds (e.g., Merlis and Schneider, 2010; Pierrehumbert and Hammond, 2019; Song and Yang, 2021). The variability also contributes a portion of \(\rm P_{E}\) and \(\rm K_{E}\). Figure 7: Latitude-altitude cross-sections of the LEC on Earth. (a)–(d) Mean potential energy (\(\rm P_{M}\)), eddy potential energy (\(\rm P_{E}\)), mean kinetic energy (\(\rm K_{M}\)), and eddy kinetic energy (\(\rm K_{E}\)), respectively; (e)–(h) conversion rates of mean potential energy to eddy potential energy, eddy potential energy to eddy kinetic energy, eddy kinetic energy to mean kinetic energy, and mean potential energy to mean kinetic energy, respectively. Global-mean vertical integrals of each panel are corresponding values in Figure 6. Panels are shown in the standard coordinates. The range of color bar in panel (a) is different from it in panels (b)–(d). In the middle troposphere over the polar regions, \(\rm P_{M}\) is converted to \(\rm P_{E}\) by eddies (Figure 9(e)). 
These eddies are conjectured to arise from a form of baroclinic instability (Pierrehumbert and Hammond, 2019). Figure 10 shows the instantaneous longitude-altitude structure of these eddies, with a slight westward tilt that is similar to the unstable baroclinic modes on Earth but on a larger scale due to the smaller planetary rotation rate (Holton and Hakim, 2013). Meanwhile, in the upper troposphere over the equator, the conversion from \(\rm P_{M}\) to \(\rm P_{E}\) is caused by equatorial wave activity. However, a fraction of \(\rm P_{E}\) is converted back to \(\rm P_{M}\) in the range of 500-300 hPa over the middle latitudes and near the surface, which is due to the up-gradient heat transport by stationary waves.

Figure 8: Same as Figure 6 but on the rapidly rotating tidally locked terrestrial planet. Values are calculated in the standard coordinates (a) and in the tidally locked coordinates (b).

Figure 9: Same as Figure 7 but on the rapidly rotating tidally locked terrestrial planet. Panels are shown in the standard coordinates. The maximum in panel (a) is about \(600\times 10^{5}\)\(\rm J\,m^{-2}\,bar^{-1}\); the maximum in panel (h) is about \(40\)\(\rm W\,m^{-2}\,bar^{-1}\), and the minimum is about \(-30\)\(\rm W\,m^{-2}\,bar^{-1}\).

The cross-isobaric motion converts \(\rm P_{E}\) to \(\rm K_{E}\) in the tropics and middle latitudes, but converts \(\rm K_{E}\) back to \(\rm P_{E}\) in the subtropics and polar regions (Figure 9(f)). In general, the net conversion is from \(\rm P_{E}\) to \(\rm K_{E}\). In the free troposphere, \(\rm K_{E}\) is converted to \(\rm K_{M}\) over the tropics, but \(\rm K_{M}\) is converted to \(\rm K_{E}\) over the middle latitudes, which is caused by wave-mean flow interactions (Figure 9(g)). That is, the stationary Rossby and Kelvin waves form a Matsuno-Gill pattern that transports westerly momentum from the middle latitudes to the equator, accelerating the equatorial jet but damping the jets in the middle latitudes (e.g., Matsuno, 1966; Gill, 1980; Showman & Polvani, 2011). However, the net conversion is from \(\rm K_{M}\) to \(\rm K_{E}\). The conversion from \(\rm P_{M}\) to \(\rm K_{M}\) dominates from the tropics to the middle latitudes through the expanded Hadley cells, while the conversion from \(\rm K_{M}\) back to \(\rm P_{M}\) only occurs at high latitudes through the weak and narrow Ferrel cells (Figures 4(e) and 9(h)). Thus, the net conversion is from \(\rm P_{M}\) to \(\rm K_{M}\).

The maxima of \(\rm K_{M}\) over 30-60\({}^{\circ}\)S/N coincide with the regions where \(\rm P_{M}\) is converted to \(\rm K_{M}\). This suggests that the jets there are maintained by poleward angular momentum transport through the Hadley cells rather than by momentum transport through waves. Zonal jets constrained by angular momentum conservation would obey \[[u]=\Omega a\frac{\sin^{2}\phi}{\cos\phi}, \tag{19}\] where \(\Omega\) is the planetary rotation rate, \(a\) is the planetary radius, \(\phi\) is the latitude, and the wind speed over the equator is assumed to be zero (Equation 11.4 in Vallis, 2019). This gives an estimate of the zonal-mean wind speed at 50\({}^{\circ}\)S/N of about 80 m s\({}^{-1}\), in agreement with the simulation (a quick numerical check of this estimate is sketched below). Thus, the jets are more like subtropical jets, because their driving mechanism is similar to that of the subtropical jets on Earth. Compared to Earth, a main difference in the LEC is that \(\rm K_{M}\) is much larger, about ten times larger, which is related to the larger wind speeds.
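As a minimal numerical check of the Equation (19) estimate quoted above, the sketch below evaluates the angular-momentum-conserving wind at 50\({}^{\circ}\) latitude, assuming an Earth-like planetary radius and a rotation rate of 1/5 of Earth's; the specific constants are illustrative assumptions rather than model output.

```python
# Numerical check of Equation (19): [u] = Omega * a * sin^2(phi) / cos(phi).
# Assumptions: Earth-like radius and a rotation rate of 1/5 of Earth's.
import numpy as np

omega = 7.292e-5 / 5.0      # planetary rotation rate [rad s^-1]
a = 6.371e6                 # planetary radius [m], Earth-like
phi = np.deg2rad(50.0)      # latitude of the jet maximum

u = omega * a * np.sin(phi)**2 / np.cos(phi)
print(f"[u] at 50 degrees: {u:.0f} m/s")   # about 85 m/s, close to the ~80 m/s quoted above
```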
For example, a zonal jet has a maximum wind speed of about 80 m s\({}^{-1}\) on the rapidly rotating tidally locked planet, while this value is only 30 m s\({}^{-1}\) on Earth (Figures 4(a) and (b)). This is because the wider Hadley cells move air farther away from the equator, allowing larger wind speeds, even though the planet rotates at 1/5 the rate of Earth (Equation (19)). The stronger winds are also consistent with the larger meridional temperature gradient in the high latitudes of the planet (Figure 5(b)). Another big difference from Earth is that the net conversion from \(\rm P_{M}\) to \(\rm K_{M}\) becomes efficient. This is because the Hadley cells become stronger and wider while the Ferrel cells become weaker, as a result of the smaller rotation rate. This situation is similar to Venus, where the Hadley cells extend throughout the atmosphere and make the conversion from \(\rm P_{M}\) to \(\rm K_{M}\) the most efficient path (Lee & Richardson, 2010). In addition, \(\rm K_{M}\) is eventually converted to \(\rm K_{E}\), rather than \(\rm K_{E}\) being converted to \(\rm K_{M}\).

### LEC on slowly rotating, tidally locked terrestrial planet

In this section, the LEC on the slowly rotating tidally locked planet is calculated using the tidally locked formulas. The global-mean vertical integrals are shown in Figure 11(a). Overall, the total available energy is about \(12.7\times 10^{5}\) J m\({}^{-2}\), including \(1.9\times 10^{5}\) J m\({}^{-2}\) of \(\rm P_{M}\), \(0.9\times 10^{5}\) J m\({}^{-2}\) of \(\rm P_{E}\), \(3.2\times 10^{5}\) J m\({}^{-2}\) of \(\rm K_{M}\), and \(6.7\times 10^{5}\) J m\({}^{-2}\) of \(\rm K_{E}\). The main conversion path is \(\rm P_{M}\rightarrow\)\(\rm K_{M}\rightarrow\)\(\rm K_{E}\): \(\rm P_{M}\) is converted to \(\rm K_{M}\) at a rate of 2.25 W m\({}^{-2}\) through cross-isobaric motions in the global overturning circulation, and a portion of \(\rm K_{M}\) is converted to \(\rm K_{E}\) at a rate of 0.51 W m\({}^{-2}\) through interactions between the overturning circulation and eddies. The other path, \(\rm P_{M}\rightarrow\)\(\rm P_{E}\rightarrow\)\(\rm K_{E}\), which occurs in baroclinic eddies and stationary planetary waves, is relatively inefficient: the conversion rate from \(\rm P_{M}\) to \(\rm P_{E}\) is only 0.07 \(\rm W\,m^{-2}\) and that from \(\rm P_{E}\) to \(\rm K_{E}\) is only 0.14 \(\rm W\,m^{-2}\).

Figure 10: Longitude-altitude cross-sections of (a) eddy meridional velocity, (b) eddy temperature, and (c) eddy geopotential height at 70\({}^{\circ}\)N on the rapidly rotating tidally locked terrestrial planet. Panels show instantaneous departures from the zonal means.

As a result of the WTGs in the free atmosphere (Figures 4(c) and 5(c)), \(\rm P_{M}\) is nearly zero except very close to the surface around the substellar point (Figure 12(a)). Likewise, \(\rm P_{E}\) is nearly zero throughout the atmosphere (Figure 12(b)).

Figure 11: Same as Figure 6 but on the slowly rotating tidally locked terrestrial planet. Values are calculated in the tidally locked coordinates (a) and in the standard coordinates (b).

Figure 12: Same as Figure 7 but on the slowly rotating tidally locked terrestrial planet. Panels are shown in the tidally locked coordinates. SP and AP are the substellar point and the antistellar point, respectively.
The maximum in panel (a) is about \(70\times 10^{5}\)\(\rm J\,m^{-2}\,bar^{-1}\); the maximum in panel (c) is about \(70\times 10^{5}\)\(\rm J\,m^{-2}\,bar^{-1}\); the maximum in panel (h) is about 150 \(\rm W\,m^{-2}\,bar^{-1}\).

The horizontal flow in the upper branch of the global overturning circulation, i.e., the zonal-mean meridional winds in the tidally locked coordinates (Figure 4(f)), contributes the majority of \(\mathrm{K_{M}}\) in the upper troposphere on the dayside (Figure 12(c)), while the zonal-mean zonal winds in the tidally locked coordinates are very weak (Figure 4(c)). In addition, the backflow from the nightside to the substellar point contributes to the secondary maximum of \(\mathrm{K_{M}}\) near the surface on the dayside, but with a relatively smaller wind speed due to surface friction. The stationary planetary waves contribute a portion of \(\mathrm{K_{E}}\), which is centered on the nightside and extends to the dayside (Figure 12(d)). Note that the superrotation is transformed into an eddy component in the tidally locked coordinates (Figure 3). Thus, the kinetic energy of the superrotation contributes a portion of \(\mathrm{K_{E}}\) rather than \(\mathrm{K_{M}}\). In general, \(\mathrm{K_{E}}\) here is a measure of any wind that deviates from the global overturning circulation.

The conversion from \(\mathrm{P_{M}}\) to \(\mathrm{P_{E}}\) is inefficient throughout the atmosphere due to the WTGs and weak baroclinic activity (Figure 12(e)). \(\mathrm{P_{E}}\) is converted to \(\mathrm{K_{E}}\) on the nightside and around the substellar point, but \(\mathrm{K_{E}}\) is converted back to \(\mathrm{P_{E}}\) on the dayside, which is mainly related to the cross-isobaric motion of the equatorial superrotation (Figure 12(f)). The combined action of these two conversions makes the global-mean conversion from \(\mathrm{P_{E}}\) to \(\mathrm{K_{E}}\) inefficient; the ultimate reason, however, is that the atmosphere is nearly barotropic. The conversion between \(\mathrm{K_{E}}\) and \(\mathrm{K_{M}}\) is dominated by \(\mathrm{K_{M}}\) being converted to \(\mathrm{K_{E}}\), which occurs near the terminators (Figure 12(g)). It is caused by the interactions between the global overturning circulation and the eddy components, and the details are discussed in Section 3.5. \(\mathrm{P_{M}}\) is converted to \(\mathrm{K_{M}}\) near the surface and in the upper troposphere around the substellar point, by the motion along the pressure gradient in the global overturning circulation (Figures 4(f) and 12(h)). \(\mathrm{K_{M}}\) is converted back to \(\mathrm{P_{M}}\) only in a limited region, and the net conversion is from \(\mathrm{P_{M}}\) to \(\mathrm{K_{M}}\).

The structure of the conversion from \(\mathrm{P_{E}}\) to \(\mathrm{K_{E}}\) is common on slowly rotating tidally locked planets. This is because the crests of stationary Rossby waves usually lie near the eastern terminator and the troughs usually lie near the western terminator (e.g., Carone et al., 2015; Hammond and Pierrehumbert, 2018; Wang and Yang, 2021). That is, the equatorial superrotation acts against the pressure gradient force as it crosses the dayside, converting \(\mathrm{K_{E}}\) to \(\mathrm{P_{E}}\), and vice versa. Compared to Earth, the big difference is that \(\mathrm{P_{M}}\) on the slowly rotating tidally locked planet is much smaller, which is mainly due to the WTGs caused by the smaller planetary rotation rate, i.e., 1/60 of Earth's.
Moreover, the strong and wide temperature inversion away from the substellar region makes the atmosphere more stable than Earth's and less able to generate motions, which also results in a small \(\mathrm{P_{M}}\). Another difference from Earth is that the energy conversion involved in baroclinic activity is inefficient, while the conversion from \(\mathrm{P_{M}}\) to \(\mathrm{K_{M}}\) is efficient. This is a consequence of the small planetary rotation rate, which makes the atmosphere more barotropic and makes the thermal forcing tend to generate a strong circulation rather than temperature gradients (e.g., Edson et al., 2011; Noda et al., 2017; Komacek et al., 2019).

### Comparison of LEC between standard and tidally locked coordinates

In this section, we compare the LEC between the standard and tidally locked coordinates applied to the rapidly and slowly rotating tidally locked planets. Briefly, the LEC in the standard coordinates is well suited to describing \(\mathrm{P_{M}}\) related to equator-pole temperature contrasts, \(\mathrm{K_{M}}\) related to zonal-mean zonal winds (e.g., the equatorial superrotation), the conversion between \(\mathrm{P_{M}}\) and \(\mathrm{K_{M}}\), and wave-mean flow interactions. The LEC in the tidally locked coordinates is well suited to describing \(\mathrm{P_{M}}\) related to day-night temperature contrasts, \(\mathrm{K_{M}}\) related to the global overturning circulation, and the conversion between them. To gain more insight, we recalculate the LEC on the rapidly rotating tidally locked planet in the tidally locked coordinates and the LEC on the slowly rotating tidally locked planet in the standard coordinates, and compare them with the results in Sections 3.3 and 3.4.

#### 3.5.1 Comparison on rapidly rotating, tidally locked terrestrial planet

The global-mean vertical integrals of the LEC on the rapidly rotating tidally locked planet in the tidally locked coordinates are shown in Figure 8(b), and their structures are shown in Figure 13. On a rapidly rotating tidally locked planet, the air temperature and geopotential height are zonally homogeneous in the free atmosphere, and the sharp contrasts are between the equatorial and polar regions (Figures 14(c) and (e)). The winds in the free atmosphere also organize into zonal jets. These equator-pole contrasts become eddy components in the tidally locked coordinates, and so do the zonal jets (Figures 14(d) and (f)). Thus, \(\mathrm{P_{E}}\) and \(\mathrm{K_{E}}\) in the tidally locked coordinates are much larger than those in the standard coordinates, which may lead to a misconception that eddies, rather than the large-scale circulation, dominate this planet (Figure 8). Temperature and winds exhibit a day-night asymmetry only very close to the surface (Figure 14(a)), so that \(\rm P_{M}\) and \(\rm K_{M}\) in the tidally locked coordinates are smaller than those in the standard coordinates and are centered only close to the surface (Figures 13(a) and (c)). Since both the zonal-mean zonal winds and the waves belong to the eddy components in the tidally locked coordinates, the conversion between \(\rm K_{E}\) and \(\rm K_{M}\) here no longer describes the wave-mean flow interactions. The structure of this conversion rate is complicated and atypical, and the corresponding dynamical process on this planet is unclear (Figure 13(g)).
The zonal-mean meridional wind speed in the tidally locked coordinates is generally larger than that in the standard coordinates, because the meridional winds in the standard coordinates reverse sign between the day and night hemispheres. For example, the former is over \(10\rm\ m\,s^{-1}\) on this planet, while the latter is about \(4\rm\ m\,s^{-1}\). Thus, the conversion from \(\rm P_{M}\) to \(\rm K_{M}\) in the tidally locked coordinates is more efficient than that in the standard coordinates (Figure 8). The integrals and cross-sections of the energy and conversion rates in the tidally locked coordinates can be understood from the distributions of air temperature, geopotential height, and horizontal winds. However, they are more complicated and provide less insight into the atmospheric circulation on this planet than those in the standard coordinates, so we do not discuss them in detail here. Note that the total potential energy, i.e., the sum of \(\rm P_{M}\) and \(\rm P_{E}\), is about \(44.8\times 10^{5}\rm\ J\,m^{-2}\) and is the same in both coordinates (Figure 8). Likewise, the total kinetic energy is about \(78.4\times 10^{5}\rm\ J\,m^{-2}\) and is also the same in the two coordinates. Neither depends on the choice of coordinates.

#### 3.5.2 Comparison on slowly rotating, tidally locked terrestrial planet

The global-mean vertical integrals of the LEC on the slowly rotating tidally locked planet in the standard coordinates are shown in Figure 11(b), and their structures are shown in Figure 15. On a slowly rotating tidally locked planet, the day-night temperature contrast is usually larger than the meridional contrast. For example, the day-night surface temperature contrast in our experiment is larger than 80 K, while the meridional contrast is smaller than 30 K (Figures 16(a) and (b)). Likewise, the wind speeds of the global overturning circulation are larger than those of the zonal-mean zonal winds, for example, about \(20\rm\ m\,s^{-1}\) for the former and about \(5\rm\ m\,s^{-1}\) for the latter at 300 hPa in our experiment (Figures 16(c) and (d)). Thus, in contrast to the LEC in the tidally locked coordinates, \(\rm P_{M}\) and \(\rm K_{M}\) are smaller in the standard coordinates, while \(\rm P_{E}\) and \(\rm K_{E}\) are larger than those in the tidally locked coordinates (Figure 11), as the day-night temperature contrast and the overturning circulation belong to the eddy components here (Figures 15(b) and (d)).

Figure 13: Same as Figure 7 but on the rapidly rotating tidally locked terrestrial planet. Panels are shown in the tidally locked coordinates.

The main conversion path in the standard coordinates becomes \(\mathrm{P_{E}}\) to \(\mathrm{K_{E}}\), which may lead to a misconception that the conversion is dominated by baroclinic activity (Figure 15(f)). In fact, the main conversion occurs in the large-scale dynamical process, where the day-night temperature contrast induced by the uneven stellar radiation generates large-scale upwellings and downwellings. However, the meaning of the conversion between \(\mathrm{K_{E}}\) and \(\mathrm{K_{M}}\) in the standard coordinates is clear. Its structure is consistent with wave-mean flow interactions, in which momentum is transported to the equator by waves to maintain the equatorial superrotation (Figure 15(g)). By contrast, the conversion between \(\mathrm{K_{E}}\) and \(\mathrm{K_{M}}\) in the tidally locked coordinates is complex.
On this planet, the tidally locked zonal-mean zonal winds are nearly zero, but the tidally locked zonal-mean meridional winds are strong (Figures 4(c) and (f)), so the conversion between \(\mathrm{K_{E}}\) and \(\mathrm{K_{M}}\) in the tidally locked coordinates is more related to the shear instability of the meridional winds, i.e., of the global overturning circulation (see Equation (B15) in Appendix B). In our calculations, the bulk of the conversion from \(\mathrm{K_{M}}\) to \(\mathrm{K_{E}}\) is contributed by the horizontal shear of the global overturning circulation, which causes the accumulation of large-scale momentum and its conversion to eddy momentum (see the second term on the right-hand side of Equation (B15); figure not shown). Note that the total potential energy is about \(2.8\times 10^{5}\) J m\({}^{-2}\), the total kinetic energy is about \(9.9\times 10^{5}\) J m\({}^{-2}\), and each is the same in the two coordinates (Figure 11).

Figure 14: Climatic elements on the rapidly rotating tidally locked planet in the standard coordinates (left column) and in the tidally locked coordinates (right column). Upper row: surface temperature (contours) and near-surface winds (vectors). Middle row: air temperature (contours) and winds (vectors) at 500 hPa. Lower row: deviations of geopotential height from the global means (contours) and winds (vectors) at 200 hPa. The black dot represents the substellar point, and the black triangle represents the North Pole in the standard coordinates.

Figure 16: Decomposition of temperatures and winds on the slowly rotating tidally locked terrestrial planet in the standard coordinates. (a) Meridional contrast of zonal-mean surface temperature; (b) eddy surface temperature; (c) zonal-mean zonal and meridional winds at 300 hPa; (d) eddy winds at 300 hPa.

Figure 15: Same as Figure 7 but on the slowly rotating tidally locked terrestrial planet. Panels are shown in the standard coordinates.

## 4 Summary and Discussions

In this study, we employ the Lorenz energy cycle (LEC) to understand the atmospheric circulation on tidally locked terrestrial planets. We use ExoCAM to simulate the atmospheric circulation on both rapidly and slowly rotating tidally locked terrestrial planets, calculate their LECs, and compare them with that on Earth. The main conclusions are as follows:

1. On the rapidly rotating tidally locked planet, the mean potential energy \(\mathrm{P_{M}}\) and eddy potential energy \(\mathrm{P_{E}}\) are comparable to those on Earth, because both planets have similarly steep meridional temperature gradients. The mean kinetic energy \(\mathrm{K_{M}}\) is much larger than that on Earth, mainly because of the much larger wind speeds. The two paths of energy conversion, \(\mathrm{P_{M}}\rightarrow\)\(\mathrm{P_{E}}\rightarrow\)\(\mathrm{K_{E}}\) and \(\mathrm{P_{M}}\rightarrow\)\(\mathrm{K_{M}}\rightarrow\)\(\mathrm{K_{E}}\), are both effective. The former path is mainly associated with baroclinic instabilities, and the latter path is associated with the large-scale thermally driven circulation and barotropic instabilities. These results suggest that the atmospheres of rapidly rotating tidally locked planets are in a mixed dynamical regime of single-cell circulation and baroclinic eddies.

2. On the slowly rotating tidally locked planet, \(\mathrm{P_{M}}\) and \(\mathrm{P_{E}}\) are small. This is because the slow rotation rate places the planet in a weak temperature gradient regime.
Meanwhile, the temperature inversion that lies over the entire nightside and part of the dayside makes the atmosphere very stable, which also contributes to the small potential energy. \(\mathrm{K_{M}}\) and \(\mathrm{K_{E}}\) are comparable to those on Earth. However, in the tidally locked coordinates, \(\mathrm{K_{M}}\) is a measure of the global overturning circulation, and \(\mathrm{K_{E}}\) is a measure of the waves and zonal-mean zonal jets, including the equatorial superrotation. The main path of energy conversion is \(\mathrm{P_{M}}\rightarrow\)\(\mathrm{K_{M}}\rightarrow\)\(\mathrm{K_{E}}\), associated with cross-isobaric motions in the global overturning circulation and interactions between the global overturning circulation and the eddy components.

In this study, the main factor discussed as affecting the LEC is the planetary rotation rate. Although only two experiments have been performed, the rough tendency is that the air temperature gradients and wind speeds decrease as the rotation rate becomes smaller, as can be found in previous studies (e.g., Carone et al., 2015, 2016; Noda et al., 2017). As a result, the total available energy stored in the atmosphere is smaller on planets with slower rotation. Meanwhile, a smaller planetary rotation rate makes the atmosphere less baroclinic (Komacek et al., 2019). Thus, the conversion paths associated with baroclinic instabilities are less efficient, and the conversion between \(\mathrm{P_{M}}\) and \(\mathrm{K_{M}}\) becomes dominant. The planetary rotation rate also determines the ratio of the Rossby deformation radius to the planetary radius, and hence a regime transition in the atmosphere (e.g., Carone et al., 2015; Haqq-Misra et al., 2018). On a slow rotator, the atmospheric state has a large day-night asymmetry, and an LEC in the tidally locked coordinates is more appropriate. Once the planet becomes a rapid rotator, the atmospheric state tends to be zonally homogeneous, so that an LEC in the standard coordinates is more appropriate. For an Earth-like tidally locked planet, the rotation period that separates the rapid and slow rotators is usually 5-10 Earth days (Edson et al., 2011; Yang et al., 2014; Noda et al., 2017). However, both forms of the LEC have shortcomings: the LEC in the standard coordinates may not be robust very close to the surface, and the LEC in the tidally locked coordinates cannot describe wave-mean flow interactions. There may be a new form of the LEC that can be applied to both rapid and slow rotators, which needs to be investigated in the future.

Other factors may also affect the LEC, but they are not included in this study. For example, both the day-night temperature contrast and the wind speeds monotonically decrease as the background air pressure is increased (Kite et al., 2011; Leconte et al., 2013; Wordsworth, 2015; Zhang and Yang, 2020). These effects would decrease the available potential and kinetic energy per unit mass of air, but the change in the total atmospheric energy is unclear, as the total mass of the atmosphere increases. Atmospheric composition should also have impacts on the LEC. Ding and Wordsworth (2019) showed that excluding greenhouse gases from the atmosphere would decrease the day-night temperature contrast and wind speeds; Wang and Yang (2022) showed that the atmospheric circulation in a pure N\({}_{2}\) atmosphere without any greenhouse gas is very weak. These results suggest that the potential and kinetic energy would be smaller in an atmosphere without any greenhouse gas.
In this study, we do not calculate the generation rates of potential energy or the dissipation rates of kinetic energy, which are needed to close the LEC. However, they can be estimated from the residuals of the four conversion rates by assuming an equilibrium state (e.g., Peixoto and Oort, 1992; Li et al., 2007; Pan et al., 2017). By doing so, we obtain total generation rates of potential energy (equal to the total dissipation rates of kinetic energy) of 2.61, 4.24, and 2.39 W m\({}^{-2}\) on the three planets, respectively. Moreover, the generation and dissipation rates can be used to obtain a more realistic efficiency of the atmospheric heat engine, which is regarded as the ratio of the total dissipation rate of kinetic energy to the mean net incoming solar radiation. On Earth, the total dissipation rate is 2.61 W m\({}^{-2}\) and the mean net incoming solar radiation is 238 W m\({}^{-2}\), leading to \(\eta\approx 1.1\%\), which is much smaller than the ideal limit based on the Carnot heat engine, \(\eta\approx 10\%\) (Peixoto & Oort, 1992). Likewise, the LEC also gives a smaller efficiency than the ideal limit on tidally locked planets, for example, 2.3% (4.24 W m\({}^{-2}\)/187 W m\({}^{-2}\)) for the rapidly rotating planet and 1.4% (2.39 W m\({}^{-2}\)/175 W m\({}^{-2}\)) for the slowly rotating planet in our experiments (a minimal numerical check of these budget closures is sketched at the end of this section). Koll & Abbot (2016) estimated the surface wind speed on tidally locked planets by using the ideal efficiency of the atmospheric heat engine, and the efficiencies obtained here may help refine their estimates.

The formulas of the LEC in this study apply only to the atmospheres of terrestrial planets. Based on the shallow atmosphere approximation, we do not consider the kinetic energy of vertical motion or the conversion rates contributed by the corresponding metric terms (e.g., \(-u_{TL}\omega_{TL}/a\) in Equation (9)). However, this approximation may be invalid when the vertical motion is comparable to the horizontal motion. This means that the primitive equations and the LEC in this study may be inapplicable to gas planets; they must be corrected by considering all the terms involving vertical motion. The application of the LEC to gas giants would be useful for estimating the efficiency of the planetary heat engine, which differs from that of a terrestrial planet. In the solar system, gas giants (e.g., Jupiter and Saturn) receive less solar radiation than terrestrial planets (e.g., Venus and Earth) but host stronger winds, suggesting a more efficient planetary heat engine, which may be due to the absence of a solid surface or to differences in atmospheric composition (Showman et al., 2009; Ingersoll, 2013). This estimate may be beneficial for predicting wind speeds, including superrotation, and for quantifying the energy sources of jet streams on gas giants. Moreover, the LEC may be a useful way to evaluate various models for gas giants by comparing the results computed from these models.
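As a minimal numerical check of the budget closure described above, the sketch below infers the total dissipation rate from the global-mean conversion rates quoted in Sections 3.3 and 3.4 (assuming an equilibrium state) and evaluates the corresponding heat-engine efficiencies; the net stellar fluxes are the values quoted above, and the calculation is illustrative only.

```python
# At equilibrium the total dissipation of kinetic energy equals the net
# conversion into kinetic energy: D(K_M) + D(K_E) = C(P_M, K_M) + C(P_E, K_E).
# Conversion rates [W m^-2] are the global means quoted in Sections 3.3-3.4,
# and F_net [W m^-2] is the mean net incoming stellar radiation quoted above.
planets = {
    "rapidly rotating": {"C_PM_KM": 1.90, "C_PE_KE": 2.34, "F_net": 187.0},
    "slowly rotating":  {"C_PM_KM": 2.25, "C_PE_KE": 0.14, "F_net": 175.0},
}

for name, p in planets.items():
    dissipation = p["C_PM_KM"] + p["C_PE_KE"]   # = total generation rate
    eta = dissipation / p["F_net"]              # heat-engine efficiency
    print(f"{name}: D = {dissipation:.2f} W/m^2, eta = {100 * eta:.1f}%")
# rapidly rotating: D = 4.24 W/m^2, eta = 2.3%
# slowly rotating:  D = 2.39 W/m^2, eta = 1.4%
```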
## Appendix A Transformation Relation

In order to transform the physical quantities in the standard coordinates to their counterparts in the tidally locked coordinates, we derive a transformation relation between the two coordinates. This relation has been derived by Koll & Abbot (2015), but with a left-hand coordinate system (i.e., \(\mathbf{\hat{e}_{i}}\times\mathbf{\hat{e}_{j}}=-\mathbf{\hat{e}_{k}}\), see their Figure 1(b)). Here we make sure that the two coordinates are right-hand systems (i.e., \(\mathbf{\hat{e}_{i}}\times\mathbf{\hat{e}_{j}}=\mathbf{\hat{e}_{k}}\)). We put the substellar point at latitude/longitude (\(\phi,\lambda\)) = (\(0^{\circ},180^{\circ}\)) in the standard coordinates and at tidally locked latitude \(\phi_{TL}=90^{\circ}\) in the tidally locked coordinates, and also put the South and North Poles of the standard coordinates at (\(\phi_{TL},\lambda_{TL}\)) = (\(0^{\circ},0^{\circ}\)) and (\(\phi_{TL},\lambda_{TL}\)) = (\(0^{\circ},180^{\circ}\)), respectively (Figure 2). We transform the two coordinates into Cartesian coordinates, where the x-axis connects the planetary center and the substellar point, and the z-axis connects the planetary center and the North Pole of the standard coordinates, so that \[x = -r\cos\lambda\cos\phi,\] \[y = -r\sin\lambda\cos\phi,\] \[z = r\sin\phi,\tag{A1}\] and \[x = r\sin\phi_{TL},\] \[y = r\sin\lambda_{TL}\cos\phi_{TL},\] \[z = -r\cos\lambda_{TL}\cos\phi_{TL},\tag{A2}\] where \(r\) is the radial distance from the center of the planet. Combining Equations (A1) and (A2) yields the transformation relations between the two coordinates: \[\lambda_{TL} = \tan^{-1}\left(\frac{\sin\lambda}{\tan\phi}\right),\] \[\phi_{TL} = \sin^{-1}\left(-\cos\lambda\cos\phi\right),\] \[\lambda = \tan^{-1}\left(\frac{\sin\lambda_{TL}}{\tan\phi_{TL}}\right),\] \[\phi = \sin^{-1}\left(-\cos\lambda_{TL}\cos\phi_{TL}\right),\tag{A3}\] where \(\lambda\) and \(\lambda_{TL}\) belong to \([0,2\pi]\), and \(\phi\) and \(\phi_{TL}\) belong to \([-\pi/2,\pi/2]\). Because \(\cos\phi\geq 0\) and \(\cos\phi_{TL}\geq 0\), Equation (A3) also yields the sign relations: \[\frac{\cos\lambda}{|\cos\lambda|} = -\frac{\sin\phi_{TL}}{|\sin\phi_{TL}|}=-\frac{\tan\phi_{TL}}{|\tan\phi_{TL}|},\] \[\frac{\sin\lambda}{|\sin\lambda|} = -\frac{\sin\lambda_{TL}}{|\sin\lambda_{TL}|},\tag{A4}\] which are used to calculate the transformation of trigonometric functions.

The transformation relations for scalars, including the temperature and the geopotential height, follow the transformation of the coordinates. That is, the value of a scalar in the tidally locked coordinates is equal to the value at the corresponding longitude and latitude in the standard coordinates, i.e., \[A(\lambda_{TL},\phi_{TL})=A(\lambda(\lambda_{TL},\phi_{TL}),\phi(\lambda_{TL},\phi_{TL})),\tag{A5}\] where the transformation of the coordinates is from Equation (A3).

The winds in the standard coordinates are \((u,v,w)\equiv(r\cos\phi(D\lambda/Dt),r(D\phi/Dt),Dr/Dt)\). Likewise, the winds in the tidally locked coordinates are \[u_{TL} \equiv r\cos\phi_{TL}\frac{D\lambda_{TL}}{Dt} = r\cos\phi_{TL}\left(\frac{\partial\lambda_{TL}}{\partial\lambda}\frac{D\lambda}{Dt}+\frac{\partial\lambda_{TL}}{\partial\phi}\frac{D\phi}{Dt}\right) = \cos\phi_{TL}\left(\frac{\partial\lambda_{TL}}{\partial\lambda}\frac{u}{\cos\phi}+\frac{\partial\lambda_{TL}}{\partial\phi}v\right),\] \[v_{TL} \equiv r\frac{D\phi_{TL}}{Dt} = r\left(\frac{\partial\phi_{TL}}{\partial\lambda}\frac{D\lambda}{Dt}+\frac{\partial\phi_{TL}}{\partial\phi}\frac{D\phi}{Dt}\right) = \frac{\partial\phi_{TL}}{\partial\lambda}\frac{u}{\cos\phi}+\frac{\partial\phi_{TL}}{\partial\phi}v,\] \[w_{TL} = \frac{Dr}{Dt}=w.\tag{A6}\]
Substituting Equation (A3) into Equation (A6), we obtain \[\left[\begin{array}{c}u_{TL}\\ v_{TL}\\ w_{TL}\end{array}\right]=\left[\begin{array}{ccc}\frac{\cos\lambda\sin\phi}{\sqrt{1-\cos^{2}\lambda\cos^{2}\phi}}&\frac{-\sin\lambda}{\sqrt{1-\cos^{2}\lambda\cos^{2}\phi}}&0\\ \frac{\sin\lambda}{\sqrt{1-\cos^{2}\lambda\cos^{2}\phi}}&\frac{\cos\lambda\sin\phi}{\sqrt{1-\cos^{2}\lambda\cos^{2}\phi}}&0\\ 0&0&1\end{array}\right]\left[\begin{array}{c}u\\ v\\ w\end{array}\right].\tag{A7}\] The general transformation procedure is to calculate \(u_{TL}\) and \(v_{TL}\) at an arbitrary point in the standard coordinates using Equation (A7), and then to determine the position of the point in the tidally locked coordinates using Equation (A3). Substituting Equations (A3) and (A4) into the transformation matrix of the winds yields \[\frac{\cos\lambda\sin\phi}{\sqrt{1-\cos^{2}\lambda\cos^{2}\phi}} = -\frac{|\cos\lambda|\sin\phi}{\sqrt{1-\cos^{2}\lambda\cos^{2}\phi}}\frac{\tan\phi_{TL}}{|\tan\phi_{TL}|} = -\frac{\sin\phi}{\sqrt{\tan^{2}\lambda+\sin^{2}\phi}}\frac{\tan\phi_{TL}}{|\tan\phi_{TL}|} = \frac{\cos\lambda_{TL}\cos\phi_{TL}\tan\phi_{TL}}{\sqrt{\sin^{2}\lambda_{TL}+\cos^{2}\lambda_{TL}\sin^{2}\phi_{TL}}} = \frac{\cos\lambda_{TL}\tan\phi_{TL}}{\sqrt{\sin^{2}\lambda_{TL}+\tan^{2}\phi_{TL}}},\] \[\frac{\sin\lambda}{\sqrt{1-\cos^{2}\lambda\cos^{2}\phi}} = -\frac{\sin\lambda_{TL}}{|\sin\lambda_{TL}|}\frac{|\sin\lambda|}{\sqrt{1-\cos^{2}\lambda\cos^{2}\phi}} = -\frac{\sin\lambda_{TL}}{|\sin\lambda_{TL}|}\frac{1}{\sqrt{1+\cot^{2}\lambda\sin^{2}\phi}} = -\frac{\sin\lambda_{TL}}{\sqrt{\sin^{2}\lambda_{TL}+\cos^{2}\lambda_{TL}\sin^{2}\phi_{TL}}} = -\frac{\sin\lambda_{TL}\sec\phi_{TL}}{\sqrt{\sin^{2}\lambda_{TL}+\tan^{2}\phi_{TL}}},\] so that Equation (A7) can also be expressed in terms of the tidally locked coordinates, \[\left[\begin{array}{c}u_{TL}\\ v_{TL}\\ w_{TL}\end{array}\right]=\left[\begin{array}{ccc}\frac{\cos\lambda_{TL}\tan\phi_{TL}}{\sqrt{\sin^{2}\lambda_{TL}+\tan^{2}\phi_{TL}}}&\frac{\sin\lambda_{TL}\sec\phi_{TL}}{\sqrt{\sin^{2}\lambda_{TL}+\tan^{2}\phi_{TL}}}&0\\ -\frac{\sin\lambda_{TL}\sec\phi_{TL}}{\sqrt{\sin^{2}\lambda_{TL}+\tan^{2}\phi_{TL}}}&\frac{\cos\lambda_{TL}\tan\phi_{TL}}{\sqrt{\sin^{2}\lambda_{TL}+\tan^{2}\phi_{TL}}}&0\\ 0&0&1\end{array}\right]\left[\begin{array}{c}u\\ v\\ w\end{array}\right],\tag{A8}\] which can be used to directly calculate the transformation of some special wind fields in the standard coordinates, such as uniform zonal-mean zonal winds. It is easy to prove that \[u_{TL}^{2}+v_{TL}^{2}+w_{TL}^{2}=u^{2}+v^{2}+w^{2},\] so that the transformation between the standard and the tidally locked coordinates conserves kinetic energy.
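As a minimal sketch of these relations (distinct from the released transformation code referenced in the acknowledgments), the Python function below maps standard longitude/latitude to the tidally locked coordinates via Equation (A3) and rotates the horizontal winds via Equation (A7); the quadrant of \(\lambda_{TL}\) is resolved with arctan2 applied to the Cartesian components of Equation (A1), consistent with the sign relations in Equation (A4).

```python
# Minimal sketch of the Appendix A transformation (illustrative only).
# Angles are in radians; the substellar point sits at (lat, lon) = (0, pi)
# in the standard coordinates, as assumed in Equation (A1).
import numpy as np

def to_tidally_locked(lon, lat, u, v):
    # Cartesian component toward the substellar point (Equation (A1), unit sphere);
    # arctan2 keeps the quadrant consistent with the sign relations (A4).
    x = -np.cos(lon) * np.cos(lat)
    phi_tl = np.arcsin(x)                                        # Equation (A3)
    lam_tl = np.arctan2(-np.sin(lon) * np.cos(lat), -np.sin(lat)) % (2 * np.pi)

    # Horizontal wind rotation, Equation (A7); undefined at the coordinate
    # singular points (the substellar and antistellar points).
    denom = np.sqrt(1.0 - np.cos(lon)**2 * np.cos(lat)**2)
    c = np.cos(lon) * np.sin(lat) / denom
    s = np.sin(lon) / denom
    u_tl = c * u - s * v
    v_tl = s * u + c * v
    return lam_tl, phi_tl, u_tl, v_tl

# Example: an eastward wind on the equator at 90E (a terminator point) maps to a
# purely tidally-locked-meridional wind blowing toward the substellar point.
print(to_tidally_locked(np.deg2rad(90.0), 0.0, 10.0, 0.0))
```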
## Appendix B Formulas of the LEC

The formulas of the LEC in the standard coordinates have been derived by Peixoto & Oort (1974, 1992). We follow their method and obtain the LEC in the tidally locked coordinates. Since the primitive equations in the standard and the tidally locked coordinates are the same, the formulas of the LEC in the two coordinates are also the same, albeit with different meanings of the terms. For example, \(u\) is the wind along longitude in the standard coordinates but along the tidally locked longitude in the tidally locked coordinates. The zonal mean is the average along longitude in the standard coordinates but along the tidally locked longitude in the tidally locked coordinates, and the same holds for the deviations from the zonal means. The formulas and the descriptions of the LEC are shown below.

* Mean available potential energy \[\mathrm{P_{M}}=\frac{c_{p}}{2}\int\gamma\left[\bar{T}\right]^{\prime\prime 2}dm,\tag{B9}\] which mainly depends on the departures of the zonal-mean isotherms from their global means. The zonal-mean isotherms are always tilted by large-scale uneven heating or cooling, while their global means are horizontal (Figure 1(b)).

* Eddy available potential energy \[\mathrm{P_{E}}=\frac{c_{p}}{2}\int\gamma\left[\overline{T^{\prime 2}}+\bar{T}^{*2}\right]dm,\tag{B10}\] which mainly depends on the temporal variability of the temperature and the departures from the temporal- and zonal-mean temperatures. \(\mathrm{P_{E}}\) is also generated by uneven heating or cooling, but usually on smaller scales.

* Mean kinetic energy \[\mathrm{K_{M}}=\frac{1}{2}\int\left([\bar{u}]^{2}+[\bar{v}]^{2}\right)dm,\tag{B11}\] which depends on the strengths of the jet streams and the overturning circulation.

* Eddy kinetic energy \[\mathrm{K_{E}}=\frac{1}{2}\int\left[\overline{u^{\prime 2}}+\overline{v^{\prime 2}}+\bar{u}^{*2}+\bar{v}^{*2}\right]dm,\tag{B12}\] which depends on the wind speeds of the transient and stationary eddies.

* Conversion rate from \(\mathrm{P_{M}}\) to \(\mathrm{P_{E}}\) \[\mathrm{C}\left(\mathrm{P_{M}},\mathrm{P_{E}}\right)=-\int c_{p}\gamma\left[\overline{v^{\prime}T^{\prime}}+\bar{v}^{*}\bar{T}^{*}\right]\frac{\partial\left[\bar{T}\right]}{a\partial\phi}dm-\int c_{p}p^{-\kappa}\left[\overline{\omega^{\prime}T^{\prime}}+\bar{\omega}^{*}\bar{T}^{*}\right]\frac{\partial}{\partial p}\left(\gamma p^{\kappa}\left[\bar{T}\right]^{\prime\prime}\right)dm,\tag{B13}\] which mainly depends on the heat transport by baroclinic eddies. In this process, the isotherms in the longitude-latitude cross-section are warped, which reduces the zonal-mean meridional temperature gradients but leads to additional variance in the longitude direction, equivalent to converting \(\mathrm{P_{M}}\) to \(\mathrm{P_{E}}\) (Figure 1(c)).

* Conversion rate from \(\mathrm{P_{E}}\) to \(\mathrm{K_{E}}\) \[\mathrm{C}\left(\mathrm{P_{E}},\mathrm{K_{E}}\right)=-\int g\left[\frac{\overline{u^{\prime}\partial Z^{\prime}}}{a\cos\phi\partial\lambda}+\frac{\overline{v^{\prime}\partial Z^{\prime}}}{a\partial\phi}+\frac{\bar{u}^{*}\partial\bar{Z}^{*}}{a\cos\phi\partial\lambda}+\frac{\bar{v}^{*}\partial\bar{Z}^{*}}{a\partial\phi}\right]dm,\tag{B14}\] which depends on the cross-isobaric motions in eddies. The pressure gradient force does work on air parcels moving along the pressure gradient, converting \(\mathrm{P_{E}}\) to \(\mathrm{K_{E}}\); conversely, air parcels moving against the pressure gradient force convert \(\mathrm{K_{E}}\) to \(\mathrm{P_{E}}\) (Figure 1(d)).
* Conversion rate from \(\mathrm{K_{E}}\) to \(\mathrm{K_{M}}\) \[\mathrm{C}\left(\mathrm{K_{E}},\mathrm{K_{M}}\right) = \int\left[\overline{v^{\prime}u^{\prime}}+\bar{v}^{*}\bar{u}^{*}\right]\cos\phi\frac{\partial\left[\bar{u}\right]/\cos\phi}{a\partial\phi}dm+\int\left[\overline{v^{\prime 2}}+\bar{v}^{*2}\right]\frac{\partial\left[\bar{v}\right]}{a\partial\phi}dm+\int\left[\overline{\omega^{\prime}u^{\prime}}+\bar{\omega}^{*}\bar{u}^{*}\right]\frac{\partial\left[\bar{u}\right]}{\partial p}dm+\int\left[\overline{\omega^{\prime}v^{\prime}}+\bar{\omega}^{*}\bar{v}^{*}\right]\frac{\partial\left[\bar{v}\right]}{\partial p}dm-\int\left[\overline{u^{\prime 2}}+\bar{u}^{*2}\right]\left[\bar{v}\right]\frac{\tan\phi}{a}dm,\tag{B15}\] which normally depends on the wave-mean flow interactions. On a \(\beta\)-plane with positive \(\beta\), eddies with group velocity directed away from the source region transport momentum back to accelerate the zonal winds, which leads to a conversion from \(\mathrm{K_{E}}\) to \(\mathrm{K_{M}}\); the shear instability of the zonal-mean winds generates eddies, which leads to a conversion from \(\mathrm{K_{M}}\) to \(\mathrm{K_{E}}\) (Figure 1(e)).

* Conversion rate from \(\mathrm{P_{M}}\) to \(\mathrm{K_{M}}\) \[\mathrm{C}\left(\mathrm{P_{M}},\mathrm{K_{M}}\right)=-\int g\left[\bar{v}\right]\frac{\partial\left[\bar{Z}\right]}{a\partial\phi}dm,\tag{B16}\] which depends on the same processes as \(\mathrm{C}(\mathrm{P_{E}},\mathrm{K_{E}})\) but in the large-scale circulation.

We do not calculate the generation rates of potential energy (\(\mathrm{G}(\mathrm{P_{M}})\) and \(\mathrm{G}(\mathrm{P_{E}})\)) or the dissipation rates of kinetic energy (\(\mathrm{D}(\mathrm{K_{M}})\) and \(\mathrm{D}(\mathrm{K_{E}})\)), because it is hard to list all the diabatic heating, cooling, and damping processes from the reanalysis data and the GCM outputs. The formulas of \(\mathrm{C}(\mathrm{P_{M}},\mathrm{K_{M}})\) and \(\mathrm{C}(\mathrm{P_{E}},\mathrm{K_{E}})\) are in the so-called 'v-grad z' form. There is another form called '\(\omega\cdot\alpha\)' (where \(\alpha\) is the specific volume), i.e., \(\left[\bar{\omega}\right]\left[\bar{\alpha}\right]\) and \(\left[\overline{\omega^{\prime}\alpha^{\prime}}+\bar{\omega}^{*}\bar{\alpha}^{*}\right]\), respectively. The '\(\omega\cdot\alpha\)' form is associated with a different physical aspect of the conversion, namely, the vertical motions of warm and cold air. The two forms may display different spatial distributions, but they lead to the same global-mean vertical integrals (Peixoto and Oort, 1992; Marques et al., 2009; Kim and Kim, 2013). Thus, the choice of form has little impact on our conclusions. We use the 'v-grad z' form in this study, because the wind velocities and geopotential height can be read directly from the reanalysis data and the GCM outputs.

We are grateful to Eric Wolf for the release of the model ExoCAM and to D.B. Koll for the release of the coordinate-transformation codes and for helpful discussions. J.Y. acknowledges support from the National Natural Science Foundation of China (NSFC) under grants 42161144011 and 42075046. The model ExoCAM is released at [https://github.com/storyofthewolf/ExoCAM](https://github.com/storyofthewolf/ExoCAM). The transformation codes for the tidally locked coordinates are released at [https://github.com/ddbkoll/tidallylocked-coordinates](https://github.com/ddbkoll/tidallylocked-coordinates).
The calculation codes for the LEC are released at [https://doi.org/10.5281/zenodo.7472396](https://doi.org/10.5281/zenodo.7472396). The simulation data in this study are archived at [https://doi.org/10.5281/zenodo.7476074](https://doi.org/10.5281/zenodo.7476074).
2307.13560
XDLM: Cross-lingual Diffusion Language Model for Machine Translation
Recently, diffusion models have excelled in image generation tasks and have also been applied to natural language processing (NLP) for controllable text generation. However, the application of diffusion models in a cross-lingual setting remains largely unexplored. Additionally, while pretraining with diffusion models has been studied within a single language, the potential of cross-lingual pretraining remains understudied. To address these gaps, we propose XDLM, a novel cross-lingual diffusion model for machine translation, consisting of pretraining and fine-tuning stages. In the pretraining stage, we propose TDLM, a new training objective for mastering the mapping between different languages; in the fine-tuning stage, we build up the translation system based on the pretrained model. We evaluate the results on several machine translation benchmarks and outperform both diffusion and Transformer baselines.
Linyao Chen, Aosong Feng, Boming Yang, Zihui Li
2023-07-25T15:08:34Z
http://arxiv.org/abs/2307.13560v2
# XDLM: Cross-lingual Diffusion Language Model for Machine Translation

###### Abstract

Recently, diffusion models have excelled in image generation tasks and have also been applied to natural language processing (NLP) for controllable text generation. However, the application of diffusion models in a cross-lingual setting remains largely unexplored. Additionally, while pretraining with diffusion models has been studied within a single language, the potential of cross-lingual pretraining remains understudied. To address these gaps, we propose XDLM, a novel cross-lingual diffusion model for machine translation, consisting of pretraining and fine-tuning stages. In the pretraining stage, we propose TDLM, a new training objective for mastering the mapping between different languages; in the fine-tuning stage, we build up the translation system based on the pretrained model. We evaluate the results on several machine translation benchmarks and outperform both diffusion and Transformer baselines. Our code is available at [https://github.com/Amayama/XDLM](https://github.com/Amayama/XDLM).

## 1 Introduction

Diffusion-based generative models, or diffusion models (Ho et al., 2020), have recently demonstrated substantial potential for generating high-quality output in computer vision (CV). Furthermore, several recent studies have explored their application in natural language processing (NLP), including generation tasks such as machine translation, text summarization, and controllable text generation (Li et al., 2022; Zheng et al., 2023; Gao et al., 2022). Notably, GENIE (Lin et al., 2022) proposes pretraining diffusion models on large English corpora and subsequently fine-tuning them on downstream tasks. However, there is a lack of research investigating the cross-lingual application of diffusion models, particularly in the context of pretraining.

There are two types of diffusion models: discrete and continuous. Some works have focused on the discrete nature of text and attempted to extend diffusion models to generate high-quality text. The discrete diffusion model (Austin et al., 2021; Hoogeboom et al., 2021) was initially proposed to generate text samples by denoising and resetting the mask state for each token step by step. On the other hand, the continuous diffusion model (Li et al., 2022) was introduced later; it adds embedding and rounding steps to transform discrete tokens into continuous latent representations, enabling gradient-based methods for controllable text generation. The GENIE model (Lin et al., 2022) integrates the diffusion model with a Transformer-based model, resulting in a large-scale language pretraining model based on the diffusion framework. Furthermore, the Difformer model (Gao et al., 2022) improves existing diffusion methods by updating the loss function and adding a layer normalization and a noise factor, establishing a more stable diffusion process. Zheng et al. (2023) introduce a reparameterization trick for discrete diffusion, contributing to a simplified training process and a flexible sampling process. Inspired by GENIE, we propose to apply pretraining in a cross-lingual setting with continuous diffusion.

In this paper, we examine the properties of a large-scale multilingual corpus and propose cross-lingual pre-training denoising tasks to construct a framework for a cross-lingual diffusion model, termed the Cross-Lingual Diffusion Language Model (XDLM).
XDLM specifically designs a cross-lingual pre-training task and a corresponding objective for multilingual data, enabling the diffusion model to learn the mapping relationships between various languages. To the best of our knowledge, this is the first attempt to introduce the concept of cross-lingual pretraining to diffusion-based models. The principal contributions of this work can be summarized as follows:

* We introduce XDLM, to the best of our knowledge the first architecture that incorporates cross-lingual pretraining into diffusion-based text generation.
* We propose a pre-training task, Translation Diffusion Language Modeling (TDLM), along with a corresponding loss function. These enhancements augment the model's capacity to capture contextual correlations across various language domains. We also provide a discussion of potential issues.

## 2 Cross-Lingual Diffusion Language Model

In this section, we present the Cross-lingual Diffusion Language Model (XDLM), which consists of a pretraining phase on cross-lingual data, utilizing diffusion techniques for non-autoregressive machine translation, and a fine-tuning phase that generates text in the target language from the source language based on the pretrained model.

**Non-AutoRegressive (NAR) Machine Translation.** In machine translation, given an input sequence from a source language \(X=\{x_{1},x_{2},\dots,x_{|X|}\}\), the task is to generate the output sequence of the translation in the target language \(Y=\{y_{1},y_{2},\dots,y_{|Y|}\}\). In this work, we focus on the Non-AutoRegressive (NAR) translation setting with the diffusion model. Typically, it has the following conditional probability: \[p_{\theta}(Y|X)=\prod_{i=1}^{|Y|}p_{\theta}(y_{i}|X).\] Unlike AutoRegressive (AR) text generation, all tokens \(y_{i}\) (\(1\leq i\leq|Y|\)) in the generated sequence \(Y\) are predicted concurrently. The generation depends solely on the input sequence \(X\), without any dependency on preceding tokens. This attribute presents a challenge in determining the length of the generated sequence. To address this issue, length prediction of the output sequence is introduced as an auxiliary task (Gu et al., 2017), and the training loss is defined as a weighted sum of the translation loss and the length prediction loss. We apply length prediction in our fine-tuning phase following RDM (Zheng et al., 2023).

**Diffusion Models.** The Denoising Diffusion Probabilistic Model (DDPM; Ho et al., 2020) is a parametrized Markov chain trained using variational inference to generate samples that match the original input data. The diffusion process comprises a noise-adding forward process and a noise-removing backward process, both of which can be viewed as discrete-time Markov processes. During the forward process, the model gradually introduces random noise with scheduled variances \(\beta_{1},\dots,\beta_{T}\), so that \(x_{T}\) approaches a standard Gaussian after \(T\) steps. This can be formalized as follows: \[q(x_{t+1}|x_{t})=\mathcal{N}(x_{t+1};\sqrt{1-\beta_{t+1}}\,x_{t},\beta_{t+1}\mathbf{I}).\] The backward process, the reverse of the forward process, attempts to reconstruct the target sequence from the standard noise.
Like the forward process, this procedure is also applied incrementally and can be formalized as follows: \[p(x_{t-1}|x_{t})=\mathcal{N}(x_{t-1};\mu_{\theta}^{t-1},\sigma_{\theta}^{t-1}),\] \[\mu_{\theta}^{t-1}=\frac{1}{\sqrt{\alpha_{t}}}\left(x_{t}-\frac{\beta_{t}}{\sqrt{1-\bar{\alpha}_{t}}}z_{\theta}(x_{t},t)\right),\] \[\sigma_{\theta}^{t-1}=\sqrt{\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_{t}}\beta_{t}},\] where \(\alpha_{t}=1-\beta_{t}\), \(\bar{\alpha}_{t}=\prod_{i=1}^{t}\alpha_{i}\), and \(z_{\theta}\) is the prediction of the model parameterized by \(\theta\).

In this work, we apply discrete diffusion for text generation and cross-lingual translation. Based on Zheng et al. (2023), we follow the proposed discrete diffusion model with the following routing mechanism: \[x_{t-1},v_{t-1}\sim q(x_{t-1},v_{t-1}|x_{t},x_{0}),\] \[q(v_{t-1}|x_{t},x_{0})=q(v_{t-1})=\mathrm{Bernoulli}(\lambda),\] \[q(x_{t-1}|v_{t-1},x_{t},x_{0})=\begin{cases}v_{t-1}^{(1)}x_{t}+(1-v_{t-1}^{(1)})q_{\text{noise}},&\text{if }x_{t}=x_{0},\\ v_{t-1}^{(2)}x_{0}+(1-v_{t-1}^{(2)})q_{\text{noise}}(x_{t}),&\text{if }x_{t}\neq x_{0},\end{cases}\] which models the joint distribution over both \(x\) and \(v\), where \(q_{\text{noise}}(x_{t})=\beta_{t}x_{t}+(1-\beta_{t})q_{\text{noise}}\). The sampling process here also adopts the reparameterized method, which improves flexibility and expressiveness compared to the vanilla multinomial diffusion process.

**Translation Diffusion Language Modeling (TDLM).** Unlike previous diffusion model objectives for language modeling, which primarily concentrate on monolingual data, we aim to exploit cross-lingual modeling capabilities from parallel datasets. Consequently, we propose a pretraining process named Translation Diffusion Language Modeling (TDLM), aimed at enhancing cross-lingual pretraining with diffusion models. As illustrated in Figure 1, we first concatenate the source and target sentences and generate the corresponding language and position embedding sequences, and then stack them as the input to the encoder and diffusion model. The language and position embedding sequences are also provided to the decoder, helping the model map latent vectors to the generated sentences. In a similar vein to Lin et al. (2023), we randomly mask 15% of the input tokens, following the design of Lample and Conneau (2019), tasking the model with predicting the noised tokens and their surrounding text based on the cross-lingual context. This denoising setting assists the model in grasping the cross-lingual context.

**Translation Model Setting.** In our translation task, we employ a model pretrained with Translation Diffusion Language Modeling (TDLM) during the fine-tuning stage. This model serves as a robust foundation for comprehending cross-lingual mapping relationships. The diffusion model operates within an encoder-decoder architecture, where we utilize sentences from the source and target domains to construct the encoder inputs and target inputs, along with their corresponding position embeddings. Furthermore, the language embeddings of the input and output align with the languages present in the source and target domains. Tokens from the same language as in the pretraining stage share identical embeddings, facilitating the model's rapid adaptation to the source and target domains of the translation task.
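As an illustration of the TDLM input construction described above (a minimal sketch, not the released implementation), the snippet below concatenates a parallel sentence pair, builds the token, language, and position streams, and randomly masks 15% of the tokens; the mask token id, language ids, and toy sentences are hypothetical.

```python
# Minimal sketch of TDLM input construction (illustrative; not the authors' code).
# Assumptions: a shared BPE vocabulary with a <mask> token (id 3 here), integer
# language ids (0 = English, 1 = German), and the 15% masking ratio from the paper.
import torch

MASK_ID = 3
MASK_RATIO = 0.15

def build_tdlm_inputs(src_ids, tgt_ids, src_lang, tgt_lang):
    """Concatenate a parallel pair, build token / language / position streams,
    and randomly mask 15% of the tokens for the denoising objective."""
    tokens = torch.cat([src_ids, tgt_ids])
    langs = torch.cat([torch.full_like(src_ids, src_lang),
                       torch.full_like(tgt_ids, tgt_lang)])
    positions = torch.arange(tokens.size(0))

    # The masked positions must be recovered from the cross-lingual context.
    mask = torch.rand(tokens.size(0)) < MASK_RATIO
    corrupted = tokens.clone()
    corrupted[mask] = MASK_ID
    return corrupted, langs, positions, tokens, mask

# Toy English-German pair with made-up token ids.
src = torch.tensor([11, 52, 87, 6])
tgt = torch.tensor([19, 44, 73, 90, 6])
corrupted, langs, positions, targets, mask = build_tdlm_inputs(src, tgt, 0, 1)
```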
## 3 Experiments

### Baselines and Datasets

We use a large cross-lingual corpus and two standard benchmarks for cross-lingual translation, introduced as follows; statistics for each dataset are shown in Table 1.

* Opus-ENDE1: This dataset comprises a large volume of English-German sentence pairs. Footnote 1: [https://opus.nlpl.eu/](https://opus.nlpl.eu/)
* IWSLT14 DE-EN (Cettolo et al., 2014): This benchmark is employed for the German-to-English translation task.
* WMT14 EN-DE (Bojar et al., 2014): This benchmark is employed for the English-to-German translation task.

We use the original datasets with the standard splits and do not apply knowledge distillation to the data. In addition, we follow the data processing introduced by fairseq and use the same joint vocabulary as the pretrained model.

\begin{table} \begin{tabular}{l l l l l} \hline \hline **Dataset** & **Usage** & **Train** & **Test** & **Valid** \\ \hline Opus-ENDE & pretrain & 9,323,066 & 5,000 & 5,000 \\ IWSLT14 & finetune & 160,240 & 6,750 & 7,283 \\ WMT14 & finetune & 4,496,988 & 3,003 & 3,000 \\ \hline \hline \end{tabular} \end{table} Table 1: Dataset statistics.

Figure 1: The workflow of Translation Diffusion Language Modeling.

We compare with three groups of baselines:

* **Auto-regressive model**: Transformer (Vaswani et al., 2017), which generates sentences in an auto-regressive manner. We follow the setting and the results reported by Gao et al. (2022); beam search with a beam size of 5 is used during generation.
* **Continuous diffusion**: SeqDiffuSeq (Yuan et al., 2022), DiffuSeq (Gong et al., 2022), and Difformer (Gao et al., 2022), which generate sentences from a continuous latent space. We evaluate Difformer on the original WMT14 dataset without knowledge distillation.
* **Discrete diffusion**: CMLM (Ghazvininejad et al., 2019) and RDM (Zheng et al., 2023), which generate sentences by gradually denoising each token. We implement RDM with the same data and batch settings as our models.

### Experimental Settings

**Model Framework.** We built XDLM on an encoder-decoder architecture, with both the encoder and decoder comprising six Transformer layers. We set the hidden size of the model to 512 with eight attention heads.

**Pretraining Stage Setting.** During the pretraining stage, we formulate the pretraining task on the large-scale corpus mentioned above. We use a weight decay of 0.0005 and a dropout rate of 0.2, set the maximum number of tokens per batch to 4k, and use 30k warm-up steps. In addition, the maximum length of both the source and target sequences is 256, keeping the input sentences at a proper length.

**Fine-tuning Stage Setting.** We fine-tune on the corresponding datasets, leveraging the foundation established during pretraining. The parameter settings for fine-tuning mostly follow the pretraining stage, but with a smaller learning rate of 5e-5.

### Main result

To ascertain the efficacy of pretraining in XDLM, we fine-tuned the model on several machine translation tasks, with comparative results in Table 2. Our model outperforms certain continuous diffusion models, with the exception of Difformer, and is comparable to some discrete diffusion models, such as CMLM, on the WMT14 dataset. During the evaluation phase, we assess the BLEU score at both the word and BPE (Byte Pair Encoding) levels, which require different tokenization scales.
A comparison of the two tokenization methods is shown in Table 3. Our findings indicate that our model performs better when evaluated with BPE-level tokenization on both datasets, with an improvement of approximately 5%. This enhancement can be attributed to the fact that the finer tokenization level reduces the complexity of the task.

## 4 Ablation Study

**Study on the Denoising Capacity at Intermediate Steps.** We also examine the denoising capacity at each diffusion step. In the reverse process, XDLM applies \(T\) diffusion steps to the initial noise \(z_{T}\), generating the corresponding output \(y\) after all intermediate steps. Figure 2 shows how the BLEU score changes as the number of reverse steps increases. We find that, for different decoding settings, our model reaches a stable result after about 10 iterations, which demonstrates its effectiveness.

Figure 2: The effect of generation iterations on the BLEU score.

**Discussion.** In this section, we concentrate on the factors that contribute to the comparatively lower performance of our model relative to other models. One may notice that our method does not outperform the original RDM method; we discuss a few reasons below. Firstly, prior research such as RDM leverages a substantial batch size coupled with an extensive number of training iterations, a strategy that has been shown to enhance performance. Due to hardware limitations, we were unable to conduct experiments at the same scale. Secondly, in terms of our pretraining configuration, we employ a pretraining dataset with an expanded vocabulary size to construct the Byte Pair Encoding (BPE) codes. This approach, while comprehensive, inadvertently increases the complexity of the problem and introduces out-of-vocabulary words that the model must interpret. Such challenges are not typically encountered in previous works. This discrepancy in methodology could potentially account for the performance differential observed.

## 5 Conclusion and Future Work

In this study, we propose an innovative architecture that integrates cross-lingual pretraining into diffusion-based text generation, achieved through a carefully designed pretraining task. We compare our model with previous works using automatic evaluation metrics. Looking forward, we plan to extend our model to additional languages, with the aim of constructing a robust multilingual model capable of handling more extensive cross-lingual translation tasks.
2306.13649
On-Policy Distillation of Language Models: Learning from Self-Generated Mistakes
Knowledge distillation (KD) is widely used for compressing a teacher model to reduce its inference cost and memory footprint, by training a smaller student model. However, current KD methods for auto-regressive sequence models suffer from distribution mismatch between output sequences seen during training and those generated by the student during inference. To address this issue, we introduce Generalized Knowledge Distillation (GKD). Instead of solely relying on a fixed set of output sequences, GKD trains the student on its self-generated output sequences by leveraging feedback from the teacher on such sequences. Unlike supervised KD approaches, GKD also offers the flexibility to employ alternative loss functions between the student and teacher, which can be useful when the student lacks the expressivity to mimic the teacher's distribution. Furthermore, GKD facilitates the seamless integration of distillation with RL fine-tuning (RLHF). We demonstrate the efficacy of GKD for distilling auto-regressive language models on summarization, translation, and arithmetic reasoning tasks, and task-agnostic distillation for instruction-tuning.
Rishabh Agarwal, Nino Vieillard, Yongchao Zhou, Piotr Stanczyk, Sabela Ramos, Matthieu Geist, Olivier Bachem
2023-06-23T17:56:26Z
http://arxiv.org/abs/2306.13649v3
# GKD: Generalized Knowledge Distillation for Auto-regressive Sequence Models ###### Abstract Knowledge distillation is commonly used for compressing neural networks to reduce their inference cost and memory footprint. However, current distillation methods for auto-regressive models, such as generative language models (LMs), suffer from two key issues: (1) distribution mismatch between output sequences during training and the sequences generated by the student during its deployment, and (2) model under-specification, where the student model may not be expressive enough to fit the teacher's distribution. To address these issues, we propose Generalized Knowledge Distillation (GKD). GKD mitigates distribution mismatch by sampling output sequences from the student during training. Furthermore, GKD handles model under-specification by optimizing alternative divergences, such as reverse KL, that focus on generating samples from the student that are likely under the teacher's distribution. We demonstrate that GKD outperforms commonly-used approaches for distilling LLMs on summarization, machine translation, and arithmetic reasoning tasks. ## 1 Introduction Auto-regressive sequence models, such as language models (LMs), have shown impressive capabilities in numerous tasks, where the key to this success is often scaling the amount of training data as well as the number of model parameters (Figure 2). However, scaling parameter count comes at a cost, and the deployment of such state-of-the-art models in real-world is limited by either their inference cost or memory footprint. Thus, a crucial goal for practical use of large capable models, such as LLMs, is to compress them by reducing their number of parameters, while retaining as much as possible of their performance. One of the prevalent techniques for compressing auto-regressive models is knowledge distillation (Hinton et al., 2015). Distillation is the process of training a model - the student - to replicate the knowledge of another model - the teacher - on a specific set of tasks. Typically, the student has fewer parameters than the teacher and as such, distillation can improve task-specific performance while maintaining lower inference cost and memory footprint than the teacher. Current distillation methods for auto-regressive models either require Figure 2: Number of parameters in LLMs over the years. generating a large amount of output sequences from the teacher model, which can be quite costly depending on the teacher size, or a fixed dataset of sequences that the teacher can label (_e.g._, assign token-level probabilities). However, using a fixed dataset can lead to distribution mismatch between teacher-generated output sequences during training and the sequences generated by the student auto-regressively during its deployment, a well-known problem in imitation learning (Pomerleau, 1991; Ross and Bagnell, 2010). Furthermore, the common objective for distillation is to maximize the likelihood of teacher-generated samples under the student's distribution. However, this objective may not be ideal due to model under-specification (Huszar, 2015): the student model may not be expressive enough to fit the teacher's distribution and maximizing likelihood can lead to student-generated samples that are unlikely under the teacher (_e.g._, Figure 3). 
Inspired by the recent success of reinforcement learning (RL) fine-tuning approaches with auto-regressive language models, we mitigate the above issues with supervised distillation by recognizing that we can view such models as agents and as such, distillation as an imitation learning problem with a known expert. In particular, instead of training the student using a fixed distribution over outputs, we argue for using samples from the student's distribution itself during training, akin to RL and on-policy distillation (Ross et al., 2011). Furthermore, to address model under-specification, we argue that alternative objectives that focus on generating samples from the student that are likely under the teacher's distribution, such as reverse KL, are more suitable for distilling auto-regressive models. Combining the above ideas, we propose Generalized Knowledge Distillation (GKD), which generalizes both on-policy and supervised distillation. We demonstrate that GKD significantly outperforms commonly-used approaches for distilling large language models on summarization (Figure 1), machine translation (WMT), and arithmetic reasoning (GSM8K) tasks. ## 2 Background: KL-based Divergences In probability theory and statistics, the divergence between two probability distributions is a measure of the similarity of the distributions. The most common ones are the **Kullback-Leibler** (KL) divergence and the **Jensen-Shannon** (JS) divergence. The KL divergence Figure 1: **Comparing GKD with common distillation methods on XSum summarization**(Narayan et al., 2018). We use the T5 models (Raffel et al., 2020) trained with supervised fine-tuning (FT) as the student models for distillation. Supervised KD and supervised FT use the XSum training dataset with ground-truth summaries but KD can query the teacher to get probabilities while FT does not. Furthermore, on-policy approaches sample summaries from the student while ‘Mixed’ refers to uniform sampling from the ground-truth and student-generated summaries. ImitKD (Lin et al., 2020) corresponds to ‘Mixed’ sampling with Forward KL. GKD with reverse KL and generalized JS divergence (JSD), as discussed in Section 2, outperforms other approaches. between two discrete probability distributions \(P(\mathcal{C})\) and \(Q(\mathcal{C})\) is given by \[\mathcal{D}_{KL}(P\|Q)=\sum_{c\in\mathcal{C}}P(c)\log\frac{P(c)}{Q(c)} \tag{1}\] The KL divergence is not symmetric, that is, \(\mathcal{D}_{KL}(P\|Q)\neq\mathcal{D}_{KL}(Q\|P)\). As such, we refer to \(\mathcal{D}_{KL}(P\|Q)\) as the **forward KL** while \(\mathcal{D}_{BKL}(P\|Q):=\mathcal{D}_{KL}(Q\|P)\) as the **reverse KL** between \(P\) and \(Q\). Note that the forward KL under an empirical data distribution corresponds to maximum likelihood, which we typically optimize in supervised learning given a fixed dataset. Notably, when approximating \(P(\mathcal{C})\) using a parameterized distribution \(Q_{\theta}(\mathcal{C})\), minimizing the reverse and forward KL under model under-specification results in mean and mode-seeking behavior (Figure 3). One issue with the KL divergence is that it is only finite when the support of \(P\) is contained in the support of \(Q\). This means that if there is a point \(c\) where \(Q(c)=0\) and \(P(c)>0\), then \(\mathcal{D}_{KL}(P\|Q)\to\infty\). A well-known divergence that is _bounded_ even for probability distributions with disjoint supports, albeit not as common as the KL divergence, is the generalized Jensen-Shannon divergence (JSD). 
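As a quick numerical illustration of the forward/reverse asymmetry discussed above, the sketch below evaluates both directions of the KL divergence for two small discrete distributions; the probability values are arbitrary examples, not quantities from the paper.

```python
# Small numeric sketch of the asymmetry between forward and reverse KL
# for discrete distributions (illustrative values, not from the paper).
import numpy as np

def kl(p, q, eps=1e-12):
    """D_KL(p || q) = sum_c p(c) log(p(c)/q(c)) for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

p = np.array([0.5, 0.4, 0.1])   # "teacher"-like distribution
q = np.array([0.6, 0.2, 0.2])   # "student"-like distribution

print("forward KL  D_KL(p||q):", round(kl(p, q), 4))
print("reverse KL  D_KL(q||p):", round(kl(q, p), 4))  # generally different
```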
JSD(\(\beta\)) interpolates between the forward and reverse KL using the bounded coefficient \(0<\beta<1\): \[\mathcal{D}_{JSD[\beta]}(P\|Q)=\beta\mathcal{D}_{KL}\Big{(}P\Big{\|}\beta P+ (1-\beta)Q\Big{)}+(1-\beta)\mathcal{D}_{KL}\Big{(}Q\Big{\|}\beta P+(1-\beta)Q \Big{)} \tag{2}\] Interestingly, it can be proved that \(\lim_{\beta\to 0}\frac{\mathcal{D}_{JSD(\beta)}(P\|Q)}{\beta}=\mathcal{D}_{KL}(P\|Q)\) (Huszar, 2015). As such, JSD(\(\beta\)) behaves similarly to forward KL for small values of \(\beta\). Similarly, JSD(\(\beta\)) has similar behavior to reverse KL for \(\beta\) close to 1, since \(D_{JSD(\beta)}(P\|Q)=\mathcal{D}_{JSD(1-\beta)}(Q\|P)\). ## 3 Notation & Setup We denote the input and output sequence as \(x,y\) respectively. Let \(\mathbb{V}\) denote the input and output vocabulary comprising of \(M\) tokens, \(y_{<n+1}=(y_{1},y_{2},\ldots,y_{n})\) denote the generated output sequence up to the \(n^{th}\) token, and \(L_{y}\) denote the length of sequence \(y\). We define the token-wise generative process as a deterministic Markov Decision Process (MDP), where the initial state corresponds to the input context or prompt \(x\), the state at the \(n^{th}\) token generation is the output sequence generated thus far, \(y_{<n}\), and the action space is defined over the vocabulary \(\mathbb{V}\). A policy \(p(.|y_{<n},x)\in(0,1)^{M}\) is a next-token probability distribution over all tokens in \(\mathbb{V}\), conditioned on the context \(x\) and state \(y_{<n}\). Following this formulation, the policy is identical to a token-level auto-regressive language model. Furthermore, \(y\sim p(.|x)\) corresponds Figure 3: **Mode _vs_ Mean-seeking KL under model under-specification**. The plot above shows the learned distribution when minimizing the forward and reverse KL between a mixture distribution \(P\) of 2 Gaussians with unit variance and a unimodal Gaussian \(Q_{\theta}\) with respect of \(Q\). Reverse KL is mode-seeking as minimizing it forces \(Q_{\theta}\) to be zero where \(P\) is zero and hence makes it concentrate on one of the modes (last plot). However, forward KL is mode-covering or mean-seeking minimizing it ensures that there is some mass under \(Q_{\theta}\) wherever there is some mass under \(P\) (all plots). See Le (2017) to replicate this plot. to a sampled output sequence \(y\) given the input context \(x\). For ease of notation, we define \(p(y_{n}|x):=p(y_{n}|y_{<n},x)\). Note that if we can not compute logits but are given only samples from a policy (_e.g._, accessing GPT-3 (Brown et al., 2020) through its API), then \(p(\cdot|y_{<n},x)\) corresponds to the empirical distribution on the sampled tokens \(y_{n}\) given \(y_{<n}\) and input \(x\). **Distillation Setup**. We are given two auto-regressive models of different capacity, where \(p_{\text{S}}\) and \(p_{\text{T}}\) refers to the student and teacher policy respectively. We assume that the student is a neural network with learnable parameters \(\theta\) and \(p_{\text{S}}^{\theta}\) is differentiable w.r.t \(\theta\). We are also given a dataset of input contexts \(X\). Optionally, we can also assume access to a dataset of input-output sequence pairs \((X,Y)\). If not given, such a dataset can be easily generated by sampling output sequences from the teacher, that is \(\{(x,y)\text{ s.t.}\,x\in X,y\sim p_{T}(\cdot|x)\}\). 
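Before moving on, the generalized JSD of Eq. (2) can be checked numerically: the sketch below verifies that \(\mathcal{D}_{JSD(\beta)}/\beta\) approaches the forward KL as \(\beta\to 0\) and that \(\mathcal{D}_{JSD(\beta)}(P\|Q)=\mathcal{D}_{JSD(1-\beta)}(Q\|P)\); the distributions are again arbitrary illustrative examples.

```python
# Sketch of the generalized Jensen-Shannon divergence JSD(beta) from Eq. (2),
# checking its limiting behaviour numerically (illustrative distributions).
import numpy as np

def kl(p, q, eps=1e-12):
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def jsd(p, q, beta):
    """beta*KL(P||M) + (1-beta)*KL(Q||M), with M = beta*P + (1-beta)*Q."""
    m = beta * p + (1.0 - beta) * q
    return beta * kl(p, m) + (1.0 - beta) * kl(q, m)

p = np.array([0.5, 0.4, 0.1])
q = np.array([0.6, 0.2, 0.2])

for beta in (1e-4, 0.1, 0.5, 0.9):
    print(f"beta={beta:<6}  JSD={jsd(p, q, beta):.6f}  JSD/beta={jsd(p, q, beta) / beta:.4f}")
print("forward KL(P||Q):", round(kl(p, q), 4))            # JSD/beta -> this as beta -> 0
print("symmetry check  :", np.isclose(jsd(p, q, 0.9), jsd(q, p, 0.1)))
```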
For a divergence \(\mathcal{D}\), we define the discrepancy between token-level distributions of \(p_{T}\) and \(p_{S}\) as \[\mathcal{D}\big{(}p_{\text{T}}\|p_{\text{S}}^{\theta}\big{)}(y|x):=\frac{1}{L _{y}}\sum_{n=1}^{L_{y}}\mathcal{D}\big{(}p_{\text{T}}(\cdot|y_{<n},x)\|p_{ \text{S}}^{\theta}(\cdot|y_{<n},x)\big{)}, \tag{3}\] for an input context \(x\) and output sequence \(y\). For example, setting \(\mathcal{D}\) in (3) to the reverse KL results in \(\mathcal{D}_{RKL}\big{(}p_{\text{T}},p_{\text{S}}^{\theta}\big{)}(y|x)=\frac{1 }{L_{y}}\sum_{n}\mathcal{D}_{KL}\big{(}p_{\text{S}}^{\theta}(\cdot|y_{<n},x) \|p_{\text{T}}(\cdot|y_{<n},x)\big{)}\). ## 4 Distillation for Autoregressive Models **Supervised FT**. If we are only given a fixed dataset of ground-truth output sequences but not the teacher policy, then a simple approach is to minimize the negative log-likelihood of such sequences under the student policy: \(L_{SFT}(\theta)=\mathbb{E}_{(x,y)\sim(X,Y)}\big{[}-\log p_{\text{S}}^{\theta} (y|x)\big{]}\). **Supervised KD**(Hinton et al., 2015; Sanh et al., 2019) is a prevalent model compression technique where a student policy is trained to imitate the token-level probability distributions of a teacher policy. The student \(p_{S}\) is trained with the supervised objective \(L_{SD}\) over the target token-level probabilities of the teacher \(p_{T}\): \[L_{SD}(\theta):=\mathbb{E}_{(x,y)\sim(X,Y)}\left[\mathcal{D}_{KL}\big{(}p_{ \text{T}}\|p_{\text{S}}^{\theta}\big{)}(y|x)\right], \tag{4}\] where the expectation is over the samples from the dataset. This supervised objective results in a rich training signal by leveraging the full token-level distribution of the teacher. Note that supervised KD reduces to supervised FT if we only have access to the empirical distribution of teacher-generated sequences and not the teacher policy. Following Hinton et al. (2015), we use a softmax-temperature \(\gamma\): \(p_{T}^{(v)}=\frac{\exp(z_{v}/\gamma)}{\sum_{i=1}^{N}\exp(z_{i}/\gamma)}\) where \(\gamma\) controls the smoothness of the token-level teacher distribution and \(z_{v}\) is the logit score of this distribution for the vocabulary token \(v\). For simplicity, we set the temperature to 1 for the student both during training and inference. **On-policy KD** is a class of imitation learning approaches for knowledge transfer from a teacher policy to another policy. In such approaches (_e.g._ Ross et al., 2011), we iteratively train a student policy using forward KL on a non-stationary dataset with self-generated outputs. Given an input context \(x\), the student generates the output sequence \(y\) and imitates the teacher generated action distribution, \(p_{T}(\cdot|y_{<n},x)\), on intermediate states \(y_{<n}\). Specifically, the on-policy loss \(\mathcal{L}_{OD}\) on student-generated outputs is given by \[L_{OnD}(\theta):=\mathbb{E}_{x\sim X}\Big{[}\mathbb{E}_{y\sim p_{\text{S}}( \cdot|x)}\left[\mathcal{D}_{KL}\big{(}p_{\text{T}}\|p_{\text{S}}^{\theta} \big{)}(y|x)\right]\Big{]}, \tag{5}\] where we do _not_ backpropagate through the sampling distribution \(p_{\text{S}}(\cdot|x)\). On-policy distillation improves on supervised distillation by training on a dataset that resembles the output distribution the student policy is likely to generate. 
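Below is a schematic of the per-sequence loss in Eqs. (3)-(4): a token-level divergence averaged over the \(L_{y}\) positions of the output, with the teacher distribution optionally softened by the temperature \(\gamma\). The array shapes and random logits are assumptions made for illustration; this is not the training code used in the paper.

```python
# Toy sketch of Eq. (3): average a token-level divergence over the L_y
# positions of one output sequence. Logits and shapes are illustrative.
import numpy as np

def softmax(z, gamma=1.0):
    # gamma is the teacher softmax-temperature from Hinton et al. (2015)
    z = z / gamma
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward_kl(p_t, p_s, eps=1e-12):       # D_KL(p_T || p_S), per token
    return np.sum(p_t * (np.log(p_t + eps) - np.log(p_s + eps)), axis=-1)

def reverse_kl(p_t, p_s, eps=1e-12):       # D_KL(p_S || p_T), per token
    return np.sum(p_s * (np.log(p_s + eps) - np.log(p_t + eps)), axis=-1)

def sequence_divergence(teacher_logits, student_logits, divergence, gamma=1.0):
    """Eq. (3): (1/L_y) * sum_n D( p_T(.|y_<n,x) || p_S(.|y_<n,x) )."""
    p_t = softmax(teacher_logits, gamma)    # shape [L_y, |V|]
    p_s = softmax(student_logits)           # student temperature fixed to 1
    return float(np.mean(divergence(p_t, p_s)))

rng = np.random.default_rng(0)
L_y, vocab = 5, 8                           # toy sequence length and vocabulary size
t_logits = rng.normal(size=(L_y, vocab))
s_logits = rng.normal(size=(L_y, vocab))

print("forward KL (supervised KD):", round(sequence_divergence(t_logits, s_logits, forward_kl), 4))
print("reverse KL (mode-seeking) :", round(sequence_divergence(t_logits, s_logits, reverse_kl), 4))
```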
Despite the popularity of on-policy distillation in imitation learning and RL (_e.g.,_ Parisotto et al., 2015; Kelly et al., 2019; Agarwal et al., 2022), it is typically not used for distilling auto-regressive models, with the exception of ImitKD (Lin et al., 2020) that can be instantiated using GKD as we discuss below. In this work, we unify supervised and on-policy distillation approaches and propose a more general approach, which we call Generalized Knowledge Distillation (GKD), where we can choose the divergence to optimize as well as the output distribution for the input training contexts. Specifically, we can optimize any divergence on token-level teacher and student distributions on a mixture of fixed dataset of output sequences, either teacher-generated or ground-truth, and on-policy student-generated sequences. Abstractly, GKD minimizes an objective of the form \[\boxed{L_{\text{GKD}}(\theta):=(1-\lambda)\operatorname{\mathbb{E}}_{(x,y) \sim(X,Y)}\left[\mathcal{D}(p_{\text{T}}\|p_{\text{S}}^{\theta})(y|x)\right]+ \lambda\operatorname{\mathbb{E}}_{x\sim X}\left[\operatorname{\mathbb{E}}_{p \sim p_{\text{S}}(\cdot|x)}\left[\mathcal{D}(p_{\text{T}}\|p_{\text{S}}^{ \theta})(y|x)\right]\right]}\] where \(D(p_{\text{T}},p_{\text{S}})(y|x)\) is a divergence between token-level teacher and student distributions, as described in Eq (3), and \(\lambda\in[0,1]\) is a hyper-parameter that controls the **student data fraction**, that is, the fraction of on-policy student-generated outputs. Akin to on-policy distillation, we do not backpropagate gradients through the student's sampling process. **Remark**. On-policy KD and supervised KD are special cases of GKD where we set the divergence \(\mathcal{D}\) to forward KL and student data fractions \(\lambda\) to \(1\) and \(0\) respectively. Furthermore, ImitKD (Lin et al., 2020) can be viewed as GKD with forward KL as divergence and a non-increasing schedule on \(\lambda\), a simple choice being \(\lambda=0.5\). That said, GKD allows for other choices for the on-policy fraction \(\lambda\) and the divergence, which we explore in this work. _Choose of Divergence._ When using forward KL for distillation, the student model tries to cover the entire support of the token-level teacher distribution \(p_{\text{T}}(\cdot|y_{<n},x)\), and in doing so might end up assigning probability mass to tokens \(y_{n}\) which have low probability under \(p_{\text{T}}(\cdot|y_{<n},x)\). Since the student has typically much lower model capacity than that of the student, this is likely to happen (_e.g._, Figure 3). We posit that a reasonable choice for \(\mathcal{D}\) in GKD might be the reverse KL or JSD(0.9), which has mode-seeking behavior. _Choosing student data fraction._ It is well-known that using only a fixed dataset of ground-truth outputs (\(\lambda=0\)) for training autoregressive models, which is a common practice, can lead to train-test distribution mismatch. This is because the distribution of partial sequences seen during the auto-regressive generation phase can be very different from the ones encountered during the model's training. One simple way to mitigate this issue is to sampling output sequences from the student during training itself, akin to on-policy KD. Furthermore, if we only have access to an unlabeled dataset \(X\) of input contexts, generating sequences from the student rather than the teacher is much less expensive due to the difference in their model sizes. 
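The role of the student data fraction \(\lambda\) can be made concrete with a schematic training step: with probability \(\lambda\) an example uses a student-sampled output sequence, otherwise a sequence from the fixed dataset, and in both cases the token-level divergence to the teacher is evaluated. All callables and toy data below are assumed stand-ins rather than functions from an actual implementation.

```python
# Schematic GKD loss evaluation: mix on-policy student samples and fixed
# dataset sequences via lambda; the divergence and sampler are toy stand-ins.
import random

def gkd_step(batch, sample_from_student, token_divergence, lam=0.5):
    """One GKD loss evaluation for a batch of (x, y_fixed) pairs."""
    total = 0.0
    for x, y_fixed in batch:
        if random.random() < lam:
            # On-policy branch; in GKD no gradient flows through the sampling itself.
            y = sample_from_student(x)
        else:
            y = y_fixed                      # teacher-generated or ground-truth sequence
        total += token_divergence(x, y)      # stands in for Eq. (3) evaluated on (x, y)
    return total / len(batch)

# Toy stand-ins so the sketch runs end to end.
sample_from_student = lambda x: x[::-1]                 # pretend "generation"
token_divergence    = lambda x, y: float(len(set(x) ^ set(y))) / max(len(x), 1)
batch = [("hello", "hallo"), ("world", "welt")]

print("GKD loss  (lambda=0.5):", gkd_step(batch, sample_from_student, token_divergence))
print("supervised (lambda=0) :", gkd_step(batch, sample_from_student, token_divergence, lam=0.0))
print("on-policy  (lambda=1) :", gkd_step(batch, sample_from_student, token_divergence, lam=1.0))
```

Setting \(\lambda\) to 0 or 1 in this sketch recovers the supervised and on-policy special cases discussed above.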
Particularly, when only using student-generated sequences (\(\lambda=1\)), GKD generalizes on-policy KD to arbitrary divergences: \[L_{\text{OnGKD}}(\theta):=\operatorname{\mathbb{E}}_{x\sim X}\Big{[} \operatorname{\mathbb{E}}_{y\sim p_{\text{S}}(\cdot|x)}\big{[}\mathcal{D}(p_ {\text{T}}\|p_{\text{S}}^{\theta})(y|x)\big{]}\Big{]}.\] ### RL + On-policy GKD In some tasks, it is plausible that the teacher only provides a proxy to our main objective. With an RL approach, we can however directly optimize this objective. Conveniently, a useful property of on-policy GKD is that it can easily be combined with online RL fine-tuning as it only requires output samples from the student. Indeed, consider that one wants to optimize the student policy for a scalar reward signal \(r\), while staying close to a teacher policy, then we get a regularized RL fine-tuning objective of the form: \[\mathbb{E}_{x\sim X}\left[(1-\alpha)\underbrace{E_{\mathbf{y}\sim p_{\mathrm{S} }^{\theta}(\cdot|x)}\left[r(y)\right]}_{\text{RL objective}}-\alpha \underbrace{\mathbb{E}_{\mathbf{y}\sim p_{\mathrm{S}}(\cdot|x)}\left[\mathcal{ D}(p_{\mathrm{T}}\|p_{\mathrm{S}}^{\theta})(y|x)\right]}_{\text{Generalized On-Policy Distillation}}\ \right], \tag{6}\] where \(\alpha\in[0,1]\) controls the strength of the distillation loss compared to the RL objective. With \(\alpha=1\), it will perform only distillation. Following Roit et al. (2023), we use a REINFORCE-like policy gradient algorithm for optimizing the RL loss. ## 5 Experiments In this section, we evaluate GKD for distilling LLMs, a widely-studied class of auto-regressive models, on a variety of language tasks, namely abstractive summarization, machine translation and arithmetic reasoning. **Student / Teacher Models**. In all experiments, we are given two models: a student and a teacher, pretrained on the same datasets, but with different model sizes. We focus on the pretrained T5 v1.1 models (Raffel et al., 2020) of different sizes. For all experiments, we use supervised fine-tuned T5-XL (\(\sim\) 3B params) as the teacher. For student models, we use T5-small (77M params), T5-base (250M params), and T5-large (800M params), which are smaller than the teacher by a factor of 38\(\times\), 12\(\times\) and 3.8\(\times\) respectively. **GKD Variants**. For choice of divergence \(\mathcal{D}\) in GKD, we use Forward KL, Reverse KL and two variants of JSD(\(\beta\)), JSD(0.1) and JSD(0.9). For student data fraction \(\lambda\), we try \(\lambda=0\) (**Supervised**), \(\lambda=1\) (**On-policy**) and \(\lambda=0.5\) (**Mixed**). In particular, we are interested in the mixed and on-policy variants with reverse KL and JSD, which have not been previously explored. We compare to the following baselines: * **Supervised FT**: See Section 4 It directly fine-tunes the student on either ground-truth or teacher-generated outputs. * **Supervised KD**: See Eq (4), GKD with \(\lambda=0\) and forward KL as divergence \(\mathcal{D}\). * **On-policy KD**: See Eq (5), GKD with \(\lambda=1\) and forward KL as divergence. Note that purely on-policy distillation hasn't been employed for auto-regressive models. * **ImitKD**(Lin et al., 2020): GKD with \(\lambda=0.5\) and forward KL as divergence. Figure 4: **Data efficiency and scaling** on XSum. Mixed and On-policy GKD variants with JSD(0.1) perform quite well with a small number of training examples and have better scaling behavior than KD methods that use forward KL. The T5-XL teacher obtains a ROUGE-2 score of 22. 
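Returning to the regularized fine-tuning objective of Eq. (6), the sketch below shows how a REINFORCE-style surrogate for the reward term combines with the on-policy distillation term for a single student-sampled sequence; the reward, log-probability, and divergence callables, the baseline, and the numerical values are all assumed placeholders rather than parts of an actual training pipeline.

```python
# Minimal sketch of Eq. (6): a REINFORCE-style reward term combined with the
# on-policy distillation term, weighted by alpha. Every callable and number
# below is an assumed placeholder, not code from the paper.
def rl_plus_gkd_loss(y, student_logprob, reward, seq_divergence, alpha=0.5, baseline=0.0):
    """Loss to *minimize* for one student-sampled sequence y (per Eq. (6))."""
    # REINFORCE surrogate: -(r(y) - b) * log p_S(y|x); its gradient w.r.t. the
    # student parameters recovers the policy-gradient term for E[r(y)].
    rl_term = -(reward(y) - baseline) * student_logprob(y)
    distill_term = seq_divergence(y)         # D(p_T || p_S)(y|x), as in Eq. (3)
    return (1.0 - alpha) * rl_term + alpha * distill_term

# Toy stand-ins so the sketch runs.
y = "a sampled summary"
student_logprob = lambda y: -12.3            # log p_S(y|x) of the sampled sequence
reward          = lambda y: 1.0              # e.g. a binary entailment score
seq_divergence  = lambda y: 0.8              # token-level divergence to the teacher

for alpha in (0.0, 0.5, 1.0):
    loss = rl_plus_gkd_loss(y, student_logprob, reward, seq_divergence, alpha)
    print(f"alpha={alpha}: loss = {loss:.3f}")
```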
The results for 100% XSum training dataset are shown in Figure 1. On-policy and Mixed GKD surpassses supervised KD and ImitKD with 100% dataset using only 0.5% of the original dataset. ### Case Study: Abstractive Summarization **Training Setup**. We start by evaluating GKD on an abstractive summarization task of generating a concise summary that captures salient ideas of the input document. To do so, we use the XSum dataset (Narayan et al., 2018), which consists of news articles paired with human-written summaries. For evaluating summarization performance, we report ROUGE scores (Lin, 2004) of predicted summaries on the validation dataset split of XSum. Following PaLM (Chowdhery et al., 2022), we mainly use ROUGE-2 but observe similar trends in ROUGE-LSun and ROUGE-1. We use T5 models fine-tuned on XSum as the base models for distillation while the T5-XL model as the teacher. We report ROUGE-2 performance after a fixed number of training steps with batch size of 32. **Data efficiency and scaling**. To evaluate the efficiency and scaling characteristics of various methods, we perform knowledge distillation by using the teacher only with subsampled XSum training datasets containing the first 1K (0.5%), 10K (5%) and 50K(\(\sim 25\%\)) examples. For this experiment, we use T5-small as the student and report the results in Figure 4. Notably, on-policy GKD with JSD(0.9) on the 0.5% dataset, without any ground-truth summaries, outperforms supervised KD and ImitKD with 100% of the training dataset with ground-truth summaries (Figure 1). Furthermore, as we scale the dataset size from 0.5% to 25%, on-policy KD results in a 6\(\times\) larger improvement than supervised KD and ImitKD. **Varying student model capacity**. Next, we explore how GKD compares to prior distillation approaches across different student model sizes. As shown in Figure 1, we observe consistent improvements with GKD, which demonstrates the generalizability of GKD. Furthermore, we find that GKD variants with reverse KL or JSD(0.1) substantially outperform approaches that use the forward KL. Interestingly, GKD allows us to nearly match the few-shot performance of PaLM (540B) using a \(7000\times\) smaller model (T5-small). **On-policy GKD with RL**. For summarization, we'd like model-generated summaries to be factually consistent with their input documents. However, distillation alone might not be for improving factual consistency of the student model as even large teachers generate factually inconsistent summaries. Recently, Roit et al. (2023) mitigate this issue by using RL with textual entailment feedback (RLEF) as faithful summaries but be textually entailed from their corresponding input documents. Inspired by their success, we explore combining RL optimization with on-policy GKD, as shown in (6). As shown in Figure 5, this approach can substantially improve factual consistency over the teacher model while obtaining large improvements in ROUGE-2 score for the distilled student model. **Ablating GKD variants**. We now evaluate the impact of different KL-based divergences and student data fractions in GKD for distilling T5-XL to a T5-small student. As shown Figure 5: **Trade-off between reward maximization and distillation performance** (ROUGE-2) on XSum. We report performance improvements relative to the original student model (T5-base). Following Roit et al. (2023), we use the binary entailment score from a NLI classifier as the reward while \(\alpha\) controls the strength of the on-policy GKD loss with JSD (0.1). 
As \(\alpha\) increases, ROUGE-2 increases while entailment score improvement decreases. For comparison, we show the relative performance of the 12\(\times\) larger teacher model. RLEF* corresponds to RLHF-like method from Roit et al. (2023), where the student is regularized towards the original student model itself instead of the teacher. As expected, on-policy GKD + RL achieves higher ROUGE-2 compared to the RLEF* and obtain much better entailment reward compared to the teacher. in Figure 6, forward KL performs poorly compared to other divergences while on-policy variants typically perform at least as well or better than the mixed and supervised variants. **Varying Teacher Temperature**. Furthermore, contrary to conventional wisdom in supervised learning, Figure 7 demonstrates that setting teacher's softmax-temperature lower than 1 can sometimes lead to large performance improvements. ### Machine Translation To evaluate GKD beyond summarization, we consider WMT14 en-de (Bojar et al., 2014), a machine translation task that requires rewriting text in English into German, preserving the content, semantics and style of the input. We report performance on the original test split using the BLEU score, which measures the similarity of machine-translated text to a set of high quality reference translations. We use fine-tuned T5-Base and T5-small models as students while T5-XL as the teacher with a softmax-temperature of 0.1. **Results**. As shown in Figure 8, we observe that mode-seeking KL-based divergences typically outperform their mean-seeking counterparts. Furthermore, using on-policy and mixed data distributions consistently outperforms GKD variants only using a fixed supervised dataset, showing the importance of generating on-policy output sequences from the stu Figure 8: **Varying on-policy student data fraction and divergence in GKD on WMT English \(\rightarrow\) German**. We report the BLEU score improvement of distilled student relative to the original student model. We observe that using student-generated samples as well as mode-seeking divergences, that is reverse KL and JSD (0.9), outperforms using only teacher-generated data or using forward KL. We use the T5-XL (\(\sim\)38 params) supervised fine-tuned on WMT as the teacher, which obtains a BLEU score of 28. (Left) Student corresponds to T5-base (250M params) with a BLEU score of 27.30. GKD with reverse KL and 50-50 split between student-generated and ground-truth translations performs the best. (Right) We use T5-small (77M params) as the student, which obtain a BLEU score of 25.52. On-policy GKD with reverse KL or JSD (0.9) perform the best. Figure 6: **Ablating GKD variants on XSum.** We distill T5-XL to T5-Small (77M params), which obtain a ROUGE-2 score of 7.96 and 22 respectively. For the distilled student, we report improvement in ROUGE-2. On-policy GKD with JSD (0.9) performs the best. dent. These findings are consistent with what we observed on XSum. However, reverse KL seem to perform better than JSD (0.9), while for XSum their ranking seems to be reversed. ### Arithmetic Reasoning Wei et al. (2022) show that reasoning abilities only appear to emerge in LLMs with at least several billions parameters, making knowledge distillation an important area of research for improving reasoning abilities of smaller models. To this end, we evaluate GKD on natural language math problems requiring multi-step logical inference. 
In particular, we use GSM8K (Cobbe et al., 2021), a high quality linguistically diverse dataset of 8.5K grade school math word problems. Furthermore, we explore GKD in conjunction with chain-of-thought (CoT) prompting (Wei et al., 2022), a common approach to improve reasoning abilities of LLMs, by simply augmenting the input contexts with the given prompt. **Training Setup**. We perform few-shot prompting by augmenting the math problems in GSM8K with the first 4 CoT exemplars from Wei et al. (2022), which we use as input contexts. For evaluation, we report accuracy on the GSM8K test split by checking whether the target answer matches the final answer given an external calculator, akin to Cobbe et al. (2021). For supervised training, we use the CoT outputs generated by Magister et al. (2022) with Palm 540B and filtered using the target answer, resulting in around 5.3K (problem, CoTs) pairs in the original training split of GSM8K. Magister et al. (2022) show that supervised fine-tuning (SFT) on this dataset substantially improves test accuracy on GSM8K. As such, we use Flan-T5 models (Chung et al., 2022) fine-tuned for 10K steps on the above CoT dataset as a starting point for distillation. We use the supervised fine-tuned FLAN T5-XL model as the teacher, which obtains a test accuracy of 27.1. **Results**. As shown in Figure 9, we observe that on-policy GKD methods perform quite well compared to other approaches. Notably, in contrast to our XSum results, we notice that GKD with forward KL (on-policy KD) performs quite well. However, GKD with reverse KL outperform the on-policy KD for the FLAN T5-base and T5-large student models. Also, we observe that mixing student-generated outputs and supervised CoT dataset typically under-performs using only the on-policy student-generated outputs (Figure 10). Figure 10: Ablating GKD variants on GSM8K. Figure 9: **Distillation results on GSM8K with 4-shot chain-of-thought (CoT) prompting. On-policy GKD variants outperform other distillation approaches. As a reference, we provide UL2 results with CoT + calculator as well as LaMDA results with CoT (w/o calc). The teacher FLAN T5-XL achieves a test accuracy of 27.1. SFT corresponds to the approach of Magister et al. (2022).** ## 6 Related work **Knowledge distillation.** Supervised KD (Bucilua et al., 2006; Hinton et al., 2015) is now a classic approach for model compression and has been successfully used for distilling auto-regressive models (Sanh et al., 2019). Another natural approach for distilling auto-regressive models is sequence-level KD (Kim and Rush, 2016), which maximizes the log-likelihood of the student on sequences generated by the teacher. Other approaches rather train the student to match different quantities obtained from the teacher, such as hidden states (Jiao et al., 2020) or attention scores (Wang et al., 2020). However, none of these approaches make the connection between sequence-level distillation and imitation learning, and a purely supervised learning approach can lead to distribution drifts, that is exposure bias (Arora et al., 2022). The first approach to make this connection is ImitKD from Lin et al. (2020), inspired by Dagger (Ross et al., 2011). It samples trajectories from both the student and the teacher to reduce the exposure bias, but does not push the idea further and keep the forward KL at the token level. This is not necessary when one has access to the teacher's log-probabilities, rather than just samples. 
In essence, ImitKD is a special case of the proposed GKD, which we show to generally lead to worse empirical results in our experiments. More recently, the concurrent work on MiniLLM (Gu et al., 2023) also exploits the link to imitation while also recognizing the limitation of forward KL. They frame distillation as an RL problem with the reward being the reverse KL between the teacher and the student, this being equivalent to the reverse KL at the sequence level (while likelihood maximiziation is the forward one), and solve it using a policy gradient approach. This is close to what we propose, but we argue that GKD is simpler and more stable, being closer to supervised training, since it does not backpropagate through the student's sampling process. Indeed, MiniLLM relies on a number of stabilizing tricks, to tackle high variance, reward hacking, and generation length bias. GKD is also more general, in the sense that we also consider the generalized Jensen-Shannon divergence, which sometimes provides better results. **RL finetuning.** There are now numerous examples of large language models being fine-tuned with RL, be the reward optimizing for some metric (Wu et al., 2018), or learned using human feedback (Ziegler et al., 2019). In these approaches, it is typical to regularize the RL finetuned model towards the initial (usually supervised fine-tuned) model. However, as far as we know, we are the first to perform distillation and RL finetuning at the same time. If it may seem natural, it is quite different from an optimization perspective, as it changes the regularization towards the initial policy to towards the teacher policy, and we show empirically that it is a viable approach. **Knowledge Distillation with Reasoning Traces**. CoT prompting (Wei et al., 2022) has recently demonstrated that LLMs can solve complex reasoning tasks, step by step, just by prompting. This idea was quickly adapted to knowledge distillation, by extending the teacher dataset with CoT prompts for fine-tuning the student (Magister et al., 2022; Ho et al., 2022; Hsieh et al., 2023). The distillation is still done in a supervised way, and other kind of enhanced prompts could be considered (Li et al., 2022; Mukherjee et al., 2023). We adopt the same approach, but combine it with on-policy distillation with various divergences, for the first time to the best of our knowledge. It shows the versatility of GKD, and improves upon the purely supervised approach, as seen in our results on GSM8K (Figure 9). ## 7 Conclusion In this work, we proposed generalized knowledge distillation (GKD) for auto-regressive models that addresses the challenges of distribution mismatch and model under-specification. We investigated performance of GKD on three natural language generation tasks: abstractive summarization, machine translation, and arithmetic reasoning. Our method consistently outperformed more commonly-used knowledge distillation baselines. We further showed that the approach can be combined with reinforcement learning to optimize a sequence-level reward in addition to distilling the knowledge of a large teacher model. We believe that our method will be a valuable resource for researchers and practitioners who are working on improving performance of small auto-regressive models. ## Acknowledgments We are thankful to Johan Ferret, Leonard Hussenot and Ramki Gummadi for providing feedback on an early draft of this work. 
We are especially thankful to Nikola Momchev and Sertan Girgin for key contributions to our research infrastructure and general support with infrastructure-related queries. We would also like to acknowledge Aleksandra Faust, Olivier Pietquin, Dale Schuurmans, Aaron Courville, Leonard Hussenot, Robert Dadashi, Adrien Ali Taiga, and Max Schwarzer for helpful discussions.
2307.03773
The topological Kondo model out of equilibrium
The topological Kondo effect is a genuine manifestation of the nonlocality of Majorana modes. We investigate its out-of-equilibrium signatures in a model with a Cooper-pair box hosting four of these topological modes, each connected to a metallic lead. Through an advanced matrix-product-state approach tailored to study the dynamics of superconductors, we simulate the relaxation of the Majorana magnetization, which allows us to determine the related Kondo temperature, and we analyze the onset of electric transport after a quantum quench of a lead voltage. Our results apply to Majorana Cooper-pair boxes fabricated in double nanowire devices and provide nonperturbative evidence of the crossover from weak-coupling states to the strongly correlated topological Kondo regime. The latter dominates at the superconductor charge degeneracy points and displays the expected universal fractional zero-bias conductance.
Matteo M. Wauters, Chia-Min Chung, Lorenzo Maffi, Michele Burrello
2023-07-07T18:00:04Z
http://arxiv.org/abs/2307.03773v2
# The topological Kondo model out of equilibrium ###### Abstract The topological Kondo effect is a genuine manifestation of the nonlocality of Majorana modes. We investigate its out-of-equilibrium signatures in a model with a Cooper-pair box hosting four of these topological modes, each connected to a metallic lead. Through matrix-product-state techniques, we simulate the relaxation of the Majorana magnetization, which allows us to determine the related Kondo temperature. Then, we analyze the onset of electric transport after a quantum quench of a lead voltage. Our results apply to Majorana Cooper-pair boxes fabricated in double nanowire devices and provide non-perturbative evidence of the crossover from weak-coupling states to the strongly correlated topological Kondo regime. The latter dominates at the superconductor charge degeneracy points and displays the expected universal fractional zero-bias conductance. The engineering of Majorana zero-energy modes (MZMs) in hybrid superconducting-semiconducting devices has been the core of strenuous theoretical and experimental activities for the last two decades [1; 2; 3]. The detection of these subgap modes relies primarily on tunneling spectroscopy applied to a rich variety of platforms. Tunneling spectroscopy, however, cannot provide direct evidence of the most intriguing properties of Majorana modes, namely their nonlocal and anyonic features. Hence, it is desirable to devise a new generation of experiments that balances the constraints imposed by the current technological limitations and the pursuit of MZM evidence beyond spectroscopy. In this respect, the topological Kondo effect (TKE) [4; 5; 6] plays a crucial role: on one side, it is a transport signature of MZMs well-suited for experimental observations; on the other, it directly results from their nonlocality, such that it can hardly be confused with phenomena originating by nontopological subgap states [7]. The TKE is predicted to emerge in multiterminal devices where \(M\) external leads are coupled to a Majorana Cooper-pair box hosting four MZMs and characterized by a sufficiently strong charging energy \(E_{c}\) (Fig. 1). The TKE manifests itself as a universal nonlocal zero-bias conductance \(dI_{\alpha}/dV_{\beta\neq\alpha}\) quantized at values \(2e^{2}/Mh\). Such conductance is approached at low temperatures in the strong coupling regime in correspondence of both the Coulomb valleys and Coulomb peaks of the related devices [8], as derived from the renormalization group (RG) analysis of effective low-energy models describing the Majorana Cooper-pair box and its coupling to the leads [4; 5; 6; 8; 9; 10; 11; 12]. We adopt a more elementary approach to show the onset of the TKE in out-of-equilibrium systems: we investigate a minimal fermionic model that includes not only the zero-energy Majorana degrees of freedom of the Cooper-pair box, but also its quasiparticle excitations above the superconducting gap. We study, in particular, its dynamics following different protocols of quantum quenches. The time evolution is determined by the tunneling of single electrons from the leads to the central superconducting island, and, differently from the most typical characterizations of the TKE [4; 6; 10; 13; 14; 15], we apply matrix-product-state simulations [16] which do not rely on any perturbative approximation of this coupling. This technique allows us to examine the crossover between the predicted weak-coupling and topological Kondo strong-coupling regimes. 
The model we propose aims at describing Majorana Cooper-pair boxes engineered from nanowires. Recent developments in the fabrication of parallel double InAs nanowires hybridized with Al [17; 18] make these platforms suitable to combine all the necessary elements for the implementation of the topological Kondo model. Such devices hold promise to investigate its transport signatures as a function of the lead voltage bias, the charge induced on the central superconducting (SC) island, and Figure 1: Schematics of the system: two p-wave superconducting nanowires with MZMs at the edges are coupled by a superconducting island (blue) with charging energy \(E_{c}\). Voltage gates (yellow) tune the island induced charge, \(n_{g}\propto V_{g}\), and the coupling rates \(\Gamma_{\alpha}\) with the leads (orange). Each MZM is coupled with a single normal lead at chemical potential \(\mu_{\alpha}\). the tunneling rates from the leads to the island (Fig. 1). In the following, we will focus on deriving the dependence of the topological Kondo temperature \(T_{K}\) and currents on these physical parameters. _Model and methods.-_ The minimal model for the TKE that we consider describes two parallel 1D topological superconductors coupled by a common floating SC island with charging energy \(E_{c}\) and charge \(n_{g}\) induced by the potential \(V_{g}\) (Fig. 1). These two coupled systems effectively represent two nanowires with strong spin-orbit coupling subject to a proximity-induced SC pairing and a suitable Zeeman interaction, which provide the most common route to engineer MZMs [19; 20]. Their low-energy physics is described by spinless fermions subject to an emergent p-wave SC pairing \(\Delta_{P}\). As a result, four MZMs \(\{\gamma_{\alpha}\}_{\alpha=1,\ldots,4}\) form at the edges of these nanowires and each of them is coupled to a spinless normal lead. The effective tunneling rates \(\Gamma_{\alpha}\) between the leads and the MZMs can be switched off to change the number of terminals \(M\leq 4\) coupled to the system. The simplest description for each SC nanowire is a zero-bandwidth model [21; 22], where the lowest energy level is the subgap state defined by two Majorana operators while the higher energy state represents Bogoliubov quasiparticles above the SC gap. This is achieved by considering a 2-site Kitaev chain for each nanowire, with each of the four corresponding fermionic sites tunnel-coupled to one of the leads. This system defines the Majorana Cooper-box [23; 24] sketched in Fig. 1. The Hamiltonian can be decomposed into \(\widehat{H}=\widehat{H}_{\rm sys}+\widehat{H}_{L}+\widehat{H}_{\rm t}\); \(\widehat{H}_{\rm sys}\) describes the Majorana Cooper-pair box: \[\widehat{H}_{\rm sys}=\sum_{\sigma,n}\epsilon_{n,\sigma}\hat{f}_{n,\sigma}^{ \dagger}\hat{f}_{n\,\sigma}+E_{c}(\hat{N}-n_{g})^{2}\, \tag{1}\] where \(\sigma=\uparrow,\downarrow\) labels the upper and lower nanowires and \(n=0,1\) labels the two quasiparticle energy levels in each of them[25]. \(\hat{N}\) is the total charge of the box with respect to an arbitrary offset. It includes the charge of its Cooper pairs, as well as the electrons in the nanowires. The two zero-energy quasiparticles are generated by the combinations of the MZMs \(\hat{f}_{0,\uparrow}=(\hat{\gamma}_{1}-i\hat{\gamma}_{2})/2\) and \(\hat{f}_{0,\downarrow}=(\hat{\gamma}_{3}-i\hat{\gamma}_{4})/2\). 
We label the four corresponding low-energy states by \(|n_{\uparrow}n_{\downarrow}\rangle\), with \(\hat{n}_{\sigma}=\hat{f}_{0,\sigma}^{\dagger}\hat{f}_{0,\sigma}\). The charging energy splits them into two two-dimensional degenerate subspaces with different total fermionic parity \((-1)^{\hat{N}}\). The leads are modeled by Wilson chains [26; 16; 27] \[\widehat{H}_{L}=\sum_{\alpha=1}^{4}\sum_{l=1}^{\mathcal{L}}\left[-t_{0}{\rm e }^{-(l-1)/\xi}\hat{c}_{\alpha,l+1}^{\dagger}\hat{c}_{\alpha,l}+{\rm h.c.} \right]-\mu_{\alpha}\hat{c}_{\alpha,l}^{\dagger}\hat{c}_{\alpha,l}\,, \tag{2}\] with \(t_{0}\) being the _bare_ hopping amplitude which sets their bandwidth and is the largest energy scale in our simulations. The hopping decay length \(\xi\) is a numerical auxiliary variable that allows us to tune the resolution at small energies by modifying the lead level spacing [28; 16; 27]. The chemical potentials \(\mu_{\alpha}\) are used to bring the system out of equilibrium and study nonlocal transport properties. Finally, the tunneling Hamiltonian between the leads and the system is \[\widehat{H}_{\rm t}=-\sum_{\alpha=1}^{4}\sum_{\sigma,n}J_{\alpha}\left[\left( u_{\alpha,\sigma,n}\hat{f}_{\sigma,n}^{\dagger}+v_{\alpha,\sigma,n}\hat{f}_{ \sigma.n}\right)\hat{c}_{\alpha,1}+{\rm H.c.}\right], \tag{3}\] where \(u_{\alpha,\sigma,n}\) (\(v_{\alpha,\sigma,n}\)) is the particle (hole) projection of \(\hat{f}_{\sigma,n}\) on the real-space site coupled to the lead \(\alpha\). The tunneling amplitudes \(J_{\alpha}\) are linked to the effective tunneling rates as \(\Gamma_{\alpha}=\frac{J_{\alpha}}{2t_{0}}\). Throughout this paper, we fix the p-wave pairing \(\Delta_{P}\) and the nanowire hopping amplitude \(t_{\rm sys}\) to \(\Delta_{P}=t_{\rm sys}=0.5t_{0}\). We also induce a small hybridization between the MZMs on each nanowire by setting \(\mu_{\rm sys}=0.01t_{0}\) in both Kitaev chains [25]. In our simulations, we map the system into a matrix product state (MPS) by following the approach in Refs. [28; 16]. Each MPS site represents a single-particle eigenstate of either the leads or the nanowires (Bogoliubov quasiparticles for nanowires) and we order them based on their energy. The charge degree of freedom \(\hat{N}\) is encoded in an auxiliary bosonic site [29; 25]. The real-time dynamics is simulated using the time-dependent variational principle (TDVP) algorithm [30; 31; 32] from the ITensor library [33; 34]. _Relaxation towards equilibrium.-_ In the dynamics of Kondo problems, the formation of strong correlations and the Kondo screening cloud occurs over a time scale given by \(T_{K}^{-1}\)[35; 36; 28; 37]. Therefore, the relaxation after a quantum quench offers a useful probe to estimate the Kondo temperature and verify the onset of strongly correlated states. The first quench protocol we consider aims at observing the relaxation of the Majorana Cooper-pair box caused by the coupling with the leads. The SC box is initially prepared in the ground state \(|00\rangle\) (\(N=0\)) for \(n_{g}<0.5\), or \(|10\rangle\) (\(N=1\)) for \(n_{g}>0.5\). The box is originally decoupled from the leads, which are set at half-filling. At time \({\sf t}=0\), the couplings \(\Gamma\) are suddenly turned on and the device begins relaxing toward equilibrium. 
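The role of the auxiliary decay length \(\xi\) introduced in Eq. (2) can be illustrated with a short single-particle calculation: diagonalizing one Wilson-chain lead shows the finer level spacing near zero energy produced by the exponentially decaying hoppings, compared with a uniform chain. The sketch below is purely illustrative (it uses \(t_{0}=1\) and the \(\mathcal{L}=64\), \(\xi=16\) values quoted in the figures) and is independent of the MPS machinery used for the actual simulations.

```python
# Illustrative single-particle spectrum of one lead of Eq. (2): an open
# tight-binding chain with hoppings t_0 * exp(-(l-1)/xi). Not the MPS code.
import numpy as np

def wilson_chain_levels(length=64, xi=16.0, t0=1.0, mu=0.0):
    h = np.zeros((length, length))
    for l in range(length - 1):
        h[l, l + 1] = h[l + 1, l] = -t0 * np.exp(-l / xi)   # t_0 e^{-(l-1)/xi}
    np.fill_diagonal(h, -mu)
    return np.linalg.eigvalsh(h)

uniform = wilson_chain_levels(xi=1e12)   # xi -> infinity reproduces a uniform chain
wilson  = wilson_chain_levels(xi=16.0)   # value used for the figures of the paper

print("four smallest |E| (uniform chain):", np.round(np.sort(np.abs(uniform))[:4], 5))
print("four smallest |E| (Wilson chain) :", np.round(np.sort(np.abs(wilson))[:4], 5))
```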
To characterize this relaxation, we analyze the average charge on the island \(\langle\hat{N}(t)\rangle\), and the effective Majorana magnetization [4; 13], defined as \(\langle\hat{Z}_{\rm eff}(t)\rangle\equiv\langle i\hat{\gamma}_{3}\hat{\gamma}_{ 4}(t)\rangle=1-2\langle\hat{n}_{\downarrow}(t)\rangle\). The observed dependence of the charge \(\langle\hat{N}\rangle\) on \(n_{g}\) after equilibration (Fig. 2) shows the crossover between the weak-coupling and the strong-coupling regime. In particular, following Ref. [38], we characterize the weak-coupling regime at \(n_{g}\sim 0\) by the slope of \(\langle\hat{N}\rangle\): \[\left.\frac{\partial\langle\hat{N}\rangle}{\partial n_{g}}\right|_{n_{g}=0}= \frac{M\Delta_{P}\Gamma}{E_{c}t_{0}}. \tag{4}\] When the coupling \(\Gamma\) is weak, the charge datasets corresponding to different choices of \(E_{c}\), and \(M\) exhibit a good agreement with Eq. (4) [inset of Fig. 2(a)]. On the other hand, the sinusoidal correction derived in Ref. [38] for the strong-coupling regime, \[\langle\hat{N}\rangle=n_{g}-\left(\frac{E_{c}}{\Delta_{P}}\sqrt{1-\Gamma/t_{0} }\right)^{M}\sin(2\pi n_{g}), \tag{5}\] closely matches the numerical data for the highest value of the tunneling rate \(\Gamma=0.08t_{0}\) [gray dot-dashed line and red squares in Fig. 2(a)], thus suggesting the emergence of Kondo correlations. Importantly, the time scale associated with the relaxation of \(\langle\hat{N}\rangle\) depends on the ratio \(\Gamma/E_{c}\) but not on the induced charge \(n_{g}\), as shown in Fig. 2(b) where we plot the time dependence of the relative charge variation, defined as \[\langle\delta\hat{N}(t)\rangle=\frac{|\langle\hat{N}(t)\rangle-\langle\hat{N} (0)\rangle|}{|\langle\hat{N}(t\rightarrow\infty)\rangle-\langle\hat{N}(0) \rangle|}. \tag{6}\] The vertical line marks the equilibration time and different curves, corresponding to different values of \(n_{g}\in\ [0,1]\), converge to the asymptotic value on similar time scales. The magnetization, instead, displays a remarkably different behavior, as shown in Fig.3(a). At short times, \(\mathfrak{t}<\hbar/\Gamma\), the relaxation is dominated by the fast rate \(\Gamma\) (dot-dashed line) independently of both \(E_{c}\) and \(n_{g}\). Then, there emerges a second timescale that depends on both \(\Gamma\) and the energy difference \(\delta E(n_{g})=E_{c}|1-2n_{g}|\) between the charge sectors \(N=0\) and \(N=1\). The black dashed lines in Fig. 3(a) are exponential fits of these slower decays for different values of \(n_{g}\in\ [0,1]\), while \(E_{c}=0.2t_{0}\) and \(\Gamma=0.08t_{0}\). This behavior is analogous to the relaxation of the magnetization in the Anderson impurity model [39; 28], suggesting that this longer timescale is associated with the energy scale \(T_{K}\) of the emerging TKE. The comparison of Figs. 3(a) and 2(b) makes it evident that this Kondo timescale characterizes only the Majorana magnetization but not the charge; the former constitutes indeed one of the effective Pauli operators, \(\langle\hat{Z}_{\text{eff}}\rangle\), at the heart of the definition of the TKE, whereas \(\langle N\rangle\) depends only on the fermionic parity of the SC island. Therefore, we interpret this charge - "spin" separation after the quantum quench as evidence of the emergence of the TKE. Figure 3: (a): Dynamics of the Majorana magnetization for different values of \(n_{g}\in[0,1]\). The dot-dashed line marks the fast relaxation depending on \(\Gamma\) alone. 
(b) \(T_{K}\) extracted as the relaxation rate of \(\langle\hat{Z}_{\text{eff}}\rangle\) —dashed black lines in panel (a)— as a function of \(n_{g}\). (c) \(T_{K}\) as a function of the timescale \(\Gamma^{-1}\) at \(n_{g}=0.5\) and in the even-parity Coulomb valley (\(n_{g}=0.25\)). Dot-dashed lines indicate the expected scaling in the valleys [Eq. (7)], whereas the dashed line marks the scaling at the charge degeneracy point \(T_{K}\sim M\Gamma\). A non-universal prefactor \(C\sim 0.2\) has been manually set to approximately match the data. All data are obtained with \(\mathcal{L}=64\), \(\xi=16\). Figure 2: (a) Equilibrium charge as a function of \(n_{g}\), for \(M=3\), \(E_{c}=0.2t_{0}\). The gray dot-dashed line corresponds to Eq. (5) for \(\Gamma=0.08t_{0}\). The inset shows data for different values of \(E_{c}\) (\(0.2t_{0}\) and \(0.4t_{0}\)) and \(M=3,\ 4\) in the weak coupling regime, rescaled by \(\frac{M\Delta E\Gamma}{E_{c}t_{0}}\). The dashed black line corresponds to Eq. (4). (b) Relaxation of the charge for different values of \(n_{g}\in[0,1]\), \(E_{c}=0.2t_{0}\) and \(\Gamma=0.04t_{0}\). All data are obtained with \(\mathcal{L}=64\) and \(\xi=16\). Motivated by this observation, we analyze the dependence of the so-derived decay rates \(T_{K}\) on \(n_{g}\), \(\Gamma\), and \(E_{c}\). Figure 3(b) depicts the fitted \(T_{K}\) as a function of the induced charge for different values of the coupling \(\Gamma\) and \(E_{c}=0.2t_{0}\). As expected from RG analyses, \(T_{K}\) is larger at the charge degeneracy point, where it is proportional to \(M\Gamma\), consistently with Ref. [38]. In the Coulomb valleys, instead, \(T_{K}\) is qualitatively compatible with standard RG predictions [38]: \[T_{K}\sim E_{c}\mathrm{e}^{-\frac{\delta E(n_{g})t_{0}}{2(M-2)\Gamma A_{p}}}\,. \tag{7}\] The different behaviors at the charge degeneracy point (\(n_{g}=0.5\)) and in the even Coulomb valley (\(n_{g}=0.25\)) are exemplified in Fig. 3(c), where we plot \(T_{K}\) versus \(t_{0}/\Gamma\) for \(E_{c}=0.2t_{0}\) (circles) and \(E_{c}=0.4t_{0}\) (triangles), with both \(M=3\) (full symbols) and \(M=4\) (empty symbols). The Kondo temperature extracted at \(n_{g}=0.5\) is independent of both \(E_{c}\) and \(M\) and it decreases with a power law compatible with \(T_{K}\sim\Gamma\) (dashed line). For large values of \(\Gamma\), the magnetization can change sign, preventing us from extracting \(T_{K}\) with high precision (see also the large errorbar at \(n_{g}=0.5\) in Fig. 3(b)). When looking at the Coulomb valleys, instead, \(T_{K}\) shows a substantial drop when increasing the charging energy: not only it is smaller for \(E_{c}=0.4t_{0}\), but it decreases faster with \(1/\Gamma\), in accordance with Eq. (7) (dot-dashed lines). Notice that the data for \(M=4\), \(E_{c}=0.4t_{0}\) and \(M=3\), \(E_{c}=0.2t_{0}\) almost coincide as Eq. (7) predicts the same behavior but for a factor 2 in front. Our data display a concavity that is absent in Eq. (7) and suggests a competing power law dependence on \(\Gamma\) in agreement with NRG results of the low-energy effective model [10]. _Nonlocal transport.-_ To investigate multiterminal transport properties, we adopt a different quench protocol, using DMRG to prepare the ground state corresponding to the device coupled with \(M\) leads at equilibrium (\(\mu_{\alpha}=0\)) and induced charge \(n_{g}\). In general, such a state is a superposition of different charge and magnetization states. 
At \(\mathsf{t}=0\) we quench the chemical potential in the first lead to a finite value \(\mu_{1}=eV_{b}\) and compute the average current flowing through the remaining connected terminals. We refer to the latter as average nonlocal current. RG predicts a fractional zero-bias nonlocal conductance, \(G_{\mathrm{TKE}}=\frac{2}{M}\frac{e^{2}}{h}\), independent from all other physical parameters for \(T\ll T_{K}\), both in the Coulomb valleys[4; 5; 6], and at the charge-degeneracy points [8; 11; 15; 38]. Our simulations capture this fractional conductance for \(M=3,4\) for sufficiently strong coupling in proximity of the charge degeneracy point where \(T_{K}\) is maximum and the behavior of this fixed point can be observed for an extended voltage bias window (Fig. 4). Close to the charge degeneracy point, we observe non-Fermi liquid power-law corrections with non-integer exponents which, however, do not seem compatible with the first-order scaling predicted by bosonization and RG [8; 11; 15; 25; 38; 40]. Our simulations are performed at zero temperature, but, away from the charge degeneracy point, \(T_{K}\) becomes comparable with the energy we introduce with the finite bias \(eV_{b}\), such that we cannot easily capture the universal strong-coupling features of the model. In Fig. 5 we plot the average nonlocal current (\(M=3\) and \(E_{c}=0.4t_{0}\)) divided by the voltage bias as a function of \(n_{g}\). We set \(\mu_{1}=eV_{b}=0.02t_{0}\), which is small enough to probe the response close to the linear regime, yet the data display a good signal-to-noise ratio allowing for a reliable estimate of the current. The TKE prediction is met only at the charge degeneracy point and strong coupling, consistently with Fig. 4, while the strong \(n_{g}\) dependence confirms that we are not deep in the TKE regime; however, there are several hints of the emergence of a strongly-correlated Kondo state also in the Coulomb valleys. In Fig. 5, we compare our numerical data with the conductance of a single noninteracting resonant fermionic level (dashed lines), which represents the charge degree of freedom coupled with \(M=3\) leads: \[G_{\mathrm{rI}}(n_{g},\mu)=\frac{e^{2}}{h}\frac{4\Gamma^{2}}{M^{2}\Gamma^{2}+ 4[\mu-E_{c}(1-2n_{g})]^{2}}. \tag{8}\] \(G_{\mathrm{rI}}\) exhibits a peak scaling as \(4G_{0}/M^{2}\) with width \(\sim M\Gamma/E_{c}\). The data with the weakest coupling (\(\Gamma=0.02t_{0}\), blue circles) match well the corresponding resonant level approximation (8), as expected in a weak coupling regime. When we increase \(\Gamma\), our data display large discrepancies with Eq. (8), with a conductance rapidly approaching the TKE value of \(\frac{2}{3}G_{0}\) (horizontal dot-dashed line) for \(n_{g}\sim 0.5\). Indeed, in this regime, the applied voltage \(\mu_{1}=0.02t_{0}\) is one order of magnitude smaller than the estimate of the Kondo temperature, \(T_{K}\sim 0.1t_{0}\) in Fig. 3(c). Moreover, we see a substantial current flowing deep in the Coulomb valleys Figure 4: Average nonlocal current as a function of the voltage bias at \(n_{g}=0.5\), for \(M=3,\ 4\). The dashed line highlights the TKE prediction \(G=\frac{2}{M}G_{0}\). The data are obtained with \(\mathcal{L}=100\) and \(\xi=32\). (\(\Gamma=0.08t_{0},0.04t_{0}\)) with apparent plateaus that suggest a crossover to the TKE regime. This is further confirmed by the analysis of the data averaged over the decay length \(\xi\) for \(n_{g}=0.25\), \(\Gamma=0.08t_{0}\), and \(\mu_{1}=10^{-3}t_{0}\)[25]. 
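For reference, the noninteracting benchmark of Eq. (8) can be evaluated directly and compared with the universal topological Kondo value \(2e^{2}/(Mh)\). The short sketch below does this for a few values of \(n_{g}\) with illustrative parameters (in units of \(e^{2}/h\) and \(t_{0}\)); it is not intended to reproduce the numerical data of Fig. 5.

```python
# Sketch of the noninteracting resonant-level conductance of Eq. (8) compared
# with the topological-Kondo value 2 e^2 / (M h). Parameter values are
# illustrative choices, not the data of the paper.
import numpy as np

def g_resonant_level(n_g, mu=0.0, gamma=0.04, e_c=0.4, m_leads=3):
    """Eq. (8): G_rl in units of e^2/h, for induced charge n_g and probe energy mu."""
    detuning = mu - e_c * (1.0 - 2.0 * n_g)
    return 4.0 * gamma**2 / (m_leads**2 * gamma**2 + 4.0 * detuning**2)

m_leads = 3
g_tke = 2.0 / m_leads                       # universal TKE nonlocal conductance
for n_g in (0.25, 0.4, 0.5):
    g_rl = g_resonant_level(n_g, mu=0.02, gamma=0.08, e_c=0.4, m_leads=m_leads)
    print(f"n_g={n_g:4}:  G_rl = {g_rl:.3f} e^2/h   vs  G_TKE = {g_tke:.3f} e^2/h")
```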
_Conclusions.-_ We analyzed the out-of-equilibrium properties of a minimal model for the topological Kondo effect. We aimed at a microscopic description alternative to RG approaches and at a qualitative understanding of the transport signatures that may arise in double nanowire experiments. The data we collected present evidence of the onset of strongly-correlated states compatible with a crossover between a weak-coupling and a topological Kondo regime. First, the charge and the effective magnetization of the Majorana Cooper-pair box are characterized by different relaxation behaviors: the former only depends on the system-leads hybridization \(\Gamma\), whereas the latter presents two separate decay timescales. In analogy with the dynamical features of the Anderson impurity model, we used the longer timescale to estimate the Kondo temperature associated with the TKE, with results compatible with the RG predictions [4; 6]. Second, the nonlocal multiterminal conductance in the intermediate to strong coupling regimes matches the predicted value \(G_{\rm TKE}=2G_{0}/M\) at the charge degeneracy point, where \(T_{K}\) is the largest. In the Coulomb valleys, it presents large deviations from the noninteracting resonant level approximation that describes well the weak-coupling regime and two-terminal devices [16]. When the resonant level approximation fails, the conductance displays a plateau in the Coulomb valleys, hinting at a crossover to the topological Kondo regime. Our results are obtained through a matrix product state approach that allows for the study of topological Kondo models without resorting to perturbation theory in the Majorana-lead coupling and does not require any particular hierarchy of the involved energy scales. It is therefore well suited to understand the crossover between strong and weak coupling regimes, as well as the corrections to the RG predictions on the TKE when we probe the system at energy scales comparable with \(T_{K}\). Finally, our method can be extended to deviations from the minimal topological Kondo models, such as the coupling of Majorana modes [9] caused by crossed-Andreev and cotunneling processes, and adopted to predict the transport features in a wide variety of strongly interacting nanodevices based on systems with quantum dots coupled to superconducting islands [41; 42; 43; 44].

## Acknowledgements

M.W., L.M., and M.B. are supported by the Villum Foundation (Research Grant No. 25310). This project has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie Grant Agreement No. 847523 "INTERACTIONS." C.-M.C. acknowledges support from the Ministry of Science and Technology (MOST) under Grant No. 111-2112-M-110-006-MY3, and from the Yushan Young Scholar Program under the Ministry of Education (MOE) in Taiwan.

## Supplemental Material

### Minimal two-site Kitaev chain description

In this Appendix we review the minimal two-site Kitaev chain description we use for modelling each nanowire. It should be noted that this model can be extended to longer chains.
The Hamiltonian for the chain \(\sigma=\uparrow,\downarrow\) is given by: \[\widehat{H}_{\rm kit}^{(\sigma)}=\sum_{j=1}^{2}\left[-\mu_{\rm sys}\, \hat{d}_{j,\sigma}^{\dagger}\hat{d}_{j,\sigma}+\left(-t_{\rm sys}\hat{d}_{j+ 1,\sigma}^{\dagger}\hat{d}_{j,\sigma}+\Delta_{P}e^{i\Phi}\hat{d}_{j+1,\sigma }\hat{d}_{j,\sigma}+{\rm H.c.}\right)\right], \tag{9}\] where the index \(j\) labels the site and \(\Phi\) is the superconducting phase of the aluminum backbone. In our simulations, we set \(t_{\rm sys}=\Delta_{P}=0.5t_{0}\) and \(\mu_{\rm sys}=0.01t_{0}\) to avoid a perfect degeneracy of the states.

Figure 5: Nonlocal current as a function of \(n_{g}\), for \(E_{c}=0.4t_{0}\). The dashed lines correspond to the resonant level approximation, Eq. (8). The horizontal dot-dashed line is the TKE prediction \(\frac{2e^{2}}{hM}\) with \(M=3\).

Before constructing the MPS representation of the full system, we diagonalize the quadratic Hamiltonians in Equation (9) and define the quasiparticle excitations. For simplicity, in the following, we consider only the upper chain, as the two chains are indistinguishable at equilibrium. The physics of the MZMs is made manifest by expressing each fermionic operator in terms of two Majorana fermions. In the case of the upper wire, we define: \[\hat{d}_{j,\uparrow}=\frac{\mathrm{e}^{-i\Phi/2}}{2}\left(\hat{\gamma}_{j,B}-i \hat{\gamma}_{j,A}\right) \tag{10}\] as schematically represented in Fig. 6. We consider the limit with \(\mu_{\mathrm{sys}}=0\) and \(\Delta_{P}=t_{\mathrm{sys}}\), such that the Hamiltonian is given by \(\widehat{H}_{\mathrm{kit}}^{(\uparrow)}=-i\Delta_{P}\hat{\gamma}_{2,A}\hat{ \gamma}_{1,B}\) and couples Majorana fermions only at adjacent lattice sites (Fig. 6). The ends of the chain support the unpaired MZMs \(\hat{\gamma}_{1}=\hat{\gamma}_{1,A}\) and \(\hat{\gamma}_{2}=\hat{\gamma}_{2,B}\), which allow us to define the zero-energy quasiparticle operator \(\hat{f}_{0,\uparrow}\) introduced in the main text. In the quasiparticle basis, the first excited state has energy \(\epsilon_{1,\uparrow}=2\Delta_{P}\) and corresponds to the operator \(\hat{f}_{1,\uparrow}=(\hat{\gamma}_{2,A}-i\hat{\gamma}_{1,B})/2\), such that the Hamiltonian can be written as \[\widehat{H}_{\mathrm{Kit}}^{(\uparrow)}=2\Delta_{P}\left(\hat{f}_{1,\uparrow} ^{\dagger}\hat{f}_{1,\uparrow}-1/2\right). \tag{11}\] The same construction is valid for the lower chain.

### The matrix product state construction

#### Auxiliary charge site

In order to simulate the topological Kondo model, we need to account for both its superconducting pairing and the charging energy of the Cooper-pair box. Importantly, the mean field BCS description of the superconducting system does not preserve the total particle number, but only its parity. This means that we cannot deduce the total charge of the Cooper-pair box \(\hat{N}\) directly from the quasiparticle MPS construction. In order to overcome this problem we add an independent auxiliary charge site to the tensor network representation to keep track of the charge and its dynamics [16; 29]. First of all, one can promote the SC phase \(\mathrm{e}^{-i\Phi/2}\) in (10) to the operator \(\mathrm{e}^{-i\hat{\Phi}/2}\), which lowers the number of electrons on the box by one (due to the charge-phase relation \([\hat{N},\hat{\Phi}]=-2i\)). In this way, the decomposition in Eq. (10) makes it possible to separate this charge degree of freedom from the quasiparticle number.
We can therefore describe charge dynamics by adding to our MPS an auxiliary site whose local Hilbert space is spanned by the eigenstates \(\left|N\right>\) of the charge \(\hat{N}\)[16]. The tunneling Hamiltonian \(\widehat{H}_{\mathrm{t}}\) becomes the sum of three-site operators of the form: \[\widehat{H}_{\mathrm{t}}=-\sum_{\alpha=1}^{4}\sum_{\sigma,n}J_{\alpha} \mathrm{e}^{i\hat{\Phi}/2}\left[\left(u_{\alpha,\sigma,n}\hat{f}_{\sigma,n}^{ \dagger}+v_{\alpha,\sigma,n}\hat{f}_{\sigma,n}\right)\hat{c}_{\alpha,1} \right.\quad+\mathrm{H.c.}\right]. \tag{12}\] where the operator \(\mathrm{e}^{\pm i\hat{\Phi}/2}\) acts on the auxiliary site and raises/lowers the charge eigenvalue \(N\). Finally, charging energy costs are straightforwardly taken into account by considering the state of the auxiliary site, via \(\widehat{H}_{c}=E_{c}\left(\hat{N}-n_{g}\right)^{2}\). Figure 6: Each fermionic site of the Kitaev chain can be decomposed in two Majorana operators to make the MZMs physics more transparent. Quasiparticles states are represented schematically on the right. The thick blue link represents the interaction in Eq. (11). The auxiliary charge site construction is numerically implemented by restricting its local Hilbert, \(N\in\,[-N_{\text{max}},N_{\text{max}}]\), with \(N_{\text{max}}=5\) (such that \(\mathrm{e}^{\pm i\hat{\Phi}/2}\) are represented as \(11\times 11\) matrices). Moreover, to remove the redundancy introduced by the auxiliary site, we constrain the parity of \(\hat{N}\) to be the same as the parity of the total occupation of the quasiparticle states in the Majorana Cooper-pair box [16]. Namely, once defining the operator \[\widehat{P}=\left(-1\right)^{\hat{N}+\sum_{n,\sigma}f_{n,\sigma}^{\dagger}f_{n,\sigma}}, \tag{13}\] the following relation \[\widehat{P}|\psi_{\text{phys}}\rangle=|\psi_{\text{phys}}\rangle, \tag{14}\] has to be valid for any physical states \(|\psi_{\text{phys}}\rangle\). Our MPS and matrix product operator construction encodes such \(\mathbb{Z}_{2}\) constraint. ### TDVP Dynamics and transport quantities We simulate the dynamics of the system through the TDVP algorithm, which is not limited by the long-range Hamiltonian resulting from the energy basis choice for the MPS. The Hamiltonian \(\widehat{H}\) is represented as a matrix product operator of maximum bond dimension \(\chi=16\). Through a suitable choice of the system size and Wilson decay length \(\xi\), we observe the emergence of non-equilibrium quasi-steady states, that provide faithful descriptions of the physical behavior of the (infinite) system in its stationary state (see, for instance, Refs. [45; 46]). An example of the current after the quench is depicted in Fig. 7(a). To extract the values of the currents analyzed in the main text, we average the signal after it reaches the stationary value and estimate the errorbars through standard binning techniques. Thanks to the chosen basis, after the quantum quench, the entanglement entropy of the system increases logarithmically with time [16; 47] and it is mainly localized in an energy window proportional to the voltage bias. Outside that window, the entanglement entropy is mostly time-independent, as we show in Fig. 7(b) and (c). See Ref. [16] for more details about the MPS construction and dynamics. We finally observe that the Hamiltonian we adopt to describe the double-nanowire model displays an additional symmetry with respect to the most common topological Kondo models [4; 5; 6]. 
Indeed, the dynamics we analyze separately preserves the two fermionic parities: \[\hat{P}_{\uparrow} =(-1)^{\sum_{\alpha=1,2}\sum_{l=1}^{\xi}\hat{e}_{l,l}^{\dagger} \hat{e}_{\alpha,l}+\sum_{n=0,1}f_{n,\uparrow}^{\dagger}f_{n\,\uparrow}}\,, \tag{15}\] \[\hat{P}_{\downarrow} =(-1)^{\sum_{\alpha=3,4}\sum_{l=1}^{\xi}\hat{e}_{l,l}^{\dagger} \hat{e}_{\alpha,l}+\sum_{n=0,1}f_{n,\downarrow}^{\dagger}f_{n\,\downarrow}}\,. \tag{16}\] Figure 7: Typical dynamics of the current and entanglement entropy after a quantum quench for a three-terminal device. (a) Time dependence of the current on each lead. \(I_{1}\) has a negative sign because it is the only in-going current. \(I_{2}=0\) because the corresponding lead is decoupled from the device. (b) Entanglement entropy at each bond of the MPS as a function of energy and time. The three arrows mark the horizontal line cuts corresponding to the curves shown in panel (c). Simulation parameters: \(\mathcal{L}=100\), \(\xi=32\). These symmetries reflect the fact that we are neglecting crossed-Andreev and direct cotunneling processes mediated by the superconducting island between the two nanowires. These conservations have the important effect of breaking the particle-hole-like symmetry of the dynamics between systems characterized by \(n_{g}\) and \(1-n_{g}\), as can be seen from Fig.[5] of the main text, at large coupling. The initial ground states \(|00\rangle\) (\(N=0\)) for \(n_{g}<0.5\) and \(|10\rangle\) (\(N=1\)) for \(n_{g}>0.5\) correspond to different sectors of \(\hat{P}_{\downarrow}\) and are not mapped one into the other by the symmetry. ### Further transport results #### Asymmetric couplings Here we investigate the effect of introducing an asymmetry in the couplings \(\Gamma_{\alpha}\) on the transport properties. We first analyze a device with three leads (\(\Gamma_{2}=0\)), where two of them have the same coupling strength \(\Gamma_{1}=\Gamma_{4}=0.08t_{0}\), which corresponds to the strong coupling regime explored in the main text, while the third is varied. In Fig. 8 (right panel) we plot the nonlocal currents \(I_{3}\) and \(I_{4}\) divided by the bias on lead 1 as we vary \(\Gamma_{3}\in[0,1.6\Gamma_{1}]\). At the charge degeneracy point (orange symbols), the data suggest that the current is approximately stable for a broad range of couplings \(\Gamma_{3}\gtrsim\Gamma_{1}\), and, within the error bars, is compatible with the linear conductance associated to the TKE (horizontal dashed line). As \(\Gamma_{3}\) decreases, \(I_{3}\) also decreases and vanishes when the lead is finally decoupled from the system. At the same time, \(I_{4}\) increases and approaches the quantized value \(I_{4}=G_{0}V_{b}\) when the device has only two terminals, as we expect from the resonant tunneling mediated by MZMs with symmetric couplings [16; 48]. As we move deeper in the Coulomb valley (\(n_{g}=0.25\), blue symbols), the system appears to be further away from the TKE regime and the current shows a roughly linear dependence on \(\Gamma_{3}\). Interestingly, however, the current \(I_{4}\) decreases upon switching off \(\Gamma_{3}\), despite keeping \(\Gamma_{4}\) constant. 
This is in contrast with the single resonant level prediction \[G_{\rm rl}(n_{g},\mu)=\frac{e^{2}}{h}\frac{4\Gamma^{2}}{M^{2}\Gamma^{2}+4[\mu- E_{c}(1-2n_{g})]^{2}}\, \tag{17}\] confirming that a contribution to the current originating from a strongly coupled state is present also in the Coulomb valleys, even though the TKE quantization of the conductance is not recovered for the chosen parameter ranges. Let us now focus on the crossover between \(M=4\) and \(M=3\): we consider a four-terminal device where we tune the coupling \(\Gamma_{2}\) from the symmetric configuration, \(\Gamma_{\alpha}=0.08t_{0}\) on any lead, to \(\Gamma_{2}=0\), while keeping a small voltage bias \(eV_{b}=0.01t_{0}\) on lead 1. In Fig. 8 (right panel) we show the nonlocal currents \(I_{2}\) and \(I_{4}\) for \(n_{g}=0.5\), where the Kondo temperature is maximal, as we switch off \(\Gamma_{2}\). When \(\Gamma_{2}=\Gamma_{1}\), the current on both leads is again compatible with the TKE prediction with \(M=4\) (lower horizontal dashed line). As \(\Gamma_{2}\) decreases, \(I_{2}\) and \(I_{4}\) display opposite behaviors; Figure 8: Left: Nonlocal current as a function of the varying coupling strength \(\Gamma_{3}\), at the charge degeneracy point \(n_{g}=0.5\) and in the even valley \(n_{g}=0.25\). Right: Nonlocal currents as a function of the varying coupling strength \(\Gamma_{2}\), at the charge degeneracy point \(n_{g}=0.5\). The two horizontal dashed lines mark \(G=\frac{2}{M}G_{0}\) for \(M=3,4\). In both panels \(E_{c}=0.4\), \(eV_{b}=0.01t_{0}\), \(L=100\), and \(\xi=32\). the former decreases and vanishes following \(\Gamma_{2}\) while the latter displays first a rather flat plateau followed by a rapid increase to match the TKE prediction for \(M=3\) when \(\Gamma_{2}\to 0\) (higher horizontal dashed line). The current on lead 3 (data not shown) follows closely the signal on \(I_{4}\). ### Low bias transport Transport simulations at very low bias are hampered by a low signal-to-noise ratio that prevents from an accurate estimate of the average current in the nonequilibrium quasi-steady state. This limitation is relevant for low Kondo temperatures as, for instance, in the Coulomb valleys. To partially circumvent this issue, inspired by the so-called z-trick [49] commonly used in NRG methods, here we consider data obtained by averaging over different logarithmic discretizations of the energy levels of the leads. In particular, we average the currents over different decay lengths of the hopping amplitude in the leads. In Fig. 9 we plot an example of this procedure: we consider \(n_{g}=0.25\) (even-parity Coulomb valley), \(E_{c}=0.4t_{0}\), \(M=3\), and \(\Gamma=0.08t_{0}\). The corresponding Kondo temperature extracted from the magnetization dynamics is \(T_{K}\sim 0.01t_{0}\), see Fig.[3](c) of the main text. To capture the transport signature of the TKE, we perform different simulations with a small bias \(eV_{b}=10^{-3}t_{0}\) on lead 1 and \(\xi=\{2,4,8,16\}\). We then average the outgoing current over the different values of \(\xi\). This reduces the amplitude of the current oscillations, and leads to a good match with the TKE prediction \(G_{\rm TKE}=\frac{2}{M}G_{0}\) also in the Coulomb valleys. ### Finite bias corrections Finally, we discuss the finite bias corrections to the currents close to TKE linear response behaviour, \(I=\frac{2}{M}G_{0}V_{b}\). 
In a renormalization group sense, a power law correction \(G=2G_{0}/M-AV_{b}^{\alpha}\) is related to the scaling dimension of the most relevant operator which arises at the TKE fixed point. In particular, a fixed point described by Fermi liquid (FL) theory displays a quadratic correction to the conductance (\(\alpha=2\)). The topological Kondo effect, instead, is predicted to display non-Fermi liquid corrections defined by the universal fractional exponent \(\alpha=2(1-2/M)\) [40; 11]. In Fig. 10, we show the bias dependence of the current deviation from the TKE regime, \(I-\frac{2}{M}G_{0}V_{b}\), for \(M=3,4\). The data show a clear power-law behavior, particularly for bias values \(V_{b}\) that are not excessively small (such that the signal-to-noise ratio is reliable). In the displayed cases, the deviation of the current from the power-law fits is below the numerical precision \(\sim 10^{-3}\). In all cases, we observe a non-Fermi liquid scaling which significantly deviates from the cubic FL behaviour of the current \(V_{b}^{3}\). However, the fitted exponents do not match the RG predicted values \(\alpha+1=3-4/M\). The absence of a clear separation of energy scales in the problem might be a source of deviation from the perturbative RG analysis. Moreover, close to the charge degeneracy point, intermediate fixed points are believed to emerge [11].

Figure 9: Time dependence of the nonlocal current for \(M=3\) and \(n_{g}=0.25\), averaged over 4 values of \(\xi=2,4,8,16\). The horizontal dashed line marks the TKE prediction.

Finally, the fitted exponents seem to depend continuously on the coupling strength \(\Gamma\) and, while this analysis has shed light on a non-Fermi liquid behaviour, further analysis is needed to understand these power-law corrections.
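To make the fitting procedure behind the quoted exponents concrete, the sketch below shows how a power-law exponent can be extracted from current-bias data by a linear fit in log-log scale. The data here are synthetic (generated with the RG exponent purely for illustration); the variable names, the noise level, and the prefactor are ours and are not taken from the simulations.

```python
import numpy as np

def fit_powerlaw_exponent(v_b, current, m_leads):
    """Fit |I - (2/M) V_b| ~ A * V_b^(alpha+1) in log-log scale; returns alpha + 1 (units with G0 = 1)."""
    deviation = np.abs(current - (2.0 / m_leads) * v_b)
    slope, _ = np.polyfit(np.log(v_b), np.log(deviation), deg=1)
    return slope

# Synthetic check: build data with a known exponent and recover it from the fit.
rng = np.random.default_rng(0)
m_leads = 3
alpha = 2.0 * (1.0 - 2.0 / m_leads)           # RG prediction alpha = 2(1 - 2/M) = 2/3 for M = 3
v_b = np.logspace(-3, -1, 20)
current = (2.0 / m_leads) * v_b - 0.5 * v_b ** (alpha + 1.0)
current *= 1.0 + 1e-4 * rng.standard_normal(v_b.size)    # mimic residual quasi-steady-state noise
print(f"fitted alpha + 1 = {fit_powerlaw_exponent(v_b, current, m_leads):.3f}"
      f"  (RG value {alpha + 1.0:.3f})")
```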
2308.04699
GIFD: A Generative Gradient Inversion Method with Feature Domain Optimization
Federated Learning (FL) has recently emerged as a promising distributed machine learning framework to preserve clients' privacy, by allowing multiple clients to upload the gradients calculated from their local data to a central server. Recent studies find that the exchanged gradients also take the risk of privacy leakage, e.g., an attacker can invert the shared gradients and recover sensitive data against an FL system by leveraging pre-trained generative adversarial networks (GAN) as prior knowledge. However, performing gradient inversion attacks in the latent space of the GAN model limits their expression ability and generalizability. To tackle these challenges, we propose \textbf{G}radient \textbf{I}nversion over \textbf{F}eature \textbf{D}omains (GIFD), which disassembles the GAN model and searches the feature domains of the intermediate layers. Instead of optimizing only over the initial latent code, we progressively change the optimized layer, from the initial latent space to intermediate layers closer to the output images. In addition, we design a regularizer to avoid unreal image generation by adding a small ${l_1}$ ball constraint to the searching range. We also extend GIFD to the out-of-distribution (OOD) setting, which weakens the assumption that the training sets of GANs and FL tasks obey the same data distribution. Extensive experiments demonstrate that our method can achieve pixel-level reconstruction and is superior to the existing methods. Notably, GIFD also shows great generalizability under different defense strategy settings and batch sizes.
Hao Fang, Bin Chen, Xuan Wang, Zhi Wang, Shu-Tao Xia
2023-08-09T04:34:21Z
http://arxiv.org/abs/2308.04699v2
# GIFD: A Generative Gradient Inversion Method with Feature Domain Optimization ###### Abstract Federated Learning (FL) has recently emerged as a promising distributed machine learning framework to preserve clients' privacy, by allowing multiple clients to upload the gradients calculated from their local data to a central server. Recent studies find that the exchanged gradients also take the risk of privacy leakage, e.g., an attacker can invert the shared gradients and recover sensitive data against an FL system by leveraging pre-trained generative adversarial networks (GAN) as prior knowledge. However, performing gradient inversion attacks in the latent space of the GAN model limits their expression ability and generalizability. To tackle these challenges, we propose **G**radient Inversion over **F**eature **D**omains (GIFD), which disassembles the GAN model and searches the feature domains of the intermediate layers. Instead of optimizing only over the initial latent code, we progressively change the optimized layer, from the initial latent space to intermediate layers closer to the output images. In addition, we design a regularizer to avoid unreal image generation by adding a small \(l_{1}\) ball constraint to the searching range. We also extend GIFD to the out-of-distribution (OOD) setting, which weakens the assumption that the training sets of GANs and FL tasks obey the same data distribution. Extensive experiments demonstrate that our method can achieve pixel-level reconstruction and is superior to the existing methods. Notably, GIFD also shows great generalizability under different defense strategy settings and batch sizes. ## 1 Introduction Federated learning [23, 38] is an increasingly popular distributed machine learning framework, which has been applied in many privacy-sensitive scenarios [20, 36], such as financial services, medical analysis, and recommendation systems. It allows multiple clients to participate in collaborative learning under the coordination of the central server. The central server aggregates the uploaded gradients calculated from the local data by the end users, rather than the private data. This mechanism resolves the data silos problem and brings privacy benefits to distributed learning. However, a series of recent studies have shown that even the gradients uploaded in FL take the risk of privacy leakage. Zhu _et al_. [43] first formulate it as an optimization problem and design an optimization-based algorithm that reconstructs private data by best matching the dummy gradients with the real gradients. Zhao _et al_. [41] further improve the attack with an extra label restoration step. Geiping _et al_. [9] first achieve ImageNet-level recovery through a well-designed loss function that adds a new regularization and Figure 1: The reconstructed results of our proposed GIFD on ImageNet[6] and FFHQ[18]. The first column contains the randomly initialized images generated by generators. The next two columns show the reconstruction samples of the latent space search and our proposed GIFD, uses a different distance metric. In order to improve the performance on larger batch sizes, Yin _et al_. [37] propose a batch-level label extraction method and assume that certain side-information is available to regularize feature distributions through batch normalization (BN) prior. It is widely investigated and acknowledged that a pre-trained GAN learned from a public dataset generally captures a wealth of prior knowledge. 
Recent studies [37, 17, 21] propose to leverage the manifold of GAN as prior information, which provides a good approximation of the natural image space and enhances the attacks significantly. The aforementioned works achieve impressive results in their own scenarios, but most of them rely on strong assumptions, e.g., known labels, BN statistics, and private data distribution, which are actually impractical in the real FL scenario. Therefore, it is hard for most existing methods to recover high-quality private data in a more realistic setting. In this paper, we advocate a simple and effective solution, Gradient Inversion over Feature Domain (GIFD), to address the challenges of expression ability and generalizability of pre-trained GANs. Recently, it has been shown that rich semantic information is encoded in the intermediate features and the latent space of GANs [2, 33, 29, 5]. Among them, the GAN-based intermediate layer optimization in solving compressed sensing problems achieves great performance [5]. Inspired by these works, We reformulate the GAN inversion as a novel intermediate layer optimization problem by minimizing the gradient matching loss by searching the intermediate features of the generative model. Specifically, our first step is to optimize the latent space and then we optimize the intermediate layers of the generative model successively. During the feature domain optimization stage, we only use part of the generator and the solution space becomes larger, which can easily lead to unreal image generation. To solve this problem, we iteratively project the optimizing features to a small \(l_{1}\) ball centered at the initial vector induced by the previous layer. Finally, we select output images from the layer with the corresponding least gradient matching loss as the final results. The visual comparison in Figure 1 clearly demonstrates the necessity of optimizing the intermediate feature domains. Another issue unsolved in GAN-based gradient attacks is the flexibility of private data generation under more rigorous and realistic settings. To relax these assumptions, we first investigate an out-of-distribution (OOD) gradient attack scenario, where the private data distribution is significantly different from that of the GAN's training set. The significant result improvement demonstrates the proposed method has excellent generalizability and achieves great performance on OOD datasets. Furthermore, we discuss several common defense strategies in _protection form gradient sharing_[39], including gradient sparsification [31, 1], gradient clipping [10], differential privacy [10], and Sote-ria (_i.e_., perturbing the data representations) [32]. These frequently used privacy defense approaches have been confirmed to achieve high resilience against existing attacks by degrading the privacy information carried by the share gradients. Extensive experiments and ablation studies have demonstrated the effectiveness of the GIFD attack. Our main contributions are summarized as follows: * We propose GIFD for exploiting pre-trained generative models as data prior to invert gradients by searching the latent space and the intermediate features of the generator successively with \(l_{1}\) ball constraint. * We show that this optimization method can be used to generate private OOD data with different styles, demonstrating the impressive generalization ability of the proposed GIFD under a more practical situation. 
* We systematically evaluate our proposed method compared with the state-of-the-art baselines with the gradient transformation technique under four considered defense strategies. ## 2 Related Work ### Gradient-based Attack in FL In federated learning, the early studies investigate _member inference_[30, 24], where a malicious attacker can determine whether a certain data sample has participated in model training. A similar attack, called _property inference_[8], can reveal the attributes of the samples in the training set. Another powerful attack is _model inversion_[14], which works by training a GAN from local images and the shared gradients to generate samples with the same distribution as the private data. Wang _et al_. [34] then improve the model attack and reconstruct client-level data representatives. **Gradient Inversion Attacks.** This is a more threatening type of attack where an adversary can fully reconstruct the client's private data samples. The existing attack methods can be characterized into two paradigms [39]: _recursion_ and _iteration_-based methods. Recursion-based attacks. Phong _et al_. [28] first utilized gradients to successfully recover the input data from a shallow perceptron. Fan _et al_. [7] considered networks with convolution layers and solved the problem by converting the convolution layer into a full connection layer. Zhu _et al_. [42] combined forward and backward propagation to transform the problem into solving a system of linear equations. Chen _et al_. [4] then combined optimization problems under different situations and proposed a systematic framework. The recursion-based methods have the following limitations: (1) low-resolution images only; (2) the global model in FL cannot contain pooling layers or shortcut connections; (3) these methods cannot handle mini-batch train ing; and (4) they heavily depend on gradients, _i.e_., if gradients are perturbed, most of these methods barely work. Iteration-based attacks. Zhu _et al_. [43] first formulated the attack as an iterative optimization problem. Attackers restore data samples by minimizing the distance between the shared gradients and the dummy gradients generated by a pair of dummy samples. Zhao _et al_. [41] proposed to extract the label of a single sample from the gradients and further improved the attack. Geiping _et al_. [9] reconstructed higher resolution images from ResNet [13] by changing the distance metric and adding a regularization term. Yin _et al_. [37] primarily focused on larger batch sizes recovery. With strong BN statistics and deep pre-trained ResNet-50 as the global model (larger model generates more gradient information), they successfully revealed some information from partial images at larger batch sizes. Jeon _et al_. [17] fine-tuned the GAN parameter to better utilize image prior and improved the quality of restored images. Hatamizadeh _et al_. [12] extended attacks on Vision Transformers. Considering defense strategies in FL, Li _et al_. [21] proposed a new technique called gradient transformation to deal with the degraded gradients and still revealed private information. Currently, several strong assumptions are made to help better reconstruct, which are not identical to the realistic FL setting. By nullifying some of these assumptions [16], the reconstruction performance drops significantly. ### GAN as prior knowledge GAN [11] is a deep generative model, which can learn the probability distribution of the images in the training set through adversarial training. 
A well-trained GAN can generate realistic and high-diversity images. Recent studies show that GAN can be leveraged to solve inverse problems [35], _e.g_. compressed sensing. Yin _et al_. [37] introduced a method that utilizes a pre-trained generative model as an image prior. Jeon _et al_. [17] proposed to search the latent space and parameter space of the generative model in turn, which fully exploits GAN's generation ability to reconstruct images of outstanding quality. A weakness is that it requires a specific generator to be trained for each reconstructed image, which consumes large amounts of GPU memory and inference time. Li _et al_. [21] also adopted the generative model, but only optimized the latent code, which achieves semantic-level reconstruction. Among the GAN-based methods, only Jeon _et al_. [17] really considered the situation when the training data of the generative model and the global model obey different probability distributions. Inspired by the successful application of Intermediate Layer Optimization (ILO) [5] in compressed sensing, we decide to search the latent space and feature domains of the generative model to achieve pixel-level reconstruction. Meanwhile, we find that our method is superior to the previous methods for OOD data. ## 3 Method In this section, we first introduce the basic paradigm of gradient inversion attacks. Then, we explain how former methods leverage GAN to achieve better results. Finally, we elaborate on our proposed GIFD, which successively searches the latent space and intermediate feature spaces of the generative model. ### Problem Formulation Given a neural network \(f_{\theta}\) with weights \(\theta\) for image classification tasks, and batch-averaged gradients \(g\) calculated from a private batch with images \(\mathbf{x}^{*}\) and labels \(\mathbf{y}^{*}\), the attacker attempts to invert the gradients to private data with randomly initialized input tensor \(\mathbf{\hat{x}}\in\mathbb{R}^{B\times H\times W\times C}\) and labels \(\mathbf{\hat{y}}\in\{0,1\}^{B\times L}\) (\(B,H,W,C,L\) being batch size, height, width, number of channels and class number): \[\mathbf{\hat{x}}^{*},\mathbf{\hat{y}}^{*}=\operatorname*{arg\,min}_{\mathbf{ \hat{x}},\mathbf{\hat{y}}}\mathcal{D}\left(\frac{1}{B}\sum_{i=1}^{B}\nabla \ell(f_{\theta}(x_{i}),y_{i}),g\right), \tag{1}\] where \(\mathbf{\hat{x}}=(x_{1},\ldots,x_{B})\), \(\mathbf{\hat{y}}=(y_{1},\ldots,y_{B})\). \(\mathcal{D}(\cdot,\cdot)\) is the measurement of distance, _e.g_., \(l_{2}\)-distance [37, 21], negative cosine similarity [9, 17], and \(\ell(\cdot,\cdot)\) is the loss function for classification. In the workflow of the algorithm, the attacker generates a pair of random noise \(\mathbf{\hat{x}}\) and labels \(\mathbf{\hat{y}}\) as parameters, optimized towards the ground truth \(\mathbf{x}^{*}\) and \(\mathbf{y}^{*}\) through minimizing the matching loss between dummy gradients and transmitted gradients. Since private labels can be inferred directly from the gradients [41, 37], the objective function with regularization term can be simplified to the following form: \[\mathbf{\hat{x}}^{*}=\operatorname*{arg\,min}_{\mathbf{\hat{x}}}\mathcal{D} \left(F(\mathbf{\hat{x}}),g\right)+R_{prior}(\mathbf{\hat{x}}), \tag{2}\] where \(F(\mathbf{\hat{x}})=\frac{1}{B}\sum_{i=1}^{B}\nabla\ell(f_{\theta}(x_{i}),y_{ i})\), \(R_{prior}(\mathbf{\hat{x}})\) is prior knowledge regularization (_e.g_., BN statistics [37]). 
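To make the optimization in Eqs. (1)-(2) concrete, below is a minimal PyTorch-style sketch of the basic (GAN-free) gradient-matching attack. The cosine-based distance is one of the choices of \(\mathcal{D}(\cdot,\cdot)\) listed above; all function and variable names are illustrative assumptions and are not taken from any released implementation.

```python
import torch
import torch.nn.functional as F

def dummy_gradients(model, x_hat, y_hat):
    """Batch-averaged gradients of the classification loss w.r.t. the model weights."""
    loss = F.cross_entropy(model(x_hat), y_hat)
    # create_graph=True keeps the graph so the matching loss can be backpropagated to x_hat.
    return torch.autograd.grad(loss, list(model.parameters()), create_graph=True)

def gradient_matching_loss(dummy_grads, shared_grads):
    """One minus the cosine similarity between the flattened dummy and shared gradients."""
    dot, d_norm, s_norm = 0.0, 0.0, 0.0
    for dg, sg in zip(dummy_grads, shared_grads):
        dot += (dg * sg).sum()
        d_norm += dg.pow(2).sum()
        s_norm += sg.pow(2).sum()
    return 1.0 - dot / (d_norm.sqrt() * s_norm.sqrt() + 1e-12)

# Plain attack of Eq. (2): optimize the dummy images x_hat directly.
# x_hat = torch.randn(B, 3, H, W, requires_grad=True)
# opt = torch.optim.Adam([x_hat], lr=0.1)
# loss = gradient_matching_loss(dummy_gradients(f_theta, x_hat, y_inferred), g_shared)
# loss.backward(); opt.step()
```

GAN-based methods reuse exactly this matching loss, but produce \(\mathbf{\hat{x}}\) as the output of a generator, as described next.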
Given a pre-trained generative model \(G_{w}(\cdot)\) learned from a public dataset, an intuitive method is to transform the problem into the following form: \[\mathbf{z}^{*}=\operatorname*{arg\,min}_{\mathbf{z}}\mathcal{D}\left(F(G_{w}( \mathbf{z})),g\right)+R_{prior}(\mathbf{z};G_{w}), \tag{3}\] where \(\mathbf{z}\in\mathbb{R}^{B\times k}\) is the latent code of the generative model. By narrowing the search range from \(\mathbb{R}^{B\times m}\) (\(m=H\times W\times C\)) to \(\mathbb{R}^{B\times k}\) (\(k\ll m\)), one can reduce the uncertainty in the optimization process. Based on this, various GAN-based gradient inversion methods [21, 17] have been proposed to ensure the quality and fidelity of the generated images.

### Gradient Inversion over Feature Domains

First, we formally formulate our optimization objective: \[\mathbf{\hat{x}}^{*}=\operatorname*{arg\,min}_{\mathbf{\hat{x}}}\mathcal{D} \left(\mathcal{T}(F(\mathbf{\hat{x}})),g\right)+\mathcal{R}_{fidty}(\mathbf{ \hat{x}}), \tag{4}\] where \(\mathbf{\hat{x}}\) is generated by \(G_{w}\) or part of \(G_{w}\), \(F(\cdot)\) is the batch-averaged gradient operator, and \(\mathcal{T}(\cdot)\) is the gradient transformation technique we will discuss later. The first term \(\mathcal{D}\left(\mathcal{T}(F(\mathbf{\hat{x}})),g\right)\) denotes the gradient matching loss, and the second term \(\mathcal{R}_{fidty}(\mathbf{\hat{x}})\) is the image fidelity regularization. To simplify the expression, we solve the objective function in the following form: \[\mathbf{\hat{x}}^{*}=\operatorname*{arg\,min}_{\mathbf{\hat{x}}}\mathcal{L}_{ grad}(\mathbf{\hat{x}}), \tag{5}\] where we denote the loss function in (4) by \(\mathcal{L}_{grad}(\mathbf{\hat{x}})\). An overview of our method is shown in Figure 2; we next introduce each component in detail.

**Intermediate Layer Optimizer.** This is the core of our algorithm. As described in the pseudocode of Algorithm 1, instead of directly optimizing over \(\mathbf{\hat{x}}\), we focus on searching the latent space and the intermediate space of the generator in turn, to make the most of the GAN prior. The first step is to optimize over the randomly initialized latent vector \(\mathbf{z}\) using gradient descent with an effective Spherical Optimizer [25]. Once we obtain the optimal \(\mathbf{z}^{*}\), we decompose the generator \(G_{w}\) into \(G_{0}\circ G_{1}\circ\cdots\circ G_{N-1}\circ G_{N}\) for intermediate feature optimization. Then, we map the optimal latent vector \(\mathbf{z}^{*}\) into the intermediate latent representation \(\mathbf{h_{1}^{0}}\) using \(G_{0}\), _i.e._, \(\mathbf{h_{1}^{0}}:=G_{0}(\mathbf{z}^{*})\). Next, our algorithm enters the for loop in line 7 of Algorithm 1 and starts to search the intermediate features. At pass \(i\) of the loop, we perform the following operations. First, we generate images from the intermediate feature \(\mathbf{h_{i}}\) using only the remaining part of \(G_{w}\) (_i.e._, \(G_{i}\circ\cdots\circ G_{N}\)). Then, we use the generated images to compute dummy gradients and optimize over \(\mathbf{h_{i}}\) by minimizing the cost function in (4). Since searching the intermediate features might lead to unrealistic image generation, we constrain the search range to lie within an \(l_{1}\) ball of radius \(r[i]\) centered at \(\mathbf{h_{i}^{0}}\), _i.e._, the term \(ball_{\mathbf{h_{i}^{0}}}^{r[i]}\) in line 9 of Algorithm 1.
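Before continuing with how the loop advances to the next layer, we note that such a ball constraint is commonly enforced by projecting the feature back onto the ball after every update step. A sorting-based projection in the spirit of Duchi et al.'s \(l_{1}\)-ball projection is sketched below as one possible realization; this is our illustrative sketch, and the paper's actual projection routine may differ in detail.

```python
import torch

def project_to_l1_ball(h, center, radius):
    """Project h onto {x : ||x - center||_1 <= radius} via a sorting-based simplex projection."""
    w = (h - center).flatten()
    if w.abs().sum() <= radius:
        return h                       # already inside the ball
    u, _ = torch.sort(w.abs(), descending=True)
    cssv = torch.cumsum(u, dim=0) - radius
    idx = torch.arange(1, u.numel() + 1, device=h.device, dtype=h.dtype)
    mask = u - cssv / idx > 0
    theta = cssv[mask][-1] / idx[mask][-1]      # soft-threshold level
    w_proj = torch.sign(w) * torch.clamp(w.abs() - theta, min=0.0)
    return center + w_proj.view_as(h)

# After every gradient step on the intermediate feature h_i (line 9 of Algorithm 1):
# h_i.data = project_to_l1_ball(h_i.data, h_i_init, r[i])
```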
After obtaining the optimal result \(\mathbf{h_{i}^{*}}\) for the present layer, we generate the initial intermediate representation for the next layer with \(G_{i}\), _i.e._, \(\mathbf{h_{i+1}^{0}}:=G_{i}(\mathbf{h_{i}^{*}})\). As shown in lines 4, 11, 12, 13, and 18 of Algorithm 1, we use the gradient matching loss to guide the selection of the output images. More specifically, we choose the output images from the layer with the least gradient matching loss among all the searched intermediate layers as the final output. Although a lower loss does not always mean better image quality, our strategy still outperforms specifying a fixed layer's output. With all the efforts above, we encourage the optimizer to explore the intermediate space with rich information, to generate more diverse and high-fidelity images, while limiting the solution space within an \(l_{1}\) ball around the manifold induced by the previous layer in order to avoid overfitting and guarantee the realism of the generated images. Furthermore, our approach is easy to implement as it is not tied to any specific GAN architecture and only requires a pre-trained generative model.

Figure 2: Overview of our proposed GIFD attack. The intermediate layer optimizer minimizes the loss computed from the dummy gradients and the shared gradients from the victim under the image fidelity regularization, to update the latent vector and the intermediate features successively. Finally, the generative model outputs reconstruction data from the layer with the corresponding least gradient matching loss.

**Label Extraction.** Consider a network parameterized by \(\mathrm{W}\) for a classification task over \(n\) classes using the cross-entropy loss function. When the training data is a single image, the ground-truth label \(c\) can be accurately inferred [41] through: \[c=i,\ \ \ \mathrm{s.t.}\ \nabla\mathrm{W_{FC}^{i}}^{\top}\cdot\nabla\mathrm{W_{ FC}^{j}}\leq 0,\ \forall\ j\neq i, \tag{6}\] where \(\nabla\mathrm{W_{\mathbf{FC}}^{i}}\) denotes the gradient vector w.r.t. the weights \(\mathrm{W_{\mathbf{FC}}^{i}}\) connected to the \(i_{th}\) logit in the classification layer (_i.e_., the output layer). Hence, we can identify the ground-truth label via the index of the negative gradients. Ref. [37] further extends this to batch-level label extraction with high accuracy, while assuming non-repeating labels in the batch. The inferred labels are used to compute dummy gradients and as the class conditions for conditional GANs, which greatly enhances our attack.

**Image Fidelity Regularization.** Intuitively, it is challenging to restore data only from the shared gradients, as gradients are only a non-linear mapping of the original data. It is therefore worth using some strong priors as an approximation of natural images: \[\mathcal{R}_{fidty}(\mathbf{\hat{x}})=\alpha_{\ell_{2}}\mathcal{R}_{\ell_{2}}( \mathbf{\hat{x}})+\alpha_{TV}\mathcal{R}_{TV}(\mathbf{\hat{x}}), \tag{7}\] where the first term is the \(l_{2}\) norm of the images [37] with scaling factor \(\alpha_{\ell_{2}}\), which encourages the algorithm to favor solutions with a small norm. Since neighboring pixels of natural images are likely to have close values, we add the second term [9], \(\mathcal{R}_{TV}(\mathbf{\hat{x}})\), to penalize the total variation of \(\mathbf{\hat{x}}\) with scaling factor \(\alpha_{TV}\).
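As a small illustration of the fidelity term in Eq. (7), the sketch below implements an anisotropic total-variation penalty plus a squared \(l_{2}\) term. The specific TV variant and the weighting coefficients are placeholder assumptions, not the values used in the paper's experiments.

```python
import torch

def total_variation(x):
    """Anisotropic total variation of a batch of images x of shape (B, C, H, W)."""
    dh = (x[:, :, 1:, :] - x[:, :, :-1, :]).abs().mean()
    dw = (x[:, :, :, 1:] - x[:, :, :, :-1]).abs().mean()
    return dh + dw

def fidelity_regularizer(x, alpha_l2=1e-4, alpha_tv=1e-2):
    """R_fidty(x): squared l2 penalty plus TV penalty; coefficients here are illustrative."""
    return alpha_l2 * x.pow(2).mean() + alpha_tv * total_variation(x)

# Added to the gradient-matching term when forming the full objective of Eq. (4):
# loss = gradient_matching_loss(dummy_gradients(f_theta, x_hat, y_inferred), g_shared) \
#        + fidelity_regularizer(x_hat)
```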
**Gradient Transformation.** In order to mitigate the effects of defense strategies, we adopt the adaptive attack [21], which estimates the transformation from the received gradients and incorporates it into the optimization process, _i.e_., \(\mathcal{T}(\cdot)\) in (4). Specifically, the transformation can be inferred for three defense strategies: (1) _Gradient clipping_; (2) _Gradient sparsification_; and (3) _Soteria_.

## 4 Experiments

To validate the effectiveness of GIFD in improving attack performance, we conduct experiments on two widely used GANs in a range of scenarios. We evaluate our method for the classification task on the validation set of the ImageNet ILSVRC 2012 dataset [6] and 10-class (using age as label) FFHQ [18] at \(64\times 64\) pixels. For the generative model, we use a pre-trained BigGAN [3] for ImageNet and a pre-trained StyleGAN2 [18] for FFHQ. We use a randomly initialized ResNet-18 as the FL model, and choose negative cosine similarity as the distance metric \(\mathcal{D}(\cdot)\). We use the default \(B=1\) at one local step. Then we conduct experiments with larger \(B\) and compare the performance of different methods. Our code is available at [https://github.com/ffhibnese/GIFD](https://github.com/ffhibnese/GIFD).

**Implementation details**. According to its specific structure, we split BigGAN into \(G_{0}\) to \(G_{12}\) with 12 intermediate feature domains, and StyleGAN2 into \(G_{0}\) to \(G_{7}\) with 7 intermediate feature domains. We ensure that the intermediate features lie in the \(l_{1}\) ball through Projected Gradient Descent (PGD) [26]. Motivated by the fact that a stepwise optimization over the noises in StyleGAN2 yields better reconstructions [5] for compressed sensing, we gradually allow more of the noise inputs to be optimized as we move to deeper intermediate layers, and make them lie inside the \(l_{1}\) ball as well. For more details about experiments, please refer to the Appendix.

**Evaluation Metrics**. We compute the following quantitative metrics to measure the discrepancy between reconstructed images and ground truth: (1) PSNR (Peak Signal-to-Noise Ratio), (2) LPIPS [40] (Learned Perceptual Image Patch Similarity), (3) SSIM (Structural Similarity Index Measure), and (4) MSE (Mean Square Error) between reconstruction and private images.

Figure 3: Comparison of PSNR mean on BigGAN and StyleGAN2 under different values of hyper-parameter \(K\) (_i.e_., the last intermediate layer to optimize). Notably, the figures exclude the results where the corresponding values are below the starting point of the y-axis.

### Decide Which Layer to End

In order to further improve the quality of the output images, we need to carefully handle the parameter \(K\) in Algorithm 1. In practice, we find a trade-off between under-fitting and over-fitting in the choice of \(K\). When \(K\) is small, we only search the first few intermediate features of the generative model and do not fully utilize the rich information encoded in the intermediate space. As a result, the quality of the generated images does not meet our expectations. On the contrary, when \(K\) is large, we excessively search the deeper layers and generate images that have a lower loss, but a larger discrepancy with the original images. Therefore, we randomly select images (disjoint from our main experimental data) from the validation sets of ImageNet and FFHQ to study the impact of \(K\) and try to select the best final layer.
As shown in Figure 3, when \(K=9\) and \(K=4\) are used for BigGAN and StyleGAN2 respectively, we obtain results with the largest PSNR. Hence, we use this configuration for conducting all the experiments. ### Comparison with the State-of-the-art Attacks Next, we compare our proposed GIFD with existing methods and provide qualitative and quantitative results. We consider the following four state-of-the-art baselines: (1) _Inverting Gradients (IG)_ by Geiping [9]; (2) _Grad-Inversion (GI)_ by Yin [37]; (3) _Gradient Inversion in Alternative Spaces (GIAS)_ by Jeon [17]; and (4) _Generative Gradient Leakage (GGL)_ by Li [21]. In real application scenarios, a vast majority of FL systems do not transmit the BN statistics computed from private data [16]. Based on this fact, all the experiments do not use the strong BN prior proposed by [37]. Since the randomly initialized values of vectors will greatly affect the reconstruction results, we conduct 4 trials for every attack and select the result with the least gradient matching loss. The ablation study is conducted in the Appendix. **Experiment Results.** By observing the results in Table 1, we demonstrate that our method consistently achieves great improvement compared to the competing methods for gradient inversion attacks. Especially in the ImageNet dataset with BigGAN, our method has nearly 2.5dB and 0.1 improvements in average PSNR and LPIPS values respectively. As the visualization comparison shown in Figure 4, under a more practical setting, most existing methods struggle to recover meaningful and high-quality images even at \(B=1\), while our method reveals significant information \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{Metric} & \multicolumn{4}{c}{ImageNet} & \multicolumn{4}{c}{FFHQ} \\ \cline{2-10} & IG [9] & GI [37] & GGL [21] & GIAS [17] & **GIFD** & IG [9] & GI [37] & GGL [21] & GIAS [17] & **GIFD** \\ \hline PSNR\(\uparrow\) & 17.0756 & 16.5109 & 13.3885 & 17.4923 & **20.0534** & 15.3523 & 14.9485 & 15.1335 & 20.1799 & **21.3368** \\ LPIPS\(\downarrow\) & 0.3078 & 0.3297 & 0.3678 & 0.2536 & **0.1559** & 0.4172 & 0.4503 & 0.2009 & 0.1266 & **0.1023** \\ SSIM\(\uparrow\) & 0.2908 & 0.2673 & 0.1251 & 0.3381 & **0.4713** & 0.2272 & 0.2044 & 0.2453 & 0.5379 & **0.5768** \\ MSE\(\downarrow\) & 0.0223 & 0.0258 & 0.0553 & 0.0236 & **0.0141** & 0.0311 & 0.0343 & 0.0339 & 0.0121 & **0.0098** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of GIFD with state-of-the-art methods on every 1000th image of the ImageNet and FFHQ validation set. We calculate the average value of metrics on reconstructed images. Figure 4: Qualitative results of different methods on ImageNet and FFHQ. about the private data and achieves pixel-level reconstruction on both two datasets. The GAN-based methods (i.e. GGL, GIAS, GIFD) generally achieve better results than the GAN-free methods (i.e. GI, IG) on the FFHQ dataset. This indicates that the special data distribution of human-face can be more easily learned by the generative model so that the gain from the GAN prior is larger. We also observe that the GAN-based method GGL, which only optimizes the latent code and does not fully exploit the GAN prior, yields unsatisfactory results and performs even worse than the GAN-free methods [9, 37] on the ImageNet dataset, which again verifies the necessity of searching intermediate layers. We note that the performance of GIAS with BigGAN is worse than with StyleGAN2. One reason is that the data of ImageNet is more diverse. 
More importantly, with such a large number of parameters in BigGAN, the solution space for the GAN parameter search process becomes larger and presents a great challenge, _i.e_., GIAS is more susceptible to the scale of GAN. In contrast, GIFD chooses to optimize the intermediate features and then avoids this problem, hence achieving faithful reconstruction on both two GANs, demonstrating the excellent versatility of our method. ### Out of Distribution Data Recovery We then consider a more practical scenario where the training sets of the GAN model and the FL task obey different data distributions. Considering the difficulty and feasibility of gradient attack tasks, we define the OOD data as having the same label space, but quite different feature distributions. Hereinafter, we denote the OOD data of ImageNet and FFHQ by ImageNet* and FFHQ* respectively. PAC [19] dataset is a widely used benchmark for domain generalization with four different styles, _i.e_., Art Painting, Cartoon, Photo, and Sketch. In order to achieve our OOD setting, we manually select data with three different styles (_i.e_., Art Painting, Cartoon, Photo) from the validation set of PACS. For each style in ImageNet*, we select 15 images of guitar, elephant and horse in total. For FFHQ*, we select 15 images for each style and crop them to obtain the face images. We present visual comparison and quantitative results in Figure 5 and Table 2. **Experiment Results.** As shown in Table 2, the experiment results demonstrate our significant improvement over the baseline methods. For instance, our method has nearly 3.8dB improvement in average PSNR upon GIAS for Car \begin{table} \begin{tabular}{c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Method} & \multicolumn{4}{c}{Art Painting} & \multicolumn{4}{c}{Photo} & \multicolumn{4}{c}{Cartoon} \\ \cline{3-13} & & PSNR\(\uparrow\) & LPIPS\(\downarrow\) & SSIM\(\uparrow\) & MSE\(\downarrow\) & PSNR\(\uparrow\) & LPIPS\(\downarrow\) & SSIM\(\uparrow\) & MSE\(\downarrow\) & PSNR\(\uparrow\) & LPIPS\(\downarrow\) & SSIM\(\uparrow\) & MSE\(\downarrow\) \\ \hline \multirow{6}{*}{ImageNet*} & IG [9] & 18.3476 & 0.2286 & 0.3870 & 0.0172 & 15.6647 & 0.3575 & 0.2409 & 0.0325 & 15.8766 & 0.3183 & 0.3970 & 0.0288 \\ & GI [37] & 17.4681 & 0.2625 & 0.3445 & 0.0203 & 15.2700 & 0.3888 & 0.2201 & 0.0346 & 15.3905 & 0.3112 & 0.3926 & 0.0327 \\ & GGL [21] & 12.8011 & 0.3639 & 0.1356 & 0.0571 & 12.9246 & 0.3159 & 0.1507 & 0.0667 & 11.0315 & 0.3294 & 0.2832 & 0.0895 \\ & GIAS [17] & 17.2804 & 0.2774 & 0.3346 & 0.0227 & 0.4539 & 0.1724 & 0.4913 & 0.0111 & 19.0247 & 0.1862 & 0.5740 & 0.0149 \\ & **GIFD** & **19.3311** & **0.1700** & **0.4503** & **0.0151** & **21.9281** & **0.1137** & **0.5765** & **0.0082** & **22.8055** & **0.1030** & **0.6970** & **0.0067** \\ \hline \multirow{6}{*}{FFHQ*} & IG [9] & 15.9020 & 0.3856 & 0.2736 & 0.0273 & 17.7422 & 0.3043 & 0.3398 & 0.0174 & 14.7029 & 0.3118 & 0.3213 & 0.0358 \\ & GI [37] & 16.2990 & 0.3537 & 0.2917 & 0.0259 & 18.5540 & 0.2388 & 0.3808 & 0.0147 & 15.0097 & 0.3232 & 0.3201 & 0.0331 \\ \cline{1-1} & GGL [21] & 14.2833 & 0.2514 & 0.1982 & 0.0435 & 15.5001 & 0.2309 & 0.2513 & 0.0302 & 12.3590 & 0.2556 & 0.2322 & 0.0624 \\ \cline{1-1} & GIAS [17] & 18.4619 & 0.1912 & 0.4424 & 0.0172 & 19.6763 & 0.1615 & 0.4885 & 0.0123 & 15.3798 & 0.2250 & 0.3837 & 0.0338 \\ \cline{1-1} & **GIFD** & **19.8847** & **0.1534** & **0.4979** & **0.0120** & **21.3981** & **0.1148** & **0.5446** & **0.0098** & **17.4005** & **0.1634** & 
**0.4614** & **0.0220** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of GIFD with state-of-the-art baselines on OOD data of different styles. Figure 5: Visual comparison of different methods on ImageNet* and FFHQ*. toon in ImageNet*. Compared with other styles, the GAN-based methods perform best on Photo, whose domain characteristics are similar to the training sets of GANs. We also note that for Art in ImageNet*, the GAN-based methods except GIFD perform even worse than the GAN-free ones, which implies that here the gain from GAN is minor and even brings negative effects to them. Generally, the other GAN-based methods preserve more pre-trained knowledge from ImageNet or FFHQ, thus struggling to generate images similar to ground truth with different styles. In contrast, our method augments the generative ability of the GAN models and enlarges the diversity of the output space, hence achieving outstanding performance. Thus, with our proposed GIFD, we are able to safely relax the assumption that the datasets of the generative model and FL have to obey the same feature distribution. ### Attacks under Certain Defense Strategies Next, we consider attacking a more robust and secure FL system with defense strategies. In order to make a fair comparison, we equip all the baselines with the well-designed gradient transformation technique mentioned before to mitigate the impact of defense. We consider a relatively strict defense setup as the previous work [19]: (1) _Gaussian Noise_ with standard deviation 0.1; (2) _Gradient Clipping_ with a clip bound of 4; (3) _Gradient Sparsification_ in a sparsity of 90; and (4) _Soteria_ with a pruning rate of 80%. **Experiment Results.** We present experiment results in Table 3 compared to related methods. In general, with the underlying gradient transformation and the fully exploited GAN image prior, GIFD is still able to invert a degraded gradient observation to generate high-quality images or reveal private information, especially in cases of clipping and Soteria. One exception is that GGL takes the lead on FFHQ when applying additive noise operation. This is because the gradient information is seriously corrupted by the added high-variance Gaussian noise and is no more enough for pixel-level reconstruction. However, GGL only searches the latent space and with GAN's powerful generative capability, it can still produce well-formed images with clear facial contour, which can give a fair result in the metrics even though they are quite different from the original ones. This also indicates that adding Gaussian noise is indeed an effective defense method against related attacks when the variance exceeds a certain threshold. ### Performance of Larger Batch Sizes We then increase the batch size and observe the results of each algorithm. Notably, we assume that no duplicate labels in each batch and infer the labels from the received gradients [37]. We present the results on ImageNet in Table 4, see Appendix for results on FFHQ. **Experiment Results.** We find that the proposed GIFD achieves a steady improvement over previous methods at any batch size. The numerical results also show that the performance of all methods generally degrades as the batch size increases, implying that the reconstruction at large batch sizes is still a significant challenge. ## 5 Conclusion We propose GIFD, a powerful gradient inversion attack that can generalize well in unseen OOD data scenarios. 
We leverage the GAN prior via optimizing the feature domain of the generative model to generate stable and high-fidelity inversion results. Through extensive experiments, we demonstrate the effectiveness of GIFD with two widely used pre-trained GANs on two large datasets in a variety of more practical and challenging scenarios. To alleviate the proposed threat, one possible defense strategy is utilizing gradient-based adversarial noise as a novel privacy mechanism to provide confused inversion. We hope this paper can inspire some new ideas for future work and make contributions to the gradient attacks under more realistic scenarios. We also hope that our work can shed light on the design of privacy mechanisms, to enhance the security and robustness of FL systems.

Table 4: PSNR mean of different methods for different batch sizes on ImageNet.

Table 3: PSNR mean of different methods under different defense strategies.
2306.15238
On Nonlinear Scattering of Drift Wave by Toroidal Alfven Eigenmode in Tokamak Plasmas
Using electron drift wave (eDW) as a paradigm model, we have investigated analytically direct wave-wave interactions between a test DW and ambient toroidal Alfv\'en eigenmodes (TAE) in toroidal plasmas, and their effects on the stability of the eDW. The nonlinear effects enter via scatterings to short-wavelength electron Landau damped kinetic Alfv\'en waves (KAWs). Specifically, it is found that scatterings to upper-sideband KAW lead to stimulated absorption of eDW. Scatterings to the lower-sideband KAW, on the contrary, lead to its spontaneous emission. As a consequence, for typical parameters and fluctuation intensity, nonlinear scatterings by TAE have negligible net effects on the eDW stability; in contrast to the ``reverse" process investigated in Ref. [Nuclear Fusion {\bf 62}, 094001 (2022)], where it is shown that nonlinear scattering by ambient eDW may lead to significant damping of TAE.
Liu Chen, Zhiyong Qiu, Fulvio Zonca
2023-06-27T06:37:42Z
http://arxiv.org/abs/2306.15238v1
# On Nonlinear Scattering of Drift Wave by Toroidal Alfven Eigenmode in Tokamak Plasmas ###### Abstract Using electron drift wave (eDW) as a paradigm model, we have investigated analytically direct wave-wave interactions between a test DW and ambient toroidal Alfven eigenmodes (TAE) in toroidal plasmas, and their effects on the stability of the eDW. The nonlinear effects enter via scatterings to short-wavelength electron Landau damped kinetic Alfven waves (KAWs). Specifically, it is found that scatterings to upper-sideband KAW lead to stimulated absorption of eDW. Scatterings to the lower-sideband KAW, on the contrary, lead to its spontaneous emission. As a consequence, for typical parameters and fluctuation intensity, nonlinear scatterings by TAE have negligible net effects on the eDW stability; in contrast to the "reverse" process investigated in Ref. [Nuclear Fusion **62**, 094001 (2022)], where it is shown that nonlinear scattering by ambient eDW may lead to significant damping of TAE. ## I Introduction Drift wave (DW) [1] and shear Alfven wave (SAW) [2; 3; 4] are two fundamental electromagnetic oscillations in magnetized plasmas such as tokamaks. DWs are, typically, electrostatic fluctuations excited by thermal plasma density and/or temperature nonuniformities. Consequently, DWs have frequencies, perpendicular wavelengths and parallel wavelengths comparable, respectively, to the thermal plasma diamagnetic drift frequencies, thermal ion Larmor radii and the system size. SAWs, meanwhile, are electromagnetic fluctuations and, typically, manifest themselves as Alfven eigenmodes (AEs) located within the frequency gaps of SAW continuous spectra [2]. For typical tokamak parameters, AE frequencies could be an order of magnitude higher than those of DWs, and, thus, spontaneous excitations of AEs often involve resonances with superthermal energetic particles (EPs); e.g., alphas in a D-T fusion plasma. AEs, thus, have perpendicular wavelengths in the order of EP Larmor radii and parallel wavelengths in the order of system size. In short, we may describe DWs as low-frequency micro-scale fluctuations; while AEs are meso-scale fluctuations at higher frequencies but still much lower than the ion cyclotron frequency. Since both DWs and AEs are intrinsic fluctuations in magnetic confined fusion plasmas and have routinely been observed in tokamak plasmas, it is, thus, natural to inquire whether and how these two kinds of fluctuations may interact and what the potential implications of these cross-scale interactions could be. Recently, we have investigated such interactions via the channel of nonlinear wave scatterings between toroidal Alfven eigenmode (TAE) [2] and, as a paradigm model, electron drift wave (eDW). Interactions of DW turbulence and AEs have attracted significant interest in the recent years due to observed stabilization of tokamak turbulence by fast ions [5; 6]. However, some fundamental aspects remain to be clarified and understood concerning the underlying physics processes. One important aspect when extrapolating from present day devices to reactor relevant fusion plasmas is the EP characteristic energy and normalized orbit width, which are responsible of remarkably different EP dynamic responses in the two cases [4]. Another aspect concerns whether the predominant cross-scale coupling process is direct or indirect. In the first group is either stimulated or spontaneous wave-wave coupling. 
Of the second type are processes mediated by zonal structures [7], e.g., zonal flows and fields [8; 9], including phase space zonal structures [10; 11]. An example of direct coupling is the TAE/ITG (ion temperature gradient) induced scattering, where EP may excite TAE by inverse ion Landau damping in the presence of finite amplitude ITG turbulence [12]. This mechanism has been invoked to explain the observed excitation of marginally stable TAE in gyrokinetic simulations of ITG [13], which then enhances the level of zonal flows and eventually yields an appreciable reduction of ITG induced turbulence transport [14]. In this work, we further explore the DW-AE direct coupling channel via nonlinear wave scatterings using the eDW paradigm [15], with the aim of developing a comprehensive gyrokinetic description of these processes and of gaining insights into their possible impact on turbulent transport. There are two types of direct nonlinear interactions between TAE and eDW. The first type involves the scattering of a test TAE by ambient eDWs [15]. In this case, it was demonstrated that the TAE will suffer significant damping via nonlinearly generated upper and lower sidebands of short-wavelength electron Landau damped kinetic Alfven waves (KAWs) [16]. This scattering process, thus, may be regarded as stimulated absorption. Furthermore, for typical parameters, it is found that the nonlinear damping rate could be comparable to the growth rate of the TAE instability excited by EPs. The second type of nonlinear wave-wave interaction involves the scattering of a test eDW by ambient TAEs, and is the actual focus of the present work. As will be shown in the following analysis, while the second type of scattering may be considered as the "reverse" of the first one, the induced nonlinear damping/growth rate in this case is, in fact, negligible for typical parameters. Qualitatively speaking, while the nonlinearly generated upper sideband KAW (UKAW) still gives rise to stimulated absorption, the nonlinearly generated lower sideband KAW (LKAW), however, gives rise to spontaneous emission (i.e., as in a parametric decay instability) [17]. Quantitatively, these two effects tend to nearly cancel each other, leading to a negligible net effect on the stability of the eDW. The plan of this work is as follows. The theoretical model and governing equations are given in Sec. II. Section III discusses the nonlinear generation of the upper and lower KAW sidebands. The nonlinear dispersion relation of the eDW in the presence of a finite-amplitude TAE is then derived and analyzed in Sec. IV. Section V gives the final conclusions and discussions. ## II Theoretical model and governing equations We consider a large-aspect-ratio and low-\(\beta\) tokamak plasma with circular magnetic surfaces. Thus, \(\epsilon\equiv r/R\ll 1\) with \(r\) and \(R\) being, respectively, the minor and major radii of the torus, and \(\beta\sim O(\epsilon^{2})\ll 1\) being the ratio between plasma and magnetic pressure. We, furthermore, take the thermal background plasma to be Maxwellian, and adopt the eDW paradigm model with finite density gradient but negligible temperature gradient as well as trapped particle effects, in order to simplify the theoretical analyses and, thereby, illuminate the underlying physics.
The perturbed distribution function, \(\delta f_{j}\) with \(j=e,i\) \[\delta f_{j}=-(e/T)_{j}F_{Mj}\delta\phi+\exp(-\mathbf{\rho}\cdot\nabla)\delta g_{j}, \tag{1}\] obeys the nonlinear gyrokinetic equation [18] \[\left(\partial_{t}+v_{\parallel}\mathbf{b}\cdot\nabla+\mathbf{v }_{d}\cdot\nabla+\langle\delta\mathbf{u}_{g}\rangle_{\alpha}\cdot\right)\delta g _{j} \tag{2}\] \[= \left(e/T\right)_{j}F_{Mj}\left(\partial_{t}+i\omega_{aj}\right) \langle\exp(\mathbf{\rho}_{j}\cdot\nabla)\delta L\rangle_{\alpha}.\] Here, \(F_{Mj}\) is the Maxwellian distribution, \(\mathbf{\rho}_{j}=\mathbf{b}\times\mathbf{v}/\Omega_{j}\), \(\mathbf{b}\equiv\mathbf{B}_{0}/B_{0}\), \(\Omega_{j}=(eB_{0}/mc)_{j}\), \(\delta g_{j}\) is the non-adiabatic particle response, \(\mathbf{v}_{d}=\mathbf{b}\times\left[(v_{\perp}^{2}/2)\nabla\ln B_{0}+v_{ \parallel}^{2}\mathbf{b}\cdot\nabla\mathbf{b}\right]\) is the magnetic drift velocity, \(\langle\mathbf{A}\rangle_{\alpha}\) denotes the gyro-phase averaging of \(\mathbf{A}\), \(\omega_{*j}=-i(cT/eB_{0})\mathbf{b}\times\nabla\ln N_{j}\cdot\nabla\) is the diamagnetic drift frequency due to the finite density gradient, \[\langle\delta\mathbf{u}_{j}\rangle_{\alpha}=(c/B_{0})\mathbf{b}\times\nabla \langle\exp(-\mathbf{\rho}_{j}\cdot\nabla)\delta L\rangle_{\alpha}, \tag{3}\] and \[\delta L=\delta\phi-v_{\parallel}\delta A_{\parallel}/c \tag{4}\] with \(\delta\phi\) and \(\delta A_{\parallel}\) being, respectively, the scalar and parallel component of the vector potential. Note that, with \(\beta\ll 1\), magnetic compression may be neglected; i.e., \(\delta B_{\parallel}\simeq 0\). Meanwhile, the governing field equations are the quasi-neutrality condition \[\sum_{j=e,i}\left[(N_{0}e^{2}/T)_{j}\delta\phi-e_{j}\left\langle(J_{k}\delta g )_{j}\right\rangle_{v}\right]=0, \tag{5}\] and the parallel Ampere's law \(\nabla_{\perp}^{2}\delta A_{\parallel}=-(4\pi/c)\delta J_{\parallel}\). Here, we note \(J_{k}=J_{0}(k_{\perp}\rho)=\langle\exp(i\mathbf{\rho}\cdot\mathbf{k}_{\perp})\rangle _{\alpha}\) and \(k_{\perp}^{2}=-\nabla_{\perp}^{2}\) should be understood as an operator. Furthermore, we note that, for SAW and KAW, instead of the Ampere's law, it is more convenient to use the following nonlinear gyrokinetic vorticity equation [19; 20] \[ik_{\parallel}\delta J_{\parallel k}+(N_{0}e^{2}/T)_{i}\left(1- K_{l}\right)\left(\partial_{t}+i\omega_{s}\right)_{k}\delta\phi_{k} \tag{6}\] \[+i\sum_{j}\left\langle e_{j}J_{k}\omega_{d}\delta g_{j}\right\rangle _{v}=\sum_{\mathbf{k}^{\prime}+\mathbf{k}^{\prime\prime}=\mathbf{k}}\Lambda_{ k^{\prime\prime}}^{k^{\prime}}\left\{\delta A_{\parallel k^{\prime}}\delta J_{k^{ \prime\prime}}/c\right.\] \[\left.-e_{j}\left\langle\left(J_{k}J_{k^{\prime}}-J_{k^{\prime \prime}}\right)\delta L_{k^{\prime}}\delta g_{k^{\prime\prime}j}\right\rangle_{ v}\right\}.\] Here, \(\Gamma_{k}\equiv I_{0}(b_{k})\exp(-b_{k})\), \(b_{k}=k_{\perp}^{2}\rho_{i}^{2}\), \(\rho_{i}^{2}=T_{i}/(m_{i}\Omega_{i}^{2})\), \(\omega_{d}=\mathbf{k}_{\perp}\cdot\mathbf{v}_{d}\) and \(I_{0}\) is the modified Bessel function. The first and second terms on the left hand side correspond, respectively, to the field line bending and inertia terms. Meanwhile, the third term corresponds to the curvature-pressure coupling term including the ballooning-interchange term and finite plasma compression. Note that, for TAE/KAW physics considered here, it can generally be ignored. 
The right hand side contains the nonlinear terms, where \(\Lambda_{k^{\prime\prime}}^{k^{\prime}}=(c/B_{0})\mathbf{b}\cdot(\mathbf{k}^{ \prime\prime}\times\mathbf{k}^{\prime})\), and the first and second terms correspond, respectively, to the Maxwell and generalized gyrokinetic ion Reynolds stresses. Note that, since eDW is predominantly electrostatic, the Maxwell stress makes negligible contribution in the present analysis. We now consider the effects on eDW linear stability due to nonlinear scattering by TAE. Letting \(\Omega_{0}(\omega_{0},\mathbf{k}_{0})\) and \(\Omega_{s}=(\omega_{s},\mathbf{k}_{s})\) denote, respectively, a small but finite-amplitude TAE with toroidal mode number, \(n_{0}\), and a test eDW with toroidal mode number, \(n_{s}\). Thus, \(|\omega_{0}|\simeq V_{A}/(2qR)\) with \(V_{A}\) being the Alfven speed and \(q\) the safety factor, \(\omega_{s}\sim\omega_{*c}\) the electron diamagnetic drift frequency, and \(|k_{s\theta}\rho_{i}|=|n_{s}q\rho_{i}/r|\sim O(1)\). Furthermore, we have, typically, \(|\omega_{s}/\omega_{0}|<1\) and \(|n_{0}/n_{s}|<1\). That is, TAE and eDW are disparate both in spatial and temporal scales. Consequently, the sidebands nonlinearly generated by TAE and eDW; i.e., \(\Omega_{\pm}=(\omega_{\pm},\mathbf{k}_{\pm})=\Omega_{s}\pm\Omega_{0}\), tend to have \(|\omega_{\pm}|\simeq|\omega_{0}|\) and \(|\mathbf{k}_{\pm}|\simeq|\mathbf{k}_{s}|\), and may be regarded as short-wavelength (high-\(n\)) KAWs. \(\Omega_{\pm}\), in turn, can interact with \(\Omega_{0}\); resulting in the nonlinear modification of eDW dispersion relation and, thereby, of its stability properties. The two-step scattering processes are illustrated schematically in Fig. 1. The first-step scattering process, i.e., the nonlinear generation of KAW sidebands is analyzed in the following Sec. III. Section IV analyzes the second step scattering process and the resultant nonlinear eDW dispersion relation. ## III Nonlinear generation of upper and lower sidebands of kinetic Alfven waves Let us first analyze the nonlinear generation of \(\Omega_{+}\); i.e, UKAW. The analysis for LKAW is similar. For electrons, we let \(\delta g_{ke}=\delta g_{ke}^{(1)}+\delta g_{ke}^{(2)}\), with superscripts "(1)" and "(2)" denoting, respectively, the linear and nonlinear responses. Thus, from Eq. 2, we have \[\delta g_{ke}^{(1)}\simeq-\frac{e}{T_{e}}F_{Me}\left(1-\frac{\omega_{se}}{ \omega}\right)_{k}\delta\psi_{k}, \tag{7}\] where \(\delta\psi_{k}=(\omega\delta A_{\parallel}/ck_{\parallel})_{k}\) is the effective potential due to the induced parallel electric field, \(-\partial_{t}\delta A_{\parallel}/c\), and we have taken \(|k_{\perp}\rho_{e}|\ll 1\) and the massless-electron \(|\omega_{k}/k_{\parallel}v_{te}|\ll 1\) limit, with \(v_{tj}\) the thermal speed of the \(j\)-specie. In Eq. (7), \(k\) stands for the TAE/KAW modes; viz., \(\Omega_{0}\) and \(\Omega_{\pm}\), and \(\delta g_{se}^{(1)}\simeq 0\) as \(\Omega_{s}\) is the predominantly electrostatic eDW mode. It then follows \[\delta g_{+e}^{(2)}\simeq 0. \tag{8}\] Meanwhile, for singly charged ions with \(|\omega_{k}/k_{\parallel}v_{te}|\gg 1\) for all the modes considered here, TAE, KAW and eDW, we have \[\delta g_{ki}^{(1)}\simeq\frac{e}{T_{i}}F_{Mi}J_{k}\delta\phi_{k}\left(1-\frac {\omega_{*i}}{\omega}\right)_{k}, \tag{9}\] and \[\delta g_{+i}^{(2)}\simeq-i\frac{\Lambda_{0}^{s}}{2\omega_{+}}J_{0}J_{s}\frac {e}{T_{i}}F_{Mi}\left(\frac{\omega_{*i}}{\omega}\right)_{s}\delta\phi_{s} \delta\phi_{0}. \tag{10}\] Substituting Eqs. 
(7) to (10) into the quasi-neutrality condition, Eq. (5), it is possible to derive \[\delta\psi_{+}=\sigma_{*+}\delta\phi_{+}+i\frac{\Lambda_{0}^{s}}{2\omega_{+}}D _{+}\delta\phi_{0}\delta\phi_{s}, \tag{11}\] where \[\sigma_{*+}=\left[1+\tau-\tau\Gamma_{+}(1-\omega_{*i}/\omega)_{+}\right]/(1- \omega_{*e}/\omega)_{+}, \tag{12}\] and \[D_{+}=\tau(\omega_{*i}/\omega)_{s}F_{+}/(1-\omega_{*e}/\omega)_{+}, \tag{13}\] \(\tau=T_{e}/T_{i}\), and \(F_{+}=\langle J_{0}J_{+}J_{s}F_{Mi}\rangle_{v}/N_{0}\). Meanwhile, the nonlinear gyrokinetic vorticity equation, Eq. (6), yields \[\tau b_{+}\left[\left(1-\frac{\omega_{*i}}{\omega}\right)_{+} \frac{1-\Gamma_{+}}{b_{+}}\delta\phi_{+}-\left(\frac{V_{A}^{2}k_{\parallel} bk_{\parallel}}{b\omega^{2}}\right)_{+}\delta\psi_{+}\right] \tag{14}\] \[= -i\frac{\Lambda_{0}^{s}}{2\omega_{+}}\gamma_{+}\delta\phi_{s} \delta\phi_{0},\] where \[\gamma_{+}=\tau\left[\Gamma_{s}-\Gamma_{0}+(\omega_{*i}/\omega)_{s}(F_{+}- \Gamma_{s})\right]. \tag{15}\] We note that, \(k_{\parallel}\) and \(b\propto k_{\perp}^{2}\) should be strictly considered as operators, and are thus, not commutative in, e.g., the field line bending term in Eq. (14). Combining Eqs. (11) and (14) then yields the equation describing the nonlinear generation of \(\Omega_{+}\) by \(\Omega_{0}\) and \(\Omega_{s}\); i.e., \[\tau b_{+}\epsilon_{A+}\delta\phi_{+}=-i(\Lambda_{0}^{s}/2\omega_{+})\beta_{+ }\delta\phi_{s}\delta\phi_{0}, \tag{16}\] where \[\epsilon_{Ak}=\left(1-\frac{\omega_{*i}}{\omega}\right)_{k}\frac{1-\Gamma_{k} }{b_{k}}-\left(\frac{V_{A}^{2}}{b}\frac{k_{\parallel}bk_{\parallel}}{\omega^{ 2}}\right)_{k}\sigma_{*k} \tag{17}\] is the linear SAW/KAW operator, and \[\beta_{+} = \tau(\Gamma_{s}-\Gamma_{0})+\tau\left(\frac{\omega_{*i}}{\omega} \right)_{s} \tag{18}\] \[\times \left[F_{+}-\Gamma_{s}-\left(\frac{k_{\parallel}bk_{\parallel}}{ \omega^{2}}\right)_{+}\frac{\tau V_{A}^{2}F_{+}}{(1-\omega_{*e}/\omega)_{+}} \right].\] Nonlinear generation of \(\Omega_{-}\) follows that of \(\Omega_{+}\), and we, therefore, present only the main results. For electrons, we have, again, \(\delta g_{-e}^{(2)}\simeq 0\), and, for ions, \[\delta g_{-i}^{(2)}\simeq i\frac{\Lambda_{0}^{s}}{2\omega_{-}}J_{0}J_{s}\frac{e }{T_{i}}F_{Mi}\left(\frac{\omega_{*i}}{\omega}\right)_{s}\delta\phi_{s}\delta \phi_{0}^{*}. \tag{19}\] The quasi-neutrality condition, Eq. (5), yields, \[\delta\psi_{-}=\sigma_{*-}\delta\phi_{-}-i(\Lambda_{0}^{s}/2\omega_{-})D_{-} \delta\phi_{s}\delta\phi_{0}^{*}, \tag{20}\] with \[D_{-}=\tau(\omega_{*i}/\omega)_{s}F_{-}/(1-\omega_{*e}/\omega)_{-}, \tag{21}\] and \(F_{-}=\langle J_{0}J_{-}J_{s}F_{Mi}\rangle_{v}/N_{0}\). Meanwhile, the nonlinear gyrokinetic vorticity equation, Eq. (6), yields \[\tau b_{-}\left[\left(1-\frac{\omega_{*i}}{\omega}\right)_{-} \frac{(1-\Gamma_{-})}{b_{-}}\delta\phi_{-}-\left(\frac{V_{A}^{2}k_{\parallel} bk_{\parallel}}{b\omega^{2}}\right)_{-}\delta\psi_{-}\right] \tag{22}\] \[= i\frac{\Lambda_{0}^{s}}{2\omega_{-}}\gamma_{-}\delta\phi_{s} \delta\phi_{0}^{*},\] and \[\gamma_{-}=\tau\left[\Gamma_{s}-\Gamma_{0}+(\omega_{*i}/\omega)_{s}(F_{-}- \Gamma_{s})\right]. \tag{23}\] Figure 1: Schematic diagram of the two-step scattering processes analyzed in the present work. The test eDW, ambient TAE and nonlinearly generated KAW sidebands are in blue, green and red, respectively. Finally, from Eqs. 
(20) and (22), we have \[\tau b_{-}\epsilon_{A-}\delta\phi_{-}=i(\Lambda_{0}^{s}/2\omega_{-})\beta_{-} \delta\phi_{s}\delta\phi_{0}^{*}, \tag{24}\] and \[\beta_{-} = \tau(\Gamma_{s}-\Gamma_{0})+\tau\left(\frac{\omega_{si}}{\omega} \right)_{s} \tag{25}\] \[\times \left[F_{-}-\Gamma_{s}-\left(\frac{k\|bk\|}{\omega^{2}}\right)_{- }\frac{\tau V_{A}^{2}F_{-}}{(1-\omega_{se}/\omega)_{-}}\right].\] We remark, again, that \(\epsilon_{A\pm}\) in Eqs. (16) and (24) are KAW operators. That is, in terms of physics, Eqs. (16) and (24) describe mode-converted KAWs (\(\Omega_{\pm}\)) driven by the nonlinear coupling between a TAE (\(\Omega_{0}\)) and eDW (\(\Omega_{s}\)). ## IV Nonlinear dispersion relation of electron drift wave We now analyze the second scattering process between \(\Omega_{\pm}\) and \(\Omega_{0}\) back into \(\Omega_{s}\). Again, let us first consider the \(\Omega_{+}\) channel; i.e., \(\Omega_{+}-\Omega_{0}\rightarrow\Omega_{s}\). From the nonlinear gyrokinetic equation, Eq. (2), we have, for electrons in the massless \(|\omega_{k}/k_{\parallel}v_{te}|\ll 1\) limit and noting Eqs. (7) and (8), \[\delta g_{se,+}^{(2)}\simeq-i\frac{\Lambda_{0}^{s}}{2\omega_{+}}\frac{e}{T_{e }}F_{Me}\delta\psi_{+}\delta\psi_{0}^{*}\left[1+\frac{k_{\parallel 0}}{k_{ \parallel s}}\frac{(\omega_{se}-\omega)_{s}}{\omega_{0}}\right]. \tag{26}\] Here, \(\delta g_{se,+}^{(2)}\) denotes nonlinear electron response of \(\Omega_{s}\) due to \(\Omega_{+}\) and \(\Omega_{0}^{*}\) coupling. For ions, meanwhile, we have \[\delta g_{si,+}^{(2)}\simeq i(\Lambda_{0}^{s}/2\omega_{s})\left(J_{+}\delta \phi_{+}\delta g_{0i}^{(1)*}-J_{0}\delta\phi_{0}^{*}\delta g_{+i}\right). \tag{27}\] Here, we note that \(\delta g_{+i}=\delta g_{+i}^{(1)}+\delta g_{+i}^{(2)}\) is given, respectively, by Eqs. (9) and (10). \(\delta g_{si,+}^{(2)}\) is then given by \[\delta g_{si,+}^{(2)} \simeq \left[i\frac{\Lambda_{0}^{s}}{2\omega_{+}}J_{0}J_{+}\delta\phi_{ +}\delta\phi_{0}^{*}-\frac{(\Lambda_{0}^{s})^{2}}{4\omega_{s}\omega_{+}}J_{0}^ {2}J_{s}|\delta\phi_{0}|^{2}\delta\phi_{s}\right] \tag{28}\] \[\times \left(\frac{\omega_{si}}{\omega}\right)_{s}\frac{e}{T_{i}}F_{Mi}.\] The analysis is similar for the \(\Omega_{-}+\Omega_{0}\rightarrow\Omega_{s}\) scattering channel. Then, we have \[\delta g_{se,-}^{(2)}\simeq i\frac{\Lambda_{0}^{s}}{2\omega_{-}}\frac{e}{T_{e}}F_{ Me}\delta\psi_{-}\delta\psi_{0}\left[1+\frac{k_{\parallel 0}}{k_{ \parallel s}}\frac{(\omega_{se}-\omega)_{s}}{\omega_{0}}\right], \tag{29}\] and \[\delta g_{si,-}^{(2)} \simeq -\left[i\frac{\Lambda_{0}^{s}}{2\omega_{-}}J_{0}J_{-}\delta\phi_ {-}\delta\phi_{0}+\frac{(\Lambda_{0}^{s})^{2}}{4\omega_{s}\omega_{-}}J_{0}^{2} J_{s}|\delta\phi_{0}|^{2}\delta\phi_{s}\right] \tag{30}\] \[\times \left(\frac{\omega_{si}}{\omega}\right)_{s}\frac{e}{T_{i}}F_{Mi}.\] Substituting the \(\delta g_{sj}=\delta g_{sj}^{(1)}+\delta g_{sj,+}^{(2)}+\delta g_{sj,-}^{(2)}\) for \(j=e,i\) into the quasi-neutrality condition, Eq. 
(5), of the \(\Omega_{s}\) mode, we then readily derive the following governing equation for \(\delta\phi_{s}\); \[\epsilon_{s}\delta\phi_{s} = i(\Lambda_{0}^{s}/2\omega_{+})\beta_{s+}\delta\phi_{0}^{*}\delta \phi_{+}-i(\Lambda_{0}^{s}/2\omega_{-})\beta_{s-}\delta\phi_{0}\delta\phi_{-} \tag{31}\] \[-\epsilon_{s}^{(2)}|\delta\phi_{0}|^{2}\delta\phi_{s}.\] Here, \(\epsilon_{s}\) is the eDW linear dielectric operator and, in the limit of adiabatic circulating electrons and neglecting trapped electrons, is given by \[\epsilon_{s}=1+\tau-\tau\left\langle\left(\frac{\omega-\omega_{si}}{\omega-k_{ \parallel}v_{\parallel}-\omega_{d}}\right)_{s}\frac{F_{Mi}}{N_{0}}J_{s}^{2} \right\rangle_{v}; \tag{32}\] and, in the lowest order, \[\epsilon_{s}\simeq 1+\tau(1-\Gamma_{s})+\tau\Gamma_{s}(\omega_{si}/\omega)_{s}. \tag{33}\] Meanwhile, \[\beta_{s\pm}=\tau\left(\frac{\omega_{si}}{\omega}\right)_{s}F_{\pm}+\sigma_{*0} \sigma_{*\pm}\left[1+\frac{k_{\parallel 0}}{k_{\parallel\pm}}\frac{(\omega_{se}- \omega)_{s}}{\omega_{0}}\right], \tag{34}\] and \[\epsilon_{s}^{(2)} = \sum_{l=+,-}\left\{\frac{F_{2}}{\omega_{s}\omega_{l}}+\sigma_{*0} \left[1+\frac{k_{\parallel 0}}{k_{\parallel l}}\frac{(\omega_{se}-\omega)_{s}}{\omega_{0}}\right]\right. \tag{35}\] \[\left.\times\left[\frac{F_{l}}{\omega_{l}^{2}(1-\omega_{se}/\omega) _{l}}\right]\right\}\frac{\left(\Lambda_{0}^{s}\right)^{2}}{4}\tau\left(\frac{ \omega_{si}}{\omega}\right)_{s}.\] Noting Eqs. (16) and (24) for, respectively, \(\delta\phi_{+}\) and \(\delta\phi_{-}\), Eq. (31) can be formally expressed as \[\left(\epsilon_{s}+\epsilon_{s}^{(2)}|\delta\phi_{0}|^{2}\right) \delta\phi_{s} = \left[\left(\frac{\Lambda_{0}^{s}}{2\omega_{+}}\right)^{2}\frac{ \beta_{s}^{-}\delta\phi_{0}^{*}\beta_{+}}{\tau b_{+}\epsilon_{A+}}\delta\phi_{0}\right. \tag{36}\] \[\left.+\left(\frac{\Lambda_{0}^{s}}{2\omega_{-}}\right)^{2}\frac{ \beta_{-}^{-}\delta\phi_{0}\beta_{-}}{\tau b_{-}\epsilon_{A-}}\delta\phi_{0}^{* }\right]\delta\phi_{s},\] which may be regarded as the nonlinear eigenmode equation of \(\Omega_{s}\) (eDW) in the presence of finite-amplitude \(\Omega_{0}\) (TAE) fluctuations. Equation (36), in general, needs to be solved numerically. We can, however, make analytical progress by employing the scale separation and obtain an analytical dispersion relation variationally. First, we adopt the ballooning-mode representation for \(\delta\phi_{s}\); \[\delta\phi_{s}=\exp(in_{s}\xi)\sum_{m_{s}}\exp(-im_{s}\theta)\Phi_{s}(n_{s}q-m_{ s}\equiv z_{s}), \tag{37}\] where \(\xi\) and \(\theta\) are, respectively the toroidal and poloidal angles, and denote the spatial scales of TAE and eDW as, respectively, \({\bf x}_{0}\) and \({\bf x}_{s}\); such that \(|{\bf x}_{s}|/|{\bf x}_{0}|\sim O(n_{0}/n_{s})\ll 1\). Multiplying Eq. 
(36) by \(\delta\phi_{s}^{*}\) and integrating over \({\bf x}_{s}\), we can derive \[D_{s}+\chi_{s}^{(2)}|\delta\phi_{0}({\bf x}_{0})|^{2}=R_{+}+R_{-}, \tag{38}\] where \[D_{s}=\left\langle\Phi_{s}^{*}(z_{s})\epsilon_{s}\Phi_{s}\right\rangle_{s} \tag{39}\] is the linear dielectric constant of \(\Omega_{s}\), \[\langle\Phi_{s}^{*}[A]\Phi_{s}\rangle_{s} \equiv \int_{-1/2}^{1/2}dz_{s}\sum_{m_{s}}\Phi_{s}^{*}[A]\Phi_{s} \tag{40}\] \[= \int_{-\infty}^{\infty}dz_{s}\Phi_{s}^{*}[A]\Phi_{s}\] with the normalization \(\langle|\Phi_{s}|^{2}\rangle_{s}=1\), \[\chi_{s}^{(2)}=\langle\Phi_{s}^{*}\epsilon_{s}^{(2)}\Phi_{s}\rangle_{s}, \tag{41}\] and \[R_{\pm} = \left\langle\Phi_{s}^{*}\left(\frac{\Lambda_{s}^{s}}{2\omega_{ \pm}}\right)^{2}\beta_{s}^{\pm}\left\{\begin{array}{c}\delta\phi_{0}^{*}\\ \delta\phi_{0}\end{array}\right\}\frac{\beta_{\pm}}{(\tau b\epsilon_{A})_{\pm }}\left\{\begin{array}{c}\delta\phi_{0}\\ \delta\phi_{0}^{*}\end{array}\right\}\Phi_{s}\right\rangle_{s}.\] Equation (38) is formally the variational nonlinear eDW dispersion relation in the presence of a finite-amplitude TAE given by \(\delta\phi_{0}\). We will later analyze it further using a trial function for \(\Phi_{s}(z_{s})\). We now make same qualitative observations. We note that \(\chi_{s}^{(2)}\) is real and, in general, \(R_{\pm}=Re(R_{\pm})+iIm(R_{\pm})\). Thus, \(\chi_{s}^{(2)}\) and \(Re(R_{\pm})\) lead to nonlinear frequency shift; while \(Im(R_{\pm})\) gives rise to nonlinear damping or growth. Focusing on \(Im(R_{\pm})\) first, we observe, from Eq. (42), that \(Im(R_{\pm})\propto Im(1/\epsilon_{A\pm})\); i.e., the imaginary component of the SAW/KAW operator, \(\epsilon_{A\pm}\), given by Eq. (17). Looking at Eq. (16) and letting \[\delta\phi_{+} = A_{+}({\bf x}_{0})\exp(in_{s}\xi) \tag{43}\] \[\times \sum_{m_{s}}\exp(-im_{s}\theta)\Phi_{+}(z_{s}\equiv n_{s}q-m_{s}),\] we then have, recalling the scale separation between \({\bf x}_{0}\) and \({\bf x}_{s}\), \[A_{+}({\bf x}_{0})\tau b_{s}\epsilon_{A+}^{s}\Phi_{+}(z_{s})=-i\frac{\Lambda_{ 0}^{s}}{2\omega_{+}}\beta_{+}\Phi_{s}(z_{s})\delta\phi_{0}({\bf x}_{0}). \tag{44}\] The same analysis can be carried out for \(\delta\phi_{-}\) given by Eq. (24) step by step. Further simplification of Eq. (44) and the analogue for the \(\Omega_{-}\) sideband can be obtained noting that \[\epsilon_{A\pm}^{s} = \left(1-\frac{\omega_{s}}{\omega}\right)_{\pm}\frac{1-\Gamma_{s} }{b_{s}}-\left(\frac{V_{A}^{2}}{b_{s}}\frac{k_{\parallel s}b_{s}k_{\parallel s }}{\omega_{\pm}^{2}}\right)\sigma_{s\pm}^{s}, \tag{45}\] \[\sigma_{s\pm}^{s} \simeq \left[1+\tau-\tau\Gamma_{s}\left(1-\omega_{si}/\omega\right)_{ \pm}\right]/(1-\omega_{se}/\omega)_{\pm},\] \(b_{s}=b_{s\theta}(1+\hat{s}^{2}\partial_{z_{s}}^{2})\), \(b_{s\theta}=k_{s\theta}^{2}\rho_{i}^{2}\), \(\hat{s}=rq^{\prime}/q\) denotes magnetic shear, and \(k_{\parallel s}=(n_{s}q-m_{s})/(qR)=z_{s}/(qR)\). Since \(|\hat{s}^{2}\partial_{z_{s}}^{2}|<1\) for moderately/strongly ballooning modes, \(\epsilon_{A\pm}^{s}\) further reduces to \[\epsilon_{A\pm}^{s}\simeq b_{s\theta}\frac{\partial\epsilon_{A\pm}^{s}}{ \partial b_{s\theta}}\hat{s}^{2}\partial_{z_{s}}^{2}-\left(\frac{\omega_{A}}{ \omega_{\pm}}\right)^{2}\sigma_{\pm s}(z_{s}^{2}-z_{\pm}^{2}). 
\tag{46}\] Here, \(\omega_{A}=V_{A}/(qR)\), \[\sigma_{\pm s}=\left[1+\tau-\tau\Gamma_{s}(b_{s\theta})(1-\omega_{si}/\omega)_ {\pm}\right]/(1-\omega_{se}/\omega)_{\pm}, \tag{47}\] and \[z_{\pm}^{2}=\left(\frac{\omega}{\omega_{A}}\right)_{\pm}^{2}\left(1-\frac{ \omega_{si}}{\omega}\right)_{\pm}\frac{1-\Gamma_{s}(b_{s\theta})}{b_{s\theta}} \frac{1}{\sigma_{\pm s}}<\frac{1}{4}; \tag{48}\] as \(|\omega/\omega_{A}|_{\pm}^{2}\simeq 1/4\). Equation (44), along with \(\epsilon_{A+}^{s}\) given by Eq. (46), indicates that the upper sideband is a mode converted KAW at the high-\(n\) Alfven resonance layer \(z_{s}=\pm z_{+}\). As noted in previous study of mode-converted KAW [16], for \(\tau=T_{e}/T_{i}\sim 1\), the finite electron Landau damping as well as the Airy swelling of the amplitude dictate that the damping occur predominantly around \(z=\pm z_{+}\). Furthermore, the spectrum of eDW is typically broad, which implies that the spectrum of mode-converted KAW is correspondingly broad. Thus, the energy absorption rate approximates that of the local Alfven resonance via the causality constraint \(Im(\omega_{s},\omega_{+},\omega_{-})>0\); i.e., \[Im\left(\frac{1}{\epsilon_{A+}}\right)\simeq-\pi\delta(\epsilon_{A+})\simeq- \pi\left(\frac{\omega_{+}}{\omega_{A}}\right)^{2}\frac{\delta(z_{s}^{2}-z_{+}^{ 2})}{\sigma_{+s}}. \tag{49}\] Similar processes occur for the \(\Omega_{-}\) KAW; i.e., \[Im\left(\frac{1}{\epsilon_{A-}}\right)\simeq\pi\delta(\epsilon_{A-})\simeq\pi \left(\frac{\omega_{-}}{\omega_{A}}\right)^{2}\frac{\delta(z_{s}^{2}-z_{-}^{2})}{ \sigma_{-s}}. \tag{50}\] Consequently, \[Im(R_{+}) = -\left\langle\Phi_{s}^{*}\left(\frac{\Lambda_{0}^{s}}{2\omega_{+} }\right)^{2}\frac{\beta_{s}^{+}\delta\phi_{0}^{*}\beta_{+}}{\tau b_{s}}\right. \tag{51}\] \[\left.\times\left(\frac{\omega_{+}}{\omega_{A}}\right)^{2}\frac{ \delta(z_{s}^{2}-z_{+}^{2})}{\sigma_{+s}}\delta\phi_{0}\Phi_{s}\right\rangle,\] and, omitted here, a similar corresponding expression can be obtained for \(Im(R_{-})\). To proceed further, we take the \(|k_{0\perp}\rho_{i}|^{2}\ll 1\) limit but keep the finite \(|\omega_{s}/\omega_{0}|<1\) correction. It is then straightforward to derive \[\beta_{s}^{\pm}\beta_{\pm}\delta(\epsilon_{A\pm}) \simeq \tau(1-\Gamma_{s\theta})\sigma_{s\theta}\frac{(\omega_{se}-\omega )_{s}(\omega-\omega_{si})_{s}}{\omega_{0}^{2}} \tag{52}\] \[\times\frac{k_{\parallel\pm}}{k_{\parallel s}}\delta(\epsilon_{A\pm }).\] Here, \(\Gamma_{s\theta}=\Gamma_{s}(b_{s\theta})\) and \(\sigma_{s\theta}=1+\tau(1-\Gamma_{s\theta})\). The variational nonlinear eDW dispersion relation, Eq. (38), then yields, letting \(\omega_{s}=\omega_{sr}+i\gamma_{s}\) and \(D_{sr}(\omega_{sr})=0\), \[\left(\gamma_{s}+\gamma_{s}^{l}\right)\frac{\partial}{\partial \omega_{sr}}D_{sr}=Im(R_{+}+R_{-})\] \[\simeq -\frac{\pi}{4\beta_{i}}\left(\frac{\Omega_{i}}{\omega_{0}}\right) ^{2}\left|\frac{\delta B_{0\theta}}{B_{0}}\right|^{2}(1-\Gamma_{s\theta}) \sigma_{s\theta}\] \[\times \frac{(\omega_{se}-\omega)_{s}(\omega-\omega_{si})_{s}}{\omega_{0}^{2 }}\left\langle\left[\left(\frac{\omega_{+}}{\omega_{A}}\right)^{2}\frac{\delta(z _{s}^{2}-z_{+}^{2})}{\sigma_{+s}}\right.\right.\] \[\left.\left.-\left(\frac{\omega_{-}}{\omega_{A}}\right)^{2}\frac{ \delta(z_{s}^{2}-z_{-}^{2})}{\sigma_{-s}}\right]\left|\Phi_{s}\right|^{2} \right\rangle_{s}. \tag{53}\] Here, \(\gamma_{s}^{l}\) is the linear damping/growth of eDW. 
Noting that \(\partial D_{sr}/\partial\omega_{sr}>0\), \(Im(R_{+})<0\) and \(Im(R_{-})>0\), scatterings to UKAW and LKAW, thus, lead to, respectively, damping and growth of the eDW. As illustrated in Fig. 2, one may qualitatively regard UKAW scattering as stimulated absorption, and LKAW scattering as spontaneous emission, similar to the familiar parametric decay instability via a quasi-mode. Since the rates of the two processes are nearly the same, the number of \(\Omega_{s}\) quanta is approximately conserved, with a small overall effect on eDW growth/damping. In Eq. (53), we have also noted \(ck_{0}\delta\phi_{0}/B_{0}\simeq V_{A}\delta B_{0\theta}/B_{0}\). To estimate the nonlinear damping/growth rate quantitatively, we adopt a trial function for \(\Phi_{s}\) as \(|\Phi_{s}|^{2}=(1/\sqrt{\pi}\Delta_{s})\exp(-z_{s}^{2}/\Delta_{s}^{2})\) with \(\Delta_{s}>1\) for a typical moderately ballooning eDW. Equation (53) then yields \[Im(R_{+}+R_{-})\simeq-\frac{\sqrt{\pi}}{4\beta_{i}}\left(\frac{\Omega_{i}}{\omega_{0}}\right)^{2}\left|\frac{\delta B_{0\theta}}{B_{0}}\right|^{2}(1-\Gamma_{s\theta})\,\sigma_{s\theta}\,\frac{(\omega_{se}-\omega)_{s}(\omega-\omega_{si})_{s}}{\omega_{0}^{2}}\left[\left(\frac{\omega_{+}}{\omega_{A}}\right)^{2}\frac{1}{\sigma_{+s}z_{+}\Delta_{s}}-\left(\frac{\omega_{-}}{\omega_{A}}\right)^{2}\frac{1}{\sigma_{-s}z_{-}\Delta_{s}}\right]. \tag{54}\] Taking typical tokamak parameters, \(\Omega_{i}/\omega_{0}\sim O(10^{2})\), \(\beta_{i}\sim O(10^{-2})\), \(b_{s\theta}\sim O(1)\), \(|\omega_{s}/\omega_{0}|^{2}\sim O(10^{-1})\), and \(|\Delta_{s}z_{\pm}|\sim O(1)\), we then find \[|Im(R_{+}+R_{-})|<O(10^{5})\left|\frac{\delta B_{0\theta}}{B_{0}}\right|^{2}. \tag{55}\] Noting that \(\partial D_{sr}/\partial\omega_{sr}\sim 1/\omega_{sr}\) and \(\gamma_{s}^{l}/\omega_{sr}\sim O(10^{-1})\) as, e.g., in the trapped electron mode [21], we then find that, for TAE fluctuations with \(|\delta B_{0\theta}/B_{0}|^{2}\lesssim O(10^{-7})\) [22], the nonlinear contribution to damping/growth, \(\sim|Im(R_{+}+R_{-})|\lesssim O(10^{-2})\), should have negligible effects on the eDW stability. We also remark that one can, furthermore, straightforwardly show that the nonlinear frequency shift due to \(\chi_{s}^{(2)}|\delta\phi_{0}|^{2}\) and \(Re(R_{+}+R_{-})\) is also typically negligible. ## V Conclusions and Discussions In this work, we have employed the nonlinear gyrokinetic equations and investigated analytically direct wave-wave interactions between a test electron drift wave (eDW) and ambient finite-amplitude toroidal Alfven eigenmodes (TAEs) in low-\(\beta\) circular tokamak plasmas. Here, nonlinear scatterings generate upper and lower sidebands of mode-converted kinetic Alfven waves (KAWs) at high toroidal mode numbers, which are typically damped by electrons around the mode conversion positions. Furthermore, we find that scattering to the upper-sideband KAW gives rise to stimulated absorption and, hence, damping of the eDW. Scattering to the lower-sideband KAW, on the other hand, gives rise to spontaneous emission and, thereby, growth of the eDW; i.e., TAE parametrically decays to eDW via the lower-sideband KAW quasi-mode. For typical tokamak parameters and TAE fluctuation intensity, our analysis indicates that the net effects on the eDW stability properties should be negligible. We remark again that, as noted in Sec. I, the present results are different from those obtained previously for the case of direct wave-wave interactions between a test TAE and ambient eDW [15].
In that case, both channels of scattering to KAWs lead to stimulated absorption and, thereby, significant damping of the TAE. As noted above, our analysis adopts the electron drift wave without temperature gradients as a paradigm model in order to simplify the analysis and delineate more clearly the underlying nonlinear physics mechanisms. It is clearly desirable to extend the investigations to include ion-temperature-gradient (ITG) modes, trapped particle effects, as well as other types of AEs, such as reversed shear Alfven eigenmodes (RSAEs) [23; 24] and beta-induced Alfven eigenmodes (BAEs) [25; 26]. While detailed analyses for such cases remain to be carried out, one may conjecture that the physical pictures outlined in the current paradigm model should hold at least qualitatively. Finally, the fact that the present results indicate negligible effects on the eDW via direct wave-wave interactions with TAE suggests the possible significance of indirect interactions via, e.g., the zonal structures, consisting of zonal flows, fields and phase space zonal structures, nonlinearly generated by AEs [8; 10; 27; 28]. This interesting subject remains to be further investigated in the future. Figure 2: Sketch illustrating the nonlinear scattering processes of (a) stimulated absorption and (b) spontaneous emission. Here, \(\Omega_{0}\) is the finite-amplitude TAE, \(\Omega_{s}\) is the test eDW, and \(\Omega_{\pm}\) are, respectively, the upper and lower sideband KAWs. ## Acknowledgement This work was supported by the National Science Foundation of China under Grant Nos. 12275236 and 12261131622, and "Users of Excellence program of Hefei Science Center CAS" under Contract No. 2021HSC-UE016. This work was also supported by the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No. 101052200 EUROfusion). The views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them.
2304.03223
DexDeform: Dexterous Deformable Object Manipulation with Human Demonstrations and Differentiable Physics
In this work, we aim to learn dexterous manipulation of deformable objects using multi-fingered hands. Reinforcement learning approaches for dexterous rigid object manipulation would struggle in this setting due to the complexity of physics interaction with deformable objects. At the same time, previous trajectory optimization approaches with differentiable physics for deformable manipulation would suffer from local optima caused by the explosion of contact modes from hand-object interactions. To address these challenges, we propose DexDeform, a principled framework that abstracts dexterous manipulation skills from human demonstration and refines the learned skills with differentiable physics. Concretely, we first collect a small set of human demonstrations using teleoperation. And we then train a skill model using demonstrations for planning over action abstractions in imagination. To explore the goal space, we further apply augmentations to the existing deformable shapes in demonstrations and use a gradient optimizer to refine the actions planned by the skill model. Finally, we adopt the refined trajectories as new demonstrations for finetuning the skill model. To evaluate the effectiveness of our approach, we introduce a suite of six challenging dexterous deformable object manipulation tasks. Compared with baselines, DexDeform is able to better explore and generalize across novel goals unseen in the initial human demonstrations.
Sizhe Li, Zhiao Huang, Tao Chen, Tao Du, Hao Su, Joshua B. Tenenbaum, Chuang Gan
2023-03-27T17:59:49Z
http://arxiv.org/abs/2304.03223v1
DexDeform: Dexterous Deformable Object Manipulation with Human Demonstrations and Differentiable Physics ###### Abstract In this work, we aim to learn dexterous manipulation of deformable objects using multi-fingered hands. Reinforcement learning approaches for dexterous rigid object manipulation would struggle in this setting due to the complexity of physics interaction with deformable objects. At the same time, previous trajectory optimization approaches with differentiable physics for deformable manipulation would suffer from local optima caused by the explosion of contact modes from hand-object interactions. To address these challenges, we propose DexDeform, a principled framework that abstracts dexterous manipulation skills from human demonstration, and refines the learned skills with differentiable physics. Concretely, we first collect a small set of human demonstrations using teleoperation. And we then train a skill model using demonstrations for planning over action abstractions in imagination. To explore the goal space, we further apply augmentations to the existing deformable shapes in demonstrations and use a gradient optimizer to refine the actions planned by the skill model. Finally, we adopt the refined trajectories as new demonstrations for finetuning the skill model. To evaluate the effectiveness of our approach, we introduce a suite of six challenging dexterous deformable object manipulation tasks. Compared with baselines, DexDeform is able to better explore and generalize across novel goals unseen in the initial human demonstrations. Additional materials can be found at our project website 1. Footnote 1: Project website: [https://sites.google.com/view/dexdeform](https://sites.google.com/view/dexdeform) ## 1 Introduction The recent success of learning-based approaches for dexterous manipulation has been widely observed on tasks with rigid objects (OpenAI et al., 2020; Chen et al., 2022; Nagabandi et al., 2020). However, a substantial portion of human dexterous manipulation skills comes from interactions with deformable objects (e.g., making bread, stuffing dumplings, and using sponges). Consider the three simplified variants of such interactions shown in Figure 1. **Folding** in row 1 requires the cooperation of the front four fingers of a downward-facing hand to carefully lift and fold the dough. **Bun** in row 4 requires two hands to simultaneously pinch and push the wrapper. Row 3 shows **Flip**, an in-hand manipulation task that requires the fingers to flip the dough into the air and deform it with agility. In this paper, we consider the problem of deformable object manipulation with a simulated Shadow Dexterous hand (ShadowRobot, 2013). The benefits of human-level dexterity can be seen through the lens of versatility (Feix et al., 2015; Chen et al., 2022). When holding fingers together, the robot hands can function as a spatula to fold deformable objects (Fig. 1, row 1). When pinching with fingertips, we can arrive at a stable grip on the object while manipulating the shape of the object (Fig. 1, row 2). Using a spherical grasp, the robot hands are able to quickly squeeze the dough into a folded shape (Fig. 1, row 3). Therefore, it is necessary and critical to learn a manipulation policy that autonomously controls the robot hand with human-like dexterity, with the potential for adapting to various scenarios. 
Additionally, using a multi-fingered hand adds convenience to demonstration collection: (1) controlling deformable objects with hands is a natural choice for humans, resulting in an easy-to-adapt teleoperation pipeline. (2) there exists a vast amount of in-the-wild human videos for dexterous deformable object manipulation (e.g., building a sand castle, making bread). Vision-based teleoperation techniques can be employed for collecting demonstrations at scale (Sivakumar et al., 2022). As with any dexterous manipulation task, the contact modes associated with such tasks are naturally complex. With the inclusion of soft bodies, additional difficulties arise with the tremendous growth in the dimension of the state space. Compared to the rigid-body counterparts, soft body dynamics carries infinite degrees of freedom (DoFs). Therefore, it remains challenging to reason over the complex transitions in the contact state between the fingers and the objects. Given the high dimensionality of the state space, the learning manipulation policy typically requires a large number of samples. With no or an insufficient amount of demonstrations, interactions with the environment are needed to improve the policy. Indeed, past works in dexterous manipulation have leveraged reinforcement learning (RL) approaches for this purpose (Rajeswaran et al., 2017; Chen et al., 2022). However, the sample complexity of most RL algorithms becomes a limitation under the deformable object manipulation scenarios due to the large state space. Recent works have found trajectory optimization with the first-order gradient from a differentiable simulator to be an alternative solution for soft body manipulation (Huang et al., 2021; Li et al., 2022; Lin et al., 2022). However, the gradient-based optimizers are found to be sensitive to the initial conditions, such as contact points. It remains unclear how to leverage the efficiency of the gradient-based optimizer and overcome its sensitivity to initial conditions at the same time. In this work, we aim to learn dexterous manipulation of deformable objects using multi-fingered hands. To address the inherent challenges posed by the high dimensional state space, we propose DexDeform, a principled framework that abstracts dexterous manipulation skills from human demonstrations and refines the learned skills with differentiable physics. DexDeform consists of three components: Figure 1: We present a framework for learning dexterous manipulation of deformable objects, covering tasks with a single hand (**Folding** and **Wrap**, row 1-2), in-hand manipulation (**Flip**, row 3), and dual hands (**Bun**, **Rope**, **Dumpling**, row 4-6). Images in the rightmost column represent goals. (1) collecting a small number of human demonstrations (10 per task variant) with teleoperation for initializing the training data. (2) extracting abstractions of the dexterous action sequences from demonstrations with a skill model. This model decomposes the manipulation process and allows for planning for a novel goal with a learned skill dynamics predictor. (3) using differentiable physics to refine trajectories planned by the skill model on augmented goals, which adds new trajectories to further fine-tune the skill model. Hence, DexDeform is capable of avoiding local minima of the gradient-based optimizer by initializing trajectories with the abstractions of dexterous actions. 
At the same time, DexDeform enjoys the efficiency of the gradient-based optimizer to augment demonstrations for bootstrapping the learned skill model. To evaluate the effectiveness of DexDeform, we propose a suite of six challenging dexterous deformable object manipulation tasks with a differentiable simulator. Extensive experimental results suggest that DexDeform can successfully accomplish the proposed tasks and explore different goal shapes on a set of dexterous deformable object manipulation tasks. In summary, our work makes the following contributions: * We perform, to the best of our knowledge, the first investigation of learning-based dexterous manipulation of deformable objects. * We build a platform that integrates a low-cost teleoperation system with a soft-body simulation that is differentiable, allowing humans to provide demonstration data. * We propose DexDeform, a principled framework that abstracts dexterous manipulation skills from human demonstration, and refines the learned skills with differentiable physics. * Our approach outperforms the baselines and successfully accomplishes six challenging tasks such as **Flip**, learning complex soft-body manipulation skills from demonstrations. ## 2 Method Given a goal configuration of deformable shapes, our objective is to plan actions to perform dexterous deformable object manipulation using the Shadow Dexterous hand. We assume we know the full point cloud observation of the scene, which includes the multi-fingered hand(s) and the object(s). To tackle this problem, we propose DexDeform (Fig. 2), a framework that consists of three components: (1) a small set of human demonstrations for policy initialization (Sec. 2.1); (2) learning skill abstractions of dexterous actions from demonstrations (Sec. 2.2); (3) exploring novel goals with planning and differentiable physics (Sec. 2.3). Figure 2: **An overview of DexDeform** (Sec. 2). (1): We first collect human demonstrations using hand tracking teleoperation. (2): We then train a skill model from demonstrations, which consists of a skill sequence encoder, a skill dynamics predictor, and a skill action decoder. (3): To explore the high dimensional state space, we use the skill model to plan for novel goals, and apply a gradient-based optimizer to refine the actions planned by the skill model. Lastly, we store the successful trajectories as new demonstrations and repeat (2)-(3). ### Collection of human manipulation demonstration Dexterous manipulation with a high-degree-of-freedom robot hand poses challenges for policy learning, since the state dimension is high and dexterous tasks involve frequent contact making and breaking between the hand and the objects. Learning such a policy from scratch would be extremely time consuming (OpenAI et al., 2020). One way to overcome the exploration challenge is by providing human demonstration data. However, many prior works use a complex and expensive system such as a motion capture system (Rajeswaran et al., 2017; Gupta et al., 2016) or many cameras (Handa et al., 2020) for capturing human demonstration data. We built a low-cost ($100) and simple teleoperation system that allows a human operator to control the simulated hand in real time to perform dexterous manipulation tasks. Our system is built based on the Leap Motion Controller (Spiegelmock, 2013), which is an optical hand tracking device.
By constructing an inverse kinematics model based on the Shadow hand, our system re-targets the detected human finger positions into the joint positions of the simulated robot hand that is controlled via a position-based PD controller. More details on teleoperation setup can be found in Appendix B. ### Learning Abstractions of Dexterous Skills Humans execute abstractions of dexterous skills to interact with deformable objects instead of planning every finger muscle movement involved. In the same spirit, we would like to learn abstractions of actions present in the collected human demonstrations. Our skill model consists of three components, a skill encoder, a skill dynamics predictor, and a skill action decoder. The skill model uses dynamics predictor for planning, and action decoder to predict actions from skill embeddings. The skill model is built on the implicit scene representations of point clouds, which we will describe first. **Implicit Representation of the Scene.** We leverage the translational equivariance offered by the Convolutional Occupancy Network (Peng et al., 2020), or ConvONet, to build a continuous implicit representation of the scene. Concretely, let \(o_{t}\in\mathcal{O}\) describe the unordered point cloud observation at time \(t\), where \(o_{t}=\{x_{1},x_{2},...,x_{n}\}\) with \(x_{i}\in\mathbb{R}^{6}\) (the 3D position and 3D color label). The encoder of ConvONet \(\psi_{enc}:\mathcal{O}\rightarrow\mathbb{R}^{H\times W\times D}\) maps a point cloud to a set of 2D feature maps. Given a query point \(p\in\mathbb{R}^{3}\), we get its point feature \(\psi_{enc}(o_{t})|_{p}\) from the feature maps \(\psi_{enc}(o_{t})\) via bilinear interpolation. An occupancy decoder \(\psi_{dec}(p,\psi_{enc}(o_{t})|_{p}):\mathbb{R}^{3}\times\mathbb{R}^{D} \rightarrow\mathbb{R}^{3}\) is then used to map a query point \(p\) and its point feature \(\psi_{enc}(o_{t})|_{p}\) into the occupancy probabilities of being free space, hand, and the deformable object, based on one-hot encoding. Our ConvONet is trained with self-supervision in simulation. We use the 2D feature maps from the encoder as the translational-equivariant representation of the scene. **Latent encoding of the implicit scene representation.** As will be explained later, our choice of implicit representation of the scene can be naturally integrated with our skill model for planning for target deformable shapes. To extract a compact scene representation for dynamics modeling and planning in the latent space, we train a VAE to reconstruct the 2D feature maps from ConvONet encoder outputs. The VAE includes an encoder \(\phi_{enc}\) that encodes the scene representation into a latent vector \(s_{t}\) and a decoder \(\phi_{dec}\) that decodes the latent back into the scene representation space. Specifically, \(s_{t}=\phi_{enc}(\psi_{enc}(o_{t}))\), and \(\phi_{dec}(s_{t}):=\psi_{enc}(o_{t})\). **Skill Encoder.** Using our learned latent encoding of the scene, we encode the point cloud observation \(o_{t}\) at each timestep into \(s_{t}\). We then train a skill encoder \(f\) that maps a sequence of \(K\)-step observation-action pairs \((s_{t},a_{t},...,s_{t+K},a_{t+K})\) into a skill embedding space containing \(z_{t}\), i.e., \(z_{t}\sim f(s_{t},a_{t},...,s_{t+K},a_{t+K})\). We use \(z_{t}\) for decoding actions between timesteps \(t\) and \(t+K\). 
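To make the bilinear feature query described above concrete, the following is a minimal sketch, under our own assumptions about tensor shapes and the projection plane (the paper defers these details to ConvONet), of how a 3D query point could be mapped to a point feature from one encoder feature plane; it is illustrative only and not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def query_point_features(feature_plane, points, bounds=(-1.0, 1.0)):
    """Bilinearly interpolate a 2D feature plane at the planar locations of 3D query points.

    feature_plane: (B, D, H, W) feature maps from a ConvONet-style encoder.
    points:        (B, N, 3) query points in world coordinates.
    bounds:        assumed scene extent, used to normalize coordinates to [-1, 1].
    """
    lo, hi = bounds
    # project onto a ground plane (here: the first two coordinates) and normalize
    xy = points[..., :2]
    grid = 2.0 * (xy - lo) / (hi - lo) - 1.0          # (B, N, 2) in [-1, 1]
    grid = grid.unsqueeze(2)                          # (B, N, 1, 2) for grid_sample
    feats = F.grid_sample(feature_plane, grid,
                          mode="bilinear", align_corners=True)  # (B, D, N, 1)
    return feats.squeeze(-1).transpose(1, 2)          # (B, N, D) per-point features
```

The interpolated feature, together with the query point itself, is what the occupancy decoder \(\psi_{dec}\) consumes to predict the free-space/hand/object probabilities.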
**Skill Dynamics and Skill Decoder.** For each skill embedding, we follow SkiMo (Shi et al., 2022) to jointly learn to predict the resulting dynamics of applying the skill and decode the actions responsible for forming the skill. Concretely, we train a skill dynamics predictor \(\hat{s}_{t+K}=\mathcal{T}(z_{t},s_{t})\) that predicts the future latent scene encoding \(K\) steps away in imagination. We also train a skill action decoder \(\pi(a_{t}|z_{t},s_{t})\) that recovers the actions corresponding to the skill abstractions. We refer the readers to Appendix C for the training details and objective functions. **Long-horizon planning in the space of skill abstractions.** Our choice of implicit representation of the scene allows us to decode the latent encoding for computing the shape occupancy loss for planning. Given a target shape \(\mathbf{g}\) described by a point cloud and horizon \(H\), we hope to find the sequence of skills \(z_{1},z_{K},...,z_{H}\) such that the final predicted scene encoding \(\hat{s}_{H+K}\) is occupied by the points in the target shape under the object category. Let \(\mathcal{L}_{occ}(\mathbf{g},\hat{s}_{H+K})\) be the sum of the cross-entropy loss computed between each point \(x_{i}\) within the target shape \(\mathbf{g}\), described as a point cloud, and the predicted occupancy probabilities \(\psi_{dec}(x_{i},\phi_{dec}(\hat{s}_{H+K}))\) when \(x_{i}\) is queried in the decoded scene representation. We formulate our planning problem as a continuous optimization problem. \[\operatorname*{arg\,min}_{z_{1},z_{K},...,z_{H}}C(\mathbf{g},\mathbf{z})=\mathcal{L}_{occ}(\mathbf{g},\mathcal{T}(z_{H},\mathcal{T}(z_{H-K},...\mathcal{T}(z_{1},s_{1})))) \tag{1}\] Here, \(\mathbf{z}=z_{1},z_{K},...,z_{H}\) is the sequence of skills we are optimizing over, and we iteratively apply \(\mathcal{T}\) in a forward manner \(\lfloor H/K\rfloor\) times to predict the resulting scene encoding \(\hat{s}_{H+K}\). In practice, we optimize a batch of initial solutions \(\{z_{1},z_{K},...,z_{H}\}_{j=1}^{J}\) and choose the best one based on \(C(\mathbf{g},\mathbf{z})\). We refer the readers to Appendix C for more details on skill planning. ### Differentiable Physics Guided Exploration Given that the skill model could be limited to interactions captured in the current demonstration set, more interactions are needed for the skill model to generalize across novel goals. Two challenges exist: (1) Given the high degrees of freedom of soft bodies and human demonstrations, our shape distribution cannot be easily defined in closed-form expressions. How can we sample novel goals in the first place? (2) Suppose that a novel target shape is provided and is not closely captured by the demonstration set the skill model is trained on. How can we efficiently enable the skill model to achieve the novel shape, which would allow us to expand our demonstration set? We present two ideas for overcoming the two challenges. **Shape augmentation for novel goal generation.** To tackle the intractability of the distribution of deformable shapes and sample new shapes, we explore the space of deformable shapes based on the shapes covered by the current demonstrations. We employ two simple geometric transformations: translation on the xz-plane and rotation around the y-axis, which is similar to data augmentation practices in training neural networks for image classification. We randomly sample target shapes from the existing demonstrations and apply augmentations to generate new target shapes.
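Below is a rough sketch of one exploration step as we read it from this subsection: augment an existing goal shape by an xz-plane translation and a y-axis rotation, then optimize a sequence of skill latents by gradient descent on the occupancy loss of Eq. (1). The names `skill_dynamics`, `scene_decoder`, and `occupancy_decoder` stand in for the learned networks \(\mathcal{T}\), \(\phi_{dec}\), and \(\psi_{dec}\); the object class index, latent dimension, and optimizer settings are placeholder assumptions, and the subsequent refinement with differentiable physics is omitted.

```python
import math
import torch

def augment_goal(goal_points, max_shift=0.1):
    # rotate the goal point cloud about the y-axis, then translate it on the xz-plane
    theta = (2.0 * torch.rand(1).item() - 1.0) * math.pi
    c, s = math.cos(theta), math.sin(theta)
    rot_y = torch.tensor([[c, 0.0, s],
                          [0.0, 1.0, 0.0],
                          [-s, 0.0, c]], dtype=goal_points.dtype)
    dx, dz = ((2.0 * torch.rand(2) - 1.0) * max_shift).tolist()
    shift = torch.tensor([dx, 0.0, dz], dtype=goal_points.dtype)
    return goal_points @ rot_y.T + shift

def plan_skills(goal_points, s1, skill_dynamics, scene_decoder, occupancy_decoder,
                n_skills=4, latent_dim=64, steps=100, lr=1e-2, object_class=2):
    # gradient-based search over skill latents, minimizing the occupancy
    # cross-entropy of the goal points in the predicted final scene (Eq. 1)
    z = (0.1 * torch.randn(n_skills, latent_dim)).requires_grad_()
    opt = torch.optim.Adam([z], lr=lr)
    target = torch.full((goal_points.shape[0],), object_class, dtype=torch.long)
    for _ in range(steps):
        s = s1
        for i in range(n_skills):          # roll out the skill dynamics in latent space
            s = skill_dynamics(z[i], s)
        feats = scene_decoder(s)           # latent scene encoding -> 2D feature maps
        logits = occupancy_decoder(goal_points, feats)  # (N, 3) per-point class logits
        loss = torch.nn.functional.cross_entropy(logits, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()
```

In practice, as stated above, a batch of such latent initializations would be optimized and the best one kept according to \(C(\mathbf{g},\mathbf{z})\) before the decoded actions are handed to the differentiable-physics optimizer for refinement.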
**Differentiable-physics based trajectory refinement.** We use trajectories planned by the skill model as optimization initialization to overcome the local optima caused by the complex contacts, and use the gradient-based optimizer to refine the planned trajectories within tens of iterations. **The DexDeform algorithm.** Putting all ingredients together, we present the DexDeform algorithm (Algo. 2) in Appendix D. During training, our framework learns implicit scene representations and the skill model. During exploration, our framework leverages a differentiable physics optimizer to expand the demonstrations to new goals. ## 3 Experiments In this section, we conduct experiments aiming to answer the following questions: * Q1: How does DexDeform compare against trajectory optimization, imitation learning, and RL approaches? * Q2: How much improvement does differentiable physics guided exploration bring for solving dexterous deformable manipulation tasks? * Q3: What are the benefits of the skill model? * Q4: Are skill dynamics predictions consistent with the resulting states from applying the decoded actions? * Q5: What does the latent space of skill embedding look like? ### Environmental Setup **Tasks and Environments.** Inspired by human dexterous deformable object manipulation tasks, we design six tasks (Fig. 1): three single-hand tasks (**Folding, Wrap, Flip**), including in-hand manipulation, and three dual-hand tasks (**Rope**, **Dumpling**, **Bun**). Detailed descriptions of our environments and tasks can be found in Appendix A. Each Shadow hand has 28 degrees of freedom with a movable base. **Human Demonstration Collection.** Using our teleoperation setup described in Section 2.1, We collected 10 demonstrations for each task variant. There exist 4 variants for **Folding**, corresponding to left, right, front, and back folding directions. All other tasks have 1 task variant. The total demonstration amounts to approximately \(60,000\) environment steps, or 2 hours of human interactions with the environment. **Evaluation metric.** We report the normalized improvement (i.e., decrease) in Earth Mover distance (EMD) computed as \(d(t)=\frac{d_{0}-d_{t}}{d_{0}}\), where \(d_{0}\) and \(d_{t}\) are the initial and current values of EMD. Hence, a normalized improvement score of 0 represents a policy that results in no decrease in the EMD, while a score of 1 indicates that the policy is able to result in a shape that matches perfectly with the goal. We threshold the minimum of the score to be 0, as negative distances could occur if the policy results in shapes further away from the goal shape than the initial one. We approximate the EMD using Sinkhorn Divergence (Sojourne et al., 2019) between the source and target particles to quantify the fine-grained difference between a state and a goal. **Baselines.** We consider four categories of baselines: * **Model-free Reinforcement Learning.** We compare against Proximal Policy Optimization (PPO) (Schulman et al., 2017), an model-free RL method. The RL agent takes point cloud as input. * **Behavior Cloning.** We compare with a baseline that directly learns a goal-conditioned policy with Behavior Cloning (BC) and hindsight relabeling. The agent is trained with the same human demonstration set and takes point cloud as input. * **Model-free Reinforcement Learning with Data Augmentation.** We compare with demonstration augmented policy gradient (DAPG), a method that combines demonstrations with an RL agent (PPO). 
We train the agent using the same demonstration set with point clouds as input observations. * **Trajectory Optimization.** We compare against trajectory optimization (TrajOpt), which uses first-order gradients from a differentiable simulator to solve for an open-loop action sequence (Kelley, 1960). This method takes the full state of the simulation as the observation at each timestep. ### Object Manipulation Results Given a goal configuration of deformable shapes, our objective is to perform dexterous deformable object manipulation using the Shadow Dexterous hand. We created five goal configurations for each task to evaluate different approaches. We report the mean and standard deviation of the normalized improvement (Q1). We show the quantitative results in Table 1, and the qualitative results in Figure 3. We find that DexDeform is capable of completing the challenging long-horizon dexterous manipulation tasks, and significantly outperforms the compared baselines. On the challenging in-hand manipulation task **Flip**, we find that all baseline approaches fail to complete the task, while DexDeform is able to swiftly flip the wrapper into the air with fingertips and deform the dough. We hypothesize that the success of DexDeform comes from the ability to leverage the skill model for decomposing the high-dimensional state space, which allows for efficient planning. On the single-hand task **Folding**, we find that the BC agent would fold the dough in the wrong direction, while such behavior is not found for DexDeform. We hypothesize that this is because DexDeform is able to leverage the translational equivariance offered by the implicit representation during planning. The DAPG agent is able to squeeze the dough and move it towards a location that best matches the general shape, but is unable to dexterously fold the dough over. The PPO and TrajOpt agents are unable to squeeze or create meaningful shapes, and would only slightly move the initial dough towards the target shape. On the dual-hand task **Rope**, we find that DexDeform is able to match the shape with fine-grained details. The BC agent is able to generally match the shape, while the DAPG, PPO, and TrajOpt agents fail to create meaningful shapes due to the high-dimensional space created by two Shadow hands and two deformable objects. Due to the sample complexity of RL approaches, the speed of soft-body simulation limits their convergence. We believe that with a larger number of samples the performance of the RL agents should improve, which constitutes an interesting future direction. ### Ablation Analysis of DexDeform To quantify the improvement brought by differentiable physics (Q2), we ablatively compare DexDeform against a baseline, named Skill-Only, that does not use differentiable physics for exploration. In a fashion similar to Table 1, we report the normalized improvement in Table 2. We find that the Skill-Only agent, trained entirely on the initial human demonstrations, is unable to generalize across evaluation goals that are not covered by the initial dataset. In contrast, DexDeform's gradient-based trajectory refinement leveraged interactions with the environment and exploited gradient information to achieve fine-grained control over the soft-body shapes. To evaluate the benefits of the skill model (Q3), we perform an ablation that compares DexDeform with a baseline (NN-TrajOpt) that replaces the skill model with a heuristic.
Given a goal shape, NN-TrajOpt uses EMD to find the nearest neighbor of that shape from the initial human demonstration data. NN-TrajOpt then uses a gradient-based optimizer to refine the corresponding trajectory of this nearest neighbor. We report the qualitative comparison in Figure 4. We illustrate that pure EMD might not be a good measure for soft bodies with large topological variations Feydy (2020). In contrast, DexDeform leverages skill embedding and is able to compositionally represent the deformation process, allowing for finding the suitable policy. \begin{table} \begin{tabular}{l|c|c|c} \hline \hline **Env** & Folding & Rope & Bun \\ \hline TrajOpt & \(0.032\pm 0.061\) & \(0.079\pm 0.026\) & \(0.000\pm 0.000\) \\ PPO & \(0.361\pm 0.173\) & \(0.460\pm 0.257\) & \(0.069\pm 0.117\) \\ DAPG & \(0.538\pm 0.308\) & \(0.246\pm 0.626\) & \(0.460\pm 0.079\) \\ BC & \(0.685\pm 0.388\) & \(0.557\pm 0.377\) & \(0.379\pm 0.258\) \\ DexDeform & \(\mathbf{0.970\pm 0.021}\) & \(\mathbf{0.972\pm 0.010}\) & \(\mathbf{0.874\pm 0.078}\) \\ \hline **Env** & Dumpling & Wrap & Flip \\ \hline TrajOpt & \(0.000\pm 0.000\) & \(0.000\pm 0.000\) & \(0.195\pm 0.275\) \\ PPO & \(0.000\pm 0.000\) & \(0.000\pm 0.000\) & \(0.223\pm 0.328\) \\ DAPG & \(0.000\pm 0.000\) & \(0.000\pm 0.000\) & \(0.000\pm 0.000\) \\ BC & \(0.506\pm 0.314\) & \(0.134\pm 0.595\) & \(0.253\pm 0.359\) \\ DexDeform & \(\mathbf{0.888\pm 0.055}\) & \(\mathbf{0.845\pm 0.050}\) & \(\mathbf{0.842\pm 0.057}\) \\ \hline \hline \end{tabular} \end{table} Table 1: The averaged normalized improvements and the standard deviations of each method. Figure 3: Qualitative results of each method on four environments: **Folding, Rope, Dumpling, Flip** (from top to bottom). The robot hand is not rendered for the first three environments to avoid occlusion of the final shape. ### Skill Model Visualization To see whether skill dynamics predictions are consistent with action rollouts (Q4), we visualize the latent encoding of the scene \(\hat{s}_{t}\), predicted by the skill dynamics model given a skill embedding, as well as the ground truth state \(S_{t}\) obtained from actions predicted by the skill decoder. Ideally, the two visualizations should show consistency. As shown in Figure 7, we observe a high level of consistency between the skill dynamics and the skill decoder. To find out what the latent space of skill embedding looks like (Q5), we visualize the skill embeddings using t-distributed stochastic neighbor embedding (t-SNE) (van der Maaten and Hinton, 2008) on **Folding**. We label each embedding based on the location of the final shape achieved by the corresponding skill sequence. We partition the ground plane into five parts: left, right, front, back, and center. As shown in Figure 8, the skill embeddings are correlated with the label categories. Details and visualizations can be found in Appendix E. ## 4 Related Work **Dexterous Manipulation.** Dexterous manipulation has been a long-standing challenge in robotics, with the early works dating back to Salisbury and Craig (1982); Mason et al. (1989). Different from parallel-jaw grasping, dexterous manipulation typically continuously controls force to the object through the fingertips of a robotic hand (Dafle et al., 2014). There have been many prior works on using trajectory optimization (Mordatch et al., 2012; Bai and Liu, 2014; Sundaralingam and Hermans, 2019) or kinodynamic planning (Rus, 1999) to solve for the controllers. 
However, to make the optimization or planning tractable, prior works usually make assumptions on known dynamics properties and simple geometries. Another line of works uses reinforcement learning to train the controller. Some model-based RL works learned a dynamics model from the rollout data (Kumar et al., 2016; Nagabandi et al., 2020), and used online optimal control to rotate a pen or Baoding balls on a Shadow hand. OpenAI et al. (2020, 2019) uses model-free RL to learn a controller to reorient a cube and transfer the controller to the real world. To speed up the policy learning when using model-free RL, Chen et al. (2022) uses a teacher-student framework to learn a controller that can reorient thousands of geometrically different objects with both the hand facing upward and downward. (Radosavovic et al., 2020; Zhu et al., 2019; Rajeswaran et al., 2017; Jeong et al., 2020; Gupta et al., 2016; Qin et al., 2021) bootstraps the RL policy learning from demonstration data for reorienting a pen, opening a door, assembling LEGO blocks, etc. Handa et al. (2020); Arunachalam et al. (2022); Sivakumar et al. (2022) developed a teleoperation system for dexterous manipulation by tracking hand pose and re-targeting it to a robot hand. Unlike previous works with rigid bodies, our work performs the first investigation on the learning-based dexterous manipulation of soft bodies that carries infinite degrees of freedom, and provides a differentiable simulation platform for teleoperation. \begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline **Env** & Folding & Rope & Bun & Dumpling \\ \hline Skill-Only & \(0.908\pm 0.058\) & \(0.914\pm 0.023\) & \(0.820\pm 0.008\) & \(0.725\pm 0.244\) \\ \hline DexDeform & \(\mathbf{0.970\pm 0.021}\) & \(\mathbf{0.972\pm 0.010}\) & \(\mathbf{0.874\pm 0.078}\) & \(\mathbf{0.888\pm 0.055}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Ablative comparison with Skill-Only that does not use the gradient-based optimizer. We show the averaged normalized improvements and the standard deviation of each method. Figure 4: Ablative comparison with NN-TrajOpt that replaces skill model with a heuristic. **Learning Skills from Demonstrations.** Our skill model shares the same spirits with hierarchical imitation learning (Fang et al., 2019; Shi et al., 2022; Gupta et al., 2019; Lynch et al., 2020) and motion synthesis (Peng et al., 2018, 2022), which view skill learning as sequential modeling tasks (Janner et al., 2021; Chen et al., 2021) in a low-dimensional space. Following Shi et al. (2022), we learn a latent dynamic model to compose skills with model-based planning (Hafner et al., 2019, 2020) in the latent space. We employ these ideas for deformable object manipulation, where we integrate skill abstraction and latent dynamics into our pipeline. Our additional innovation is an exploration phase guided by the gradient-based trajectory optimizer, learning dexterous soft-body manipulation skills with a small number of demonstrations. **Deformable Object Manipulation.** Deformable object manipulation have attracted great attention because of its wide range of applications in the real world. 
Previous works have explored manipulating different materials from objects humans interact with on a daily basis, including cloth (Matin-Shepard et al., 2010; Hoque et al., 2021; Lin et al., 2021; Huang et al., 2022; Weng et al., 2021; Liang et al., 2019; Wu et al., 2020), rope (Sundaresan et al., 2020; Mitrano et al., 2021; Yan et al., 2020; Wu et al., 2020), and fluid materials (Ma et al., 2018; Holl et al., 2020; Li et al., 2022; Schenck and Fox, 2017; Gauthman et al., 2022). Our work is built upon Huang et al. (2020), which uses the MPM (Jiang et al., 2016) to simulate elastoplastic objects (Huang et al., 2020; Li et al., 2019; Shi et al., 2022; Figueroa et al., 2016; Matl and Bajcsy, 2021; Heiden et al., 2021), and is able to represent materials such as dough and clay. Different from previous works, we investigate how to interact with deformable objects using multi-fingered hands, which carry versatility across different scenarios. **Differentiable physics.** The development of differentiable simulator (Bern et al., 2019; Geilinger et al., 2020; Liang et al., 2019; Hu et al., 2019; Huang et al., 2020; Qiao et al., 2021; Du et al., 2021; Heiden et al., 2019; Geilinger et al., 2020; Werling et al., 2021; Howell et al., 2022) enables fast planning (Huang et al., 2020), demonstration generation (Lin et al., 2022) and adaptation (Murthy et al., 2020). Systems have been developed to generate high-performance simulation code for the support of automatic differentiation (Hu et al., 2019; Macklin, 2022; Freeman et al., 2021). However, many works have discovered that trajectory optimizers with first-order gradients are sensitive to local optima (Li et al., 2022; Suh et al., 2022; Xu et al., 2022; Antonova et al., 2022). Many have found that the gradient-based optimizer can benefit from the integration of sampling-based methods, which enables global search to escape from local optima. The skill model employed by our method can be viewed as a form of planning. Different from previous methods, the skill model can decompose the high dimensional policy space, which enables efficient planning in the latent skill space. ## 5 Conclusion In this work, we perform, to the best of our knowledge, the first investigation of the learning-based dexterous manipulation of deformable objects. We build a platform that integrates low-cost teleoperation with a soft-body simulation that is differentiable. We propose DexDeform, a principled framework that abstracts dexterous manipulation skills from human demonstrations, and refines the learned skills with differentiable physics. We find that DexDeform outperforms the baselines and accomplishes all six challenging tasks. There are a few interesting directions for future work. With our simulation platform, it would be interesting to leverage the vast amount of in-the-wild videos (e.g., making bread, stuffing dumpling, building sand castle) for learning dexterous deformable manipulation policies in the future. It is also intriguing to speed up the soft-body simulation for large-scale learning with RL. Our work assumes full point cloud observation. Although our choice of implicit representation has been shown to transfer from the simulation into real-world robotic deployment by Shen et al. (2022), we would like to work with real-world observations in the future. **Acknowledgement.** This project was supported by the DARPA MCS program, MIT-IBM Watson AI Lab, and gift funding from MERL, Cisco, and Amazon.
2305.14476
A Transient Overcooling in the Early Universe? Clues from Globular Clusters Formation
The mere existence of multiple stellar generations in Milky Way globular clusters indicates that each generation was unable to stop star formation, that instead persisted unimpeded for several million years. This evidence argues for an extended stage of star formation within a forming globular cluster, during which stellar feedback was substantially ineffective and the nascent globular cluster was able to accrete processed gas from its surrounding, and efficiently convert it into successive stellar generations. It has been argued that such delayed feedback results from core collapse in most massive stars failing to trigger an energetic supernova explosion, but rather leading directly to black hole formation. Thus, globular clusters offer a concrete phenomenological example for the lack of feedback in young starbursts, an option that has been widely advocated to account for the unexpected abundance of UV-luminous galaxies at z = 9-16, as revealed by JWST observations. The paper is meant to attract attention to this opportunity for a synergic cooperation of globular cluster and high redshift research.
Alvio Renzini
2023-05-23T19:08:06Z
http://arxiv.org/abs/2305.14476v2
# A Transient Overcooling in the Early Universe? ###### Abstract The mere existence of multiple stellar generations in Milky Way globular clusters indicates that each generation was unable to stop star formation, that instead persisted unimpeded for several million years. This evidence argues for an extended stage of star formation within a forming globular cluster, during which stellar feedback was substantially ineffective and the nascent globular cluster was able to accrete processed gas from its surrounding, and efficiently convert it into successive stellar generations. It has been argued that such delayed feedback results from core collapse in most massive stars failing to trigger an energetic supernova explosion, but rather leading directly to black hole formation. Thus, globular clusters offer a concrete phenomenological example for the lack of feedback in young starbursts, an option that has been widely advocated to account for the unexpected abundance of UV-luminous galaxies at \(z=9-16\), as revealed by JWST observations. The paper is meant to attract attention to this opportunity for a synergic cooperation of globular cluster and high redshift research. keywords: Galaxy: formation - globular clusters: general - galaxies: evolution - galaxies: formation ## 1 Introduction In its first Cycle, the _James Webb Space Telescope_ (JWST) has revealed an unexpected abundance of UV-bright galaxies at \(z=9-16\) (e.g., Finkelstein et al. 2023; Donnan et al. 2023; McLeod et al. 2023; Harikane et al. 2023a,b and many others). Their finding was _unexpected_ within the framework of existing theoretical models of galaxy formation and evolution, which had been fine-tuned to reproduce observables in the lower-redshift Universe (see, e.g., Figure 14 in Finkelstein et al. 2023). As a consequence, a number of possible alternatives to current assumptions in cosmological simulations are being explored to cope with this discrepancy (e.g., Boylan-Kolchin 2023; Wilkins et al. 2023). For example, Yung, et al. (2023) discuss several options to ease the factor \(\sim\)30 underprediction of their semianalytical model at \(z\sim\)13 relative to the observed UV luminosity function. These include a top-heavy IMF, which would automatically boost the UV luminosity function, while they exclude other possible limitations of their model, such as too inefficient baryon cooling within halos or inefficient gas-to-stars conversion. They conclude that too _efficient stellar feedback_ is the main ingredient that could be responsible for their model underpredicting the number of luminous galaxies at these redshifts. Indeed, the abundance of UV-luminous galaxies appears to be consistent with the limiting case of (feedback unimpeded) \(\sim\)100 per cent efficiency in converting baryons into stars in dark matter halos at these redshifts (Finkelstein et al. 2023), as estimated by Behroozi & Silk (2018). As an alternative, Shen et al. (2023) appeal to the intrinsic UV-luminosity variability of dwarf galaxies at high redshifts, such objects likely being dominated by a bursty mode of star formation. Thanks to a kind of intrinsic Eddington bias, the effect of variability is to boost the top end of the luminosity function, and consistency with the observed counts is achieved for a variability described by a Gaussian luminosity spread with \(\sigma_{\rm UV}\simeq 2.5\) mag. A top-heavy IMF is also favoured by Harikane et al. (2023a), who, for lack of evidence, exclude AGN boosting of the UV luminosity of \(z\sim 9-16\) galaxies.
A lack of suppression of star formation by the UV background in the pre-ionization era is also mentioned as a possible contributor. A specific case of feedback suppression at high redshifts is advocated by Dekel et al. (2023). Based on the Starburst99 models (Leitherer et al. 1999), Dekel et al. argue that at low metallicities, such as those prevailing at high redshifts, stellar winds from massive stars convey little energy and momentum; hence, prior to supernova (SN) explosions, stellar feedback is very inefficient, boosting star formation and the UV luminosity function. This low-feedback phase lasts up to \(\sim\)3 Myr after a burst of star formation, coming to an end as the most massive stars eventually undergo SN explosions and efficient feedback begins. This assumes that all massive stars undergoing core collapse also end with a SN display, hence ejecting \(\sim\)10\({}^{51}\) erg in kinetic energy. While all these options are being currently considered, this paper expands on the possibility of an even stronger reduction of stellar feedback, achieved by substantially extending the no-supernova period past a burst of star formation. The justification for such a scenario comes from the multiple stellar generation phenomenon in Galactic globular clusters (GCs), which suggests a substantially more extended period of inefficient feedback, up to \(\sim\)10 Myr, as discussed in Renzini, Marino & Milone (2022, hereafter Paper I). Section 2 succinctly recaps the evidence on GC formation supporting the concept of an extended feedback-free time at their formation. Section 3 then expands on the consequences of such delayed feedback for star formation in the early Universe, and finally, Section 4 returns to the key issues and to the plausibility of the whole scenario, including some caveats. ## 2 Globular cluster formation Virtually all (Galactic) GCs host multiple stellar generations, with the first generation (1G) reflecting the chemical composition of the ISM prior to the 1G formation, whereas second generation (2G) stars are depleted, to various degrees, in carbon and oxygen and enriched in helium, nitrogen and sodium (e.g., Gratton, Carretta & Bragaglia, 2012; Milone et al., 2017). Thus, the material having formed 2G stars had to be exposed to proton-capture processes at high temperatures in stars of the first generation, though their nature has been a matter of debate ever since the multiple generation phenomenon was discovered. In Paper I it is argued that the most likely candidates are massive interacting binaries, as originally suggested by de Mink et al. (2009). The majority of massive stars are indeed members of interacting binaries (Sana, de Mink & de Koter, 2012) and most of the nuclearly-processed material is shed with low kinetic energy as a result of common-envelope events (de Mink et al., 2009). It is worth emphasising that the fraction of 2G stars increases from \(\sim\)50 per cent in lower mass GCs to over 80 per cent in the most massive ones (Milone et al., 2017). In all evidence, **the formation of the first generation did not stop star formation, which actually had to continue with an even increasing rate and efficiency.** In Paper I, this absence of feedback was ascribed to a temporary lack of supernova explosions, as also required by most 2G stars having the same iron abundance as 1G ones, indicating no or very small contamination by supernova ejecta (see also Milone et al., 2017).
This also demonstrates that 2G stars had to form _before_ SN explosions began to pollute the ISM, hence all star formation had to be confined within a few (up to \(\sim\)10) Myr. As discussed in Paper I, this can be due to stars more massive than \(\sim\)20 \(M_{\odot}\) failing to produce a SN display at their core collapse, but rather _silently_ sinking into black holes (see also Krause et al., 2013). Thus, following a burst of star formation there would be no supernova events for the first \(\sim\)10 Myr, while stars more massive than \(\sim\)20 \(M_{\odot}\) complete their evolutionary cycle and pollute the ISM with proton-capture products. Then supernovae begin and continue for another \(\sim\) 25 Myr, while stars from \(\sim\) 20 down to \(\sim\) 8 \(M_{\odot}\) complete their evolution. Strong feedback from these supernovae will then bring to an end star formation within the young GC. Finally, no more core-collapse supernovae occur, from \(\sim\)35 Myr on after the burst, when stars less massive than \(\sim\)8 \(M_{\odot}\) end their evolution as white dwarfs. In most GCs there appears to be a small 1G-2G discontinuity in chemical composition, suggesting a brief pause in star formation, then followed by a number of successive bursts, each generating a 2G sub-population. Such number increases with cluster mass (Milone et al., 2017) and becomes as high as 15 in \(\omega\) Cen (Bellini et al., 2017), the most massive Galactic GC. Clearly, even successive (2G) bursts were unable to stop star formation, before supernovae eventually succeeded. In all evidence, feedback was ineffective also during most of the formation of the second generations. During this no-supernova phase, may stellar winds still provide sufficient feedback to contrast this scenario? At low metallicity winds carry little energy, as argued by Dekel et al. (2023), but GCs exist with near-solar metallicity and they still exhibit the multiple generation phenomenon (Kader et al., 2022). However, for the high ISM densities at GC formation, energy dissipation can be so high to neutralize most of the feedback from stellar winds, while radiative feedback or UV background may have only minor effects (Elmegreen, 2017; Dekel et al., 2023). In any event, the empirical fact remains, that, even in near-solar metallicity GCs, the first generation did not prevent the formation of second generation stars. The dominance of 2G stars demands that the 1G population having processed the material to form 2G stars had a substantially higher mass than that of the 1G still bound to the clusters. This is known as the _mass budget problem_(e.g., Renzini et al., 2015), whereby the first generation returns only \(\sim\)10 per cent of its mass with the required composition to form 2G stars. Hence, 1G stars still bound to the cluster today fall short by at least a factor of \(\sim\)10 in producing enough material to build the 2G stars. This mismatch would be even worse if star formation was restricted to the 3 Myr period prior to the first core-collapse event, as only the most massive stars would have had time to contribute, hence the necessity to extend longer the no-supernova phase, indicatively to \(\sim\)10 Myr as justified in Paper I. 
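For orientation, the size of this shortfall follows from simple bookkeeping; the relation below is only an illustrative order-of-magnitude estimate using the fractions already quoted in the text, not a calculation taken from Paper I. Writing \(f_{\rm ret}\simeq 0.1\) for the fraction of the initial 1G mass returned as suitably processed gas, \(f_{\rm 2G}\simeq 0.5-0.8\) for the present-day 2G fraction of the bound cluster mass \(M_{\rm GC}\), and assuming near-unit conversion of the accreted gas into 2G stars, one has \[M_{\rm 1G}^{\rm init}\simeq\frac{M_{\rm 2G}}{f_{\rm ret}}\simeq\frac{f_{\rm 2G}}{f_{\rm ret}}\,M_{\rm GC},\qquad\frac{M_{\rm 1G}^{\rm init}}{M_{\rm 1G}^{\rm bound}}\simeq\frac{f_{\rm 2G}}{f_{\rm ret}\,(1-f_{\rm 2G})}\simeq 10-40,\] i.e., the 1G population that processed the material had to be several to tens of times more massive than the 1G stars still bound to the cluster today.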
As far as the mass of the 1G contributors is concerned, in Paper I it is argued that, within the (dwarf) galaxy hosting a nascent GC (e.g., as seen at high redshifts, Vanzella et al., 2019, 2023), this extended 1G feeder population inhabited a wide region around the forming GC, that was also actively star forming and feeding the central cluster with material to form the second generations. During this _overcooling_ phase, the young GC was actually the centre of a converging accretion flow, lasting some 10 Myr, before supernovae finally brought it to a halt. This extended feedback-free time is critical to ensure that a sufficient number of 1G donor stars, down to \(\sim\)20 \(M_{\odot}\), had time to evolve and shed enough processed materials to build the 2G stars in GCs. Thus, the converging material had been processed inside stars of a first generation that collectively was several times more massive then the final mass of the bound GC (see also the GC formation models of Elmegreen, 2017). It has been suggested that the young cluster R136, and its surrounding 30 Dor star-forming complex in LMC, may represent a local analog for GCs forming in high redshift low-mass galaxies (Schneider et al. (2018, Paper I). It is worth emphasizing that the dominance of 2G stars requires that their formation had to take place with an extreme efficiency in gas to stars conversion, close to 100 per cent, which was actually promoted by the lack of supernova feedback (Paper I). Clearly, if this is the way GCs formed, then the temporary lack of supernova explosions, hence with the feedback delay promoting high star formation efficiency, ought to have important consequences for star formation in general, and possibly so for star formation in extremely high redshift galaxies. In summary, the hypothesis of most massive stars (\(\gtrsim\)20 \(M_{\odot}\)) failing to result in a SN explosion has five decisive advantages for GC formation: 1) it promotes and extended, unimpeded formation of multiple stellar generations, 2) It boosts star formation efficiency, up to \(\sim\)100 per cent, with almost full conversion of gas into stars, 3) it avoids contaminating 2G stars with heavy-element SN products, 4) it allows an extended range of stellar masses (i.e., above \(\sim\)20 \(M_{\odot}\)) to provide processed material for the formation of 2G stars, and 5) compared to the assumption of no delay in SN feedback, the extended period of star formation (from \(\sim\) 3 to \(\sim\)10 Myr) allows a \(\sim\) 30 times bigger volume around the nascent GC to contribute material for the 2G formation (Paper I). As such, it appears to be promising to explore the consequences of this hypothesis for star formation in the early Universe. ## 3 Overcooling in the early Universe? White & Rees (1978) early noticed that cooling in the early Universe could be so effective that most baryons in dark matter halos would quickly turn into stars, an effect dubbed _overcooling_. Clearly, global overcooling did not happen, because even by the present time not much more than 10 per cent of the cosmic baryons now reside in stars (Fukugita, Hogan & Peebles, 1998). Yet, if GCs formed as sketched above, with an early overcooling phase promoting the accumulation of multiple stellar generations in a short time interval, then an early overcooling as a generic properties of star formation may help accounting for the excess of UV-bright galaxies at \(z\hbox to 0.0pt{$<$}{\lower 4.3pt\hbox{$\sim$}}\)9. 
Indeed, it offers an effective reduction of stellar feedback, invoked as the single simplest way of boosting star formation at very high redshifts, as from JWST observations (Finkelstein et al., 2023; Yung, et al., 2023; Harikane et al., 2023; Dekel et al., 2023). Indeed, compared to Dekel et al. (2023), the proposed scenario extends from \(\sim\)3 to \(\sim\)10 Myr the "feedback free" time past a burst of star formation. Besides this indirect role on the high-redshift Universe, where GC provide a hint favoring delayed feedback and overcooling, young GCs and their immediate environment may also play a direct role on global star formation at high redshift. High-redshift globulars and their precursors, their possible contribution to re-ionization and observational detectability, have been considered for some time (see e.g., Carlberg, 2002; Schraerer & Charbonnel, 2011; Katz & Ricotti, 2013; Trenti, Padoan & Jimenez, 2015; Renzini, 2017; Boylan-Kolchin, 2018; Pozzetti, Maraston & Renzini, 2019). Thus, nascent GCs, with their 1G feeding surrounding, may contribute significantly to the stellar mass and luminosity in the very early Universe. Given the old ages of most Galactic GCs, they had to form beyond \(z\sim\) 3, and the metal poor ones possibly well beyond it. Given the mass of GCs in the local Universe, young GCs, along with their \(\sim\)10 times more massive star-forming environment, may have dominated over the whole stellar mass if formed beyond \(z\sim\) 5 (Renzini, 2017). Also, given the extreme densities of typical GCs today, corresponding to some \(\sim\) 10\({}^{7}\) atoms cm\({}^{-3}\), the similarly high gas densities of forming GCs at very high redshift may have contributed significantly to the total emission measure in emission lines and nebular continuum. The object GN-z11 at \(z=10.6\)(Bunker et al., 2023) with density higher than 10\({}^{6}\) cm\({}^{-3}\) has been proposed as (hosting) a possible GC in formation (Senchyna et al., 2023; Belokurov & Kravtsov, 2023), though the high density may rather refer to the broad line region of an AGN (Maiolino et al., 2023). Worth noting is that its high N/O ratio is just what one expects for the ISM of a globular cluster while forming its 2G stars, which indeed are strongly enhanced in nitrogen and depleted in oxygen. ## 4 Discussion and Conclusions It has been argued that GCs in the local Universe, with their ubiquitous multiple stellar generations, offer strong evidence for a lack of feedback during their formation in the early Universe. Stars more massive than \(\sim\) 20 \(M_{\odot}\) failing to produce an energetic supernova would accomplish such effect, and delay feedback by some 10 Myr. If so, such delayed feedback and corresponding transient overcooling phase, would result in a major reduction in the stellar feedback, as now widely advocated as an obvious way of accounting for the excess of UV-bright galaxies at \(z=9-16\)(Finkelstein et al., 2023; Dekel et al., 2023; Yung, et al., 2023). However, a few caveats are in order. For example, could such delayed feedback result in an overproduction of stars in cosmological simulations, not only in the early Universe but also all along cosmic times? It would certainly do so, but this could be compensated by suitably increasing the feedback efficiency past the \(\sim\) 10 Myr delay. In any event, the overall star formation history may change little, but star formation would be more _bursty_ on short timescales, and simulated galaxies may become more clumpy. 
It would be instructive to see how simulated galaxies would react to such a different recipe for feedback. As already warned in Paper I, restricting core-collapse supernovae to below \(\sim\)20 \(M_{\odot}\) would have the collateral effect of reducing theoretical metal yields by roughly a factor of 2. Empirical yields, of course, would remain the same. Still, how physically based is the assumption that the most massive stars fail to produce energetic SN events? This is actually a widely entertained possibility (e.g., Krause et al., 2013; Sukhbold et al., 2016; Adams et al., 2017; Sander et al., 2019; Eldridge & Stanway, 2022 and references therein). From the theoretical point of view, the opposite problem has, over the years, affected attempts to model the outcomes of core-collapse events, namely the difficulty of producing explosions, especially in more massive stars (Sukhbold et al., 2016). Ultimately, how well established is the empirical scenario of GC formation proposed in Paper I? Like most previous attempts at describing how GCs may have formed, this is a phenomenological scenario, not a theoretical model. Yet, it is based on the enormous progress in the study of multiple stellar generations in GCs achieved mainly thanks to HST imaging and VLT spectroscopy. As such, it is built to comply, as well as possible, with all the resulting observational constraints. In the end, as far as feedback in starbursts is concerned, the mere existence of multiple stellar generations in GCs is telltale evidence of inefficient feedback. Other details of the formation process may be irrelevant as far as the consequences for star formation at high redshifts are concerned. Still, the empirical evidence of high, close to 100 per cent efficiency in gas-to-star conversion refers to the scale of forming globular clusters, whereas the evidence for the \(z=9-16\) Universe requires it at the scale of full young galaxies. A supernova avoidance period extended to \(\sim\) 10 Myr would boost the star formation efficiency on all scales, but it remains to be seen whether this can reach close to 100 per cent also on galactic scales at those redshifts. Nevertheless, supernova avoidance and the ensuing overcooling favour the formation of high-density clumps, with further enhanced efficiency. The mentioned case of GN-z11, the best known galaxy at these redshifts, exhibits a very high gas density, to the point that the line emission of the full galaxy is dominated by what has been proposed to be a globular cluster in formation. ## Acknowledgments I wish to thank Mauro Giavalisco for constructive comments and for his encouragement to write this paper. I wish also to thank the anonymous reviewer for their questions that helped to improve the manuscript. ## Data Availability No new data were generated or analysed in support of this research.
2306.02744
Towards Better Explanations for Object Detection
Recent advances in Artificial Intelligence (AI) technology have promoted their use in almost every field. The growing complexity of deep neural networks (DNNs) makes it increasingly difficult and important to explain the inner workings and decisions of the network. However, most current techniques for explaining DNNs focus mainly on interpreting classification tasks. This paper proposes a method to explain the decision for any object detection model called D-CLOSE. To closely track the model's behavior, we used multiple levels of segmentation on the image and a process to combine them. We performed tests on the MS-COCO dataset with the YOLOX model, which shows that our method outperforms D-RISE and can give a better quality and less noise explanation.
Van Binh Truong, Truong Thanh Hung Nguyen, Vo Thanh Khang Nguyen, Quoc Khanh Nguyen, Quoc Hung Cao
2023-06-05T09:52:05Z
http://arxiv.org/abs/2306.02744v2
# Towards Better Explanations for Object Detection ###### Abstract Recent advances in Artificial Intelligence (AI) technology have promoted their use in almost every field. The growing complexity of deep neural networks (DNNs) makes it increasingly difficult and important to explain the inner workings and decisions of the network. However, most current techniques for explaining DNNs focus mainly on interpreting classification tasks. This paper proposes a method to explain the decision for any object detection model called D-CLOSE. To closely track the model's behavior, we used multiple levels of segmentation on the image and a process to combine them. We performed tests on the MS-COCO dataset with the YOLOX model, which shows that our method outperforms D-RISE and can give a better quality and less noise explanation. ## 1 Introduction Lately, deep neural networks (DNNs) in object detection for images have become popular because of their superior performance in several domains, such as healthcare [18, 22] and self-driving cars [8]. However, the lack of transparency in decision-making leads to suspicion among end-users, which could negatively affect the widespread AI applications, especially in areas that require trust from users. Furthermore, newer regulations like the European General Data Protection Regulation (GDPR) [27] strictly require the transparency of using black-box models. Thus, a growing chorus of researchers is calling for eXplainable Artificial Intelligence (XAI) methods. Today, many XAI methods are proposed mainly for classification problems [7]. However, explaining object detectors is a big challenge due to the structural differences between the classification and object detection models. Several state-of-the-art XAI methods for object detectors are proposed, such as Surrogate Object Detection Explainer (SODEx) [30] and Detector-Randomized Input Sampling for Explanation of Black-box Models (D-RISE) [25]. Yet, these methods meet problems in giving interpretable explanations, tuning hyperparameters for each object without a feature regions' size information, and degraded performance with large datasets. Hence, in this paper, our main contributions are as follows: 1. We proposed a new agnostic XAI method for object detectors, called Detector-Cascading multiple Levels of Segments to Explain (D-CLOSE). It can explain any object detector's prediction by giving a saliency map that estimates each pixel's importance in the input image to the model's prediction for each individual object. 2. We evaluated the proposed D-CLOSE on the MS-COCO validation dataset [15]. Results show that D-CLOSE requires less computation time and provides better performance both in classification and localization than D-RISE, the best XAI method for detector model as for as we know. 3. We proposed quantitative and qualitative evaluations for each object-size group to demonstrate the stability of the methods with large datasets. 4. We analyzed D-CLOSE on four cases of the YOLOX's prediction errors [9] on the MS-COCO dataset. 5. We further evaluated D-CLOSE with real-world images affected by bad conditions, images containing overlapping objects, and different spectra images to demonstrate the method's applicability. ## 2 Related Work ### Object Detection Model Object detection is an essential field in computer vision that detects the instances of visual objects of a particular class in digital images. One-stage and two-stage models are the two main approaches to building an object detector [44]. 
One-stage models are often based on fixed grids and make predictions directly from the input image, such as YOLOX [9], a recent release in the YOLO series [26]. Primarily, YOLOX uses a decoupled head to avoid the problem of collisions between classification and regression branches that reduce model accuracy. The two-stage model, Faster-RCNN [28], proposed a set of regions of interest by selected search or Region Proposal Network. The proposed regions are sparse as the potential bounding box candidates can be infinite. Then, another fine-tuned network processes these regional proposals to decide the final prediction. ### Explainable AI A series of XAI methods were born and divided into two categories based on visualization of the explanations: Pixel-based saliency and Region-based saliency methods [12]. #### 2.2.1 Pixel-based Saliency Methods Pixel-based saliency methods measure each pixel's significance score in the input by backpropagating from the prediction to the desired class, such as Gradient [33], LRP [3], and Deep Taylor [32], which focus on explaining classification models. A framework called "Explain to Fix" [11], based on Shapley Additive Explanation (SHAP) [17], was proposed to extend the applicability of these methods to object detection problems. Later, Contrastive Relevance Propagation [36], an extension of the LRP method, was proposed to explain the output decisions of the Single-Shot Object Detector [16] by scoring for object classes and offsets in bounding box locations and generating heatmaps highlighting the inputs that contribute significantly to the output. However, pixel-based saliency methods, where pixels are scored separately, are often less interpretable [37]. #### 2.2.2 Region-based Saliency Methods Region-based saliency methods usually provide a heatmap or regions in the input image representing the key factors contributing to the model's predictions. Hence, its explanation is comprehensible to end-users rather than optimizing the accuracy of the explanation on individual pixels [6]. The first studies showed that the final activation layers often carry complete feature information, which is the main basis for final predictions [31, 43], so several Class Activation Mapping (CAM)-based methods [5, 21, 31, 43] are proposed to calculate the importance of each feature map in the final activation layer of classification models. Instead of using only one final convolutional layer, Semantic Input Sampling for Explanation (SISE) [34] uses multiple intermediate convolutional layers to provide better spatial resolution and completeness of the explanation. Other techniques aim to interpret any model regardless of the model's architecture. For instance, Local Interpretable Model-agnostic Explanations (LIME) [29] generates perturbations from subdivided superpixels from the image, then computes the output values and fits into a single regression model to calculate the weights. Another method using input noise for sampling is Randomized Input Sampling for Explanation of Black-box Models (RISE) [24], which generates random perturbation masks and transitions through the model, taking the predicted probabilities for the target class as weights for those masks. Morphological Fragmental Perturbation Pyramid for Black-Box Model Explanations (MFPP) [40] generates a saliency map using multiple levels of superpixels [14] to sample perturbations and then combines them. An advantage of utilizing region-based methods is their explanation for model object detectors. 
Object detectors' output, classification and localization, need some different techniques to explain them. Furthermore, most object detectors do not use fully connected (FC) layers but convolutional layers, resulting in the "receptive field" of the target output being only part of the input image rather than the entire image as in the classification task [39]. To our knowledge, there are currently Surrogate Object Detection Explainer (SODEx) [30] and Detector-Randomized Input Sampling for Explanation of Black-box Models (D-RISE) [25] can explain both one-stage and two-stage object detectors. In detail, SODEx uses Surrogate Binary Classifier to convert object detectors' outputs, then uses LIME to explain. While SODEx only shows the most important regions based on a segmentation algorithm, D-RISE provides a more intuitive and easier-to-understand explanation as a saliency map. D-RISE leverages the idea from RISE, replacing the weights for each perturbation sample by calculating the similarity between the proposal vectors and the target vector. However, D-RISE has a major disadvantage; namely, tuning hyperparameters for each object is difficult because we cannot know the size of the feature regions, which causes great obstacles in model execution and evaluation on a larger dataset. Additionally, methods, like D-RISE, use the way of creating masks from RISE [24], only gives the best results with rectangular objects; otherwise, D-RISE's performance degrades because it generates random masks based on the mesh [40]. Hence, we proposed D-CLOSE, a new saliency method that can also interpret any object detector, which is more efficient than D-RISE due to using fewer data samples, saving computational time with superior qualitative and quantitative results. ## 3 Proposed method We proposed Detector-Cascading multiple Levels of Segments to Explain (D-CLOSE) to generate saliency maps that can explain the decision of both one-stage and two-stage object detectors. Given input image \(I\) of size \(h\times w\times 3\), an object detector \(D\), our method generates the explanation for detected objects in seven steps (Sec. 3.5). The overall architecture of D-CLOSE is illustrated in Fig. 1. ### Random masks generation Images often contain several objects in various sizes and shapes. We were inspired by mask generation from MFPP [40] to generate random masks to explain object detectors. We inherit the mask generation approach of MFPP as follows: * We use Simple Linear Iterative Clustering (SLIC) [1], a quick method to split the image into superpixels with different \(L\) levels by changing the number of superpixels [\(F_{1},F_{2},...,F_{L}\)] to segment the image. * We generate \(N\) binary masks of size \(h\times w\) by setting the segments to \(1\) with probability \(p\) and \(0\) with the remaining segments. * We upsample all masks using bilinear interpolation as this formula \(\lfloor(r+1)h\rfloor\times\lfloor(r+1)w\rfloor\). * We crop masks \(h\times w\) with uniformly random indents from \((0,0)\) up to \((\lfloor rh+1\rfloor,\lfloor(rw+1\rfloor)\). 
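As a concrete reading of the four steps above, the snippet below sketches how the multi-level masks could be produced with off-the-shelf tools (scikit-image's SLIC and OpenCV bilinear resizing). The parameter names mirror the text (\(L\) segmentation levels, \(N\) masks per level, probability \(p\), offset ratio \(r\)); this is an illustrative sketch under those assumptions, not the released D-CLOSE code.

```python
import cv2
import numpy as np
from skimage.segmentation import slic

def generate_masks(image, n_segments_per_level=(150, 300, 600, 1200, 2400),
                   n_masks=800, p=0.5, r=2.2, seed=0):
    """Multi-level superpixel masks following the four steps of Sec. 3.1."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    big_h, big_w = int((r + 1) * h), int((r + 1) * w)
    masks = []                                    # one (n_masks, h, w) array per level
    for n_seg in n_segments_per_level:
        segments = slic(image, n_segments=n_seg, start_label=0)   # (h, w) labels
        n_labels = int(segments.max()) + 1
        level = np.empty((n_masks, h, w), dtype=np.float32)
        for i in range(n_masks):
            keep = rng.random(n_labels) < p       # keep each superpixel with prob. p
            small = keep[segments].astype(np.float32)
            big = cv2.resize(small, (big_w, big_h), interpolation=cv2.INTER_LINEAR)
            dy = int(rng.integers(0, big_h - h + 1))   # random crop back to h x w
            dx = int(rng.integers(0, big_w - w + 1))
            level[i] = big[dy:dy + h, dx:dx + w]
        masks.append(level)
    return masks
```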
### Similarity score The \(d_{i}\) vector that encodes the predictions of the object detection models is as follows: \[d_{i}=(x_{1}^{i},y_{1}^{i},x_{2}^{i},y_{2}^{i},p_{obj}^{i},p_{1}^{i},...,p_{C }^{i}) \tag{1}\] where: * Detection box (B): coordinates of the predicted objects, \((x_{1}^{i},y_{1}^{i})\) is top-left corner, \((x_{2}^{i},y_{2}^{i})\) is bottom-right corner * Objectness score (O): \(p_{obj}^{i}\) is the probability of predicting a bounding box containing any one object * Detection object's score (C): \((p_{1}^{i},p_{2}^{i},..,p_{n}^{i})\) is the vector representing the correctly predicted scores of the classes in the bounding box. We calculate the correlation between the target vector \(d_{t}\) and the proposal vectors \(d_{p}\), then use it as a weight for the mask, which is forwarded into the model [25]. We use a similarity score calculation formula from D-RISE: \[s(d_{p},d_{t})=IoU(B_{p},B_{t})\cdot O_{p}\cdot\frac{C_{p}C_{t}}{\|C_{p}\|\|C_ {t}\|} \tag{2}\] ### Density map Given an \(h\)-by-\(w\), the mask \(M\), \(M_{(i,j)}\) is the value of each pixel at position \((i,j)\). Randomly generated masks produce a non-uniform distribution. Some maybe appear more, and some appear very little, leading to unfair results. We aggregate all weighted masks with the output prediction forming a density map \(P\) of size \(h\times w\) to compute the randomly generated masks' distribution. \[P_{(i,j)}=\sum M_{(i,j)} \tag{3}\] \[S_{(i,j)}^{{}^{\prime}}=S_{(i,j)}^{{}^{\prime\prime}}\odot\frac{1}{P_{(i,j)}} \tag{4}\] where \(S_{(i,j)}^{{}^{\prime\prime}}\) and \(S_{(i,j)}^{{}^{\prime}}\) are the importance scores of each pixel before and after the processing, respectively. We have found that normalizing the density map can help produce smoother and less noisy explanations (Fig. 2). In addition, we also set up experiments and used metric evaluations to strengthen our argument further (Sec. 5.3). ### Fusion feature map After normalizing the density map generated from Sec. 3.3, we obtain \(L\) feature maps corresponding to \(L\) levels of the superpixel segment. Each feature map interprets the object's small to large features. Intuitively, small features are the most important to identify the object's class, while large features contain the generic and relevant context in which the object is found. Our method, inspired by SISE [34], aggregates feature maps at the semantic level by prioritizing more detailed features and descending to more general features. However, SISE forcibly removes noises using threshold parameters with Otsu's algorithm [23], making the explanations sometimes confusing to end-users. Additionally, when SISE deals with complex models, selecting the final convolutional layers in the blocks is extremely difficult, especially in object detection problems, because the pooling layers are not at the end of each block. We build a flexible and natural framework (Fig. 3), which does not access model internals, does not use threshold parameters, and can construct a wide range of feature maps with areas from detailed to generic regardless of the model architecture. ### Saliency maps inference We combine all the above operations as a procedure, including random mask generation, similarity score, density map, and fusion feature map, which can infer saliency maps for any object detector. The procedure is as follows: 1. The input image \(I\) is divided into \(L\) levels of segments. 
Each segmentation level generated \(N\) perturbation masks \(M_{i}^{k}\), where \(1\leq k\leq L\), \(1\leq i\leq N\). 2. We generate masked images by element-wise masks created with image input (\(I\odot M_{i}^{k}\)). 3. Masked images are forwarded \(I\odot M_{i}^{k}\) into the object detectors \(D\) to get \(T\) vectors of prediction: \[d_{p}=D(I\odot M_{i}^{k})=(d_{i}^{j})^{k}\] (5) where \(1\leq j\leq T,1\leq i\leq N,1\leq k\leq L\). 4. We calculate the similarity between target vector \(d_{t}\) to be explained and proposal vectors \(d_{p}\). Then, we take the maximum score for each masked image on each Figure 1: The overall D-CLOSE procedure (upper part) and the detailed saliency map generation process (lower part). Our method builds a standard process for generating saliency maps for segment levels. We work with \(L\) different segmentation levels to obtain \(L\) feature maps and then aggregate each feature map (shown in Fig. 3) to obtain an explanation for the feature. Our method follows seven steps described in Sec. 3.5. Figure 3: Our process combines feature maps with multiple levels of segmentation on the image. Figure 2: A density map calculates the density distribution of the pixels generated during the masking process. Then, we calculate the average contribution per pixel using a density map to remove noise from the saliency maps. target vector: \[w_{i}^{k}=max(s(d_{t},(d_{i}^{j})^{k}))\] (6) where \(1\leq j\leq T,1\leq i\leq N,1\leq k\leq L\). 5. We calculate the density map \(P_{k}\) representing the distribution of randomly generated masks \(M_{i}^{k}\) in Eq. 3, where \(1\leq k\leq L\). 6. We compute the average score of each pixel by dividing the weighted sum of masks \(M_{i}^{k}\) by the density map \(P_{k}\) to obtain the saliency map \(S_{k}\) for the \(k^{th}\) segment level. \[S_{k}=\frac{1}{P_{k}}\odot\sum_{i=1}^{N}w_{i}^{k}M_{i}^{k}\] (7) where \(1\leq k\leq L\). 7. We combine each semantic level's \(S_{k}\) saliency maps from the features by cascading in blocks, as shown in Fig. 3. Mathematically, our process works like a recursive algorithm where \(A_{k}\) is the saliency map obtained after each step. In the last step, the saliency map obtained is \(A_{L-1}\): \[A_{k}=\begin{cases}(S_{k}+S_{k+1})S_{k+1}&k=1\\ (A_{k-1}+S_{k+1})S_{k+1}&2\leq k\leq L-1\end{cases}\] (8) ## 4 Experiments and results ### Datasets and models We evaluate our method with a pre-trained YOLOX model [9]1 on the MS-COCO validation dataset [15]. In quantitative evaluation, results are obtained with the same set of parameters on the same data set. All experiments use Nvidia Tesla T4 and 24GB RAM as the benchmark. Footnote 1: [https://github.com/Megvii-BaseDetection/YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) ### Parameters setup For D-RISE, we use default parameters proposed in the original paper with \(N=5000\) masks, probability \(p=0.5\) and resolution \((h_{s},w_{s})=(16,16)\). For D-CLOSE, during the masking process, we use a segmentation algorithm that splits the image into \(L=5\) levels with the number of segments \([150,\,300,\,600,\,1200,\,2400]\), respectively. For each image, we perform our method with \(N=800\) masks per segmentation level, a kernel width \(\alpha=0.25\), and resize offset ratio \(r=2.2\). ### Sanity checks A saliency map can explain which features the model considers to predict and why the model gives the correct explanation or not. Hence, the saliency map reveals the weights that the model learned during training. 
We use sanity checks [2] to check whether D-CLOSE results faithfully reflect the decision-making behavior of the model. We check whether the saliency map changes by changing the model's weights. In the experiment, our method's saliency map shows that the model focuses on another region with altered weights, which means that our method can faithfully reflect the model's behavior (Fig. 4). ### Model errors Inspired by [13], we evaluate whether D-CLOSE can generate feature maps that explain the incorrect decisions of the model. We further extend the test case where the model predicts an object that does not exist in the image. Also, we analyze the classification error and localization error of the YOLOX model on the MS-COCO dataset. * Object is correctly localized but misclassified (The first row in Fig. 5). * Object is detected with a correct classification but mislocalized (The second row in Fig. 5). * The model fails to detect the object (Fig. 5(a)). * The model detects an object not labeled as ground truth (Fig. 5(b)). We compare the two generated explanations for the ground truth and predicted bounding box, then calculate the difference by subtracting the corresponding pixel values between them. The difference between the two saliency maps can indicate the source of the error. ### Model in images affected by bad conditions Our experiment utilizes D-CLOSE to explain the model's detection of images from surveillance cameras that record images of pedestrians and vehicles in bad conditions such as low light, fog, and night. Also, based on [4], we evaluate D-CLOSE with images in different spectra. Test images are obtained from the WIDER Pedestrian Detection Dataset [42] and Multispectral Object Detection Dataset [35]. In all cases (Fig. 8), D-CLOSE's saliency map is high quality and stable. While D-RISE produces a more noisy Figure 4: (a) Ground truth, (b) Saliency map with pre-trained weight, (c) Saliency map with altered weight. saliency map and is unfocused. Because our method builds on combining feature maps of multiple segmentation levels, each error map level can be entirely offset by other maps. ## 5 Evaluation metrics ### Qualitative This section evaluates the visualizations from D-CLOSE in two approaches: the stability of the method when changing parameters compared to previous methods and the object discrimination ability of our method. #### 5.1.1 Stability visualization Since XAI methods can perform differently with varying number of samples [20], we compare D-CLOSE with D-RISE in different amounts of data (Fig. 9) to show that our method produces better results without being influenced by the number of generated samples. When evaluating both methods on large datasets, we use a fixed set of parameters for all objects in the image. We found out that D-RISE produced good results only by fine-tuning each parameter to fit each object's geometry; otherwise, D-RISE's saliency map is quite unfocused and provides weak localization in some cases. While D-CLOSE's explanation is more stable with the number of samples generated, the noise in the saliency map decreases, and more focus is on the important regions in the image. Figure 5: Examples of localization and classification error prediction model cases. The green box is the ground truth, the red box is the model’s prediction. In the first row, the model is biased toward the outside context and misclassifies the “bed” as the “couch”. 
In the second row, the model correctly predicts the “dining table”, but the model only focuses on the tabletop and ignores table legs leading to poor localization. Figure 6: In (a), the model fails to detect the labeled “bench”, but the explanation shows that the model can still capture the bench’s features because possibly those features are not strong enough to influence the model’s final decision. In (b), the model detects an object not labeled as ground truth. The explanation shows that the model focuses mostly on the ground (not the wooden planks) to predict the “bench”. Figure 7: Examples of partially and fully overlapping objects. In (a), the bounding box wraps two “tennis racket” objects, D-CLOSE indicates that the model mostly focuses on the front “tennis racket”. In (b) and (c), D-CLOSE generates differentiated saliency maps for overlapping objects. #### 5.1.2 Object discrimination visualization We conduct experiments to measure whether our proposed D-CLOSE has good object discrimination ability. As shown in Fig. 7, D-CLOSE's explanations focus more on the object's shape inside the bounding box. They often clearly distinguish the boundaries of objects, significantly when multiple objects are overlapped. ### Quantitative We apply various metrics to compare our method's plausibility and faithfulness with other methods. #### 5.2.1 Plausibility Evaluation We use two standard metrics to evaluate XAI's plausibility: _Sparsity_[10] and _Energy-based pointing game (EBPG)_[38], based on human-annotated bounding boxes. In our evaluation, we only consider explanations for detected bounding boxes that best match the ground truths for each class to compute these metrics. #### 5.2.2 Faithfulness Evaluation Faithfulness evaluation metrics, including _Deletion_, _Insertion_[24] and _Over-all_[41] measure the explanation's completeness and consistency for the model's predictions. _Deletion_ checks whether removing these important pixels severely degrades the model's predictions for that object. For _Insertion_, it measures the increase in probability as more important pixels are included. _Over-all_ score is the difference between _Insertion_ and _Deletion_. ### Ablation studies Our method proposes to combine three important components, including multi-scale superpixel segment, density normalization, and multi-scale feature fusion. We perform the experiments to investigate the contribution level of each part to the final explanation performance. The results are reported in Table 1. **Superpixel Segment.** To validate the effectiveness of the superpixel segment step, we compare the evaluation metrics results using the superpixel segment results (row 1 of Table 1) and the D-RISE results (Table 2.) With this, we help increase 1.07% EBPG and 10.38% over-all score. **Density map.** In the proposed D-CLOSE, we have introduced density map normalization, which can produce smoother and more object-focused saliency maps. From Table 1 (row 2), we can see that the density map can boost 3.01 sparsity, 2.93% EBPG, and 0.06% over-all score, compared to just adding superpixel segment step. **Multi-scale Feature Fusion.** As shown in Table 1, we get impressive performance (21.09 sparsity, 22.03% EBPG, 0.79% over-all score) using multi-scale feature fusion. This result is comprehensible since this step removes the most noise and keeps only the most important features. Experiments also reflect that this step has the most significant influence on the final explanation. 
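For reference, the two ablated components can be written in a few lines; the sketch below restates the density normalization of Eqs. (3)-(7) and the cascading fusion of Eq. (8) (here `masks` is an \(N\times h\times w\) array for one segmentation level and `weights` holds the per-mask similarity scores of Eq. (6)). It is an illustrative transcription of those equations rather than the released implementation.

```python
import numpy as np

def level_saliency(masks, weights, eps=1e-8):
    """Eqs. (3)-(7): weighted mask sum, normalized by the pixel-wise density map P."""
    density = masks.sum(axis=0) + eps                       # P, Eq. (3)
    weighted = (weights[:, None, None] * masks).sum(axis=0)
    return weighted / density                               # S_k, Eq. (7)

def cascade_fusion(S):
    """Eq. (8): cascade the L per-level saliency maps S[0..L-1] into the final map."""
    A = (S[0] + S[1]) * S[1]
    for k in range(2, len(S)):
        A = (A + S[k]) * S[k]
    return A
```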
Finally, the amalgama \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{3}{|c|}{Ablation Settings} & \multicolumn{1}{c|}{Sparsity} & EBPG (\%) & Over-all (\%) \\ \hline Supervised Segment & Density Map & Feature Fusion & & & \\ \hline ✓ & ✗ & ✗ & 3.42 & 12.25 & 87.31 \\ ✓ & ✓ & ✗ & 3.93 & 13.42 & 87.35 \\ ✓ & ✗ & ✓ & 22.01 & 32.52 & 88.08 \\ ✓ & ✓ & ✓ & **25.02** & **35.45** & **88.14** \\ \hline \end{tabular} \end{table} Table 1: Quantitative evaluation metrics on different ablation settings. The best is shown in bold. Figure 8: D-CLOSE provides object-focused explanations in images affected by bad conditions: Low-light (Underexposed and affected by red light), Foggy, Night, and in different Night spectra: FIR, NIR, MIR [4]. tion of the three aforementioned steps yields a result that is markedly superior to the outcome of implementing each step in isolation. ## 6 Results As mentioned in Sec. 1, D-RISE is only effective when fine-tuning hyperparameters for individual objects. While our method can be applied to different sizes of objects with only one set of parameters for the entire dataset. To validate this, we first use 5000 images from the MS-COCO validation dataset [15] and calculate the ratio of the bounding box size with the input image. We then use these ratios as input to the \(k\)-means clustering algorithm [19] to divide the objects into three groups corresponding to the objects belonging to small, middle, and large groups. We calculate quantitative metrics for each of these groups and for the entire data set to demonstrate the effectiveness of D-CLOSE. Based on the results in Table 2, we observe that D-CLOSE outperforms Grad-CAM and D-RISE on all quantitative metrics. To the best of our knowledge, our evaluation is the first to calculate quantitative metrics for distinct object groups. Besides, in Table 3, we also summarize the average inference time results for D-RISE and the proposed D-CLOSE. Our method is 1.4 times faster than D-RISE and achieves much better performance. ## 7 Conclusions In this paper, we introduced D-CLOSE, a new XAI method that can explain the decisions of any object detector. \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{Small} & \multicolumn{3}{c}{Middle} & \multicolumn{3}{c}{Large} & \multicolumn{3}{c}{Small+Middle+Large} \\ \cline{2-13} **Metrics** & Grad-CAM & D-RISE & D-CLOSE & Grad-CAM & D-RISE & D-CLOSE & Grad-CAM & D-RISE & D-CLOSE & Grad-CAM & D-RISE & D-CLOSE \\ \hline **Sparsity**\(\uparrow\) & 21.46 & 4.81 & **28.00** & 9.22 & 2.67 & **12.18** & 7.80 & 2.41 & **8.49** & 19.47 & 4.43 & **25.02** \\ \hline **EBPG (\%)**\(\uparrow\) & 11.52 & 0.06 & **28.34** & 59.11 & 26.89 & **69.50** & 81.39 & 55.57 & **84.22** & 26.86 & 11.17 & **35.45** \\ \hline **Del (\%)**\(\downarrow\) & 3.27 & 2.27 & **1.21** & 13.92 & 12.28 & **6.53** & 25.07 & 17.43 & **12.57** & 5.73 & 4.26 & **2.71** \\ \hline **Ins (\%)**\(\uparrow\) & 71.13 & 83.27 & **92.35** & 68.78 & 73.06 & **85.74** & 62.03 & 64.05 & **78.92** & 70.18 & 81.19 & **90.85** \\ \hline **Over-all (\%)**\(\uparrow\) & 67.86 & 81.00 & **91.14** & 54.86 & 60.78 & **79.21** & 36.96 & 46.62 & **66.35** & 64.45 & 76.93 & **88.14** \\ \hline \hline \end{tabular} \end{table} Table 2: Mean accuracy of quantitative results of all XAI methods evaluated on the whole MS-COCO validation set, further categorized into small, middle, and large groups (as shown in Fig. 10). For each metric, the best is shown in bold. 
The arrows \(\uparrow/\downarrow\) indicate whether higher or lower scores are better. Figure 10: Saliency maps generated by the D-CLOSE method. \begin{table} \begin{tabular}{c c c} \hline \hline **Method** & **D-RISE** & **D-CLOSE** \\ \hline Running time (s) \(\downarrow\) & 98.67 & **70** \\ \hline \hline \end{tabular} \end{table} Table 3: Comparative evaluation in terms of inference time (seconds, averaged for each object) on the MS-COCO validation set. The better result is shown in bold. Figure 9: We compare D-CLOSE and D-RISE using 500, 2000, and 4000 generated samples. D-CLOSE produces stable and better-quality saliency maps than D-RISE, even with a small number of samples. Our method samples the input with multiple levels of segmentation to make the explanation more stable and smooth. We proposed using quantitative metrics for each object group to demonstrate that our approach performs better than previous state-of-the-art methods. We conducted in-depth analyses of the model's prediction errors and demonstrated D-CLOSE on real-world images. In future work, we plan to reduce the computation time of the explanation by removing redundant samples during masking, and to build debugging workflows that use these explanations to improve model performance. Our code is available at [https://github.com/Binh24399/D-CLOSE](https://github.com/Binh24399/D-CLOSE).
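As a small supplement to the size-based evaluation in Sec. 6, the sketch below shows one way the small/middle/large grouping could be reproduced: compute the ratio of bounding-box area to image area and cluster the ratios with \(k\)-means (\(k=3\)). The use of scikit-learn and the layout of the inputs are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def group_by_relative_size(boxes, image_sizes, k=3, seed=0):
    """boxes: list of (x1, y1, x2, y2); image_sizes: list of (width, height).
    Returns one label in {0, ..., k-1} per box, relabelled so that 0 = smallest."""
    ratios = np.array([
        ((x2 - x1) * (y2 - y1)) / (w * h)
        for (x1, y1, x2, y2), (w, h) in zip(boxes, image_sizes)
    ]).reshape(-1, 1)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(ratios)
    # Relabel clusters by ascending centroid so labels read small/middle/large.
    order = np.argsort(km.cluster_centers_.ravel())
    remap = {int(old): new for new, old in enumerate(order)}
    return np.array([remap[int(l)] for l in km.labels_])

labels = group_by_relative_size(
    boxes=[(0, 0, 30, 40), (10, 10, 300, 400), (5, 5, 120, 160)],
    image_sizes=[(640, 480)] * 3,
)
print(labels)   # [0 2 1] -> small, large, middle
```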
2308.05587
Search for Dark Photons with the FASER detector at the LHC
The FASER experiment at the LHC is designed to search for light, weakly-interacting particles produced in proton-proton collisions at the ATLAS interaction point that travel in the far-forward direction. The first results from a search for dark photons decaying to an electron-positron pair, using a dataset corresponding to an integrated luminosity of 27.0 fb$^{-1}$ collected at center-of-mass energy $\sqrt{s} = 13.6$ TeV in 2022 in LHC Run 3, are presented. No events are seen in an almost background-free analysis, yielding world-leading constraints on dark photons with couplings $\epsilon \sim 2 \times 10^{-5} - 1 \times 10^{-4}$ and masses $\sim$ 17 MeV - 70 MeV. The analysis is also used to probe the parameter space of a massive gauge boson from a U(1)$_{B-L}$ model, with couplings $g_{B-L} \sim 5 \times 10^{-6} - 2 \times 10^{-5}$ and masses $\sim$ 15 MeV - 40 MeV excluded for the first time.
FASER Collaboration
2023-08-10T13:51:04Z
http://arxiv.org/abs/2308.05587v2
# Search for Dark Photons with the FASER detector at the LHC FASER Collaboration ###### Abstract FASER Collaboration Henso Abreu Department of Physics and Astronomy, Technion--Israel Institute of Technology, Haifa 32000, Israel John Anders CERN, CH-1211 Geneva 23, Switzerland Claire Antel Departement de Physique Nucleaire et Corpusculaire, University of Geneva, CH-1211 Geneva 4, Switzerland Akitaka Ariga Albert Einstein Center for Fundamental Physics, Laboratory for High Energy Physics, University of Bern, Sidlerstrasse 5, CH-3012 Bern, Switzerland Tomoko Ariga Department of Physics and Astronomy, University of California, Irvine, CA 92697-4575, USA Jeremy Atkinson CERN, CH-1211 Geneva 23, Switzerland Florian U. Bernlochner Department of Physics and Astronomy, Technion--Israel Institute of Technology, Haifa 32000, Israel Tobias Boeckh Department of Physics and Astronomy, Technion--Israel Institute of Technology, Haifa 32000, Israel Jamie Boyd CERN, CH-1211 Geneva 23, Switzerland D. Lydia Brenner Department of Physics and Astronomy, Technion--Israel Institute of Technology, Haifa 32000, Israel Franck Cadoux CERN, CH-1211 Geneva 23, Switzerland David W. Casper CERN, CH-1211 Geneva 23, Switzerland Charlotte Cavanagh Department of Physics and Astronomy, Technion--Israel Institute of Technology, Haifa 32000, Israel Xin Chen Department of Physics and Astronomy, Technion--Israel Institute of Technology, Haifa 32000, Israel Andrea Coccaro CERN, CH-1211 Geneva 23, Switzerland Monica D'Onofrio Department of Physics and Astronomy, Technion--Israel Institute of Technology \({}^{20}\)Department of Particle Physics and Astrophysics, Weizmann Institute of Science, Rehovot 76100, Israel \({}^{21}\)Institut fur Physik, Universitat Mainz, Mainz, Germany \({}^{22}\)Department of Physics & Astronomy, University of Sussex, Sussex House, Falmer, Brighton, BN1 9RH, United Kingdom \({}^{23}\)Science and Technology Policy Fellow at the American Association for the Advancement of Science \({}^{24}\)Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8602, Japan \({}^{25}\)University of Manchester, School of Physics and Astronomy, Schuster Building, Oxford Rd, Manchester M13 9PL, United Kingdom \({}^{26}\)Dipartimento di Fisica "Ettore Pancini", Universita di Napoli Federico II, Complesso Universitario di Monte S. Angelo, I-80126 Napoli, Italy \({}^{27}\)Institute of Particle and Nuclear Studies, KEK, Oho 1-1, Tsukuba, Ibaraki 305-0801, Japan \({}^{28}\)Astrocent, Nicolaus Copernicus Astronomical Center Polish Academy of Sciences, ul. Rektorska 4, 00-614, Warsaw, Poland \({}^{29}\)National Centre for Nuclear Research, Pasteura 7, Warsaw, 02-093, Poland \({}^{30}\)Charles University, Faculty of Mathematics and Physics, Prague; Czech Republic The FASER experiment at the LHC is designed to search for light, weakly-interacting particles produced in proton-proton collisions at the ATLAS interaction point that travel in the far-forward direction. The first results from a search for dark photons decaying to an electron-positron pair, using a dataset corresponding to an integrated luminosity of \(27.0\,{\rm fb}^{-1}\) collected at center-of-mass energy \(\sqrt{s}=13.6\,{\rm TeV}\) in 2022 in LHC Run 3, are presented.
No events are seen in an almost background-free analysis, yielding world-leading constraints on dark photons with couplings \(\epsilon\sim 2\times 10^{-5}-1\times 10^{-4}\) and masses \(\sim 17\,\,{\rm MeV}-70\,\,{\rm MeV}\). The analysis is also used to probe the parameter space of a massive gauge boson from a \({\rm U}(1)_{B-L}\) model, with couplings \(g_{B-L}\sim 5\times 10^{-6}-2\times 10^{-5}\) and masses \(\sim 15\,\,{\rm MeV}-40\,\,{\rm MeV}\) excluded for the first time. (c) 2023 CERN for the benefit of the FASER Collaboration. Reproduction of this article or parts of it is allowed as specified in the CC-BY-4.0 license. Introduction The existence of cold dark matter is strong evidence for new particles beyond the Standard Model (SM) of particle physics. Dark matter may be composed of a single particle or of more than one kind of particle, and the dark matter particles may interact only through gravity or also through additional forces. Dark matter therefore motivates a rich variety of ideas for beyond-the-SM (BSM) physics, and new insights into the particle nature of dark matter are of great interest in both particle physics and astrophysics [1; 2]. Although dark matter is only known to interact through gravity, the identification of its particle properties will be possible only if it is detected via other interactions. Among the best-motivated possibilities are interactions with SM particles through renormalisable couplings. If dark matter is a component of a dark sector that contains a U(1) electromagnetic force, the dark sector may interact through a renormalisable interaction of the form \(F^{\mu\nu}F^{D}_{\mu\nu}\), where \(F_{\mu\nu}\) and \(F^{D}_{\mu\nu}\) are the electromagnetic field strength tensors of the SM and the dark sector, respectively. As a result of this interaction, the dark gauge boson mixes with the SM gauge boson, leading to a new particle, the dark photon \(A^{\prime}\)[3]. If dark photons are light and weakly interacting, they are long-lived particles (LLPs) and can be produced in large numbers in the proton-proton collisions at the Large Hadron Collider (LHC) [4]. They can travel a macroscopic distance, and then decay to pairs of charged particles, producing a spectacular signal of new physics. Other considerations also motivate BSM physics with signals similar to the dark photon scenario. For example, the accidental conservation of baryon number \(B\) and (total) lepton number \(L\) in the SM suggests that these conserved quantities may be linked not just to global, but to local gauge symmetries. A particularly well-motivated example is the gauge symmetry U(1)\({}_{B-L}\)[5; 6], which is not only conserved classically, but is also free of quantum anomalies, once three sterile neutrinos are introduced to give neutrinos mass. This model predicts a new particle, the \(B-L\) gauge boson \(A^{\prime}_{B-L}\). For masses in the MeV to GeV range and small \(B-L\) gauge couplings, the \(A^{\prime}_{B-L}\) may also be produced in large numbers at the LHC and travel long distances before decaying to pairs of SM particles with \(B-L\) charge [7]. FASER is a new LHC experiment designed to search for light, weakly-interacting particles, including dark photons, \(B-L\) gauge bosons, and other long-lived particles [7; 8; 9]. The FASER detector is located approximately 480 m from the ATLAS interaction point (IP1) along the beam collision axis line-of-sight (LOS). 
Because they interact very weakly, dark photons and other LLPs produced at IP1 can travel along the LOS, pass through \(\sim\)100 m of rock and concrete without interacting, and then decay in FASER. At the same time, most SM particles produced at the ATLAS IP will either be bent away by the LHC magnets or stopped in the rock and concrete. FASER is therefore well suited to search for dark photons and many other light and weakly-interacting particles in a very low background environment. This study presents the results of a search for LLPs using the FASER detector and a dataset corresponding to an integrated luminosity of 27.0 fb\({}^{-1}\) collected at center-of-mass energy \(\sqrt{s}=13.6\) TeV from September to November 2022 during Run 3 of the LHC. In particular, the scenario where LLPs are produced in LHC collisions, travel to the FASER detector, and then decay to electron-positron pairs, \(pp\to\text{LLP}\to e^{+}e^{-}\), is considered. ## II Long-lived particles at FASER In this section, the parameter spaces of the dark photon and \(B-L\) gauge boson models are defined and the dominant production and decay processes that determine the signal at FASER are described. The properties of the dark photon are defined through the Lagrangian terms \[{\cal L}\supset\frac{1}{2}\,m_{A^{\prime}}^{2}A^{\prime 2}-\epsilon\,e\sum_{f}q_{f}A^ {\prime\,\mu}\,\bar{f}\gamma_{\mu}f\, \tag{1}\] where \(m_{A^{\prime}}\) is the dark photon's mass, \(\epsilon\) is the dark photon's kinematic mixing parameter, and the sum is over all SM fermions \(f\) with SM electric charge \(q_{f}\). The dark photon may also couple to additional particles in the dark sector, such as the dark matter particle \(\chi\). In this analysis, it is assumed that \(m_{A^{\prime}}<2m_{\chi}\) and that the dark photon decays visibly to SM particles. Thermal freeze-out is then determined by the processes \(\chi\chi\leftrightarrow A^{\prime}\leftrightarrow f\bar{f}\). For light masses \(m_{A^{\prime}}\sim{\rm MeV}-{\rm GeV}\) and loop-induced or otherwise suppressed couplings \(\epsilon\sim 10^{-6}-10^{-3}\), the dark matter particle's thermal relic density is in the right range to be a significant fraction of cosmological dark matter [10; 11; 12]. These values of \(m_{A^{\prime}}\) and \(\epsilon\) are therefore cosmologically favoured and provide a well-defined thermal relic target in the dark photon parameter space for experimental searches. At the LHC, with these thermal relic target parameters and in the parameter space that FASER has discovery sensitivity, the dominant source of dark photons is SM meson decay and dark bremsstrahlung: * Neutral pion decay \(\pi^{0}\to A^{\prime}\gamma\): This mode is accessible for \(m_{A^{\prime}}<m_{\pi^{0}}\simeq 135~{}{\rm MeV}\). The branching fraction is \(B(\pi^{0}\to A^{\prime}\gamma)=2\epsilon^{2}(1-m_{A^{\prime}}^{2}/m_{\pi^{0}} ^{2})^{3}B(\pi^{0}\to\gamma\gamma)\) where \(B(\pi^{0}\to\gamma\gamma)\simeq 0.99\)[13]. * Eta meson decay \(\eta\to A^{\prime}\gamma\): This mode is open for \(m_{A^{\prime}}<m_{\eta}\simeq 548~{}{\rm MeV}\). The branching fraction is \(B(\eta\to A^{\prime}\gamma)=2\epsilon^{2}(1-m_{A^{\prime}}^{2}/m_{\eta}^{2})^ {3}B(\eta\to\gamma\gamma)\) where \(B(\eta\to\gamma\gamma)\simeq 0.39\)[13]. * Dark bremsstrahlung \(pp\to ppA^{\prime}\): In this process, a dark photon is emitted via initial or final state radiation from colliding protons in a coherent way. This mode is open for dark photon masses up to \({\cal O}(2~{}{\rm GeV})\)[8]. 
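As a quick numerical illustration of the meson branching fractions quoted above (a minimal sketch; the parameter point is chosen purely for illustration, and the meson masses and diphoton branching fractions are the rounded values quoted in the text):

```python
# B(M -> A' gamma) = 2 eps^2 (1 - m_A'^2 / m_M^2)^3 B(M -> gamma gamma), masses in GeV
def br_to_dark_photon(m_ap, eps, m_meson, br_gg):
    if m_ap >= m_meson:
        return 0.0
    return 2.0 * eps**2 * (1.0 - (m_ap / m_meson) ** 2) ** 3 * br_gg

m_ap, eps = 0.025, 3e-5   # a 25 MeV dark photon with eps = 3e-5 (illustrative point)
print(br_to_dark_photon(m_ap, eps, 0.135, 0.99))   # pi0 -> A' gamma: ~1.6e-9
print(br_to_dark_photon(m_ap, eps, 0.548, 0.39))   # eta -> A' gamma: ~7e-10
```

Even with such tiny branching fractions, the enormous forward flux of neutral pions at the LHC yields an appreciable number of dark photons along the line of sight.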
These processes produce a high-intensity beam of dark photons in the far-forward direction along the beamline. Neutral pion decay is typically the leading signal contribution, but \(\eta\) decay can be comparable for \(m_{A^{\prime}}\sim 100~{}{\rm MeV}\), and dark bremsstrahlung can be comparable near the boundary of FASER's sensitivity [8]. Other production mechanisms include the decays of heavier mesons (such as \(\eta^{\prime}\) or \(\omega\)) and direct Drell-Yan production \(q\bar{q}\to A^{\prime}\), but these are subdominant and are neglected. Once produced, dark photons then may travel a macroscopic distance, producing a striking signal of high-energy particles far from the \(pp\) interaction point. FASER's dark photon sensitivity is largely determined by its location. For \(E_{A^{\prime}}\gg m_{A^{\prime}}\gg m_{e}\), the decay length for a dark photon with lifetime \(\tau\) travelling at speed \(\beta=v/c\) is [8] \[L=c\beta\tau\gamma\approx(80~{}{\rm m})\left[\frac{10^{-5}}{\epsilon}\right]^ {2}\left[\frac{E_{A^{\prime}}}{{\rm TeV}}\right]\left[\frac{100~{}{\rm MeV}}{ m_{A^{\prime}}}\right]^{2}. \tag{2}\] For dark photons with TeV energies, FASER can be expected to be sensitive to parameter space with \(\epsilon\sim 10^{-5}\) and \(m_{A^{\prime}}\sim 100~{}{\rm MeV}.\) For dark photon masses in the range \(2m_{e}<m_{A^{\prime}}<2m_{\mu}\simeq 211~{}{\rm MeV}\), dark photons decay to electrons with \(B(A^{\prime}\to e^{+}e^{-})\approx 100\%\). In the \(B-L\) model, the properties of the \(B-L\) gauge boson \(A^{\prime}_{B-L}\) are determined by the Lagrangian terms [7] \[{\cal L}\supset\frac{1}{2}\,m_{A^{\prime}_{B-L}}^{2}A^{\prime 2}_{B-L}-g_{B-L} \sum_{f}Q^{f}_{B-L}A^{\prime\,\mu}_{B-L}\,\bar{f}\gamma_{\mu}f\, \tag{3}\] where \(Q_{B-L}^{f}\) is the \(B-L\) charge of fermion \(f\). The parameter space of this model is defined by the \(B-L\) gauge boson's mass \(m_{A_{B-L}^{\prime}}\) and the \(B-L\) gauge coupling \(g_{B-L}\). The \(A_{B-L}^{\prime}\) gauge boson is produced in a similar manner to the dark photon, with light meson decays and dark bremsstrahlung the dominant production mechanisms; the production rates are proportional to \(g_{B-L}^{2}\), compared to \(\epsilon^{2}\) as in the dark photon model. The boson can decay to all kinematically accessible states that possess \(B-L\) charge. In this analysis, the region of phase space which FASER is sensitive to is confined to the mass range \(2m_{e}<m_{A_{B-L}^{\prime}}<2m_{\mu}\simeq 211\) MeV, where the possible decays are to electrons, SM neutrinos, and possibly sterile neutrinos. It is assumed that decays to sterile neutrinos are kinematically inaccessible. The visible signal from decays to electrons therefore has a branching fraction of \(B(A_{B-L}^{\prime}\to e^{+}e^{-})\approx 40\%\). If decays to sterile neutrinos are allowed, the visible branching fraction could be as low as \(B(A_{B-L}^{\prime}\to e^{+}e^{-})\approx 25\%\), slightly reducing the search sensitivity, but not to a significant extent. ## III The FASER detector The FASER detector, located approximately 480 m away from IP1 in the TI12 tunnel that connects the LHC with the Super Proton Synchrotron (SPS), is aligned with the IP1 LOS. However, due to the crossing angle in IP1, the LOS is offset vertically by 6.5 cm with respect to the centre of the detector, which is properly accounted for in the simulation. The detector is described in detail in Ref. [9]; a brief description is given here. 
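As an aside, Eq. (2) can be evaluated for a few representative parameter points to see why a detector ~480 m from the interaction point targets \(\epsilon\sim 10^{-5}\) and \(m_{A^{\prime}}\sim 100\) MeV. This is a minimal sketch that uses only the approximate scaling above, so the numbers are indicative:

```python
def decay_length_m(eps, energy_TeV, mass_MeV):
    """Approximate A' decay length from Eq. (2), in metres."""
    return 80.0 * (1e-5 / eps) ** 2 * energy_TeV * (100.0 / mass_MeV) ** 2

for eps, E, m in [(1e-5, 1.0, 100.0), (3e-5, 1.0, 25.0), (1e-4, 0.5, 50.0)]:
    print(f"eps={eps:.0e}, E={E} TeV, m={m} MeV -> L ~ {decay_length_m(eps, E, m):.1f} m")
# -> 80.0 m, 142.2 m and 1.6 m respectively: larger couplings or masses
#    shorten the decay length and push the signal out of reach of FASER.
```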
The FASER\(\nu\) tungsten/emulsion detector is dedicated to neutrino measurements, and it is not used in this analysis, but the eight interaction lengths of tungsten suppress potential backgrounds. Figure 1 presents a sketch of the detector. In this analysis, the detector components of interest are the 1.5 m long detector decay volume and the tracking spectrometer, both of which are immersed in a 0.57 T dipole magnetic field, as well as the scintillator system and the electromagnetic calorimeter. The active transverse area of the detector is defined by the circular magnet aperture with a radius of 10 cm. The scintillator system is composed of four stations, each consisting of multiple scintillator counters. At the front of the detector is the VetoNu station, composed of two scintillator counters. Further downstream is the Veto station, constructed from three scintillator counters in front of the decay volume. Both the VetoNu and Veto stations have scintillators with a transverse size (30 \(\times\) 35 cm\({}^{2}\) and 30 \(\times\) 30 cm\({}^{2}\) respectively) significantly larger than the active region of the detector, which allow for the rejection of muons entering the detector at an angle with respect to the LOS. The next scintillator station is the Timing station with two scintillator counters that separately cover the top and bottom half of the detector (with a small overlap) installed in front of the tracking spectrometer, used for triggering and timing measurements. Finally, the Pre-shower Figure 1: A sketch of the FASER detector, showing the different detector systems as well as the signature of a dark photon (\(A^{\prime}\)) decaying to an electron-positron pair inside the decay volume. The white blobs depict where measurements are taken for the \(A^{\prime}\) signal and the solid red lines represent the reconstructed tracks produced by the \(e^{+}e^{-}\) pair. station is in front of the calorimeter and constructed from two scintillator counters with both a graphite absorber and a tungsten radiator in front of each counter. The tracking spectrometer is built from three tracking stations, each with three layers of double-sided silicon microstrip detectors, interleaved with two 1 m-long 0.57 T dipole magnets. The tracker sensors are SCT barrel modules from the ATLAS experiment [14], which have a hit position resolution of about 20 \(\mu\)m in the precision coordinate, and about 0.6 mm in the other coordinate. Each tracker plane contains eight SCT modules, arranged as a 24 \(\times\) 24 cm\({}^{2}\) square in the transverse plane. The magnets bend charged tracks in the vertical direction, corresponding to the precision coordinate of the tracker. The FASER tracker is described in more detail in Ref. [15]. The electromagnetic energy of particles is measured by an electromagnetic calorimeter, the most downstream component of the detector. The calorimeter is constructed from four outer ECAL modules from the LHCb experiment [16]. Each module is 12 \(\times\) 12 cm\({}^{2}\) in the transverse plane, with 66 layers of interleaved 4 mm thick plastic scintillator and 2 mm thick lead plates, corresponding to a total of 25 radiation lengths. A module has 64 wavelength-shifting fibers that penetrate the length of the module and end in a photomultiplier tube (PMT). The readout of the PMTs saturates for large pulses corresponding to energy deposits above 3 TeV. 
From July to August 2022 the readout was set to saturate at 300 GeV for commissioning purposes, and this data is excluded from the dark photon search. The calorimeter energy resolution has been measured with high energy electrons in a testbeam, provided by the CERN SPS and carried out in July 2021 [17], to be \(\mathcal{O}(1\%)\). Readout is triggered by signals from the scintillators or calorimeter system, with a typical trigger rate of 1 kHz dominated by high energy muons from IP1. The average detector deadtime was 1.3%, which is accounted for when calculating the luminosity collected by FASER. The trigger and data acquisition systems are described in more detail in Ref. [18]. ## IV Dataset and Simulation Samples This search uses 27.0 fb\({}^{-1}\) of Run 3 collision data collected by FASER between September and November 2022. The luminosity of the dataset is provided by the ATLAS experiment [19; 20; 21]. Monte Carlo (MC) simulation samples are used to evaluate the signal efficiency, in the estimation of background yields, and to calculate the systematic uncertainties. All samples are simulated using GEANT4[22] with a perfectly aligned and detailed description of the detector geometry, including passive material. The samples include a realistic level of detector noise, and are reconstructed in the same way as the data. Signal events are generated using FORESEE[23] with the EPOS-LHC[24] generator to model very forward \(\pi^{0}\) and \(\eta\) meson production in the LHC collisions. The production of dark photons via dark bremsstrahlung is also included, which is modelled using the Fermi-Weizsacker-Williams approximation following Ref. [25] with the additional requirement of \(p_{\rm T}(A^{\prime})<1\) GeV to ensure the validity of the calculation. Numerous \(A^{\prime}\) and \(A^{\prime}_{B-L}\) signal samples are generated covering the relevant ranges in both coupling and mass. A high-statistics high-energy muon sample with \(2\times 10^{8}\) events entering FASER from IP1 is used for several background and systematic uncertainty studies. The sample uses the expected energy and angle of the muons as estimated by FLUKA[26; 27; 28] simulations of incoming muons from IP1. The samples include a detailed description of the LHC components and infrastructure between IP1 and FASER. A similar sample of \(8\times 10^{5}\) large-angle (15-60 mrad) muon events generated slightly upstream of the VetoNu scintillators, and with a radius spanning 15-30 cm covering the edge region of the scintillators, is produced and used to study the background from large-angle muons that miss the veto system. Neutrino interactions in FASER [29] are simulated by the GENIE[30; 31] generator, following the fluence, energy spectrum and flavour composition obtained in Ref. [32]. The sample used for the neutrino background study corresponds to 300 ab\({}^{-1}\) of data, and only includes neutrino interactions upstream of the Veto scintillators and in the active detector area. ## V Event Reconstruction Event reconstruction is performed using FASER's Calypso [33] offline software system, based on the open-source Athena framework [34; 35] from the ATLAS experiment. Charged particle track reconstruction is performed using the combinatorial Kalman filter from the ACTS library [36]. 
When reconstructing multiple tracks, it is required that they do not share more than 6 clusters of hits in contiguous silicon strips on each side of an SCT module; if the number of shared hits exceeds this threshold, then the track with the higher \(\chi^{2}\) is discarded. A track-based alignment of the tracking detector is performed using an iterative local \(\chi^{2}\) alignment method, and shows an improved agreement in the hit residual and track \(\chi^{2}\) distributions when comparing to the perfectly aligned MC. The alignment only considers the most sensitive distortions, translations in the precision tracker coordinate (vertical) and rotations around the longitudinal axis, at both the individual module and tracking layer level. Extracting the PMT charge from the scintillator and calorimeter modules is done by summing the digitised waveform values after pedestal subtraction. The calorimeter charge-to-energy scale calibration is determined using high energy electron and muon beams from the testbeam data described in Sec. III. To take into account differences between the detector configurations in the testbeam and in collision data, the most probable calorimeter charge deposited by muons as minimum ionising particles (MIPs) is used as an in-situ normalisation of the energy scale. Special calibration runs are performed at high calorimeter gain to measure the MIP signal. After individually normalising each calorimeter module signal to the MIP scale, the testbeam data is used to estimate the initial electromagnetic energy of the particle entering the calorimeter. ## VI Event Selection The typical \(A^{\prime}\) detector signature, shown in Figure 1, provides a unique signature to investigate. Since the \(A^{\prime}\) is weakly interacting, no signal is expected in the veto scintillator systems. The \(A^{\prime}\) can then decay in the decay volume to a very collimated, high momentum, \(e^{+}e^{-}\) pair, leaving two closely-spaced oppositely-charged particle tracks in the tracker. The \(e^{+}e^{-}\) then leave signals in both the Timing and Pre-shower scintillators as well as a large energy deposit in the calorimeter. There are no significant SM processes that can mimic this signature, allowing for a close-to background-free search. To avoid unconscious bias affecting the analysis, a blinding procedure is applied to events where there is both no signal in any veto scintillator and the calorimeter energy is above 100 GeV. The event selection, background estimation and systematic uncertainties are then finalised before looking in this signal-dominated region of the data. 
The signal region event selection requires the following: * event time is consistent with a colliding bunch at IP1; * no signal in any of the five veto scintillators; * required to be less than half that expected from a MIP * signal in the scintillators that are downstream of the decay volume; * required to be compatible with or larger than expected for two MIPs * two fiducial reconstructed tracks of good quality; * a good quality track has a track fit \(\chi^{2}/\)(number of degrees of freedom) \(<\) 25, at least 12 hits on track, and a momentum \(>\) 20 GeV * a fiducial track has an extrapolated position of \(<\) 9.5 cm radius at all scintillators and tracking stations * total calorimeter energy greater than 500 GeV; The efficiency of this selection on a representative signal model in the parameter space where the analysis is most sensitive (\(\epsilon=3\times 10^{-5}\), \(m_{A^{\prime}}=25.1\,\mathrm{MeV}\)) was found to be about 50% for dark photons that decay in the decay volume (where the probability of the dark photon to decay while within the volume is \(\mathcal{O}(10^{-3})\)), with the largest inefficiency arising from the two track requirement. A requirement that the Timing scintillator trigger fired ensures that the trigger efficiency, measured using orthogonal triggers on two-track events, is 100% for the \(A^{\prime}\) phase space of interest. The probability to veto a signal event, due to the presence of an uncorrelated beam-background muon in the same or neighbouring bunch crossing, is estimated to be less than 1 per mille. ## VII Backgrounds Several sources of background are considered in the analysis. The dominant background arises from neutrino interactions in the detector, while other processes such as neutral hadrons entering the detector, or from muons that miss the veto scintillator systems but enter the detector volume, also contribute to the background. Inefficiencies in the veto scintillators can lead to an instrumental background from unvetoed muons entering the detector volume. Finally, non-collision backgrounds from cosmic-rays or nearby LHC beam interactions are also considered. The contribution of each of these background sources is described and quantified in the following sub-sections. ### Background Due to Veto Inefficiency The inefficiency of each of the five planes of veto scintillators is measured independently with data, by selecting events in which there is a single good fiducial reconstructed track and then measuring the fraction of such events in which the scintillator charge is below that of a MIP signal. Thanks to the thick scintillators and tight fiducial track requirements, the inefficiencies are at the \(10^{-5}\) level or smaller. Since the planes are independent, this leads to a combined veto inefficiency of smaller than \(10^{-20}\). As \(\mathcal{O}(10^{8})\) incoming muons are observed in the 2022 dataset, the background due to the veto inefficiency is taken to be negligible. ### Background from Neutral Hadrons Neutral hadrons produced in muon interactions in the rock in front of FASER can be a possible source of background if, when passing through the veto systems undetected and interacting or decaying inside the detector decay volume, they produce exactly two reconstructed charged particle tracks and a calorimeter energy deposit above 500 GeV. 
This background is heavily suppressed by the need for the neutral hadron to traverse the full eight interaction lengths of the FASER\(\nu\) detector, and by the need for the parent muon to scatter to miss the veto scintillators. To determine the fraction of neutral hadron events that deposit at least 500 GeV of energy in the calorimeter, a three-track control region is used, where the parent muon enters the detector and is reconstructed along with the neutral hadron decay products. In these three-track events, the ratio of events with low calorimeter energy (\(E<100\) GeV) to high energy (\(E>500\) GeV) is used to scale the number of events with two reconstructed tracks (in which the parent muon is not present in the detector) at low-energy (\(E<100\) GeV) to estimate the expected background number of two-track events with \(E>500\) GeV. To allow sufficient event counts in the two-track low-energy control region, the veto requirements are relaxed, requiring no signal in the VetoNu scintillators, but with no requirements on the other Veto scintillator signals. Photon conversion events (with the accompanying parent muon) constitute a significant fraction of the three-track sample defined above and must be removed. This is done by requiring that the invariant mass of the two lowest momentum tracks, where the muon is assumed to be the highest momentum track, is greater than 200 MeV, which was found to be optimal when separating \(K_{S}\) events from photon conversions in MC simulation. After discarding the photon conversion events, the number of data events in the low- and high-energy three-track regions are 404 and 19 respectively. This ratio is used to extrapolate the one event observed in the low-energy two-track region to the two-track high energy region, resulting in an estimate of 0.047 expected events. This method provides an estimate of the number of neutral hadron events that lead to two reconstructed tracks, contain more than 500 GeV of calorimeter energy, and leave no signal in the VetoNu scintillators. To obtain the final background estimate, the results are corrected to account for the fact that the signal region selection requires no signal in the downstream Veto station as well. The correction is derived by studying the signal recorded in the Veto station using three-track events. With a clear separation in the Veto scinilator signal size for when only one track (the parent muon) traverses the Veto station versus when the other two tracks also leave a signal in the Veto station, the scintillators can be used to select both types of events and the ratio of the number of events in the two cases is used as the correction. After correcting for the fraction of events that will decay or interact before the second veto system, a final estimate of \((8.4\pm 11.9)\times 10^{-4}\) events is found; where the 100% statistical uncertainty is driven by the single event observed in the low-energy two-track data region, and an additional 100% systematic uncertainty is applied to account for the assumptions in the method. In performing this estimation, potential neutrino background to the low-energy two-track data region, predicted to be \(3.6\pm 3.8\) events from GENIE simulation, is conservatively neglected. ### Background from Large-Angle Muons Another potential background source arises from large-angle muons that miss the veto system and then enter the FASER decay volume. 
This background is heavily suppressed by the fact that the tracks extrapolated to the front veto scintillators are required to be within the fiducial volume. The MC sample with large-angle muons generated at the edge of the scintillators, described in section IV, is used to study this background. No two-track events are seen in this sample, even before applying the fiducial requirements on the extrapolated tracks or the calorimeter energy requirement, suggesting that this background is negligible in the final analysis. This was validated via a data-driven method by using events with a signal in the veto scintillators and calculating the ratio of the number of such events with \(>500\) GeV or \(<500\) GeV in the calorimeter, which is then used to extrapolate from the number of events with no signal in the veto scintillators and \(<\) 500 GeV in the calorimeter to the number of events with no signal in the veto and \(>\) 500 GeV in the calorimeter. The results of this validation are consistent with those from the MC estimate, providing confidence that this background is negligible. ### Background from Neutrinos The large flux of high energy neutrinos, whose interaction cross section rises with energy, at the FASER location constitutes an important background, since the neutrinos do not leave any signal in the veto scintillators, and can interact to produce high energy particles. To suppress this background, the detector was designed to minimise the amount of material in the main detector volume. The expected background from neutrino interactions inside the detector is estimated using the 300 ab\({}^{-1}\) (\(\sim 10000\times\) larger than the data used in this analysis) neutrino MC sample described in section IV. The MC simulation shows that 0.0015 neutrino events (0.0012 electron (anti)neutrino events and 0.0003 muon (anti)neutrino events) pass the signal region selection when scaled to 27.0 fb\({}^{-1}\) of data, with these interactions occurring in the Timing scintillator station or the first tracking station. Figure 2 shows the calorimeter energy distribution for neutrino events that pass the signal region selection when disregarding the requirement on the calorimeter energy. The figure shows that a requirement of \(\geq\) 500 GeV gives a good suppression of the neutrino background. The uncertainty on the incoming neutrino flux [32] is taken to be 100% for electron neutrinos and 25% for muon neutrinos, and an additional 100% uncertainty is applied to account for the effect of uncertainties in the modelling of neutrino interactions. The total neutrino background estimate when scaled to 27.0 fb\({}^{-1}\) is (1.5 \(\pm\) 0.5 (stat.) \(\pm\) 1.9 (syst.))\(\times 10^{-3}\) events. ### Background from Non-collision events The background from cosmic rays and the non-colliding beam background are considered by studying events collected at times when there are no colliding bunches in IP1. Cosmic rays are studied during 330 hours of data-taking with no beam in the machine, which corresponds to Figure 2: The calorimeter energy in simulated neutrino events passing all signal selection requirements, besides that on the calorimeter energy. GENIE is used to simulate the neutrino interactions. The figure is scaled to a luminosity of 27.0 fb\({}^{-1}\). a similar running time to the full 2022 physics data-taking period. During this time, no event is observed with a calorimeter energy deposit above 100 GeV, and no events are found when requiring at least one good quality track. 
The beam background from LHC beam-1, the incoming beam to ATLAS in the FASER location, is the most relevant for FASER. Beam-1 interactions with gas or tails of the beam interacting with the beampipe aperture can lead to particles boosted in the direction of FASER, where low-energy activity is observed in correlation with beam-1 bunches passing the back of the detector. This beam background is studied by checking the detector activity in events with the relevant bunch timing, but which do not correspond to colliding bunches at IP1. It is found that beam background events without signal in the veto scintillators do not have a good reconstructed track, and for these events without a track, there are zero events with calorimeter energies above 400 GeV. The overall contribution from non-collision backgrounds is therefore considered to be negligible. ### Summary of the Expected Background As background contributions from the veto inefficiency, large-angle muons, and non-collision events are estimated to provide a negligible contribution in the signal region, the total expected background is obtained by combining just the neutrino and neutral hadron estimates, leading to a total background of \((2.3\pm 2.3)\times 10^{-3}\) events. ## VIII Systematic Uncertainties on the Signal Yield Systematic uncertainties on the expected signal yields arise from several sources. The uncertainty in the integrated luminosity is provided by the ATLAS collaboration, and is 2.2% [21], following the methodology discussed in Ref. [19]. The statistical uncertainty from the number of MC simulated signal events is included and ranges from 1 to 3%. Spin correlations between production and decay are not included in the MC simulated signal, but their effect on this search is negligible [37]. The remainder of the systematic uncertainties, discussed below, arise from the signal generator and from the modelling of the detector response in the MC simulation. Uncertainties on the number of signal events decaying inside the FASER decay volume are derived by comparing the estimates from using different event generators to model very forward \(\pi^{0}\) and \(\eta\) meson production in the LHC collisions. Comparing signal yields from QGSJET II-04[38] and SIBYLL 2.3d[39] with the central estimate from the EPOS-LHC[24] generator, where these generators have been validated using LHCf's forward photon measurements [40], provides an envelope of estimates as a function of the energy of the signal (\(E(A^{\prime})\)), that is parameterized and used as the uncertainty: \[\frac{\Delta N}{N}=\frac{0.15+(E(A^{\prime})/4\ \text{TeV})^{3}}{1+(E(A^{ \prime})/4\ \text{TeV})^{3}}. \tag{4}\] The parameterization also envelops the uncertainty on the signal predictions due to changing the \(p_{\text{T}}\)-cutoff in modelling of the dark bremsstrahlung as described in Sec. IV. Figure 3 presents the \(A^{\prime}\) energy distribution as estimated by the different generators for a representative signal model. The parameterisation is checked for numerous signal models spanning the relevant phase space for both the \(A^{\prime}\) and \(A^{\prime}_{B-L}\) gauge bosons, and is found to be in good agreement with the envelope of the generators. The remaining uncertainties arise from the modelling of the detector response in the MC simulation, which is used to calculate the signal yield. 
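Returning briefly to the generator uncertainty, Eq. (4) can be tabulated at a few representative dark photon energies to see how steeply it grows (a minimal sketch; the chosen energies are arbitrary illustration points):

```python
def generator_uncertainty(E_TeV):
    """Relative signal-yield uncertainty from Eq. (4)."""
    x = (E_TeV / 4.0) ** 3
    return (0.15 + x) / (1.0 + x)

for E in [0.5, 1, 2, 4, 6]:
    print(f"E(A') = {E} TeV -> {100 * generator_uncertainty(E):.1f}%")
# ~15% at low energies, ~16% at 1 TeV, rising to ~58% at 4 TeV and ~81% at 6 TeV
```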
The scintillator efficiencies are measured to be 100% in both data and MC for the \(A^{\prime}\) signal, based on the scintillator PMT charge observed in events with two reconstructed tracks, thus no corresponding uncertainty is assigned. The calorimeter energy scale calibration, as described in section V, is applied to both the data and MC simulation identically. The stability of the calorimeter system across the data taking period is tested with regular calibrations using an LED pulse injected into the calorimeter modules [9]. A conservative analysis taking into account all components of the energy calibration leads to a 6% uncertainty on the difference in the calibration of the energy scale between data and MC simulation. This uncertainty is checked in data by using the \(E/p\) distribution in three-track events, which are dominated by photon conversions initiated by high-energy muons. Only the two lowest momentum tracks are considered when calculating the \(E/p\) since the highest momentum track is assumed to be the muon. The reconstructed \(E/p\) peak position in data and MC simulation is consistent and well within the 6% uncertainty across the momentum range probed, as shown in Figure 4. The uncertainty due to the tracking efficiency of single tracks is assessed by comparing the relative efficiency for finding tracks in events with a single track segment in each of the three tracking stations between data and MC simulation. This yields a 1.5% uncertainty per track. The track finding procedure is more complex when there are two closely-spaced tracks, as in the signal, in particular when the tracks share hits. The uncertainty due to this is assessed by overlaying the raw tracker data from two different events, each of which has a single reconstructed track. The track reconstruction is then re-run on the combined event, built from the two overlaid events, so that the tracking efficiency can be calculated. This is performed using both data and simulation, shown in Figure 5, where the ratio of the efficiency between the two, as a function of the distance between the two tracks at their first measurements, is used to assess the uncertainty. The efficiency in data is up to 7% less than in MC simulation, at track separations comparable to that expected in the \(A^{\prime}\) signals, hence a 7% correction to the two-track tracking efficiency is Figure 3: The energy spectrum of dark photons in FASER produced with meson production modeled by different generators (EPOS-LHC, QGSJET II-04 and SIBYLL 2.3d). Also shown is production from bremsstrahlung with a factor of two variation in the \(p_{\mathrm{T}}\) cut off. The bottom panel shows the ratio between the different estimates, and the parameterisation of the uncertainty as a function of energy. A representative signal model (with m\({}_{A^{\prime}}\)=50 MeV and \(\epsilon\)=3 \(\times\) 10\({}^{-5}\)) is shown. applied, with a corresponding systematic uncertainty, assumed to be the difference between the nominal and corrected efficiency, applied. The track momentum scale and resolution uncertainty is derived by comparing the mass peak of photon conversion events in data and MC simulation. Upon comparison, both a shift or Gaussian smear of the MC track momentum by 5% were shown to more than account for the difference in the photon conversion mass peak position between data and MC, leading to a conservative uncertainty of 5% on both the track momentum scale and momentum resolution. 
Table 1 summarises the various sources of uncertainty on the signal, showing the size of the individual uncertainties, and the range of the effect on the overall uncertainty on the signal yield. ## IX Results After applying the signal selection described in Sec. VI, zero events are observed in the data, which is compatible with the expected background of (2.3 \(\pm\) 2.3) \(\times 10^{-3}\) events. Figure 6 shows the calorimeter energy distribution for data and three representative signal models at different stages of the signal region selection on the veto scintillator and track information. \begin{table} \begin{tabular}{|c|c|c|} \hline Source & Value & Effect on signal yield \\ \hline Signal Generator & \(\frac{0.15+(E_{A^{\prime}}/4\,\mathrm{TeV})^{3}}{1+(E_{A^{\prime}}/4\,\mathrm{TeV})^{3}}\) & 15-65\% (15-45\%) \\ Luminosity & 2.2\% & 2.2\% \\ MC Statistics & \(\sqrt{\sum W^{2}}\) & 1-3\% (1-2\%) \\ Track Momentum Scale & 5\% & \(<\) 0.5\% \\ Track Momentum Resolution & 5\% & \(<\) 0.5\% \\ Single Track Efficiency & 3\% & 3\% \\ Two-track Efficiency & 7\% & 7\% \\ Calo E scale & 6\% & 0-8\% (\(<\) 1\%) \\ \hline \end{tabular} \end{table} Table 1: Summary of the systematic uncertainties on the signal yield. For each of the sources of uncertainty, the source and size of the uncertainty is presented. The effect on the signal yield across the full signal parameter space probed is also shown. The numbers in parentheses indicate the effect on the signals within the parameter space for which this analysis is sensitive. Figure 4: The Gaussian-fitted peak position of the \(E/p\) in data and MC simulation as a function of the momentum of photon conversion candidates. There are events that have no veto signal and at least one track, but the calorimeter energies are well below the 500 GeV threshold; and there are no events upon further requiring two fiducial tracks. As no significant excess of events over the background is observed, the results are used to set exclusion limits in the signal scenarios considered. The exclusion limits are computed using a profile likelihood approach implemented via the HistFitter framework [41], and are set at 90% confidence level to allow for direct comparison with constraints from other experiments. Hypothesis tests are performed using profile likelihood test statistics [42] and the CLs method [43] to test the exclusion of new physics scenarios. For dark photons, the analysis excludes signal models in the range \(\epsilon\sim 4\times 10^{-6}-2\times 10^{-4}\) and \(m_{A^{\prime}}\sim 10~{}{\rm MeV}-80~{}{\rm MeV}\), and provides the world-leading exclusion for scenarios in the range \(\epsilon\sim 2\times 10^{-5}-1\times 10^{-4}\) and \(m_{A^{\prime}}\sim 17~{}{\rm MeV}-70~{}{\rm MeV}\). Figure 7(a) shows the \(A^{\prime}\) exclusion limit in the signal parameter space, where the grey regions are already excluded by experimental data from BaBar [44], E141 [45], NA48 [46], NA64 [47], Orsay [48; 49], and NuCal [25; 50], which are adapted from DarkCast [51]. A key reason for investigating dark photons is their potential as intermediaries between the SM and a dark sector. In particular, they allow for obtaining the correct value of the dark matter relic density, \(\Omega_{\chi}^{\rm total}h^{2}\simeq 0.12\)[52], via the thermal freeze-out mechanism. In Figure 7(a), an example thermal relic contour is included, obtained for the scenario where the dark photons couple to a light complex scalar dark matter field \(\chi\)[23].
In particular, this line assumes that the mass ratio Figure 5: (Top) The two-track reconstruction efficiency versus track separation for overlaid tracks in both data and MC events are shown. The distribution of the separation in \(e^{+}e^{-}\) tracks of an \(A^{\prime}\) sample is also shown in red with the axis on the right-hand side. (Bottom) The ratio of the overlay tracking efficiencies between MC and data is depicted. between the dark matter candidate and the dark photon is always equal to \(m_{\chi}/m_{A^{\prime}}=0.6\) and that the dark photon coupling constant to dark matter has a fixed value of \(\alpha_{D}=0.1\). This mass ratio guarantees that the dark photon decays visibly into the SM species and that the dark matter primarily annihilates via \(\chi\chi\to A^{\prime}\to ff\). Variations of both the coupling and mass ratio in the dark sector are possible and will lead to a shift of the relic target line. Notably, in the context of this particular dark matter model, the region below the target line would have an over-abundance of dark matter and would be excluded cosmologically: FASER therefore probes a significant fraction of the cosmologically-allowed region of parameter space. The exclusion contours for the \(B-L\) gauge boson are shown in Figure 7(b), where FASER provides the first exclusion for models in the range \(g_{B-L}\sim 5\times 10^{-6}-2\times 10^{-5}\) and \(m_{A^{\prime}_{B-L}}\sim 15~{}\mathrm{MeV}-40~{}\mathrm{MeV}\), with a total region between \(g_{B-L}\sim 3\times 10^{-6}-4\times 10^{-5}\) and \(m_{A^{\prime}_{B-L}}\sim 10~{}\mathrm{MeV}-50~{}\mathrm{MeV}\) excluded. In grey are the regions already excluded by experimental data from Orsay [48; 49] and NuCal [25; 50] as adapted from DarkCast [51], as well as from a dedicated search for invisible final states by NA64 [53]. In this model, the region probed by FASER is also cosmologically relevant. Assuming a dark matter particle \(\chi\) with a mass in the range of \(0.5\times m_{A^{\prime}_{B-L}}<m_{\chi}<m_{A^{\prime}_{B-L}}\) and a very large \(B-L\) charge, the region of parameter space favored by thermal freeze-out includes regions of parameter space that are now excluded by the new FASER constraint [54; 55]. Alternatively, since the \(B-L\) model necessarily includes 3 sterile neutrinos, it is natural to consider the possibility that these sterile neutrinos are the dark matter. These sterile Figure 6: The calorimeter energy distribution for data and three representative MC simulated signal models are shown for (a) all events with at least one good track, (b) events that have no signal in the veto stations and at least one good track, and (c) events that have no signal in the veto stations and exactly two good fiducial tracks. The distributions and expected events from the MC samples are scaled to 27.0 fb\({}^{-1}\). neutrinos may be produced through the freeze-in mechanism, and the resulting relic density may be significant in the regions of parameter space probed by FASER [55; 56; 57]. ## X Conclusions The first search for dark photons by the FASER experiment has been presented, providing a proof of principle that very low background searches for long-lived particles in the very forward region are possible at the LHC. The search applies an event selection requiring no signal in the veto scintillator systems, two good quality reconstructed charged particle tracks and more than 500 GeV of energy deposited in the calorimeter. 
No events are observed passing the selection, with an expected background of (2.3 \(\pm\) 2.3) \(\times 10^{-3}\) events. At the 90% confidence level, FASER excludes the region of \(\epsilon\sim 4\times 10^{-6}-2\times 10^{-4}\) and \(m_{A^{\prime}}\sim 10~{}\mathrm{MeV}-80~{}\mathrm{MeV}\) in the dark photon parameter space, as well as the region of \(g_{B-L}\sim 3\times 10^{-6}-4\times 10^{-5}\) and \(m_{A^{\prime}_{B-L}}\sim 10~{}\mathrm{MeV}-50~{}\mathrm{MeV}\) in the \(B-L\) gauge boson parameter space. In both the dark photon and \(B-L\) gauge boson models, these results are one of the first probes of these regions of parameter space since the 1990's, and they exclude previously-viable models motivated by dark matter. ## XI Acknowledgments We thank CERN for the very successful operation of the LHC during 2022. We thank the technical and administrative staff members at all FASER institutions for their contributions to the success of the FASER project. We thank the ATLAS Collaboration for providing us with accurate luminosity estimates for the used Run 3 LHC collision data. FASER gratefully acknowledges the donation of spare ATLAS SCT modules and spare LHCb calorimeter modules, without which the experiment would not have been possible. We also acknowledge the ATLAS collaboration software, Athena, on which FASER's offline software system is based [34] and the ACTS tracking software framework [36]. Finally we thank the CERN STI group for providing detailed FLUKA simulations of the muon fluence along the LOS, which have been used in this analysis. This Figure 7: 90% confidence level exclusion contours in (a) the dark photon and (b) the \(B-L\) gauge boson parameter space are shown. Regions excluded by previous experiments are shown in grey. The red line shows the region of parameter space that yields the correct dark matter relic density, with the assumptions discussed in the text. work was supported in part by Heising-Simons Foundation Grant Nos. 2018-1135, 2019-1179, and 2020-1840, Simons Foundation Grant No. 623683, U.S. National Science Foundation Grant Nos. PHY-2111427, PHY-2110929, and PHY-2110648, JSPS KAKENHI Grants Nos. JP19H01909, JP20K23373, JP20H01919, JP20K04004, and JP21H00082, BMBF Grant No. 05H20PDRC1, DFG EXC 2121 Quantum Universe Grant No. 390833306, ERC Consolidator Grant No. 101002690, Royal Society Grant No. URF\R1\201519, UK Science and Technology Funding Councils Grant No. ST/ T505870/1, the National Natural Science Foundation of China, Tsinghua University Initiative Scientific Research Program, and the Swiss National Science Foundation.
2305.04665
Riesz networks: scale invariant neural networks in a single forward pass
Scale invariance of an algorithm refers to its ability to treat objects equally independently of their size. For neural networks, scale invariance is typically achieved by data augmentation. However, when presented with a scale far outside the range covered by the training set, neural networks may fail to generalize. Here, we introduce the Riesz network, a novel scale invariant neural network. Instead of standard 2d or 3d convolutions for combining spatial information, the Riesz network is based on the Riesz transform which is a scale equivariant operation. As a consequence, this network naturally generalizes to unseen or even arbitrary scales in a single forward pass. As an application example, we consider detecting and segmenting cracks in tomographic images of concrete. In this context, 'scale' refers to the crack thickness which may vary strongly even within the same sample. To prove its scale invariance, the Riesz network is trained on one fixed crack width. We then validate its performance in segmenting simulated and real tomographic images featuring a wide range of crack widths. An additional experiment is carried out on the MNIST Large Scale data set.
Tin Barisin, Katja Schladitz, Claudia Redenbach
2023-05-08T12:39:49Z
http://arxiv.org/abs/2305.04665v2
# Riesz networks: scale invariant neural networks ###### Abstract Scale invariance of an algorithm refers to its ability to treat objects equally independently of their size. For neural networks, scale invariance is typically achieved by data augmentation. However, when presented with a scale far outside the range covered by the training set, neural networks may fail to generalize. Here, we introduce the Riesz network, a novel scale invariant neural network. Instead of standard 2d or 3d convolutions for combining spatial information, the Riesz network is based on the Riesz transform which is a scale equivariant operation. As a consequence, this network naturally generalizes to unseen or even arbitrary scales in a single forward pass. As an application example, we consider detecting and segmenting cracks in tomographic images of concrete. In this context,'scale' refers to the crack thickness which may vary strongly even within the same sample. To prove its scale invariance, the Riesz network is trained on one fixed crack width. We then validate its performance in segmenting simulated and real tomographic images featuring a wide range of crack widths. An additional experiment is carried out on the MNIST Large Scale data set. **Keywords:** scale invariance, Riesz transform, neural networks, crack segmentation, generalization to unseen scales, computed tomography, concrete ## 1 Introduction In image data, similar objects may occur at highly varying scales. Examples are cars or pedestrians at different distances from the camera, cracks in concrete of varying thickness or imaged at different resolution, or blood vessels in biomedical applications (see Fig. 1). It is natural to assume that the same object or structure at different scales should be treated equally i.e. should have equal or at least similar features. This property is called scale or dilation invariance and has been investigated in detail in classical image processing [1, 2, 3]. Neural networks have proven to segment and classify robustly and well in many computer vision tasks. Nowadays, the most popular and successful neural networks are Convolutional Neural Networks (CNNs). It would be desirable that neural networks share typical properties of human vision such as translation, rotation, or scale invariance. While this is true for translation invariance, CNNs are not scale or rotation invariant by default. This is due to the excessive use of convolutions which are local operators. Moreover, training sets often contain a very limited number of scales. To overcome this problem, CNNs are often trained with rescaled images through data augmentation. However, when a CNN is given input whose scale is outside the range covered by the training set, it will not be able to generalize [4, 5]. To overcome this problem, a CNN trained at a fixed scale can be applied to several rescaled versions of the input image and the results can be combined. This, however, requires multiple runs of the network. One application example, where the just described challenges naturally occur is the task of segmenting cracks in 2d or 3d gray scale images of concrete. Crack segmentation in 2d has been a vividly researched topic in civil engineering, see [6] for an overview. Cracks are naturally multiscale structures (Fig. 1, left) and hence require multiscale treatment. Nevertheless, adaption to scale (crack thickness1) has not been treated explicitly so far. 
Footnote 1: Crack scale, thickness, and width refer to the same characteristic and will be interchangeably used throughout the paper. Recently, crack segmentation in 3d images obtained by computed tomography (CT) has become a subject of interest [6, 7]. Here, the effect of varying scales is even more pronounced [8]: crack thicknesses can vary from a single pixel to more than 100 pixels. Hence, the aim is to design and evaluate crack segmentation methods that work equally well on all possible crack widths without complicated adjustment by the user. In this work, we focus on 2d multiscale crack segmentation in images of concrete samples. We design the Riesz network which replaces the popular 2d convolutions by first and second order Riesz transforms to allow for a scale invariant spatial operation. The resulting neural network is provably scale invariant in only one forward pass. It is sufficient to train the Riesz network on one scale or crack thickness, only. The network then generalizes automatically without any adjustments or rescaling to completely unseen scales. We validate the network performance using images with simulated cracks of constant and varying widths generated as described in [6, 10]. Our network is compared with competing methods for multiscale segmentation and finally applied to real multiscale cracks observed in 2d slices of tomographic images. There is just one publicly available dataset which allows for testing scale equivariance - MNIST Large Scale [5]. Additional experiments with the Riesz network on this dataset are reported in Appendix A. ### Related work #### The Riesz transform The Riesz transform is a generalization of the Hilbert transform to higher dimensional spaces, see e.g. [11]. First practical applications of the Riesz transform arise in signal processing through the definition of the monogenic signal [12] which enables a decomposition of higher dimensional signals into local phase and local amplitude. First, a bandpass filter is applied to the signal to separate the band of frequencies. Using the Riesz transform, the local phase and amplitude can be calculated for a selected range of frequencies. For more details we refer to [12, 13]. As images are 2d or 3d signals, applications of the Riesz transform naturally extend to the fields of image processing and computer vision through the Poisson scale space [14, 15] which is an alternative to the well-known Gaussian scale space. Kothe [16] compared the Riesz transform with the structure tensor from a signal processing perspective. Unser and van de Ville [17] related higher order Riesz transforms and derivatives. Figure 1: Examples of similar objects appearing on different scales: section of a CT image of concrete showing a crack of locally varying thickness (left) and pedestrians at difference distances from the camera (right, taken from [9]). Furthermore, they give a reason for preferring the Riesz transform over the standard derivative operator: The Riesz transform does not amplify high frequencies. Higher order Riesz transforms were also used for analysis of local image structures using ideas from differential geometry [18, 19]. Benefits of using the first and second order Riesz transforms as low level features have also been shown in measuring similarity [20], analyzing and classification of textures [11, 21], and orientation estimation [22, 23]. 
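As an aside, such low-level Riesz features are easy to obtain numerically: the transform acts as a pointwise multiplication in the Fourier domain (its formal definition is recalled in Section 2 below). The following minimal sketch, assuming NumPy and a 2d input image (function and variable names are ours, not taken from the cited works), computes the first and second order Riesz transforms via the FFT.

```python
import numpy as np

def riesz_transforms_2d(image):
    """First and second order Riesz transforms of a 2d image, computed via the FFT.

    Implements the frequency domain definition F(R_j f)(u) = -i u_j / |u| F(f)(u)
    (the overall sign depends on the chosen convention).
    """
    f_hat = np.fft.fft2(image)
    u1 = np.fft.fftfreq(image.shape[0])[:, None]  # frequencies along axis 0
    u2 = np.fft.fftfreq(image.shape[1])[None, :]  # frequencies along axis 1
    norm = np.sqrt(u1 ** 2 + u2 ** 2)
    norm[0, 0] = 1.0                              # avoid division by zero at u = 0
    h1 = -1j * u1 / norm                          # transfer function of R_1
    h2 = -1j * u2 / norm                          # transfer function of R_2
    return {
        "r1":  np.fft.ifft2(h1 * f_hat).real,
        "r2":  np.fft.ifft2(h2 * f_hat).real,
        "r20": np.fft.ifft2(h1 * h1 * f_hat).real,  # R^(2,0)
        "r11": np.fft.ifft2(h1 * h2 * f_hat).real,  # R^(1,1)
        "r02": np.fft.ifft2(h2 * h2 * f_hat).real,  # R^(0,2)
    }
```

Up to the sign convention of the transfer function, the returned maps correspond to \(\mathcal{R}_{1}\), \(\mathcal{R}_{2}\) and \(\mathcal{R}^{(2,0)}\), \(\mathcal{R}^{(1,1)}\), \(\mathcal{R}^{(0,2)}\); note that no scale parameter appears anywhere.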
The Riesz transform can be used to create steerable wavelet frames, so-called _Riesz-Laplace wavelets_[17, 24], which are the first ones utilizing the scale equivariance property of the Riesz transform and have inspired the design of _quasi monogenic shearlets_[25]. Interestingly, in early works on the Riesz transform in signal processing or image processing [12, 13, 15], scale equivariance has not been noticed as a feature of the Riesz transform and hence remained sidelined. Benefits of the scale equivariance have been shown later in [17, 19]. Recently, the Riesz transform found its way into the field of deep learning: Riesz transform features are used as supplementary features in classical CNNs to improve robustness [26]. In our work, we will use the Riesz transforms for extracting low-level features from images and use them as basis functions which replace trainable convolutional filters in CNNs or Gaussian derivatives in [27]. #### Scale invariant deep learning methods Deep learning methods which have mechanisms to handle variations in scale effectively can be split in two groups based on their scale generalization ability. _Scale invariant deep learning methods for a limited range of scales_ The first group can handle multiscale data but is limited to the scales represented either by the training set or by the neural network architecture. The simplest approach to learn multiscale features is to apply the convolutions to several rescaled versions of the images or feature maps in every layer and to combine the results by maximum pooling [4] or by keeping the scale with maximal activation [28] before passing it to the next layer. In [29, 30], several approaches based on downscaling images or feature maps with the goal of designing robust multiscale object detectors are summarized. However, scaling is included in the network architecture such that scales have to be selected a priori. Therefore, this approach only yields local scale invariance, i.e. an adaption to the scale observed in a given input image is not possible after training. Another intuitive approach is to rescale trainable filters, i.e. convolutions, by interpolation [31]. In [29], a new multiscale strategy was derived which uses convolution blocks of varying sizes sequenced in several downscaling layers creating a pyramidal shape. The pyramidal structure is utilized for learning scale dependent features and making predictions in every downsampling layer. Layers can be trained according to object size. That is, only the part of the network relevant for the object size is optimized. This guarantees robustness to a large range of object scales. Similarly, in [30], a network consisting of a downsampling pyramid followed by an upsampling pyramid is proposed. Here, connections between pyramid levels are devised for combining low and high resolution features and predictions are also made independently on every pyramid level. However, in both cases, scale generalization properties of the networks are restricted by their architecture, i.e. by the depth of the network (number of levels in the image pyramid), the size of convolutions as spatial operators as well as the size of the input image. Spatial transformer networks [32] focus on invariance to affine transformations including scale. This is achieved by using a so-called _localisation network_ which learns transformation parameters. Finally, using these transformation parameters, a new sampling grid can be created and feature maps are resampled to it. 
These parts form a trainable module which is able to handle and correct the effect of the affine transformations. However, spatial transformer networks do not necessarily achieve invariant recognition [33]. Also, it is not clear how this type of network would generalize to scales not represented in the training set. In [34], so-called _structured receptive fields_ are introduced. Linear combinations (\(1\times 1\) convolutions) of basis functions (in this case Gaussian derivatives up to 4th order) are used to learn complex features and to replace convolutions (e.g. of size \(3\times 3\) or \(5\times 5\)). As a consequence, the number of parameters is reduced, while the expressiveness of the neural network is preserved. This type of network works better than classical CNNs in the case where little training data is available. However, the standard deviation parameters of the Gaussian kernels are manually selected and kept fixed. Hence, the scale generalization ability remains limited. Making use of the semi-group property of scale spaces, _scale equivariant neural networks_ motivate the use of _dilated convolutions_[35] to define scale equivariant convolutions on the Gaussian scale space [36] or morphological scale spaces [37]. Unfortunately, these neural networks are unable to generalize to scales outside those determined by their architecture and are only suitable for downscale factors which are powers of 2, i.e. \(\{2,4,8,16,\cdots\}\). Furthermore, scale equivariant steerable networks [38] show how to design scale invariant networks on the scale-translation group without using standard or dilated convolutions. Following an idea from [34], convolutions are replaced by linear combinations of basis filters (Hermite polynomials with Gaussian envelope). While this allows for non-integer scales, scales are still limited to powers of a positive scaling factor \(a\). Scale space is again discretized and sampled. Hence, a generalization to arbitrary scales is not guaranteed. #### Scale invariant deep learning methods for arbitrary scales The second group of methods can generalize to arbitrary scales, i.e. any scales that are in range bounded from below by image resolution and from above by image size, but not necessarily contained in the training set. Our Riesz network also belongs to this second group of methods. An intuitive approach is to train standard CNNs on a fixed range of scales and enhance their scale generalization ability by the following three step procedure based on image pyramids: downsample by a factor \(a>1\), forward pass of the CNN, upsample the result by \(\frac{1}{a}\) to the original image size [5, 8]. Finally, forward passes of the CNN from several downsampling factors \(\{a_{1},\ \cdots\,a_{n}\ >\ 0\quad|\quad n\in\mathbb{N}\}\) are aggregated by using the maximum or average operator across the scale dimension. This approach indeed guarantees generalization to unseen scales as scales can be adapted to the input image and share the weights of the network [5]. However, it requires multiple forward passes of the CNN and the downsampling factors have to be selected by the user. Inspired by Scattering Networks [39, 40], normalized differential operators based on first and second order Gaussian derivatives stacked in layers or a cascade of a network can be used to extract more complex features [41]. Subsequently, these features serve as an input for a classifier such as a support vector machine. 
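To make the image pyramid baseline described above concrete, a minimal sketch is given below, assuming PyTorch; `model`, the set of downscaling factors, and the choice of aggregation are placeholders rather than settings prescribed by the cited works. It illustrates why this route needs one forward pass per scale and a user-chosen scale set.

```python
import torch
import torch.nn.functional as F

def pyramid_predict(model, image, factors=(1, 2, 4, 8), aggregate="max"):
    """Multi-scale inference by image pyramids: downsample, predict, upsample, aggregate.

    image: tensor of shape (1, C, H, W); model: any segmentation network trained
    at a fixed scale. One forward pass is needed per downscaling factor.
    """
    h, w = image.shape[-2:]
    predictions = []
    for a in factors:
        if a > 1:
            small = F.interpolate(image, scale_factor=1.0 / a,
                                  mode="bilinear", align_corners=False)
        else:
            small = image
        with torch.no_grad():
            pred = model(small)                   # forward pass at this scale
        pred = F.interpolate(pred, size=(h, w), mode="bilinear", align_corners=False)
        predictions.append(pred)
    stacked = torch.stack(predictions, dim=0)
    if aggregate == "max":
        return stacked.max(dim=0).values
    return stacked.mean(dim=0)
```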
Varying the standard deviation parameter \(\sigma\) of the Gaussian kernel, generalization to new scales can be achieved. However, this type of network is useful for creating _handcrafted_ complex scale invariant features, only, and hence is not trainable. Its expansion to trainable networks by creating so-called Gaussian derivative networks [27] is one of the main inspirations for our work. For combining spatial information, \(\gamma\)-normalized Gaussian derivatives are used as scale equivariant operators (\(\gamma=1\)). Similarly as in [34], linear combinations of normalized derivatives are used to learn more complex features in the spirit of deep learning. During the training step, prior knowledge of the scale for every instance in the training set is required, i.e. the standard deviation parameter \(\sigma\) of the Gaussian kernel is set to reflect the scale of every instance, while the trainable weights are shared. In the inference step, the scale dimension needs to be discretized, sampled, and for each scale \(\sigma\), the forward pass of the network has to be executed. ## 2 The Riesz transform Let \(L_{2}(\mathbb{R}^{d})=\{f:\mathbb{R}^{d}\rightarrow\mathbb{R}\ |\ \int_{ \mathbb{R}^{d}}|f(x)|^{2}dx<\infty\}\) be the set of square integrable functions. Formally, for a \(d\)-dimensional signal \(f\in L_{2}(\mathbb{R}^{d})\) (i.e. an image or a feature map), the Riesz transform of first order \(\mathcal{R}=(\mathcal{R}_{1},\cdots,\mathcal{R}_{d})\) is defined in the spatial domain as \(\mathcal{R}_{j}:L_{2}(\mathbb{R}^{d})\to L_{2}(\mathbb{R}^{d})\) \[\mathcal{R}_{j}(f)(x)=C_{d}\lim_{\epsilon\to 0}\int_{\mathbb{R}^{d} \setminus B_{\epsilon}}\frac{y_{j}f(x-y)}{|y|^{d+1}}dy,\] where \(C_{d}=\Gamma((d+1)/2)/\pi^{(d+1)/2}\) is a normalizing constant and \(B_{\varepsilon}\) is ball of radius \(\varepsilon\) centered at the origin. Alternatively, the Riesz transform can be defined in the frequency domain via the Fourier transform \(\mathcal{F}\) \[\mathcal{F}(\mathcal{R}_{j}(f))(u)=-i\frac{u_{j}}{|u|}\mathcal{F}(f)(u)=\frac{ 1}{|u|}\mathcal{F}(\partial_{j}f)(u),\] for \(j\in\{1,\cdots,d\}\). Higher order Riesz transforms are defined by applying a sequence of first order Riesz transforms. That is, for \(k_{1},k_{2},...,k_{d}\in\mathbb{N}\cup\{0\}\) we set \[\mathcal{R}^{(k_{1},k_{2},...,k_{d})}(f)(x):=\mathcal{R}_{1}^{k_{1}}( \mathcal{R}_{2}^{k_{2}}(\cdots(\mathcal{R}_{d}^{k_{d}}(f(x)))),\] where \(\mathcal{R}_{j}^{k_{j}}\) refers to applying the Riesz transform \(\mathcal{R}_{j}\)\(k_{j}\) times in a sequence. The Riesz transform kernels of first and second order resemble those of the corresponding derivatives of smoothing filters such as Gaussian or Poisson filters (Fig. 2). This can be explained by the following relations \[\mathcal{R}(f)=(-1)(-\triangle)^{-1/2}\nabla f\] \[\mathcal{R}^{(k_{1},k_{2},...,k_{d})}(f)(x)=(-1)^{N}(-\triangle)^ {-N/2}\frac{\partial^{N}f(x)}{\partial^{k_{1}}x_{1}\cdots\partial^{k_{d}}x_{ d}},\] for \(k_{1}+...+k_{d}=N\) and \(N\in\mathbb{N}\). The fractional Laplace operator \(\triangle^{N/2}\) acts as an isotropic low-pass filter. The main properties of the Riesz transform can be summarized in the following way [17]: * **translation equivariance:** For \(x_{0}\in\mathbb{R}^{d}\) define a translation operator \(\mathcal{T}_{x_{0}}(f)(x):L_{2}(\mathbb{R}^{d})\to L_{2}(\mathbb{R}^{d})\) as \(\mathcal{T}_{x_{0}}(f)(x)=f(x-x_{0})\). 
It holds that \[\mathcal{R}_{j}(\mathcal{T}_{x_{0}}(f))(x)=\mathcal{T}_{x_{0}}(\mathcal{R}_{j}(f))(x),\] where \(j\in\{1,\cdots,d\}\). This property reflects the fact that the Riesz transform commutes with the translation operator. * **steerability:** The directional Hilbert transform \(\mathcal{H}_{v}:L_{2}(\mathbb{R}^{d})\to L_{2}(\mathbb{R}^{d})\) in direction \(v\in\mathbb{R}^{d}\), \(||v||=1\), is defined as \(\mathcal{F}(\mathcal{H}_{v}(f))(u)=i\ \text{sign}(\langle u,v\rangle)\mathcal{F}(f)(u)\). \(\mathcal{H}_{v}\) is steerable in terms of the Riesz transform, that is, it can be written as a linear combination of the Riesz transforms \[\mathcal{H}_{v}(f)(x)=\sum_{j=1}^{d}v_{j}\mathcal{R}_{j}(f)(x)=\langle\mathcal{R}(f)(x),v\rangle.\] Note that in 2d, for a unit vector \(v=(\cos\theta,\sin\theta)\), \(\theta\in[0,2\pi]\), the directional Hilbert transform becomes \(\mathcal{H}_{v}(f)(x)=\cos(\theta)\mathcal{R}_{1}(f)(x)+\sin(\theta)\mathcal{R}_{2}(f)(x)\). This is equivalent to the link between gradient and directional derivatives [17] and a very useful property for learning oriented features. * **all-pass filter [12]:** Let \(H=(H_{1},\cdots,H_{d})\) be the Fourier transform of the Riesz kernel, i.e. \(\mathcal{F}(\mathcal{R}_{j}(f))(u)=i\frac{u_{j}}{|u|}\mathcal{F}(f)(u)=H_{j}(u)\mathcal{F}(f)(u)\). The energy of the Riesz transform for frequency \(u\in\mathbb{R}^{d}\) is defined as the norm of the \(d\)-dimensional vector \(H(u)\) and has value \(1\) for all non-zero frequencies \(u\neq 0\), i.e. \[||H(u)||=1,\quad u\neq 0.\] The all-pass filter property reflects the fact that the Riesz transform is a non-local operator and that every frequency is treated fairly and equally. Combined with scale equivariance, this eliminates the need for multiscale analysis or multiscale feature extraction. * **scale (dilation) equivariance:** For \(a>0\) define a dilation or rescaling operator \(L_{a}:L_{2}(\mathbb{R}^{d})\to L_{2}(\mathbb{R}^{d})\) as \(L_{a}(f)(x)=f(\frac{x}{a})\). Then \[\mathcal{R}_{j}(L_{a}(f))(x)=L_{a}(\mathcal{R}_{j}(f))(x),\] for \(j\in\{1,\cdots,d\}\). That is, the Riesz transform commutes not only with translations but also with scaling. Scale equivariance enables an equal treatment of the same objects at different scales.

Figure 2: Visualizations of Riesz transform kernels of first and second order. First row (from left to right): \(\mathcal{R}_{1}\) and \(\mathcal{R}_{2}\). Second row (from left to right): \(\mathcal{R}^{(2,0)}\), \(\mathcal{R}^{(1,1)}\), and \(\mathcal{R}^{(0,2)}\).

As this is the key property of the Riesz transform for our application, we will briefly present a proof. We restrict to the first order in the Fourier domain. The proof for higher orders follows directly from the one for the first order. **Lemma 1**.: _The Riesz transform is scale equivariant, i.e._ \[[\mathcal{R}_{i}f\Big(\frac{\cdot}{a}\Big)](x)=[\mathcal{R}_{i}f]\Big(\frac{x}{a}\Big). \tag{1}\] _for \(f\in L_{2}(\mathbb{R}^{d})\) and every \(x\in\mathbb{R}^{d}\)._ Proof.: Remember that the Fourier transform of a dilated function is given by \(\mathcal{F}(f(\alpha\cdot))(u)=\frac{1}{\alpha^{d}}\mathcal{F}(f)(\frac{u}{\alpha})\). Setting \(g(x)=f(\frac{x}{a})\), we have \(\mathcal{F}(g)(u)=a^{d}\mathcal{F}(f)(au)\).
This yields \[\mathcal{F}\Bigg{(}\mathcal{R}_{j}\Big{(}f\big{(}\frac{\cdot}{a} \big{)}\Big{)}\Bigg{)}(u)=\mathcal{F}\Big{(}\mathcal{R}_{j}(g)\Big{)}(u)=\] \[=i\frac{u_{j}}{|u|}\mathcal{F}(g)(u)=i\frac{u_{j}}{|u|}a^{d} \mathcal{F}(f)(au)=\] \[=a^{d}\Big{(}i\frac{au_{j}}{a|u|}\Big{)}\mathcal{F}(f)(au)=a^{d} \mathcal{F}\Big{(}\mathcal{R}_{j}(f)\Big{)}(au)=\] \[=\mathcal{F}\Bigg{(}\mathcal{R}_{j}(f)\Big{(}\frac{\cdot}{a} \Big{)}\Bigg{)}(u).\] Fig. 3 provides an illustration of the scale equivariance. It shows four rectangles with length-to-width ratio 20 and varying width (3, 5, 7, and 11 pixels) together with the gray value profile of the second order Riesz transform \(R^{(2,0)}\) along a linear section through the centers of the rectangles. In spite of the different widths, the Riesz transform yields equal filter responses for each rectangle (up to rescaling). In contrast, to achieve the same behaviour in Gaussian scale space, the scale space has to be sampled (i.e. a subset of scales has to be selected), the \(\gamma\)-normalized derivative [1] has to be calculated for every scale, and finally the scale yielding the maximal absolute value has to be selected. In comparison, the simplicity of the Riesz transform achieving the same in just one transform without sampling scale space and without the need for a scale parameter is striking. ## 3 Riesz transform neural networks In the spirit of structured receptive fields [34] and Gaussian derivative networks [27], we use the Riesz transforms of first and second order instead of standard convolutions to define Riesz layers. As a result, Riesz layers are scale equivariant in a single forward pass. Replacing standard derivatives with the Riesz transform has been motivated by [16], while using a linear combination of Riesz transforms of several order follows [21]. ### Riesz layers The base layer of the Riesz networks is defined as a linear combination of Riesz transforms of several orders implemented as 1d convolution across feature channels (Fig. 4). Here, we limit ourselves to first and second order Riesz transforms. Thus, the linear combination reads as \[J_{\mathcal{R}}(f) =C_{0}+\sum_{k=1}^{d}C_{k}\cdot\mathcal{R}_{k}(f)+\] \[+ \sum_{k,l\in\mathbb{N}_{0},k+l=2}C_{k,l}\cdot\mathcal{R}^{(k,l)} (f), \tag{2}\] where \(\{C_{0},C_{k}|k\in\{1,\cdots,d\}\}\cup\{C_{k,l}|l,k\in\mathbb{N}_{0},l+k=2\}\) are parameters that are learned during training. Now we can define the general layer of the network (Fig. 4). Let us assume that the \(K\)th network layer takes input \(F^{(K)}=(F^{(K)}_{1},\cdots,F^{(K)}_{c^{(K)}})\in\mathbb{R}^{H\times W\times c ^{(K)}}\) with \(c^{(K)}\) feature channels and has output \(F^{(K+1)}=(F^{(K+1)}_{1},\cdots,F^{(K+1)}_{c^{(K+1)}})\in\mathbb{R}^{H\times W \times c^{(K+1)}}\) with \(c^{(K+1)}\) channels. Then the output in channel \(j\in\{1,\cdots,c^{(K+1)}\}\) is given by \[F^{(K+1)}_{j}=\sum_{i=1}^{c^{(K)}}J^{(j,i)}_{K}(F^{(K)}_{i}). \tag{3}\] Here, \(J^{(j,i)}_{K}\) is defined in the same way as \(J_{\mathcal{R}}(f)\) from Eq. (2), but trainable parameters may vary for different input channels \(i\) and output channels Figure 3: Illustration of the Riesz transform on a mock example of \(550\times 550\) pixels: aligned rectangles with equal aspect ratio and constant gray value \(255\) (left) and response of the second order Riesz transform \(\mathcal{R}^{(2,0)}\) of the left image sampled horizontally through the centers of the rectangles (right). \(j\), i.e. 
\[J_{K}^{(j,i)}(f)=C_{0}^{(j,i,K)}+\sum_{k=1}^{d}C_{k}^{(j,i,K)}\cdot\mathcal{R}_{k}(f)+\sum_{k,l\in\mathbb{N}_{0},k+l=2}C_{k,l}^{(j,i,K)}\cdot\mathcal{R}^{(k,l)}(f). \tag{4}\] In practice, the offset parameters \(C_{0}^{(j,i,K)}\), \(i=1,\cdots,c^{(K)}\), are replaced with a single parameter defined as \(C_{0}^{(j,K)}:=\sum_{i=1}^{c^{(K)}}C_{0}^{(j,i,K)}\). ### Proof of scale equivariance We prove the scale equivariance for \(J_{\mathcal{R}}(f)\). That implies scale equivariance for \(J_{K}^{(j,i)}(f)\) and consequently for \(F_{j}^{(K+1)}\) for arbitrary layers of the network. By construction (see Section 3.3), this will result in provable scale invariance for the whole network. Formally, we show that \(J_{\mathcal{R}}(f)\) from equation (2) is scale equivariant, i.e. \[J_{\mathcal{R}}\Big(f\big(\frac{\cdot}{a}\big)\Big)(x)=J_{\mathcal{R}}(f)\big(\frac{x}{a}\big),\] for \(f\in L_{2}(\mathbb{R}^{d})\) and every \(x\in\mathbb{R}^{d}\). Proof.: For any scaling parameter \(a>0\) and \(x\in\mathbb{R}^{d}\), we have \[J_{\mathcal{R}}\Big(f\big(\frac{\cdot}{a}\big)\Big)(x)=C_{0}+\sum_{k=1}^{d}C_{k}\cdot\mathcal{R}_{k}\Big(f\big(\frac{\cdot}{a}\big)\Big)(x)+\sum_{k,l\in\mathbb{N}_{0},k+l=2}C_{k,l}\cdot\mathcal{R}^{(k,l)}\Big(f\big(\frac{\cdot}{a}\big)\Big)(x)\] \[=C_{0}+\sum_{k=1}^{d}C_{k}\cdot\mathcal{R}_{k}(f)\big(\frac{x}{a}\big)+\sum_{k,l\in\mathbb{N}_{0},k+l=2}C_{k,l}\cdot\mathcal{R}^{(k,l)}(f)\big(\frac{x}{a}\big)=J_{\mathcal{R}}(f)\big(\frac{x}{a}\big),\] where the second equality uses the scale equivariance of the Riesz transform (Lemma 1). ### Network architecture Generally, a layer consists of the following sequence of transformations: batch normalization, Riesz layer, and ReLU. Batch normalization improves the training capabilities and avoids overfitting, ReLUs introduce non-linearity, and the Riesz layers extract scale equivariant spatial features. For every layer, the number of feature channels has to be selected. Hence, our network with \(K\in\mathbb{N}\) layers can simply be defined by a \(K\)-tuple specifying the channel sizes, e.g. \((c^{(0)},c^{(1)},\cdots,c^{(K)})\). The final layer is defined as a linear combination of the features from the previous layer followed by a sigmoid function yielding the desired probability map as output. The four layer Riesz network we apply here can be schematically written as \(1\to 16\to 32\to 40\to 48\to 1\) and has \((1\cdot 5\cdot 16+16)+(16\cdot 5\cdot 32+32)+(32\cdot 5\cdot 40+40)+(40\cdot 5\cdot 48+48)+(48\cdot 1+1)=18\,825\) trainable parameters. ## 4 Experiments and applications In this section we evaluate the four layer Riesz network defined above on the task of segmenting cracks in 2d slices from CT images of concrete. Particular emphasis is put on the network's ability to segment multiscale cracks and to generalize to crack scales unseen during training. To quantify these properties, we use images with simulated cracks. Being accompanied by an unambiguous ground truth, they allow for an objective evaluation of the segmentation results. Figure 4: Building blocks of Riesz networks: the base Riesz layer from equation (2) (left) and the full Riesz layer from equation (3) (right). Additionally, in Appendix A scale equivariance of the Riesz network is experimentally validated on the MNIST Large Scale data set [5]. _Data generation:_ Cracks are generated by the fractional Brownian motion (Experiment 1) or minimal surfaces induced by the facet system of a Voronoi tessellation (Experiment 2). Dilated cracks are then integrated into CT images of concrete without cracks. As pores and cracks are both air-filled, their gray value distributions should be similar. Hence, the gray value distribution of crack pixels is estimated from the gray value distribution observed in air pores. The crack thickness is kept fixed (Experiment 1) or varies (Experiment 2) depending on the objective of the experiment.
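A rough sketch of this simulation idea is given below, assuming NumPy and SciPy: a rough crack path is drawn, dilated to the target thickness, and filled with gray values sampled from the dark (pore-like) part of the background histogram. This is a strong simplification of the actual procedures of [6, 10], which use fractional Brownian motion and Voronoi facet systems; all function names and parameter values here are ours.

```python
import numpy as np
from scipy import ndimage

def simulate_crack_image(background, width=3, seed=0):
    """Paste a synthetic crack into a crack-free CT slice (simplified sketch)."""
    rng = np.random.default_rng(seed)
    h, w = background.shape
    # rough crack centre line: cumulative Gaussian increments (Brownian-type path)
    path = np.clip(h // 2 + np.cumsum(rng.normal(scale=1.5, size=w)), 1, h - 2).astype(int)
    mask = np.zeros_like(background, dtype=bool)
    mask[path, np.arange(w)] = True
    # dilate the one-pixel path to the desired crack thickness
    mask = ndimage.binary_dilation(mask, iterations=max(width // 2, 1))
    # crack gray values are drawn from the darkest (air pore like) background values
    pore_values = background[background < np.percentile(background, 2)]
    out = background.copy()
    out[mask] = rng.choice(pore_values, size=int(mask.sum()))
    return out, mask
```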
As a result, realistic semi-synthetic images can be generated (see Fig. 5). For more details on the simulation procedure, we refer to [6, 10]. Details on number and size of the images can be found below. Finally, we show applicability of the Riesz network for real data containing cracks generated by tensile and pull-out tests. _Quality metrics:_ As metrics for evaluation of the segmentation results we use precision (P), recall (R), F1-score (or dice coefficient, Dice), and Intersection over Union (IoU). The first three quality metrics are based on true positives _tp_ - the number of pixels correctly predicted as crack, true negatives _tn_ - the number of pixels correctly predicted as background, false positives _fp_ - the number of pixels wrongly predicted as crack, and false negatives _fn_ - the number of pixels falsely predicted as background. Precision, recall, and dice coefficient are then defined via \[P=tp/(tp+fp),\quad R=tp/(tp+fn),\] \[\text{Dice}=2PR/(P+R).\] IoU compares union and intersection of the foregrounds \(X\) and \(Y\) in the segmented image and the corresponding ground truth, respectively. That is \[IoU(X,Y)=\frac{|X\cap Y|}{|X\cup Y|}.\] All these metrics have values in the range \([0,1]\) with values closer to 1 indicating a better performance. _Training parameters:_ If not specified otherwise, all models are trained on cracks of fixed width of 3 pixels. Cracks for the training are generated in the same way as for Experiment 1 on \(256\times 256\) sections of CT images of concrete. Then, 16 images of size \(64\times 64\) are cropped without overlap from each of the generated images. In this way, images without cracks are present in the training set. After data augmentation by flipping and rotation, the training set consists of \(1\,947\) images of cracks. Some examples are shown in Fig. 5. For validation, another set of images with cracks of width \(3\) is used. The validation data set's size is a third of the size of the training set. All models are trained for \(50\) epochs with initial learning rate \(0.001\) which is halved every \(20\) epochs. ADAM optimization [43] is used, while the cost function is set to binary cross entropy loss. Crack pixels are labelled with \(1\), while background is labelled with \(0\). As there are far more background than crack pixels, we deal with a highly imbalanced data set. Therefore, crack and pore pixels are given a weight of \(40\) to compensate for class imbalance and to help distinguishing between these two types of structures which hardly differ in their gray values. ### Measuring scale equivariance Measures for assessing scale equivariance have been introduced in [36, 38]. For an image or feature map \(f\), a mapping function \(\Phi\) (e.g. a neural network or a subpart of a network), and a scaling function \(L_{a}\) we define \[\Delta_{a}(\Phi):=\frac{||L_{a}(\Phi(f))-\Phi(L_{a}(f))||_{2}}{||L_{a}(\Phi(f) )||_{2}}.\] Ideally, this measure should be \(0\) for perfect scale equivariance. In practice, due to scaling and discretization errors we expect it to be positive yet very small. To measure scale equivariance of the full Riesz network with randomly initialized weights, we use a data set consisting of \(85\) images of size \(512\times 512\) pixels with crack width \(11\) and use downscaling factors \(a\in\{2,4,8,16,32,64\}\). The evaluation was Figure 5: Cracks of width \(3\) used for training: before (first row) and after cropping (second row). 
Image sizes are \(256\times 256\) (first row) and \(64\times 64\) (second row). repeated for 20 randomly initialized Riesz networks. The resulting values of \(\Delta_{a}\) are given in Fig. 6. The measure \(\Delta_{a}\) was used to validate the scale equivariance of Deep Scale-spaces (DSS) in [36] and scale steerable equivariant networks in [38]. In both works, a steep increase in \(\Delta_{a}\) is observed for downscaling factors larger than 16, while for very small downscaling factors, \(\Delta_{a}\) is reported to be below 0.01. In [38], \(\Delta_{a}\) reaches 1 for downscaling factor 45. The application scenario studied here differs from those of [36, 38]. Results are thus not directly comparable but can be considered only as an approximate baseline. For small downscaling factors, we find \(\Delta_{a}\) to be higher than in [38] (up to 0.075). However, for larger downscaling factors (\(a>32\)), \(\Delta_{a}\) increases more slowly e.g. \(\Delta_{64}=0.169\). This proves the resilience of Riesz networks to very high downscaling factors, i.e. large changes in scale. ### Experiment 1: Generalization to unseen scales Our models are trained on images of fixed crack width 3. To investigate their behaviour on crack widths outside of the training set, we generate cracks of widths \(\{1,3,5,7,9,11\}\) pixels in images of size \(512\times 512\), see Fig. 9. Each class contains 85 images. Besides scale generalization properties of the Riesz network, we check how well it generalizes to random variations in crack topology or shapes, too. #### 4.2.1 Ablation study on the Riesz network We investigate how network parameters and composition of the training set affect the quality of the results, in order to learn how to design this type of neural networks efficiently. #### Size of training set: First, we investigate robustness of the Riesz network to the size of the training set. Literature [34] suggests that neural networks based on _structure receptive fields_ are less data hungry, i.e. their performance with respect to the size of the training set is more stable than that of conventional CNNs. Since the Riesz network uses the Riesz transform instead of a Gaussian derivative as in [34], it is expected that the same would hold here, too. Figure 6: Measure of scale equivariance \(\Delta_{a}\) for the four layer Riesz network with randomly initialized parameters w.r.t. the downscaling factor a. Mean (black), minimum, and maximum (gray) of 20 repetitions. Points on the line correspond to \(a\in\{1,2,4,8,16,32,64\}\). The use of smaller training sets has two main benefits. First, obviously, smaller data sets reduce the effort for data collection, i.e. annotation or simulation. Second, smaller data sets reduce the training time for the network if we do not increase the number of epochs during training. We constrain ourselves to three sizes of training sets: 1 947, 975, and 489. These numbers refer to the sets after data augmentation by flipping and rotation. Hence, the number of original images is three times smaller. In all three cases we train the Riesz network for 50 epochs and with similar batch sizes (\(11,13\), and \(11\), respectively). Results on unseen scales with respect to data set size are shown in Table 1 and Fig. 7 (left). We observe that the Riesz network trained on the smallest data set is competitive with counterparts trained on larger data sets albeit featuring generally \(1-2\%\) lower Dice and IoU. 
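For reference, the quality metrics and the equivariance measure \(\Delta_{a}\) used throughout this section can be computed directly from their definitions above; a minimal sketch assuming NumPy is shown below, where `phi` stands for the mapping \(\Phi\) (e.g. the network) and `downscale` for the rescaling operator \(L_{a}\), both placeholders.

```python
import numpy as np

def dice_iou(pred, truth):
    """Dice coefficient and IoU for binary masks (arrays of 0/1)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2 * inter / (pred.sum() + truth.sum() + 1e-12)
    iou = inter / (union + 1e-12)
    return dice, iou

def equivariance_error(phi, f, downscale):
    """Delta_a = ||L_a(phi(f)) - phi(L_a(f))||_2 / ||L_a(phi(f))||_2."""
    lhs = downscale(phi(f))   # rescale the response of the original image
    rhs = phi(downscale(f))   # response of the rescaled image
    return np.linalg.norm(lhs - rhs) / np.linalg.norm(lhs)
```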
very thin cracks should be considered a special case which requires somewhat different treatment. Rather surprisingly, using the mixed training data set does not improve the metrics. Diversity with respect to scale in the training set seems not to be a decisive factor when designing Riesz networks. #### Number of layers: Finally, we investigate the explanatory power of the Riesz network depending on network depth and the number of parameters. We train four networks with \(2-5\) layers and \(2\,721\), \(9\,169\), \(18\,825\), and \(34\,265\) parameters, respectively, on the same data set for \(50\) epochs. The network with \(5\) layers has structure \(16\to 32\to 40\to 48\to 64\) and every other network is constructed from this one by removing the required number of layers at the end. Results are shown in Table 1 and in Fig. 7 (right). The differences between the networks with \(3\), \(4\), and \(5\) layers are rather subtle. For the Riesz network with only \(2\) layers, performance deteriorates considerably (\(3-5\%\) in Dice and IoU). In general, Riesz networks appear to be robust with respect to training set size, depth of network, and number of parameters. Hence, it is not necessary to tune many parameters or to collect thousands of images to achieve good performance, in particular for generalization to unseen scales. For the choice of crack width, \(3\) and \(5\) seem appropriate while crack width \(1\) should be avoided. #### Comparison with competing methods #### Competing methods: The four layer Riesz network is compared to two other methods - Gaussian derivative networks [27] and U-net [44] on either rescaled images [5] Figure 7: Experiment 1. Effect of the training set size (left), the crack width in the training set (center), and the network depth (right) on generalization to unseen scales. The baseline Riesz network is marked with \(1\,947\) (left), w3 (center), and layer \(4\) (right) and with square symbol \(\square\). Quality metric: IoU. or an image pyramid [8]. The Gaussian derivative network uses scale space theory based on the Gaussian kernel and the diffusion equation. Using the \(\gamma\)-normalized Gaussian derivatives from [1], layers of first and second order Gaussian derivatives are constructed [27]. U-net has around 2.7 million parameters, while the Gaussian derivative network has the same architecture as the Riesz network and hence the same number of parameters (18k). We design an experiment for detailed analysis and comparison of the ability of the methods to generalize to scales unseen during training. In typical applications, the thickness of the cracks would not be known. Here, crack thickness is kept fixed such that the correct scale of cracks is a priori known. This allows for a selection of an optimal scale (or range of scales) such that we have a best case comparison. For the Gaussian derivative network, scale is controlled by the standard deviation parameter \(\sigma\) which is set to the half width of the crack. For the U-net, scale is adjusted by downscaling the image to match the crack width used in the training data. Here, we restrict the downscaling to discrete factors in the set \(\{2,4,8,...\}\) that were determined during validation. For widths 1 and 3, no downscaling is needed. 
For width 5, the images are downscaled by 2, for width 7 by 4, and for widths 9 and 11 by 8. For completeness, we include results for the U-net without downscaling denoted by "U-net plain". Table 2 yields the prediction quality measured by the Dice coefficient, while the other quality measures are shown in Fig. 8. Exemplary segmentation results are shown in Fig. 9.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Method} & w1 & w3 & w5 & w7 & w9 & w11 \\ \cline{2-7} & Dice & Dice & Dice & Dice & Dice & Dice \\ \hline U-net plain & 0.391 & 0.904 & 0.715 & 0.555 & 0.440 & 0.401 \\ U-net scale adj. & 0.391 & 0.904 & 0.853 & 0.833 & 0.809 & 0.810 \\ U-net-mix scale adj. & **0.420** & **0.917** & 0.929 & 0.916 & 0.921 & 0.921 \\ \hline Gaussian network & 0.004 & 0.765 & 0.764 & 0.709 & 0.759 & 0.843 \\ \hline Riesz network & 0.352 & 0.895 & **0.941** & **0.954** & **0.962** & **0.964** \\ \hline \end{tabular} \end{table} Table 2: Experiment 1. Comparison with competing methods: Dice coefficients for segmentation of cracks of differing width. Training was performed on crack width 3. Best performing method bold.

Figure 8: Experiment 1. Comparison of the competing methods. Results of the simulation study with respect to crack width. Training on crack width 3. Quality metrics (from left to right): precision, recall, and IoU.

Figure 9: Experiment 1. Columns (from left to right): cracks of widths 3, 5, 7, and 11. Rows (from top to bottom): input image, ground truth, Riesz network, plain U-net, U-net with scale adjustment, and Gaussian derivative network. All images have size \(512\times 512\) pixels.

As expected, the performance of the plain U-net decreases with increasing scale. Scale adjustment stabilizes U-net's performance but requires manual selection of scales. Moreover, the interpolation in upsampling and downsampling might induce additional errors. The decrease in performance with growing scale is still apparent (\(10-15\%\)) but significantly reduced compared to the plain U-net (\(55\%\)). To get more insight into performance and characteristics of the U-net, we add an experiment similar to the one from [5]: We train the U-net on crack widths 1, 3, and 5 on the same number of images as for one single crack width. This case is referred to as "U-net-mix scale adj." in Table 2. Scales are adjusted similarly: w5 and w7 are downscaled by factor 2, w9 and w11 are downscaled by factor 4. The results are significantly better than those obtained by the U-net trained on the single width (\(10-15\%\) in Dice and IoU on unseen scales), but still remain worse than the Riesz network trained on a single scale (around \(7\%\) in Dice and IoU on unseen scales). The Gaussian derivative network is able to generalize steadily across the scales (Dice and IoU \(74\%\)) but nevertheless performs worse than the scale adjusted U-net (around \(10\%\) in IoU). Moreover, it is very sensitive to noise and typical CT imaging artifacts (Fig. 9). On the other hand, the Riesz network's performance is very steady with growing scale. We even observe improving performance in IoU and Dice with increase in crack thickness. This is due to pixels at the edge of the crack influencing the performance metrics less and less the thicker the crack gets. The Riesz network is unable to precisely localize cracks of width 1 as, due to the partial volume effect, such thin cracks appear to be discontinuous.
With the exception of the thinnest crack, the Riesz network has Dice coefficients above \(94\%\) and IoU over \(88\%\) for completely unseen scales. This even holds for the cases when the crack is more than 3 times thicker than the one used for training. ### Experiment 2: Performance on multiscale data Since cracks are naturally multiscale structures, i.e. crack thickness varies as the crack propagates, the performance of the considered methods on multiscale data is analyzed as well. On the one hand, we want to test on data with an underlying ground truth without relying on manual annotation prone to errors and subjectivity. On the other hand, the experiment should be conducted in a more realistic and less controlled setting than the previous one, with cracks as similar as possible to real ones. We therefore use again simulated cracks, this time however with varying width. The thickness is modeled by an adaptive dilation. See Fig. 10 for realization examples. The change rendering our experiment more realistic than the first one is to exploit no prior information about the scale. The Riesz network does not have to be adjusted for this experiment while the competing methods require scale selection as described in Section 4.2.2. Without knowing the scale, testing several configurations is the only option. See Appendix B for examples. Note that in this experiment we used a different crack simulation technique [10] than in Experiment 1. In principle, we cannot claim that either of the two techniques generates more realistic cracks. However, this change serves as an additional goodness check for the methods since these simulation techniques can be seen as independent. We adjust the U-net as follows: We downscale the image by several factors from \(\{2,4,8,16...\}\). The forward pass of the U-net is applied to the original and every downscaled image. Subsequently, the downscaled images are upscaled back to the original size. All predictions are joined by the maximum operator. We report results for several downscaling factor combinations specified by a number \(N\), which is the number of consecutive downscaling factors used, starting at the smallest factor 2. Similarly as in Experiment 1, we report results of two U-net models: the first model is trained on cracks of width 3 as the other models in the comparison. The second model is trained on cracks with mixed widths. Including more crack \begin{table} \begin{tabular}{|c|c c|c c|} \hline \multirow{2}{*}{Method} & \multicolumn{4}{c|}{Multiscale cracks} \\ \cline{2-5} & Precision & Recall & Dice & IoU \\ \hline U-net, plain & 0.655 & 0.322 & 0.432 & 0.275 \\ U-net pyramid 2 & 0.598 & 0.518 & 0.555 & 0.384 \\ U-net pyramid 3 & 0.553 & 0.623 & 0.586 & 0.414 \\ U-net pyramid 4 & 0.496 & 0.705 & 0.582 & 0.411 \\ \hline U-net-mix, plain & 0.471 & 0.288 & 0.358 & 0.218 \\ U-net-mix pyramid 2 & 0.626 & 0.646 & 0.635 & 0.466 \\ U-net-mix pyramid 3 & 0.624 & 0.804 & 0.703 & 0.542 \\ U-net-mix pyramid 4 & 0.583 & 0.899 & 0.707 & 0.547 \\ \hline Gaussian network 2 & 0.553 & 0.503 & 0.527 & 0.358 \\ Gaussian network 3 & 0.418 & 0.735 & 0.533 & 0.364 \\ Gaussian network 4 & 0.306 & 0.857 & 0.451 & 0.291 \\ \hline Riesz network & **0.901** & **0.902** & **0.902** & **0.821** \\ \hline \end{tabular} \end{table} Table 3: Experiment 2. Performance on simulated multiscale cracks. The highest overall value is given in bold. For each competing method, the highest value is underlined. 
widths in the training set has proven to improve the scale generalization ability in Experiment 1. Hence, the second model represents a more realistic setting that would be used in practice where the crack width is typically unknown. We denote the respective networks as "U-net pyramid" \(N\) and "U-net-mix pyramid" \(N\). For the Gaussian network, we vary the standard deviation parameter \(\sigma\) in the set \(\{1.5,3,6,12\}\). This selection of scales is motivated by the network having been trained on crack width 3 with \(\sigma=1.5\). We start with the original \(\sigma\) and double it in each step. As for the U-net, we test several configurations, now specified by the number \(N\) of consecutive \(\sigma\) values used, starting at the smallest (1.5). We denote the respective network "Gaussian network" \(N\). Results are reported in Table 3 and Fig. 10. We observe a clear weakness of the Riesz network in segmenting thin cracks (Fig. 10, first and last row). Despite of this, the recall is still quite high (90%). However, this could be due to thicker cracks - which are handled very well - contributing stronger to these statistics as they occupy more pixels. Nevertheless, the Riesz network deals with the problem of the wide range scales in an elegant way, just with a single forward pass of the network. The performance of the U-net improves with including more levels in the pyramid, too. However, this applies only up to a certain number of levels after which the additional gain becomes minimal. Moreover, applying the U-net on down-scaled images seems to induce oversegmentation of the cracks (Fig. 10, second and third row). Including a variety of crack widths in the training set improves the overall performance of U-net in all metrics. This confirms the hypothesis that U-net significantly benefits from variations in the training set. However, this model of U-net is still outperformed by the Riesz network trained on a single crack width. The Gaussian network behaves similarly as the U-net, with slightly worse performance (according to Dice or IoU) but better crack coverage (Recall). As the number of \(\sigma\) values grows, the recall increases but at the same time artifacts accumulate across scales reducing precision. The best balance on this data set is found to be three scales. ### Experiment 3: Application to cracks in CT images of concrete Finally, we check the methods' performance on real data: cracks in concrete samples generated by tensile and pull-out tests. In these examples, the crack thickness varies from 1 or 2 pixels to more than 20 pixels (Fig. 11). This motivates the need for methods that automatically generalize to completely unseen scales. Here, we can assess the segmentation results qualitatively, only, as no ground truth is available. Manual segmentation Figure 10: Experiment 2. Cracks with varying width. From left to right: input image, results of the Riesz network and the U-net with 4 pyramid levels. Image size \(400\times 400\) pixels. of cracks in high resolution images is time consuming and prone to individual biases. Additional experiments on real cracks in the different types of concrete are shown in Appendix C. The first sample (Fig. 11, first row) is a concrete cylinder with a glass fiber reinforced composite bar embedded along the center line. A force is applied to this bar to pull it out of the sample and thus initiate cracking. Cracking starts around the bar and branches in three directions: left, right diagonal, and down (very subtle, thin crack). 
Crack thicknesses and thus scales vary depending on the crack branch. As before, our Riesz network is able to handle all but the finest crack thicknesses efficiently in a single forward pass without specifying the scale range. The U-net on the image pyramid requires a selection of downsampling steps (Appendix B), accumulates artifacts from all levels of the pyramid, and slightly oversegments thin cracks (left branch). The second sample (Fig. 11, second row) features a horizontal crack induced by a tensile test. Here we observe permanently changing scales, similar to our simulated multiscale data. The crack thickness varies from a few to more than 20 pixels. Once more, the Riesz network handles the scale variation well and segments almost all cracks with minimal artifacts. In this example, U-net covers the cracks well, too, even the very subtle ones. However, it accumulates more false positives in the areas of concrete without any cracks than the Riesz network. Figure 11: Experiment 3. Real cracks in concrete: slice from input CT image, results of the Riesz network and of U-net with 2 pyramid levels. Image sizes are \(832\times 1\,088\) (1st row) and \(544\times 992\) (2nd row). ## 5 Conclusion In this paper we introduced a new type of scale invariant neural network based on the Riesz transform as filter basis instead of standard convolutions. Our Riesz neural network is scale invariant in one forward pass without specifying scales or discretizing and sampling the scale dimension. Its ability to generalize to scales differing from those trained on is tested and validated in segmenting cracks in 2d slices from CT images of concrete. Usefulness and elegance of the method become manifest in the fact that only one fixed scale is needed for training, while preserving generalization to completely unseen scales. This reduces the effort for data collection, generation or simulation. Furthermore, our network has relatively few parameters (around 18k) which reduces the danger of overfitting. Experiments on simulated yet realistic multi-scale cracks as well as on real cracks corroborate the Riesz network's potential. Compared to other deep learning methods that can generalize to unseen scales, the Riesz network yields improved, more robust, and more stable results. A detailed ablation study on the network parameters reveals several interesting features: This type of networks requires relatively few data to generalize well. The Riesz network proves to perform well on a data set of approximately 200 images before augmentation. This is particularly useful for deep learning tasks where data acquisition is exceptionally complex or expensive. The performance based on the depth of the network and the number of parameters has been analyzed. Only three layers of the network suffice to achieve good performance on cracks in 2d slices of CT images. Furthermore, the choice of crack thickness in the training set is found to be not decisive for the performance. Training sets with crack widths 3 and 5 yield very similar results. The two main weaknesses of our approach in the crack segmentation task are undersegmentation of thin cracks and edge effects around pores. In CT images, thin cracks appear brighter than thicker cracks due to the partial volume effect reducing the contrast between the crack and concrete. For the same reason thin cracks look discontinued. Thin cracks might therefore require special treatment. In some situations, pore edge regions get erroneously segmented as crack. 
These can however be removed by a post-processing step and are no serious problem. To unlock the full potential of the Riesz transform, validation on other types of problems is needed. In the future, the method should be applied in 3d since CT data is originally 3d. In this case, memory issues might occur during discretization of the Riesz kernel in frequency space. An interesting topic for further research is to join translation and scale invariance with rotation invariance to design a new generation of neural networks with encoded basic computer vision properties [40]. This type of neural network could be very efficient because it would have even less parameters and hence would require less training data, too. **Acknowledgments.** We thank Christian Jung (RPTU) for generating the multiscale crack images. This work was supported by the German Federal Ministry of Education and Research (BMBF) [grant number 05M2020 (DAnoBi)]. ## Declarations The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request. The authors have no competing interests to declare that are relevant to the content of this article. ## Appendix A Experiment on MNIST Large Scale Dataset We test the Riesz networks on a classification task on the MNIST Large Scale [5] to test wider applicability of Riesz networks outside of crack segmentation task. This data set was derived from the MNIST data set [45] and it consists of images of digits between 0 and 9 belonging to one of ten classes (Fig. 12) which are rescaled to a wide range of scales to test scale generalization abilities of neural networks (Fig. 13). Our Riesz network has the channel structure 12-16-24-32-80-10 with the softmax function at the end. In total, it has 20,882 parameters. Following [27], only the central pixel in the image is used for classification. We use the standard CNN described in [5] but without any scale adjustments as a baseline to illustrate limited scale generalization property. This CNN has the channel structure 16-16-32-32-100-10 with the softmax function at the end and in total 574,278 parameters. The training set has 50,000 images of the single scale 1. We used a validation set of 1,000 images. The test set consists of scales ranging in \([0.5,8]\) with 10,000 images per scale. All images have size \(112\times 112\). Models are trained using the ADAM optimizer [43] with default parameters for 20 epochs with learning rate 0.001 which is halved every 3 epochs. Cross-entropy is used as loss function. Fig. 14 shows validation and training loss during 20 epochs. Interestingly, the Riesz network converges faster and even its validation loss remains lower than the training loss of CNN. Accuracies for the different scales are shown in Table 4. The Riesz network shows stable accuracy for scales in the range \([0.5,4]\). The CNN, which has way more degrees of freedom, is only competitive for scales close to the training scale. Results for two scale adjusted versions of the CNN as reported in [5] are also given in Table 4. Their performance is slightly superior to the Riesz network (around \(1-2\%\)). However, it is important to note that this approach uses (max or average) pooling over 17 scales. Further works considering the MNIST Large Scale data set are [27, 37]. Unfortunately, no numeric values of the accuracies are provided, so we can compare the results only qualitatively. 
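For completeness, the training setup reported above (ADAM, initial learning rate 0.001 halved every 3 epochs, cross-entropy loss, 20 epochs) can be sketched as follows, assuming PyTorch; `riesz_net` and `train_loader` are placeholders for the network with channel structure 12-16-24-32-80-10 and the MNIST Large Scale loader.

```python
import torch
from torch import nn, optim

def train_mnist_large_scale(riesz_net, train_loader, epochs=20):
    """Training loop matching the reported setup: ADAM, lr 1e-3 halved every
    3 epochs, cross-entropy loss."""
    opt = optim.Adam(riesz_net.parameters(), lr=1e-3)
    sched = optim.lr_scheduler.StepLR(opt, step_size=3, gamma=0.5)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in train_loader:
            opt.zero_grad()
            logits = riesz_net(images)   # class scores for the central pixel, shape (batch, 10)
            loss = loss_fn(logits, labels)
            loss.backward()
            opt.step()
        sched.step()
    return riesz_net
```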
The Riesz network's accuracy varies less on a larger range of scales than those of the scale-equivariant networks on Gaussian or morphological scale spaces from [37] that were trained on scale 2. The Gaussian derivative network [27] trained on scale 1 yields results in a range between \(98\%\) and \(99\%\) for medium scales \([0.7,4.7]\) using pooling over 8 Figure 14: Train and validation loss for Riesz network and CNN (as a baseline). Figure 12: 10 classes in MNIST Large Scale data set. All images have size \(112\times 112\). Figure 13: Variation of scales in MNIST Large Scale data set (from left to right): scales 0.5, 1, 2, 4 and 8. All images have size \(112\times 112\). scales. The Riesz network yields similar values but without the need for scale selection. On the smallest scale of 0.5, the Riesz network seems to give a better result than [27], while it is outperformed on the largest scales. The reason for the latter is that digits start to reach the boundary of the image. To reduce that effect, we pad the images by 20 and 40 pixels with the minimal gray value. Indeed, this improves the accuracy significantly for larger scales (Table 4), while it remains equal for the rest of the scales. For example, for scale 8, accuracy increases from 51.8% to 79.8% (padding 20) and 83.6% (padding 40). This is a better accuracy than that reported in [27] and [5] for models trained on scale 1. ## Appendix B Experiments on scale selection for competing methods related to Riesz network The largest benefit of the Riesz network is avoiding the sampling of the scale dimension. Here, we give more detailed insight into scale sampling in practice for competing methods: U-net applied on rescaled images and Gaussian derivative networks. We show how segmentation results change as we add additional scales to the output. As we add new scales, cracks that belong (or are close) to the added scales get segmented. However, additional noise gets segmented, too. These noise pixels that are misclassified as cracks originate from two sources: interpolation error and high frequency noise. For simulated data this is shown in Fig. 15 and Fig. 16. For real cracks see Fig. 17. 
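The scale sampling discussed here can be illustrated with a small sketch, assuming SciPy, where a \(\gamma\)-normalized Laplacian-of-Gaussian response stands in for the trained Gaussian derivative network (the actual networks aggregate their predictions, not raw filter responses). The \(\sigma\) grid matches the one used above, and the maximum over scales is exactly where interpolation errors and high-frequency noise can accumulate.

```python
import numpy as np
from scipy import ndimage

def multi_sigma_response(image, sigmas=(1.5, 3.0, 6.0, 12.0)):
    """Sample the scale dimension with gamma-normalized second order Gaussian
    derivatives (gamma = 1) and aggregate over scales by the maximum."""
    image = np.asarray(image, dtype=float)
    responses = []
    for s in sigmas:
        fxx = ndimage.gaussian_filter(image, sigma=s, order=(0, 2))  # second derivative along axis 1
        fyy = ndimage.gaussian_filter(image, sigma=s, order=(2, 0))  # second derivative along axis 0
        responses.append((s ** 2) * np.abs(fxx + fyy))  # normalized Laplacian magnitude
    return np.max(np.stack(responses, axis=0), axis=0)
```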
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline scale & 0.5 & 0.595 & 0.707 & 0.841 & 1 & 1.189 & 1.414 & 1.682 \\ \hline CNN & 40.74 & 64.49 & 88.35 & 96.87 & 97.77 & 96.08 & 80.06 & 38.68 \\ \hline Riesz & 96.34 & 97.59 & 98.06 & 98.54 & 98.58 & 98.50 & 98.45 & 98.40 \\ Riesz-pad20 & 96.33 & 97.57 & 98.07 & 98.48 & 98.63 & 98.54 & 98.49 & 98.46 \\ Riesz-pad40 & 96.34 & 97.55 & 98.07 & 98.47 & 98.63 & 98.58 & 98.53 & 98.44 \\ \hline FovAvg 17ch tr1 [5] & 98.58 & 99.05 & **99.33** & **99.39** & **99.40** & **99.39** & **99.38** & **99.36** \\ FovMax 17ch tr1 [5] & **98.71** & **99.07** & 99.27 & 99.34 & 99.37 & 99.35 & 99.36 & 99.34 \\ \hline \end{tabular} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline scale & 2 & 2.378 & 2.828 & 3.364 & 4 & 4.757 & 5.657 & 6.727 & 8 \\ \hline CNN & 25.90 & 24.91 & 23.64 & 21.34 & 19.91 & 18.87 & 18.04 & 15.64 & 11.79 \\ \hline Riesz & 98.39 & 98.24 & 98.01 & 97.51 & 96.42 & 93.5 & 81.58 & 67.66 & 51.82 \\ Riesz-pad20 & 98.39 & 98.35 & 98.33 & 98.16 & 97.78 & 97.08 & 95.48 & 91.10 & 79.78 \\ Riesz-pad40 & 98.46 & 98.39 & 98.34 & 98.29 & 98.16 & 97.80 & 96.82 & **93.75** & **83.6** \\ \hline FovAvg 17ch tr1 [5] & **99.35** & 99.31 & 99.22 & 99.12 & 98.94 & 98.47 & 96.20 & 89.17 & 71.31 \\ FovMax 17ch tr1 [5] & 99.33 & **99.35** & **99.34** & **99.35** & **99.34** & **99.27** & **97.88** & 92.76 & 79.23 \\ \hline \end{tabular} \end{table} Table 4: Classification accuracy (in %) on the MNIST Large Scale data set. Best performing method in bold.

The main drawback is that one needs to select the range of scales on which to apply these methods. Since the scale dimension in the images is bounded from above by the size of the view window, the scale sampling needs to be adjusted or recalibrated whenever images of different sizes are processed. It is not trivial to achieve this in a general manner. In contrast, the Riesz transform enables simultaneous, continuous, and equal treatment of all scales, automatically adapting to the image size.

## Appendix C Experiments on different types of concrete: fiber reinforced concrete

It is a well-known weakness of concrete that it has low tensile strength, i.e. under high tensile force it fails abruptly and explosively. For that reason, reinforcement material is mixed with the cement paste, creating a composite material. The most common reinforcements are steel rebars. Nowadays, fibers have become widely used as reinforcement in concrete, creating a new class of reinforced concrete materials, e.g. ultra high performance fiber-reinforced concrete [46, 47, 48]. A variety of materials can be used as fiber material, including glass, carbon, and basalt. Since all of these materials have different mechanical properties, the properties of fiber reinforced concrete depend on the concrete mixture, including the fiber material. Hence, a lot of effort has recently been invested in the investigation of fiber reinforced concrete samples with various material configurations. In the context of CT imaging, different materials mean different energy absorption properties, i.e. fibers can appear either brighter or darker than the concrete, which can result in very different images. In the context of crack segmentation, this means that our methods should be able to handle these variations efficiently. This section compares the performance of the Riesz network, U-net, and U-net-mix from the previous sections on three different fiber reinforced concrete images.
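In several of the experiments below, a gray value morphological opening with a square structuring element is applied to the CT slices before segmentation to suppress bright fiber structures. A minimal sketch of this kind of pre-processing step, assuming SciPy and a 2D slice `img`, is given here; the function name is illustrative, and the half-sizes 2 and 5 used below correspond to \(5\times 5\) and \(11\times 11\) windows.

```python
from scipy import ndimage

def open_slice(img, half_size=2):
    # gray value opening with a square structuring element of the given half-size,
    # i.e. a (2*half_size + 1) x (2*half_size + 1) window; removes bright structures
    # (such as steel fibers) that are thinner than the window
    size = 2 * half_size + 1
    return ndimage.grey_opening(img, size=(size, size))
```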
We comment on possible pre-processing steps to improve results and discuss the robustness of the methods in the context of fiber reinforced concrete.

Fig. 18 shows a sample of high performance concrete (HPC) with polypropylene fibers as reinforcement. See [49] for more details on the sample and crack initiation. In this image, fibers are long, appear dark, and hence interfere with the crack in the center. All three methods are able to extract the central and dominant crack in the middle. The Riesz network is not able to segment the thin crack on the left of the main crack, contrary to both U-nets. However, both U-nets accumulate a much larger amount of misclassified noise than the Riesz network.

Fig. 19 features a sample reinforced with steel fibers; for more details see [50, 51]. In this image, fibers appear bright and create uneven illumination effects. We use a simple pre-processing step to check whether this effect can be reduced and the performance of the methods improved. Simple morphological openings with square structuring elements of half-sizes 2 and 5 are used for that purpose. As the size of the structuring element increases, segmentation results improve for all three methods. While the Riesz network struggles with low contrast cracks on the right, both types of U-net falsely segment many non-crack voxels.

The CT image in Fig. 20 originates from ultra high performance concrete reinforced with steel fibers [47]. Again, the fibers appear as bright structures in the images. These extremely highly X-ray absorbing fibers affect the gray value dynamics of the CT images. Morphological openings with square structuring elements of half-sizes 2 and 5 are applied to reduce this effect. As we increase the size, crack segmentation improves for the Riesz network. Both types of U-net segment large amounts of noise, even with opening as a pre-processing step, rendering them ineffective for this sample.

Figure 15: Experiment 2. Cracks with varying width. First row: input image and ground truth image. Second row: U-net applied to several levels of pyramids \(\{1,2,3,4\}\) (from left to right). Third row: U-net-mix applied to several levels of pyramids \(\{1,2,3,4\}\) (from left to right). Fourth row: Gaussian derivative networks aggregated on growing subsets of scale set \(\{1.5,3,6,12\}\) (from left to right). Image size \(400\times 400\) pixels.

Figure 16: Experiment 2. Cracks with varying width. First row: input image and ground truth image. Second row: U-net applied to several levels of pyramids \(\{1,2,3,4\}\) (from left to right). Third row: U-net-mix applied to several levels of pyramids \(\{1,2,3,4\}\) (from left to right). Fourth row: Gaussian derivative networks aggregated on growing subsets of scale set \(\{1.5,3,6,12\}\) (from left to right). Image size \(400\times 400\) pixels.

Figure 17: Experiment 3. Real cracks in concrete: slice from input CT image, results of the Riesz network and of U-net and U-net-mix with ranging pyramid levels (from 1 to 4). Image sizes are \(832\times 1\,088\) (1st row) and \(544\times 992\) (4th row).

Figure 18: Cracks in high-performance concrete with polypropylene fibers. First row: input image (left), segmentation result from the Riesz network (right). Second row: U-net (left) and U-net-mix (right). Image size is \(933\times 764\).

Figure 19: Cracks in concrete with steel fibers. Rows: input image, segmentation results from the Riesz network, U-net and U-net-mix, respectively. Columns: original images, images after applying square closing of half-size 2, and images after applying square opening of half-size 5. Image size is \(1\,295\times 336\).

Figure 20: Cracks in samples of ultra high performance concrete reinforced with steel fibers. Rows: input image, segmentation results from the Riesz network, U-net and U-net-mix, respectively. Columns: original images, images after applying square opening of half-size 2, and images after applying square closing of half-size 5. Image size is \(1\,579\times 772\).
2303.11858
Modeling Relational Patterns for Logical Query Answering over Knowledge Graphs
Answering first-order logical (FOL) queries over knowledge graphs (KG) remains a challenging task mainly due to KG incompleteness. Query embedding approaches this problem by computing the low-dimensional vector representations of entities, relations, and logical queries. KGs exhibit relational patterns such as symmetry and composition, and modeling these patterns can further enhance the performance of query embedding models. However, the role of such patterns in answering FOL queries by query embedding models has not yet been studied in the literature. In this paper, we fill in this research gap and empower FOL query reasoning with pattern inference by introducing an inductive bias that allows for learning relation patterns. To this end, we develop a novel query embedding method, RoConE, that defines query regions as geometric cones and algebraic query operators by rotations in complex space. RoConE combines the advantages of the cone as a well-specified geometric representation for query embedding and of the rotation operator as a powerful algebraic operation for pattern inference. Our experimental results on several benchmark datasets confirm the advantage of relational patterns for enhancing the logical query answering task.
Yunjie He, Mojtaba Nayyeri, Bo Xiong, Yuqicheng Zhu, Evgeny Kharlamov, Steffen Staab
2023-03-21T13:59:15Z
http://arxiv.org/abs/2303.11858v2
# Modeling Relational Patterns for Logical Query Answering over Knowledge Graphs

###### Abstract

Answering first-order logical (FOL) queries over knowledge graphs (KG) remains a challenging task mainly due to KG incompleteness. Query embedding approaches this problem by computing the low-dimensional vector representations of entities, relations, and logical queries. KGs exhibit relational patterns such as symmetry and composition, and modeling these patterns can further enhance the performance of query embedding models. However, the role of such patterns in answering FOL queries by query embedding models has not yet been studied in the literature. In this paper, we fill in this research gap and empower FOL query reasoning with pattern inference by introducing an inductive bias that allows for learning relation patterns. To this end, we develop a novel query embedding method, RoConE, that defines query regions as geometric cones and algebraic query operators by rotations in complex space. RoConE combines the advantages of the cone as a well-specified geometric representation for query embedding and of the rotation operator as a powerful algebraic operation for pattern inference. Our experimental results on several benchmark datasets confirm the advantage of relational patterns for enhancing the logical query answering task.

## 1 Introduction

Answering first-order logical (FOL) queries over knowledge graphs (KGs) has been an important and challenging problem (Ren and Leskovec, 2020). Among various approaches, logical query embedding (Hamilton et al., 2018; Ren et al., 2020; Zhang et al., 2021; Ren and Leskovec, 2020) has received considerable attention due to its efficiency and effectiveness. Logical query embeddings take as input a KG and a set of first-order logical queries that include existential quantification (\(\exists\)), conjunction (\(\wedge\)), disjunction (\(\vee\)), and negation (\(\neg\)). Figure 1 shows a concrete FOL query example that corresponds to the natural language query "List the capitals of non-European countries that have held either World Cup or Olympics". These methods model logical operations by neural operators that act in the vector space. In particular, they represent a set of entities as a geometric shape and design neural-network-based logical operators to compute the embedding of the logical query. The similarity between the embedded logical query and a candidate answer is calculated to measure plausibility.

Relations in KGs may form particular patterns, e.g., some relations are symmetric (e.g., spouse) while others are anti-symmetric (e.g., parent_of); some relations are the inverse of other relations (e.g., son_of and father_of). Modeling relational patterns can potentially improve the generalization capability and has been extensively studied in link prediction tasks (Sun et al., 2019; Nayyeri et al., 2021). This has been shown particularly in Sun et al. (2019), where the RotatE model utilizes the rotation operation in complex space to model relational patterns and enhance the link prediction task. However, current logical query embedding models adopt deep neural logical operators and are not able to model relational patterns, which do matter for logical query answering. Figure 2 shows concrete examples of how relation patterns can impact complex query reasoning in KGs.

Figure 1: An example of a FOL query corresponding to "List the capitals of non-European countries that have held either World Cup or Olympics".
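For instance, this example query can be written in the FOL form used throughout the paper, where \(V\) is the bound country variable, \(V_{?}\) the target capital variable, and the relation and entity names (Held, LocatedIn, CapitalOf, WorldCup, Olympics, Europe) are illustrative stand-ins rather than the actual data set vocabulary:

\[V_{?}\,.\,\exists V:\big(\mathrm{Held}(V,\mathrm{WorldCup})\lor\mathrm{Held}(V,\mathrm{Olympics})\big)\wedge\neg\,\mathrm{LocatedIn}(V,\mathrm{Europe})\wedge\mathrm{CapitalOf}(V,V_{?})\]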
To model and infer various KG relational patterns in the logical query answering process, we propose a novel method called RoConE that combines the advantages of the cone as a well-specified geometric representation for query embedding (Zhang et al., 2021) and of the rotation operator (Sun et al., 2019) as a powerful algebraic operation for pattern inference. We define each relation as a rotation from the source entity set to the answer/intermediate entity set and perform neural logical operators upon the selected entity sets in the complex vector space. We provide theoretical proof of its ability to model relational patterns, as well as experimental results on how the relational patterns influence logical query answering over three established benchmark datasets.

## 2 Related Work

To answer more complex queries, a number of path-based (Xiong et al., 2017; Lin et al., 2018), neural (Hamilton et al., 2018; Ren et al., 2020; Kotnis et al., 2021), and neural-symbolic (Arakelyan et al., 2021; Zhu et al., 2022) methods have been developed. Among these methods, geometric and probabilistic query embedding approaches (Hamilton et al., 2018; Ren et al., 2020; Zhang et al., 2021; Ren and Leskovec, 2020) provide a way to tractably handle first-order logic operators in queries and offer excellent computational efficiency. This is done by representing entity sets as geometric objects or probability distributions, such as boxes (Ren et al., 2020), cones (Zhang et al., 2021), or Beta distributions (Ren and Leskovec, 2020), and performing neural logical operations directly on them. In this way, the expensive search for intermediate variables in multi-hop queries is avoided. All of the above query embedding methods apply multi-layer perceptron networks for selecting the answer entities of atomic queries by relation and for performing logical operations. Thus, their ability to capture relation patterns in KGs remains unclear. Our proposed method RoConE fills in this gap and combines the benefits of both worlds (KG embedding and complex query answering).

## 3 Preliminaries

**Knowledge Graph.** A KG \(\mathcal{G}\subseteq\mathcal{E}\times\mathcal{R}\times\mathcal{E}\), where \(\mathcal{E}\) and \(\mathcal{R}\) denote the sets of entities and relations respectively, can be defined as a set of subject-predicate-object \(\left\langle s,p,o\right\rangle\) triples. A triple \(\left\langle e_{i},r,e_{j}\right\rangle\) with \(e_{i},e_{j}\in\mathcal{E}\) and \(r\in\mathcal{R}\) exists if and only if \(e_{i}\) is linked to \(e_{j}\) by relation \(r\).

**First-Order Logical Queries involving constants.** First-order logical queries are broad; here we consider answering a subset, i.e., multi-hop queries with constants and the first-order logical operations conjunction (\(\wedge\)), disjunction (\(\vee\)), existential quantification (\(\exists\)), and negation (\(\neg\)). A query consists of a set of constants (anchor entities) \(\mathcal{E}_{a}\subset\mathcal{E}\), a set of existentially quantified bound variables \(V_{1},...,V_{m}\), and a single target answer variable \(V_{t}\).
The disjunctive normal form of this subset of FOL queries, namely the disjunction of conjunctive formulas, can be expressed as
\[q[V_{t}]=V_{t}.\exists V_{1},...,V_{m}:c_{1}\lor c_{2}\lor...\lor c_{n} \tag{1}\]
where \(c_{i}\), \(i\in\{1,...,n\}\), corresponds to a conjunctive query with one or more atomic queries \(d\), i.e. \(c_{i}=d_{i1}\wedge d_{i2}\wedge...\wedge d_{im}\). Each atomic formula has the form \(d_{ij}=(e_{a},r,V)\) or \(\neg(e_{a},r,V)\) or \((V^{\prime},r,V)\) or \(\neg(V^{\prime},r,V)\), where \(e_{a}\in\mathcal{E}_{a}\), \(V\in\{V_{t},V_{1},...,V_{m}\}\), \(V^{\prime}\in\{V_{1},...,V_{m}\}\), \(r\in\mathcal{R}\). The goal of logical query embedding is to find a set of answer entities \(\{e_{t1},e_{t2},...\}\) for \(V_{t}\), such that \(q[V_{t}]=\mathrm{True}\).

Figure 2: (_top_) An example showing how relation patterns influence query answering over incomplete KGs: the intermediate variable Judy's father-in-law in the query cannot be directly extracted from the given facts; (_bottom_) An illustration of cone rotation. Based on the existing relations between Judy, Justin, and Ryan, and the learned potential relation patterns from other parts of the graph, the model is able to derive the following information by relational rotation: (i) Justin is Judy's spouse (symmetric rotation), (ii) Justin is the child of Ryan (inversion rotation), (iii) Ryan is Judy's father-in-law (compositional rotation). With the predicted query embedding on \(V\), the model is able to derive where \(V\) graduated from by another relational rotation.

## 4 Methodology

To accommodate the learning of relation patterns in query answering over KGs, we propose a new model, RoConE, which models entity sets as cones and relations as anti-clockwise angular rotations on cones in the complex plane. Each cone \(\mathbf{q}\) is parameterized by \(\mathbf{q}=(\mathbf{h}_{U},\mathbf{h}_{L})\), where \(|\mathbf{h}_{\{U,L\}}|=\mathbf{1}\), and \(\mathbf{h}_{U}\), \(\mathbf{h}_{L}\in\mathbb{C}^{d}\) represent the counter-clockwise upper and lower boundaries of a cone, such that
\[\begin{split}\mathbf{h}_{U}&\equiv e^{i\mathbf{\theta}_{U}}\equiv e^{i(\mathbf{\theta}_{ax}+\mathbf{\theta}_{ap}/2)},\\ \mathbf{h}_{L}&\equiv e^{i\mathbf{\theta}_{L}}\equiv e^{i(\mathbf{\theta}_{ax}-\mathbf{\theta}_{ap}/2)},\end{split} \tag{2}\]
where \(\mathbf{\theta}_{ax}\in[-\pi,\pi)^{d}\) denotes the angle of the cone's symmetry axis, \(\mathbf{\theta}_{ap}\in[0,2\pi]^{d}\) denotes the cone aperture, and \(d\) is the embedding dimension. Queries and sets of entities are modeled as cones, and each individual entity instance is modeled as a vector, i.e. a degenerate cone with \(\mathbf{h}_{U}=\mathbf{h}_{L}\).

### Logical Operators

As illustrated in Figure 1, each logical query can be represented as a directed acyclic graph (DAG) tree, where the tree nodes correspond to constants/anchor entities or variables, and the edges correspond to atomic relations or logical operations in a query. Logical operations are performed along the DAG tree from the constants to the target answer variable. Figure 3 visualizes these operations in the 2D complex plane.
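As a concrete illustration of Eq. (2) and of the element-wise rotations used by the projection operator defined below, the following is a minimal NumPy sketch; the angles and toy values are purely illustrative and the functions are not part of the actual implementation. In rotation-based models, a relation whose rotation angle is \(\pi\) is the prototypical symmetric pattern, since applying it twice returns to the starting point.

```python
import numpy as np

def cone(theta_ax, theta_ap):
    # Eq. (2): upper/lower cone boundaries as unit complex numbers
    h_U = np.exp(1j * (theta_ax + theta_ap / 2))
    h_L = np.exp(1j * (theta_ax - theta_ap / 2))
    return h_U, h_L

def rotate(h_U, h_L, theta_ax_r, theta_ap_r):
    # relational rotation: element-wise (Hadamard) product with unit complex
    # numbers, i.e. a counter-clockwise rotation of both boundaries
    r_U = np.exp(1j * (theta_ax_r + theta_ap_r / 2))
    r_L = np.exp(1j * (theta_ax_r - theta_ap_r / 2))
    return h_U * r_U, h_L * r_L

# toy example with embedding dimension d = 2
theta_ax = np.array([0.3, -1.2])   # axis angles in [-pi, pi)
theta_ap = np.array([0.5, 1.0])    # apertures in [0, 2*pi]
h_U, h_L = cone(theta_ax, theta_ap)

# applying a rotation by pi twice maps the cone back onto itself
sym_ax, sym_ap = np.full(2, np.pi), np.zeros(2)
twice = rotate(*rotate(h_U, h_L, sym_ax, sym_ap), sym_ax, sym_ap)
assert np.allclose(twice[0], h_U) and np.allclose(twice[1], h_L)
```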
The logical operators are defined as follows.

**Relational rotating projection.** Given a set of entities \(\mathcal{S}\subset\mathcal{E}\) and a relation \(r\in\mathcal{R}\), the projection operator selects the neighbouring entities \(\mathcal{S}^{\prime}\subset\mathcal{E}\) reached via the relation, i.e. \(\mathcal{S}^{\prime}=\{e^{\prime}\in\mathcal{E}:r(e,e^{\prime})=\mathrm{True}\text{ for some }e\in\mathcal{S}\}\). Existing query embedding methods (Zhang et al., 2021; Ren et al., 2020; Hamilton et al., 2018; Ren and Leskovec, 2020) apply multi-layer perceptron networks to accomplish this task. They do not accommodate the learning of potential KG relational patterns, which might help in answering logical queries. Motivated by RotatE (Sun et al., 2019), we represent each relation \(\mathbf{r}\) as a counterclockwise relational rotation of query embeddings about the origin of the complex plane, such that \(\mathbf{r}=(\mathbf{r}_{U},\mathbf{r}_{L})\), where \(|\mathbf{r}_{\{U,L\}}|=\mathbf{1}\) and \(\mathbf{r}_{U},\mathbf{r}_{L}\in\mathbb{C}^{d}\). Given the query embedding \(\mathbf{q}=(\mathbf{h}_{U},\mathbf{h}_{L})\) and a relation \(\mathbf{r}\), the rotated query embedding \(\mathbf{q}^{\prime}=(\mathbf{h}^{\prime}_{U},\mathbf{h}^{\prime}_{L})\) is
\[\begin{split}\mathbf{h}^{\prime}_{U}&=\mathbf{h}_{U}\circ\mathbf{r}_{U}\equiv e^{i(\mathbf{\theta}_{ax}+\mathbf{\theta}_{ax,r}+(\mathbf{\theta}_{ap}+\mathbf{\theta}_{ap,r})/2)},\\ \mathbf{h}^{\prime}_{L}&=\mathbf{h}_{L}\circ\mathbf{r}_{L}\equiv e^{i(\mathbf{\theta}_{ax}+\mathbf{\theta}_{ax,r}-(\mathbf{\theta}_{ap}+\mathbf{\theta}_{ap,r})/2)},\end{split} \tag{3}\]
where \(\circ\) is the Hadamard (element-wise) product, and \(\mathbf{\theta}_{ax,r}\), \(\mathbf{\theta}_{ap,r}\) are the relational rotations applied to \(\mathbf{\theta}_{ax}\) and \(\mathbf{\theta}_{ap}\). Specifically, for each element of the cone embeddings we have \(h^{\prime}_{U,i}=h_{U,i}r_{U,i}\) and \(h^{\prime}_{L,i}=h_{L,i}r_{L,i}\). Each element \(r_{i}\) of the relational rotation \(\mathbf{r}_{\{U,L\}}\) corresponds to a counterclockwise rotation of the matching element of the upper or lower boundary by \(\theta_{r,i}\) radians about the origin of the complex plane. By modeling the projection as a relational rotation of a cone in complex space, RoConE can model and infer all three types of relation patterns introduced above. The lemmas and their proofs are given in the Appendix, and the rotations corresponding to the different relation patterns are visualized in Figure 4.

**Intersection.** For the input cone embeddings of entity sets \(\{\mathbf{q}_{1},...,\mathbf{q}_{n}\}\), the intersection operator selects the intersection \(\mathbf{q}^{\prime}=\cap_{j=1}^{n}\mathbf{q}_{j}\) using **SemanticAverage**\((\cdot)\) and **CardMin**\((\cdot)\) (Zhang et al., 2021), which compute the semantic centers and apertures of the cones, respectively (both operations are explained in Appendix B).
Since each cone is \(\mathbf{q}_{j}=(\mathbf{h}_{j,U},\mathbf{h}_{j,L})\equiv(e^{i(\mathbf{\theta}_{j,ax}+\mathbf{\theta}_{j,ap}/2)},e^{i(\mathbf{\theta}_{j,ax}-\mathbf{\theta}_{j,ap}/2)})\), the intersection \(\mathbf{q}^{\prime}=(\mathbf{h}^{\prime}_{U},\mathbf{h}^{\prime}_{L})\) can be defined as
\[\begin{split}\mathbf{h}^{\prime}_{U}&=e^{i(\mathbf{\theta}^{\prime}_{ax}+\mathbf{\theta}^{\prime}_{ap}/2)},\\ \mathbf{h}^{\prime}_{L}&=e^{i(\mathbf{\theta}^{\prime}_{ax}-\mathbf{\theta}^{\prime}_{ap}/2)},\end{split} \tag{4}\]
where
\[\begin{split}\mathbf{\theta}^{\prime}_{ax}&=\textbf{SemanticAverage}(\{(\mathbf{\theta}_{j,ax},\mathbf{\theta}_{j,ap})\}_{j=1}^{n}),\\ \mathbf{\theta}^{\prime}_{ap}&=\textbf{CardMin}(\{(\mathbf{\theta}_{j,ax},\mathbf{\theta}_{j,ap})\}_{j=1}^{n}).\end{split} \tag{5}\]

**Disjunction.** Given the input cone embeddings of entity sets \(\{\mathbf{q}_{1},...,\mathbf{q}_{n}\}\) with \(\mathbf{q}_{j}=(\mathbf{h}_{j,U},\mathbf{h}_{j,L})\), the disjunction operator finds the union set \(\mathbf{q}^{\prime}=\cup_{j=1}^{n}\mathbf{q}_{j}=\{\mathbf{q}_{1},\ldots,\mathbf{q}_{n}\}\), which is kept simply as the collection of the individual cones
\[\big((\mathbf{h}_{1,U},\mathbf{h}_{1,L}),\ldots,(\mathbf{h}_{n,U},\mathbf{h}_{n,L})\big).\]
Following Ren et al. (2020), we adopt the DNF technique to translate FOL queries into disjunctions of conjunctive queries and only perform the disjunction operator in the last step of the computation graph.

**Negation.** Given a set of entities \(\mathcal{S}\subset\mathcal{E}\), the negation operator finds its complement \(\bar{\mathcal{S}}=\mathcal{E}\setminus\mathcal{S}\). Given the cone embedding \(\mathbf{q}^{\mathcal{S}}=(\mathbf{h}_{U}^{\mathcal{S}},\mathbf{h}_{L}^{\mathcal{S}})\) of the entity set \(\mathcal{S}\), the corresponding complement is \(\bar{\mathbf{q}}=(\mathbf{h}_{L}^{\mathcal{S}},\mathbf{h}_{U}^{\mathcal{S}})\).

### Optimization

Given a set of training samples, our goal is to minimize the distance between the query cone embedding \(\mathbf{q}=(\mathbf{h}_{U}^{q},\mathbf{h}_{L}^{q})\) and the answer entity vector \(\mathbf{h}^{*}\), while maximizing the distance between this query and negative samples. Thus, we define our training objective, the negative sampling loss, as
\[L=-\log\sigma(\gamma-d(\mathbf{h}^{*},\mathbf{q}))-\tfrac{1}{k}\sum_{i=1}^{k}\log\sigma(d(\mathbf{h}_{i}^{*},\mathbf{q})-\gamma) \tag{6}\]
where \(d(\cdot)\) is the combined distance defined in Appendix C, \(\gamma\) is a margin, \(\mathbf{h}^{*}\) is a positive entity, \(\mathbf{h}_{i}^{*}\) is the \(i\)-th negative entity, \(k\) is the number of negative samples, and \(\sigma(\cdot)\) denotes the sigmoid function.
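A compact sketch of this objective, assuming the combined distances to the positive answer and to the \(k\) negative samples have already been computed (PyTorch tensors `d_pos` of shape `(batch,)` and `d_neg` of shape `(batch, k)`; the margin value is illustrative):

```python
import torch
import torch.nn.functional as F

def negative_sampling_loss(d_pos, d_neg, gamma=12.0):
    # Eq. (6): pull the positive answer within the margin gamma,
    # push the k negative samples beyond it
    pos_term = -F.logsigmoid(gamma - d_pos)             # -log sigma(gamma - d(h*, q))
    neg_term = -F.logsigmoid(d_neg - gamma).mean(-1)    # -(1/k) sum_i log sigma(d(h_i*, q) - gamma)
    return (pos_term + neg_term).mean()
```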
## 5 Experiments

**Experiment setup.** We evaluate RoConE on two benchmark datasets, NELL995 (Xiong et al., 2017) and FB15k-237 (Toutanova and Chen, 2015). RoConE is compared with various state-of-the-art query embedding models. Mean reciprocal rank (MRR) is used as the metric. More experimental details are given in Appendix D.

**Main results.** Table 1 summarizes the performance of all methods on answering the query types without negation. RoConE outperforms the baseline methods on the majority of query types while achieving competitive results on the others. We also observed that RoConE shows better performance on NELL-995 than on FB15k-237. We conjecture that this is due to the discrepancy in the distribution of relation patterns between the two datasets. As Table 2 shows, RoConE does not bring much improvement for query types involving negation. There are two possible reasons for this. Firstly, the traditional modeling of negation as a complement may be problematic, as reflected in the poor performance of all existing query embedding models: it introduces considerable uncertainty into the query embedding and leads to severe bias in prediction. Secondly, the influence of relation patterns on negation queries is limited when negation is modeled as a complement.

**Ablation study.** To investigate the influence of relation patterns on the query answering model, we designed an ablation study for RoConE on NELL995. The results are reported in Table 3. RoConE (Base) denotes the neural baseline model without the relational rotating projection module. RoConE (S.E) and RoConE (Trunc) correspond to two variants of RoConE with different rotation strategies. More details are given in Appendix D.4. The superior performance of RoConE and RoConE (Trunc) confirms the benefit of relation patterns in logical query reasoning tasks.

\begin{table} \begin{tabular}{l l l l l l l l l l} \hline \hline **Dataset** & **Model** & **19** & **20** & **20** & **31** & **34** & **9** & **19** & **20** & **9** \\ \hline \multirow{3}{*}{FB15k-237} & \multirow{3}{*}{GQE} & 35.2 & 4.7 & 35.3 & 35.7 & 16.7 & 10.9 & **4.4** & 5.8 \\ & & Q2B & 41.3 & 9.9 & 7.2 & 31.1 & 45.4 & 21.9 & 13.3 & 11.9 & 8.1 \\ & & BetaE & 39.0 & 10.0 & 20.8 & 24.5 & 24.2 & 12.6 & 12.4 & 9.2 \\ & & ConE & 41.8 & **12.8** & 11.0 & 32.6 & 4.7 & **32.5** & **15.0** & **14.0** & **14.5** & **14.8** \\ \cline{2-11} & \multirow{3}{*}{RoConE} & **4.22** & 10.5 & **3.5** & **33.8** & **23.4** & **23.5** & **14.0** & **14.5** & **12.8** \\ \cline{2-11} & & \multirow{3}{*}{GQE} & 33.1 & 11.1 & 9.9 & 72.3 & 35.1 & 35.1 & 34.5 & 45.5 & 9.0 \\ \cline{2-11} & & Q2B & & 42.7 & 14.5 & 17.7 & 34.7 & 34.6 & 32.2 & 17.6 & 12.0 & 10.7 \\ \cline{2-11} & & BetaE & 53.0 & 11.0 & 14.7 & 37.6 & 47.5 & 24.1 & 14.3 & 12.2 & 8.5 \\ \cline{2-11} & & ConE & 53.1 & 16.1 & 13.9 & 00.0 & 50.6 & **26.3** & 17.5 & 15.3 & 11.3 \\ \cline{2-11} & & RoConE & **54.5** & 17.7 & **14.4** & **41.9** & **53.0** & 26.1 & **20.7** & **16.5** & **12.8** \\ \hline \hline \end{tabular} \end{table} Table 1: MRR results (%) of RoConE, BETAE, Q2B, and GQE on answering EPFOL (\(\exists,\wedge,\vee\)) queries. The best result is highlighted in bold, the second best is underlined.

\begin{table} \begin{tabular}{l l l l l l l} \hline \hline **Dataset** & **Model** & **2in** & **3in** & **inp** & **pin** & **pni** \\ \hline \multirow{3}{*}{FB15k-237} & \multirow{3}{*}{BetaE} & 5.1 & 7.9 & 7.4 & 3.6 & 3.4 \\ & ConE & **5.4** & **8.6** & **7.8** & **4.0** & **3.6** \\ & RoConE & 4.1 & 7.9 & 6.9 & 3.1 & 2.8 \\ \hline \multirow{3}{*}{NELL995} & \multirow{3}{*}{BetaE} & 5.1 & 7.8 & 10 & 3.1 & 3.5 \\ & ConE & **5.7** & **8.1** & **10.8** & **3.5** & **3.9** \\ \cline{1-1} & RoConE & 5.2 & 7.7 & 9.4 & 3.2 & 3.7 \\ \hline \hline \end{tabular} \end{table} Table 2: MRR results (%) of RoConE, BETAE, and ConE on answering queries with negation (\(\neg\)).

## 6 Conclusion

In this paper, we theoretically and experimentally investigate the influence of relation patterns on the logical query reasoning task. By combining the relational rotating projection with the cone query embedding model in complex space, we improve FOL query reasoning with relation pattern inference.

## 7 Limitations

RoConE incorporates the learning of potential KG relation patterns into an existing query embedding model for solving logical queries. We provide initial evidence, via both theoretical analysis and experiments, of the benefit of relation patterns for complex reasoning. One limitation of RoConE is that the relational rotating projection cannot be generalized to geometric query embedding methods other than cone embeddings, due to the restrictions imposed by their geometric properties. In future work, we will propose more general and effective strategies to enhance the learning of relation patterns in the complex query reasoning task.

## 8 Ethics Statement

The authors declare that they have no conflicts of interest. This article does not contain any studies involving business data or personal information.